Based on updated poll numbers (per Threehundredeight.com as of September 16) - where John Tory has a commanding lead - we’re predicting that the wards to watch in the upcoming Toronto mayoral election are clustered in two areas that are, surprisingly, traditional strongholds for Doug Ford and Olivia Chow.
The first set is Etobicoke North and Etobicoke Centre (wards 1-4), traditional Ford territory. The second is in the south-west portion of downtown, traditional NDP territory: specifically Parkdale-High Park, Davenport, Trinity-Spadina (x2), and Toronto-Danforth (wards 14, 18-20, and 30, respectively).
As the election gets closer, we’ll provide more detailed predictions.
As with any analytical project, we invested significant time in obtaining and integrating data for our neighbourhood-level modeling. The Toronto Open Data portal provides detailed election results for the 2003, 2006, and 2010 elections, which is a great resource. But the results are saved as Excel files with a separate worksheet for each ward, which is not an ideal format for working with R.
We’ve taken the Excel files for the mayoral-race results and converted them into a data package for R called toVotes. This package includes the votes received by ward and area for each mayoral candidate in each of the last three elections.
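Here’s a quick sketch of how the package might be used, assuming it exposes a single data frame named toVotes with year, ward, candidate, and votes columns (these names are my guess; check the package documentation for the actual structure):
library(toVotes)
# Total votes per candidate, by ward and election year
# (column names are assumed, not confirmed)
totals <- aggregate(votes ~ year + ward + candidate, data = toVotes, FUN = sum)
# Sort so each ward's top candidate appears first
totals <- totals[order(totals$year, totals$ward, -totals$votes), ]
head(totals)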
If you’re interested in analyzing Toronto’s elections, we hope you find this package useful. We’re also happy to take suggestions (or code contributions) on the GitHub page.
In our first paper, we describe the results of some initial modeling - at a neighbourhood level - of which candidates voters are likely to support in the 2014 Toronto mayoral race. All of our data is based upon publicly available sources.
We use a combination of proximity voter theory and statistical techniques (linear regression and principal-component analyses) to undertake two streams of analysis:
Determining what issues have historically driven votes and what positions neighbourhoods have taken on those issues
Determining which neighbourhood characteristics might explain why people favour certain candidates
In both cases we use candidates’ currently stated positions on issues and assign them scores from 0 (‘extreme left’) to 100 (‘extreme right’). While certainly subjective, there is at least internal consistency to such modeling.
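Purely for illustration, such scores might be represented like this in R (the candidate labels and values below are invented for demonstration, not our actual assessments):
# Hypothetical left-right scores on a single issue,
# from 0 ('extreme left') to 100 ('extreme right'); values invented
issue_scores <- c(candidate_A = 25, candidate_B = 50, candidate_C = 80)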
This work demonstrates that significant insights on the upcoming mayoral election in Toronto can be obtained from an analysis of publicly available data. In particular, we find that:
Voters will change their minds in response to issues. So, “getting out the vote” is not a sufficient strategy. Carefully chosen positions and persuasion are also important.
Despite this, the ‘voteability’ of candidates is clearly important; this includes voters’ assessments of a candidate’s ability to lead and how well they know the candidate’s positions.
The airport expansion and transportation have been the dominant issues across the city in the last three elections, though they may not be in 2014.
A combination of family size, mode of commuting, and home values (at the neighbourhood level) can partially predict voting patterns.
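As a rough illustration of that last point, here is the general shape of such a neighbourhood-level model in R. The data are simulated stand-ins, so this is a sketch of the technique rather than our actual analysis:
set.seed(1)
# Simulated ward-level data; one row per ward (Toronto has 44 wards)
# and all values are invented for illustration
wards <- data.frame(
  vote_share    = runif(44),                      # hypothetical candidate vote share
  family_size   = rnorm(44, mean = 3, sd = 0.5),  # average family size
  transit_share = runif(44),                      # share commuting by transit
  home_value    = rnorm(44, mean = 6e5, sd = 1e5) # average home value
)
# Principal components summarize the correlated neighbourhood traits
pc <- prcomp(wards[, -1], scale. = TRUE)
# Linear regression of vote share on the two leading components
fit <- lm(wards$vote_share ~ pc$x[, 1:2])
summary(fit)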
We are now moving on to something completely different, where we use an agent-based approach to simulate entire elections. We are actively working on this now and hope to share our progress soon.
Political campaigns have limited resources - both time and financial - that should be spent on attracting voters who are more likely to support their candidates. Identifying these voters can be critical to the success of a candidate.
Given the privacy of voting and the lack of useful surveys, there are few options for identifying individual voter preferences:
Polling, which is large-scale, but does not identify individual voters
Voter databases, which identify individual voters, but are typically very small scale
In-depth analytical modeling, which is both large-scale and helps to ‘identify’ voters (at least at a neighbourhood level on average)
The goal of PsephoAnalytics* is to model voting behaviour in order to accurately explain campaigns (starting with the 2014 Toronto mayoral race). This means attempting to answer four key questions:
What are the (causal) explanations for how election campaigns evolve – and how well can we predict their outcomes?
What are the effects of (even simple) shocks to election campaigns?
How can we advance our understanding of election campaigns?
How can elections be better designed?
* Psephology (from the Greek psephos, for ‘pebble’, which the ancient Greeks used as ballots) deals with the analysis of elections.
I recently participated in a panel discussion at the University of Toronto on the career transition from academic research to public service. I really enjoyed the discussion and there were many great questions from the audience. Here’s just a brief summary of some of the main points I tried to make about the differences between academics and public service.
The major difference I’ve experienced involves a trade-off between control and influence.
As a grad student and post-doctoral researcher I had almost complete control over my work. I could decide what was interesting, how to pursue questions, who to talk to, and when to work on specific components of my research. I believe that I made some important contributions to my field of study. But, to be honest, this work had very little influence beyond a small group of colleagues who are also interested in the evolution of floral form.
Now I want to be clear about this: in no way should this be interpreted to mean that scientific research is not important. This is how scientific progress is made – many scientists working on particular, specific questions that are aggregated into general knowledge. This work is important and deserves support. Plus, it was incredibly interesting and rewarding.
However, the comparison of the influence of my academic research with my work on infrastructure policy is revealing. Roads, bridges, transit, hospitals, schools, courthouses, and jails all have significant impacts on the day-to-day experience of millions of people. Every day I am involved in decisions that determine where, when, and how the government will invest scarce resources into these important services.
Of course, this is where the control-influence trade-off kicks in. As an individual public servant, I have very little control over these decisions or how my work will be used. Almost everything I do involves medium-sized teams with members from many departments and ministries. This requires extensive collaboration, often under very tight time constraints with high profile outcomes.
For example, in my first week as a public servant I started a year-long process to integrate and enhance decision-making processes across 20 ministries and 2 agencies. The project team included engineers, policy analysts, accountants, lawyers, economists, and external consultants from all of the major government sectors. The (rather long) document produced by this process is now used to inform every infrastructure decision made by the province.
Governments contend with really interesting and complicated problems that no one else can or will consider. Businesses generally take on the easy and profitable issues, while NGOs are able to focus on specific aspects of issues. Consequently, working on government policy provides a seemingly endless supply of challenges and puzzles to solve, or at least mitigate. I find this very rewarding.
None of this is to suggest that either option is better than the other. I’ve been lucky to have had two very interesting careers so far, which have been at the opposite ends of this control-influence trade-off. Nonetheless, my experience suggests that an actual academic career is incredibly challenging to obtain and may require significant compromises. Public service can offer many of the same intellectual challenges with better job prospects and work-life balance. But, you need to be comfortable with the diminished control.
Thanks to my colleague Andrew Miller for creating the panel and inviting me to participate. The experience led me to think more clearly about my career choices and I think the panel was helpful to some University of Toronto grad students.
Our offices will be moving to this new space. I’m looking forward to actually working in a green building, in addition to developing green building policies.
The Jarvis Street project will set the benchmark for how the province manages its own building retrofits. The eight-month-old Green Energy Act requires Ontario government and broader public-sector buildings to meet a minimum LEED Silver standard – Leadership in Energy and Environmental Design. Jarvis Street will also be used to promote an internal culture of conservation, and to demonstrate the province’s commitment to technologically advanced workspaces that are accessible, flexible and that foster staff collaboration and creativity, Ms. Robinson explains.
I spend a fair bit of time with a locked-down Windows XP machine. Fortunately, I’m able to install Emacs, which provides capabilities that I find quite helpful. I’ve had to reinstall Emacs a few times now. So, for my own benefit (and perhaps yours), here are the steps I follow:
Download EmacsW32 patched and install in my user directory under Apps
The CaGBC maintains a list of all the registered LEED projects in Canada. This is a great resource, but rather awkward for analyses. I’ve copied these data into a DabbleDB application with some of the maps and tabulations that I frequently need to reference.
Here, for example, is a map of the density of LEED projects in each province, while here is a rather detailed view of the kinds of projects across provinces. There are several other views available. Are there any others that might be useful?
I was given an opportunity to propose a measure to clarify how and on what basis the federal government allocates funds to STI - a measure that would strengthen relations between the federal government and the STI community by eliminating misunderstandings and suspicions on this point. In short, my proposal was that Ottawa direct its Science, Technology and Innovation Council to do three things:
To provide an up-to-date description of how these allocation decisions have been made in the past;
To identify the principles and sources of advice on which such decisions should be based;
To recommend the most appropriate structure and process - one characterized by transparency and openness - for making these decisions in the future.
These are reasonable suggestions from Preston Manning: be clear about why and how the Federal government funds science and technology.
Of course I may not agree with the actual decisions made through such a process, but at least I would know why the decisions were made. The current process is far too opaque and confused for such critical investment decisions.
A good read on the mathematics of scaling in urban patterns. I had looked into using the Bettencourt paper (cited in this article) for making allocation decisions. The trick is moving from the general patterns observed in urban scaling to specific recommendations for where to invest in new infrastructure. This is particularly challenging in the absence of good, detailed data on the current infrastructure stock. We’ve made good progress on gathering some of this data, and it might be worth revisiting this scaling relationship.
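For reference, the relationships in that literature take a simple power-law form, Y = Y0 * N^beta, where Y is an urban indicator, N is city population, and Y0 is a constant. As I recall the reported estimates (approximate values, worth verifying against the Bettencourt paper), beta is below 1 (around 0.8-0.9) for material infrastructure like road surface, reflecting economies of scale, and above 1 (around 1.1-1.2) for socioeconomic outputs like GDP and innovation.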
I’m certain that paying attention to where my food comes from is important. Food production influences my health, has environmental consequences, and affects both urban and rural design. Ideally, I would develop relationships with local farmers, carefully choose organic produce, and always consider broad environmental impacts. Except, I like to spend time with my young family, try to get some exercise, and have more than enough commitments through work to actually spend this much effort on food choices. So, I’ve outsourced this process to the excellent Mama Earth Organics.
Every week a basket of fresh organic and/or local fruit and vegetables arrives on our doorstep. Part of the fun of this service is that different items arrive each week, which diversifies our weekly food routine. But, we always know what’s coming several days in advance, so we can plan our meals well ahead of time. After over a year of service, we’ve only had a single complaint about quality and this was handled very quickly by Mama Earth with a full refund plus credit.
We’ve found the small basket is sufficient for two adults and a picky four-year-old. We’ve also added in some fresh bread from St. John’s Bakery, which has been consistently delicious and lasts through most of the week.
Our minister of science continues to argue that his unwillingness to endorse the theory of evolution is not relevant to science policy. As quoted by the Globe and Mail:
My view isn’t important. My personal beliefs are not important.
I find this amazing. How can the minister of science’s views on the fundamental unifying theory of biology not be important?
I don’t expect him to understand the details of evolutionary theory or to have all of his personal beliefs vetted and religious views muted. However, I do expect him – as minister – to champion and support Canadian science, especially basic research. When our minister refuses to acknowledge the fundamental discoveries of science, our reputation is diminished.
There is also a legitimate – though rather exaggerated – concern that the minister’s views on the truth can influence policy and funding decisions. The funding councils are more than sufficiently independent to prevent any undue ministerial influence here. The real problem is an apparent distrust or lack of interest in basic research from the federal government.
Death Sentences by Don Watson is a wonderful book – simultaneously funny, scary, and inspiring – that describes how “clichés, weasel words, and management-speak” are infecting public language.
The humour comes from Watson’s acerbic commentary and fantastic scorn for phrases like:
Given the within year and budget time flexibility accorded to the science agencies in the determination of resource allocation from within their global budget, a multi-parameter approach to maintaining the agencies budgets in real terms is not appropriate.
The book is scary because it makes a strong argument for the dangers of this type of language. Citizens become confused and uninterested, customers become jaded, and people lose their love for language.
Also, as a public servant I see this kind of language every day and often find myself struggling to avoid banality and clichés (not to mention bullet points). We need more forceful advocates like Don Watson to call out politicians and corporations for abusing our language. This book certainly makes me want to try harder. And what’s more inspiring than struggling for a good cause against long odds?
The book also has a great glossary of typical weasel words with possible synonyms. So, I’m keeping the book in my office for quick reference.
After seventeen years as a vegetarian, I recently switched back to being an omnivore. My motivation for not eating meat was environmental, since, on average, a vegetarian diet requires much less land, water, and energy. This is still the right motivation, but over the last year or so I’ve been rethinking my decision to not eat meat.
My concern was that I’d stopped paying attention to my food choices, and a poorly considered vegetarian diet can easily yield a bad environmental outcome. In particular, modern agriculture now takes 10 calories of fossil fuel energy to produce a single calorie of food. This is clearly unsustainable. We cannot rely on non-renewable, polluting resources for our food, nor can we continue to transport food great distances – even if it is only vegetables. My unexamined commitment to a vegetarian diet was no longer consistent with environmental sustainability.
I think the solution is to eat local, organic food. This also requires eating seasonal food, but Canadian winters are horrible for local vegetables. This left me wanting to support local agriculture, but unable to restrict my diet. Returning to my original motivation to choose environmentally appropriate food convinced me it was time to return to being an omnivore. My new policy is to follow Michael Pollan’s advice: “Eat food. Mostly plants. Not too much.” In addition, I’ll favour locally grown, organic food and include small amounts of meat – which I hope will predominantly come from carefully considered and sustainable sources. I’ve also decided that when faced with a dilemma of choosing either local or organic, I’ll choose local. We need to support local agriculture, and I’ll give up organic for local if necessary. Of course, in the majority of cases both local and organic options are available, and I’ll choose them.
This is a big change and I look forward to exploring food again.
Instapaper is an integral part of my web-reading routine. Typically, I have a few minutes early in the morning and scattered throughout the day for quick scans of my favourite web sites and news feeds. I capture anything worth reading with Instapaper’s bookmarklet to create a reading queue of interesting articles. Then with a quick update to the iPhone app this queue is available whenever I find longer blocks of time for reading, particularly during the morning subway ride to work or late at night.
I also greatly appreciate Instapaper’s text view, which removes all the banners, ads, and link lists from the articles to present a nice and clean text view of the content only. I often find myself saving an article to Instapaper even when I have the time to read it, just so I can use this text-only view.
Instapaper is one of my favourite tools and the first iPhone application I purchased.
Like most Canadians, I’ll be at the polls today for the 2008 Federal Election.
In the past several elections, I’ve cast my vote for the party with the best climate change plan. The consensus among economists is that any credible plan must set a price on carbon emissions. My personal preference is for a predictable and transparent price to influence consumer spending, so I favour a carbon tax over a cap-and-trade. Enlightening discussions of these issues are available at Worthwhile Canadian Initiative, Jeffrey Simpson’s column at the Globe and Mail, or his book Hot Air.
Until now this voting principle has meant a vote for the Green Party, which supports a tax shift from income to pollution. My expectation for this vote was not that the Green Party would gain any direct political power, but rather that their environmental plan would gain political profile and convince the Liberals and Conservatives to improve their plans. A carbon tax is now a central component of this year’s Liberal platform with the Green Shift. Both the Conservative Party and NDP support a limited cap-and-trade system on portions of the economy, with the Conservatives supporting dubious “intensity-based” targets.
Although I quite like the central components of the Green Shift, I’m not too keen on the distracting social engineering aspects of the plan. Furthermore, the Liberals have certainly failed to implement any of their previous climate change plans while in power. Nonetheless, I do think (hope?) they will follow through this time, and I prefer supporting a well-conceived plan that may not be implemented over a poor plan. Despite my support for this plan, I think the Liberals have done a rather poor job of explaining the Green Shift and have conducted a disappointing campaign.
In the end, my principle will hold. I’m voting for the Green Shift and, reluctantly, the Liberal Party of Canada.
In this article Nassim Nicholas Taleb applies his Black Swan idea to the current financial crisis and describes the strengths and weaknesses of econometrics.
For us the world is vastly simpler in some sense than the academy, vastly more complicated in another. So the central lesson from decision-making (as opposed to working with data on a computer or bickering about logical constructions) is the following: it is the exposure (or payoff) that creates the complexity —and the opportunities and dangers— not so much the knowledge (i.e., statistical distribution, model representation, etc.). In some situations, you can be extremely wrong and be fine, in others you can be slightly wrong and explode. If you are leveraged, errors blow you up; if you are not, you can enjoy life.
The core of any government reflects the personality of the prime minister, because everyone in the system responds to his or her ways of thinking, personality traits, political ambitions and policy preferences. Know the prime minister; know the government.
Harper has been an enigma, and learning more about his personal policies and approach to governance is very useful when thinking about the upcoming election.
A general summary of the article comes from near the end:
And the long-distance runner – bright, intense, strategic, cautious and confident in every stride – has certainly got things done, from merging two parties, to winning a minority government, to fulfilling most of his campaign promises.
He also has pursued two broad changes in the nature of the federal government: giving the provinces more running room by keeping Ottawa out of some of their affairs and giving individuals a bit more money in the form of tax reductions, credits and child-care cheques.
And yet, despite these policies that he assumed would be popular, despite all the problems on the Liberal side, despite raising far more money, despite governing in mostly excellent economic times, despite stroking Quebec, despite gearing up for elections, his Conservatives have yet to break through decisively.
Reading up on the upcoming Polaris Music Prize reminded me of Patrick Watson, last year’s winner of the prize. His “Close to Paradise” album is inventive with intriguing lyrics, unique sounds, and an often driving piano track. Particular stand out tracks are Luscious Life, Drifters, and The Great Escape. The album is well worth considering and I’m looking forward to listening to the short-listed artists for this year’s prize.
As a result of actions taken in Budget 2007, Canada’s marginal effective tax rate (METR) on new business investment improved from third-highest in the G7 to third-lowest by 2011.
Fair enough, tax rates are projected to decline. But notice how they phrase the context of this reduction. Moving from third highest to third lowest is, in a list of seven countries, a change from third to fifth. Not a dramatic change – we were near the middle and we still are.
TVO’s The Agenda had an interesting show on the debate between evolutionary biology and creationism. Jerry Coyne provided a great overview of evolution and a good defence during the debate.
The debate offered a great illustration of the intellectual vacuity that characterises creationism (aka intelligent design). Paul Nelson offers up an article by Doolittle and Bapteste as proof that Darwinism is unravelling. I suspect he hopes no one will read past the abstract to discover the reasonable debate scientists are having about the universality of a single tree of life. He certainly doesn’t want you to notice that the entire article is couched within evolutionary theory and not once does it claim that Darwinism has been falsified.
Here’s the hypothesis that Doolittle and Bapteste are evaluating:
“that there should be a universal TOL [tree of life], dichotomously branching all of the way down to a single root.” p2045
They then establish that gene transfer often occurs between lineages, particularly among prokaryotes, and consequently this universal tree of life does not exist. Certainly this complicates the construction of molecular trees and shows the importance of pluralism of mechanism in biology. But they are much more measured about the overall significance of this work.
“To be sure, much of evolution has been tree-like and is captured in hierarchical classifications.” p2048
“…it would be perverse to claim that Darwin’s TOL hypothesis has been falsified for animals (the taxon to which he primarily addressed himself) or that it is not an appropriate model for many taxa at many levels of analysis” p2048
And the crucial quote in this context:
“Holding onto this ladder of pattern […] should not be an essential element in our struggle against those who doubt the validity of evolutionary theory, who can take comfort from this challenge to the TOL only by a willful misunderstanding of its import.” p2048
Note – This post has been updated from 2007-03-20 to describe new installation instructions.
Overview
I’ve integrated Stikkit into most of my workflow and am quite happy with the results. However, one missing piece is quick access to Stikkit from the command line. In particular, a quick list of my undone todos is quite useful without having to load up a web browser.
To this end, I’ve written a Ruby script for interacting with Stikkit. As I mentioned, my real interest is in listing undone todos. But I decided to make the script more general, so you can ask for specific types of stikkits and restrict the stikkits with specific parameters. Also, since the Stikkit API is so easy to use, I added in a method for creating new stikkits.
Usage
The general use of the script is to list stikkits of a particular type, filtered by a parameter. For example,
ruby stikkit.rb --list calendar dates=today
will show all of today’s calendar events, while
ruby stikkit.rb -l todos done=0
lists all undone todos. The use of -l instead of --list is simply a standard convenience. Furthermore, since this last example comprises almost all of my use for this script, I added a convenience method to get all undone todos:
ruby stikkit.rb -t
A good way to understand stikkit types and parameters is to keep an eye on the url while you interact with Stikkit in your browser.
To create a new stikkit, use the --create flag,
ruby stikkit.rb -c 'Remember me.'
The text you pass to stikkit.rb will be processed as usual by Stikkit.
Installation
Grab the script from the Google Code project and put it somewhere convenient. Making the file executable and adding it to your path will cut down on the typing. The script reads from a .stikkit file in your home directory that contains your username and password. Modify this template and save it as ~/.stikkit:
---
username: me@domain.org
password: superSecret
The script also requires the atom gem, which you can grab with
gem install atom
I’ve tried to include some flexibility in the processing of stikkits. So, if you don’t like using atom, you can switch to a different format provided by Stikkit. The text type requires no gems, but makes picking out pieces of the stikkits challenging.
Feedback
This script serves me well, but I’m interested in making it more useful. Feel free to pass along any comments or feature requests.
Most of my updates arrive through feeds to NetNewsWire. Since my main source of national news and analysis is the Globe and Mail, I’m quite happy that they provide many feeds for accessing their content. The problem is that many news stories are duplicated across these feeds. Furthermore, tracking all of the feeds of interest is challenging.
The new Yahoo Pipes offer a solution to these problems. Without providing too much detail, pipes are a way to filter, connect, and generally mash-up the web with a straightforward interface. I’ve used this service to collect all of the Globe and Mail feeds of interest, filter out the duplicates, and produce a feed I can subscribe to. Nothing fancy, but quite useful. The pipe is publicly available and if you don’t agree with my choice of news feeds, you are free to clone mine and create your own. There are plenty of other pipes available, so take a look to see if anything looks useful to you. Even better, create your own.
If you really want those details, Tim O'Reilly has plenty.
I find it useful to have a list of my unfinished tasks generally, but subtly, available. To this end, I’ve added my unfinished todos from Stikkit to my Gmail web clips. These are the small snippets of text that appear just above the message list in Gmail.
All you need is the subscribe link from your todo page with the ‘not done’ button toggled. The url should look something like:
My experiences helping people manage their data has repeatedly shown that databases are poorly understood. This is well illustrated by the rampant abuses of spreadsheets for recording, manipulating, and analysing data.
Most people realise that they should be using a database; the real issue is the difficulty of creating a proper one. This is a legitimate challenge. Typically, you need to carefully consider all of the categories of data and their relationships when creating the database, which makes the upfront costs quite significant. Why not just start throwing data into a spreadsheet and worry about it later?
I think that DabbleDB can solve this problem. A great strength of Dabble - and the source of its name - is that you can start with a simple spreadsheet of data and progressively convert it to a database as you begin to better understand the data and your requirements.
Dabble also has a host of great features for working with data. I’ll illustrate this with a database I created recently when we were looking for a new home. This is a daunting challenge. We looked at dozens of houses each with unique pros and cons in different neighbourhoods and with different price ranges. I certainly couldn’t keep track of them all.
I started with a simple list of addresses for consideration. This was easily imported into Dabble and immediately became useful. Dabble can export to Google Earth, so I could quickly have an overview of the properties and their proximity to amenities like transit stops and parks. Next, I added in fields for the asking price and MLS url, which were also exported to Google Earth. Including price gave a good sense of how costs varied with location, while the url meant I could quickly view the entire listing for a property.
Next, we started scheduling appointments to view properties. Adding this to Dabble immediately created a calendar view. Better yet, Dabble can export this view as an iCal file to add into a calendaring program.
Once we started viewing homes, we began to understand what we really were looking for in terms of features. So, we added these to Dabble and then started grouping, searching, and sorting by these attributes.
All of this would have been incredibly challenging without Dabble. No doubt, I would have simply used a spreadsheet and missed out on the rich functionality of a database.
Dabble really is worth a look. The best way to start is to watch the seven minute demo and then review some of the great screencasts.
I like to believe that my brain is useful for analysis, synthesis, and creativity. Clearly it is not proficient at storing details like specific dates and looming reminders. Nonetheless, a great deal of my mental energy is devoted to trying to remember such details and fearing the consequences of the inevitable “it slipped my mind”. As counselled by GTD, I need a good and trustworthy system for removing these important, but distracting, details and having them reappear when needed. I’ve finally settled in on the new product from values of n called Stikkit.
Stikkit appeals to me for two main reasons: easy data entry and smart text processing. Stikkit uses the metaphor of the yellow sticky note for capturing text. When you create a new note, you are presented with a simple text field — nothing more. However, Stikkit parses your note for some key words and extracts information to make the note more useful. For example, if you type:
Phone call with John Smith on Feb 1 at 1pm
Stikkit realises that you are describing an event scheduled for February 1st at one in the afternoon with a person (“peep” in Stikkit slang) named John Smith. A separate note will be created to track information about John Smith and will be linked to the phone call note. If you add the text “remind me” to the note, Stikkit will send you an email and SMS message prior to the event. You can also include tags to group notes together with the keywords “tag as”.
A recent update to peeps makes them even more useful. Stikkit now collects information about people as you create notes. So, for example, if I later post:
- Send documents to John Smith john@smith.net
Stikkit will recognise John Smith and update my peep for him with the email address provided. In this way, Stikkit becomes more useful as you continue to add information to notes. Also, the prefixed “-” causes Stikkit to recognise this note as a todo. I can then list all of my todos and check them off as they are completed.
This text processing greatly simplifies data entry, since I don’t need to click around to create todos or choose dates from a calendar picker. Just type in the text, hit save, and I’m done. Fortunately, Stikkit has been designed to be smart rather than clever. The distinction here is that Stikkit relies on some key words (such as at, for, to) to mark up notes consistently and reliably. Clever software is exemplified by Microsoft Word’s autocorrect or clipboard assistant. My first goal when encountering these “features” is to turn them off. I find they rarely do the right thing and end up being a hindrance. Stikkit is well worth a look. For a great overview check out the screencasts in the forum.
I grabbed this image while preparing a new Windows machine. This seems to be an interesting comparison of the difference in design approaches between Apple and PC remotes. Both provide essentially the same functions. Clearly, however, one is more complex than the other. Which would you rather use?
Prior to general release, plantae is moving web hosts. This seems like a good time to point out that all of plantae’s code is hosted at Google Code. The project has great potential and deserves consistent attention. Unfortunately, I can’t continue to develop the code. So, if you have an interest in collaborative software, particularly in the scientific context, I encourage you to take a look.
I recently helped someone process a text file with the help of Unix command line tools. The job would have been quite challenging otherwise, and I think this represents a useful demonstration of why I choose to use Unix.
In this case the only important information is the second number of each line that begins with “sample:”. Of course, one option is to manually process the file, but there are thousands of lines, and that’s just silly.
We begin by extracting only the lines that begin with “sample:”. grep will do this job easily:
grep "^sample" input.txt
grep searches through the input.txt file and outputs any matching lines to standard output.
Now, we need the second number. sed can strip out the initial text of each line with a find and replace, while tr compresses any strange use of whitespace:
sed 's/sample: //g' | tr -s ' '
Notice the use of the pipe (|) command here. This sends the output of one command to the input of the next. This allows commands to be strung together and is one of the truly powerful tools in Unix.
Now we have a matrix of numbers in rows and columns, which is easily processed with awk.
awk '{print $2;}'
Here we ask awk to print out the second number of each row.
So, if we string all this together with pipes, we can process this file as follows:
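grep "^sample" input.txt | sed 's/sample: //g' | tr -s ' ' | awk '{print $2;}'
This one line pulls out the second number from every “sample:” line in the file, with no manual processing required.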