Longform

Investigating voter turnout

Turnout is often seen as a metric (or at least an easy one) of the health of a democracy, since voting is a primary act of civic engagement. However, turnout rates continue to decline across many jurisdictions[i], and this is certainly true in Canada and Ontario.

From the PsephoAnalytics perspective, accurately predicting the results of elections – particularly with an agent-based model (ABM) approach – requires understanding what drives the decision to vote at all, rather than simply staying home.

If we can do this, we will not only improve our estimates in an empirical (or at least heuristic) way, but may also be able to make normative statements about elections. That is, we hope to suggest ways in which turnout could be improved, and to assess whether (and how much) that would matter.

In a new paper we start to investigate the history of turnout in Canada and Ontario, and review what the literature says about the factors associated with turnout, in an effort to help “teach” our agents when and why they “want” to vote. More work will certainly be required here, but this provides a very good start.

[i] See the OECD social indicators or International IDEA voter turnout statistics

Comparing our predictions to the actual votes for the Toronto mayoral election

We value constructive feedback and continuous improvement, so we’ve taken a careful look at how our predictions held up for the recent mayoral election in Toronto.

The full analysis is here. The summary is that our estimates weren’t too bad on average: the distribution of errors is centered on zero (i.e., not biased) with a small standard error. But on-average estimates are not sufficient for the kinds of predictions we would like to make. At the ward level, we find that we generally overestimated support for Tory, especially in areas where Ford received significant votes.
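As a rough sketch of the kind of check involved (the predicted and actual data frames, and their column names, are hypothetical stand-ins for our published and official numbers):

    # Hypothetical sketch in R: predicted vs. actual vote shares
    errors <- merge(predicted, actual, by = c("ward", "candidate"))
    errors$err <- errors$share_predicted - errors$share_actual
    mean(errors$err)                     # centered on zero => no overall bias
    sd(errors$err)                       # small spread => good on average
    aggregate(err ~ ward, errors, mean)  # ward-level bias (e.g., Ford areas)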

We understood that our simple agent-based approach wouldn’t be enough, and now we’re particularly motivated to gather much more data to enrich our agents’ behaviour and make better predictions.

Try, try again...

The results are in, and our predictions performed reasonably well on average (off by about 4% per candidate). Ward-by-ward predictions were more mixed, though, with some wards bang on (looking at Tory’s results) and some way off – such as northern Scarborough and Etobicoke. (For what it’s worth, the polls missed in this regard too.) This mostly comes down to our agents not being different enough from one another. We knew building the agents would be the hardest part, and now we have proof!

Regardless, we still think the agent-based modeling approach is the most appropriate for this kind of work – but we obviously need a lot more data to teach our agents what they believe. So, we’re going to spend the next few months incorporating other datasets (e.g., historical federal and provincial elections, as well as councillor-level data from the 2014 Toronto election). The other piece we need to focus on is turnout: we knew our turnout predictions were likely a lower bound for this election, but we aren’t yet able to model a more predictive metric, so we’ll be conducting a study into that as well.

Finally, we’ll provide a detailed analysis of our predictions once the full official results become available.

Final predictions

Our final predictions have John Tory winning the 2014 mayoral election in Toronto with a plurality of 46% of the votes, followed by Doug Ford (29%) and Olivia Chow (25%). We also predict turnout of at least 49% across the city, but there are differences in turnout among each candidate’s supporters (Tory’s supporters being the most likely to vote by a significant margin, which is why our results favour him more than recent polls do). We predict support for each candidate will come from different pockets of the city, as can be seen on the map below.

These predictions were generated by simulating the election ten times, each time sampling one million of our synthetic representative voters for their candidate preferences and whether they intend to vote.

Each representative voter has demographic characteristics (e.g., age, sex, income) in accordance with local census data, and lives in a specific ‘neighbourhood’ (i.e., census tract). These attributes helped us assign them political beliefs – and therefore preferences for candidates – as well as political engagement scores derived from various studies of historical turnout (from the likes of Elections Canada). The latter allow us to estimate the likelihood of each specific agent actually casting a ballot.
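Conceptually, the simulation loop is simple. Here is a minimal sketch in R of the procedure described above; the agent table, its columns, and the uniform stand-in for engagement scores are illustrative assumptions, not our production code:

    # Minimal sketch: one row per representative voter, with a preferred
    # candidate and a turnout probability (stand-in for engagement scores)
    set.seed(2014)
    n_agents <- 1e6
    agents <- data.frame(
      preference = sample(c("Tory", "Ford", "Chow"), n_agents,
                          replace = TRUE, prob = c(0.46, 0.29, 0.25)),
      p_vote     = runif(n_agents, 0.4, 0.6)
    )

    simulate_once <- function(agents) {
      voted <- runif(nrow(agents)) < agents$p_vote  # does each agent turn out?
      prop.table(table(agents$preference[voted]))   # vote shares among voters
    }

    results <- replicate(10, simulate_once(agents)) # ten simulated elections
    rowMeans(results)                               # average predicted shares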

We’ll shortly also release a ward-by-ward summary of our predictions.

In the end, we hope this proof of concept proves to be more refined (and therefore more useful in the long term) than polling data. As the model becomes more sophisticated, we’ll be able to do scenario testing and study other aspects of campaigns.

Final predictions by ward

As promised, here is a ward-by-ward breakdown of our final predictions for the 2014 mayoral election in Toronto. We have Tory garnering the most votes in 33 wards outright, plus likely another five in close races. Six wards are “too close to call”: three barely leaning to Tory (38, 39, and 40) and three barely leaning to Ford (8, 35, and 43). We’re not predicting Chow will win any ward, but she should come second in fourteen.

Ward	Tory	Ford	Chow	Turnout
1	41%	36%	23%	48%
2	44%	34%	22%	50%
3	49%	31%	20%	51%
4	50%	31%	19%	51%
5	49%	32%	19%	50%
6	46%	33%	21%	50%
7	43%	36%	21%	49%
8	39%	39%	22%	47%
9	42%	37%	21%	50%
10	45%	35%	20%	50%
11	40%	36%	24%	49%
12	40%	36%	23%	49%
13	55%	13%	32%	49%
14	48%	17%	35%	47%
15	43%	36%	21%	50%
16	57%	29%	14%	50%
17	43%	33%	24%	49%
18	47%	16%	37%	47%
19	48%	15%	36%	45%
20	49%	16%	36%	44%
21	56%	12%	32%	49%
22	57%	12%	31%	48%
23	45%	34%	21%	48%
24	48%	33%	20%	50%
25	55%	30%	14%	50%
26	42%	23%	35%	49%
27	52%	14%	34%	46%
28	48%	17%	35%	47%
29	46%	21%	33%	50%
30	52%	14%	34%	48%
31	42%	23%	35%	49%
32	57%	12%	31%	49%
33	45%	35%	20%	49%
34	46%	34%	21%	50%
35	38%	41%	21%	49%
36	44%	37%	19%	50%
37	41%	38%	21%	50%
38	40%	39%	21%	49%
39	40%	39%	21%	50%
40	41%	39%	20%	50%
41	41%	38%	21%	50%
42	41%	38%	21%	48%
43	40%	40%	21%	50%
44	49%	35%	16%	50%

Making agents

The first (and long) step in moving towards agent-based modeling is the creation of the agents themselves. While fictional, they must be representative of reality – meaning they need to behave like actual people might.

In developing a proof of concept of our simulation platform (which we’ll lay out in some detail soon), we’ve created 10,000 agents, drawn randomly from the 542 census tracts (CTs) that make up Toronto per the 2011 Census, proportional to the actual population by age and sex. (CTs are roughly “neighbourhoods”.) So, for example, if 0.001% of the population of Toronto is male, aged 43, and living in a CT on the Danforth, then roughly 0.001% of our agents will have those same characteristics. Once the basic agents are selected, we assign (for now) each agent the median household income of its CT.
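As a rough sketch of this sampling step (the census and ct_income tables and their columns are stand-ins for the actual census extracts, not real structures):

    # Illustrative sketch: `census` has one row per CT x age x sex cell with
    # a population count; `ct_income` has one row per CT with median income
    census$weight <- census$count / sum(census$count)

    n_agents <- 10000
    rows   <- sample(nrow(census), n_agents, replace = TRUE,
                     prob = census$weight)
    agents <- census[rows, c("ct", "age", "sex")]

    # For now, each agent inherits the median household income of its CT
    agents$income <- ct_income$median_income[match(agents$ct, ct_income$ct)]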

But what do these agents believe, politically? For that we take (again, for now) a weighted compilation of relatively recent polls (10 in total, covering close to 15,000 respondents since Doug Ford entered the race), averaged by age/sex/income/region combination (420 in total). These give us average support for each of the three major candidates (plus “other”) by agent type, from which we then randomly sample (in proportion to support) and assign a Left-Right score (0-100), as we did in our other modeling.

This is somewhat akin to polling, except we’re (randomly) assigning these agents what they believe rather than asking, such that it aggregates back to what the polls are saying, on average.

Next, we take the results of an Elections Canada study on turnout by age/sex, which allows us to similarly assign “engagement” scores to the agents. That is, we assign (for now) the average turnout rate of each age/sex group to the corresponding agents. This gives us a sense of likely turnout by CT (see map below).
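A compressed sketch of both assignment steps might look like the following; the poll_support and turnout_study lookup tables (and the agent columns) are assumptions standing in for the poll compilation and the Elections Canada study:

    # Illustrative sketch: `poll_support` maps each of the 420 agent types
    # to average candidate support; `turnout_study` maps age/sex groups to
    # historical turnout rates
    agents$type <- paste(agents$age_group, agents$sex,
                         agents$income_group, agents$region)

    assign_preference <- function(type) {
      shares <- poll_support[poll_support$type == type,
                             c("Tory", "Ford", "Chow", "Other")]
      sample(names(shares), 1, prob = as.numeric(shares))
    }
    agents$preference <- vapply(agents$type, assign_preference, character(1))

    # Engagement: each agent gets the average turnout of its age/sex group
    key <- paste(agents$age_group, agents$sex)
    agents$engagement <- turnout_study$rate[match(key, turnout_study$key)]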

There is much more to go here, but this forms the basis of our “voter” agents. Next, we’ll turn to “candidate” agents, and then on to “media” agents.

Happy Thanksgiving!

End of September predictions

Our most recent analysis shows Tory still in the lead with 44% of the votes, followed by Doug Ford at 33% and Olivia Chow at 23%.

Our analytical approach allows us to take a closer, geographical look. Based on this, we see general support for Tory across the city, while Ford and Chow have more distinct areas of support.

This is still based on our original macro-level analysis, but it gives a good sense of where our agents’ support would sit (on average) at a local level.

Moving to Agent-Based Modeling

Given the caveats we outlined regarding macro-level voting modeling, we’re moving on to a totally different approach. Using something called agent-based modeling (ABM), we’re hoping to reach a point where we can not only predict elections, but also use the system to conduct studies on the effectiveness of various election models.

ABM can be defined simply as an individual-centric approach to model design, and has become widespread in multiple fields, from biology to economics. In such models, researchers define agents (e.g., voters, candidates, and media) each with various properties, and an environment in which such agents can behave and interact.

Examining systems through ABM seeks to answer the four key questions we set out in “What is PsephoAnalytics?”.

We’ll start to provide updates on our progress on the development of our system in the coming weeks.

Wards to watch

Based on updated poll numbers (per Threehundredeight.com as of September 16) – where John Tory has a commanding lead – we’re predicting that the wards to watch in the upcoming Toronto mayoral election are clustered in two areas that are, surprisingly, traditional strongholds for Doug Ford and Olivia Chow.

The first set is Etobicoke North & Centre (wards 1-4), traditional Ford territory. The second is in the south-west portion of downtown, traditional NDP territory: Parkdale-High Park, Davenport, Trinity-Spadina (two wards), and Toronto Danforth (respectively wards 14, 18-20, and 30).

As the election gets closer, we’ll provide more detailed predictions.

Toronto election data

As with any analytical project, we invested significant time in obtaining and integrating data for our neighbourhood-level modeling. The Toronto Open Data portal provides detailed election results for the 2003, 2006, and 2010 elections, which is a great resource. But, they are saved as Excel files with a separate worksheet for each ward. This is not an ideal format for working with R.

We’ve taken the Excel files for the mayoral-race results and converted them into a data package for R called toVotes. This package includes the votes received by ward and area for each mayoral candidate in each of the last three elections.
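As a sketch of how the package might be used, assuming it exposes a toVotes data frame with year, ward, area, candidate, and votes columns (check the GitHub page for the actual structure):

    # Load the package and tally ward-level totals per candidate
    library(toVotes)
    data(toVotes)

    totals <- aggregate(votes ~ year + ward + candidate,
                        data = toVotes, FUN = sum)
    head(subset(totals, year == 2010))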

If you’re interested in analyzing Toronto’s elections, we hope you find this package useful. We’re also happy to take suggestions (or code contributions) on the GitHub page.

First attempt at predicting the 2014 Toronto mayoral race

In our first paper, we describe the results of some initial modeling - at a neighbourhood level - of which candidates voters are likely to support in the 2014 Toronto mayoral race. All of our data is based upon publicly available sources.

We use a combination of proximity voter theory and statistical techniques (linear regression and principal-component analyses) to undertake two streams of analysis:

  1. Determining what issues have historically driven votes and what positions neighbourhoods have taken on those issues
  2. Determining which neighbourhood characteristics might explain why people favour certain candidates

In both cases we use candidates’ currently stated positions on issues and assign them scores from 0 (‘extreme left’) to 100 (‘extreme right’). While certainly subjective, there is at least internal consistency to such modeling.
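To make the proximity idea concrete, here is a toy sketch in R; the candidate scores and the distance-to-share conversion are ours for illustration, not the paper’s exact method:

    # Toy proximity-voting sketch: support falls off with ideological
    # distance on the same 0 ('extreme left') to 100 ('extreme right') scale
    candidate_pos <- c(Tory = 60, Ford = 80, Chow = 30)  # illustrative only

    proximity_share <- function(neighbourhood_pos, positions = candidate_pos) {
      w <- 1 / (1 + abs(positions - neighbourhood_pos))  # nearer => larger
      w / sum(w)                                         # normalize to shares
    }

    proximity_share(45)  # a centre-left neighbourhood splits mostly Tory/Chow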

This work demonstrates that significant insights into the upcoming mayoral election in Toronto can be obtained from an analysis of publicly available data.

We are now moving on to something completely different, where we use an agent-based approach to simulate entire elections. We are actively working on this now and hope to share our progress soon.

What is PsephoAnalytics?

Political campaigns have limited resources – both time and money – that should be spent on attracting the voters most likely to support their candidates. Identifying these voters can be critical to the success of a candidate.

Given the privacy of voting and the lack of useful surveys, there are few options for identifying individual voter preferences.

The goal of PsephoAnalytics* is to model voting behaviour in order to accurately explain campaigns (starting with the 2014 Toronto mayoral race). This means attempting to answer four key questions:

  1. What are the (causal) explanations for how election campaigns evolve – and how well can we predict their outcomes?
  2. What are the effects of (even simple) shocks to election campaigns?
  3. How can we advance our understanding of election campaigns?
  4. How can elections be better designed?

* Psephology (from the Greek psephos, for ‘pebble’, which the ancient Greeks used as ballots) deals with the analysis of elections.

Public service vs. academia

I recently participated in a panel discussion at the University of Toronto on the career transition from academic research to public service. I really enjoyed the discussion and there were many great questions from the audience. Here’s a brief summary of some of the main points I tried to make about the differences between academia and public service.

The major difference I’ve experienced involves a trade-off between control and influence.

As a grad student and post-doctoral researcher I had almost complete control over my work. I could decide what was interesting, how to pursue questions, who to talk to, and when to work on specific components of my research. I believe that I made some important contributions to my field of study. But, to be honest, this work had very little influence beyond a small group of colleagues who are also interested in the evolution of floral form.

Now I want to be clear about this: in no way should this be interpreted to mean that scientific research is not important. This is how scientific progress is made – many scientists working on particular, specific questions that are aggregated into general knowledge. This work is important and deserves support. Plus, it was incredibly interesting and rewarding.

However, the comparison of the influence of my academic research with my work on infrastructure policy is revealing. Roads, bridges, transit, hospitals, schools, courthouses, and jails all have significant impacts on the day-to-day experience of millions of people. Every day I am involved in decisions that determine where, when, and how the government will invest scarce resources into these important services.

Of course, this is where the control-influence trade-off kicks in. As an individual public servant, I have very little control over these decisions or how my work will be used. Almost everything I do involves medium-sized teams with members from many departments and ministries. This requires extensive collaboration, often under very tight time constraints with high profile outcomes.

For example, in my first week as a public servant I started a year-long process to integrate and enhance decision-making processes across 20 ministries and 2 agencies. The project team included engineers, policy analysts, accountants, lawyers, economists, and external consultants from all of the major government sectors. The (rather long) document produced by this process is now used to inform every infrastructure decision made by the province.

Governments contend with really interesting and complicated problems that no one else can or will consider. Businesses generally take on the easy and profitable issues, while NGOs are able to focus on specific aspects of issues. Consequently, working on government policy provides a seemingly endless supply of challenges and puzzles to solve, or at least mitigate. I find this very rewarding.

None of this is to suggest that either option is better than the other. I’ve been lucky to have had two very interesting careers so far, which have been at the opposite ends of this control-influence trade-off. Nonetheless, my experience suggests that an actual academic career is incredibly challenging to obtain and may require significant compromises. Public service can offer many of the same intellectual challenges with better job prospects and work-life balance. But, you need to be comfortable with the diminished control.

Thanks to my colleague Andrew Miller for creating the panel and inviting me to participate. The experience led me to think more clearly about my career choices and I think the panel was helpful to some University of Toronto grad students.

From brutal brooding to retrofit-chic

Our offices will be moving to this new space. I’m looking forward to actually working in a green building, in addition to developing green building policies.

The Jarvis Street project will set the benchmark for how the province manages its own building retrofits. The eight-month-old Green Energy Act requires Ontario government and broader public-sector buildings to meet a minimum LEED Silver standard – Leadership in Energy and Environmental Design. Jarvis Street will also be used to promote an internal culture of conservation, and to demonstrate the province’s commitment to technologically advanced workspaces that are accessible, flexible and that foster staff collaboration and creativity, Ms. Robinson explains.


Emacs Installation on Windows XP

I spend a fair bit of time with a locked-down Windows XP machine. Fortunately, I’m able to install Emacs, which provides capabilities that I find quite helpful. I’ve had to reinstall Emacs a few times now, so for my own benefit (and perhaps yours) here are the steps I follow:

  1. Download EmacsW32 patched and install in my user directory under Apps

    Available from http://ourcomments.org/Emacs/EmacsW32.html

  2. Set the environment variable for HOME to my user directory

    Right click on My Computer, select the Advanced tab, and then Environment Variables.

    Add a new variable and set Variable name to HOME and Variable value to C:\Documents and Settings\my_user_directory

  3. Download technomancy’s Emacs Starter Kit

    Available from http://github.com/technomancy/emacs-starter-kit

    Extract archive into .emacs_d in %HOME%

    Copy my specific emacs settings into .emacs_d\my_user_name.el

Canada LEED projects

The CaGBC maintains a list of all the registered LEED projects in Canada. This is a great resource, but rather awkward for analyses. I’ve copied these data into a DabbleDB application with some of the maps and tabulations that I frequently need to reference.

Here, for example, is a map of the density of LEED projects in each province, while here is a rather detailed view of the kinds of projects across provinces. There are several other views available. Are there any others that might be useful?

Every day is ‘science day’

I was given an opportunity to propose a measure to clarify how and on what basis the federal government allocates funds to STI - a measure that would strengthen relations between the federal government and the STI community by eliminating misunderstandings and suspicions on this point. In short, my proposal was that Ottawa direct its Science, Technology and Innovation Council to do three things:

To provide an up-to-date description of how these allocation decisions have been made in the past;

To identify the principles and sources of advice on which such decisions should be based;

To recommend the most appropriate structure and process - one characterized by transparency and openness - for making these decisions in the future.

These are reasonable suggestions from Preston Manning: be clear about why and how the Federal government funds science and technology.

Of course I may not agree with the actual decisions made through such a process, but at least I would know why the decisions were made. The current process is far too opaque and confused for such critical investment decisions.

Math and the City

judson.blogs.nytimes.com/2009/05/1…

A good read on the mathematics of scaling in urban patterns. I had looked into using the Bettencourt paper (cited in this article) for making allocation decisions. The trick is moving from the general patterns observed in urban scaling to specific recommendations for where to invest in new infrastructure. This is particularly challenging in the absence of good, detailed data on the current infrastructure stock. We’ve made good progress on gathering some of this data, and it might be worth revisiting this scaling relationship.
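For reference, the scaling relationship in question takes a simple power-law form (from memory of the Bettencourt et al. results; the exponents are approximate):

    Y = Y0 * N^beta

where Y is an urban indicator, N is the city’s population, and beta is sublinear (roughly 0.8) for infrastructure, reflecting economies of scale, and superlinear (roughly 1.15) for socioeconomic outputs like income and innovation.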

Mama Earth Organics

I’m certain that paying attention to where my food comes from is important. Food production influences my health, has environmental consequences, and affects both urban and rural design. Ideally, I would develop relationships with local farmers, carefully choose organic produce, and always consider broad environmental impacts. Except, I like to spend time with my young family, try to get some exercise, and have more than enough commitments through work to actually spend this much effort on food choices. So, I’ve outsourced this process to the excellent Mama Earth Organics.

Every week a basket of fresh organic and/or local fruit and vegetables arrives on our doorstep. Part of the fun of this service is that different items arrive each week, which diversifies our weekly food routine. But, we always know what’s coming several days in advance, so we can plan our meals well ahead of time. After over a year of service, we’ve only had a single complaint about quality and this was handled very quickly by Mama Earth with a full refund plus credit.

We’ve found the small basket is sufficient for two adults and a picky four-year-old. We’ve also added in some fresh bread from St. John’s Bakery, which has been consistently delicious and lasts through most of the week.

Goodyear's Religious Beliefs vs. Evolution

Our minister of science continues to argue that his unwillingness to endorse the theory of evolution is not relevant to science policy. As quoted by the Globe and Mail:

My view isn’t important. My personal beliefs are not important.

I find this amazing. How can the minister of science’s views on the fundamental unifying theory of biology not be important?

I don’t expect him to understand the details of evolutionary theory or to have all of his personal beliefs vetted and religious views muted. However, I do expect him – as minister – to champion and support Canadian science, especially basic research. When our minister refuses to acknowledge the fundamental discoveries of science, our reputation is diminished.

There is also a legitimate – though rather exaggerated – concern that the minister’s views on the truth can influence policy and funding decisions. The funding councils are more than sufficiently independent to prevent any undue ministerial influence here. The real problem is an apparent distrust or lack of interest in basic research from the federal government.

Death Sentences Review

Death Sentences by Don Watson is a wonderful book – simultaneously funny, scary, and inspiring – that describes how “clichés, weasel words, and management-speak” are infecting public language.

The humour comes from Watson’s acerbic commentary and fantastic scorn for phrases like:

Given the within year and budget time flexibility accorded to the science agencies in the determination of resource allocation from within their global budget, a multi-parameter approach to maintaining the agencies budgets in real terms is not appropriate.

The book is scary because it makes a strong argument for the dangers of this type of language. Citizens become confused and uninterested, customers become jaded, and people lose their love for language. Also, as a public servant I see this kind of language every day and often find myself struggling to avoid banality and clichés (not to mention bullet points). We need more forceful advocates like Don Watson to call out politicians and corporations for abusing our language. This book certainly makes me want to try harder. And what’s more inspiring than struggling for a good cause against long odds?

The book also has a great glossary of typical weasel words with possible synonyms. So, I’m keeping the book in my office for quick reference.

Omnivore

After seventeen years as a vegetarian, I recently switched back to an omnivore. My motivation for not eating meat was environmental, since, on average, a vegetarian diet requires much less land, water, and energy. This is still the right motivation, but over the last year or so I’ve been rethinking my decision to not eat meat.

My concern was that I’d stopped paying attention to my food choices and a poorly considered vegetarian diet can easily yield a bad environmental outcome. In particular, modern agriculture now takes 10 calories of fossil fuel energy to produce a single calorie of food. This is clearly unsustainable. We cannot rely on non-renewable, polluting resources for our food, nor can we continue to transport food great distances – even if it is only vegetables. My unexamined commitment to a vegetarian diet was no longer consistent with environmental sustainability.

I think the solution is to eat local, organic food. This also requires eating seasonal food, but Canadian winters are horrible for local vegetables. This left me wanting to support local agriculture, but unable to restrict my diet. Returning to my original motivation to choose environmentally appropriate food convinced me it was time to return to being an omnivore. My new policy is to follow Michael Pollan’s advice: “Eat food. Not too much. Mostly plants.” In addition, I’ll favour locally grown, organic food and include small amounts of meat – which I hope will predominantly come from carefully considered and sustainable sources. I’ve also decided that when faced with a dilemma of choosing either local or organic, I’ll choose local. We need to support local agriculture, and I’ll give up organic for local if necessary. Of course, in the majority of cases both local and organic options are available, and I’ll choose them.

This is a big change and I look forward to exploring food again.

Instapaper Review

Instapaper is an integral part of my web-reading routine. Typically, I have a few minutes early in the morning and scattered throughout the day for quick scans of my favourite web sites and news feeds. I capture anything worth reading with Instapaper’s bookmarklet to create a reading queue of interesting articles. Then with a quick update to the iPhone app this queue is available whenever I find longer blocks of time for reading, particularly during the morning subway ride to work or late at night.

I also greatly appreciate Instapaper’s text view, which removes all the banners, ads, and link lists from the articles to present a nice and clean text view of the content only. I often find myself saving an article to Instapaper even when I have the time to read it, just so I can use this text-only view.

Instapaper is one of my favourite tools and the first iPhone application I purchased.

Election 2008

Like most Canadians, I’ll be at the polls today for the 2008 Federal Election.

In the past several elections, I’ve cast my vote for the party with the best climate change plan. The consensus among economists is that any credible plan must set a price on carbon emissions. My personal preference is for a predictable and transparent price to influence consumer spending, so I favour a carbon tax over a cap-and-trade. Enlightening discussions of these issues are available at Worthwhile Canadian Initiative, Jeffrey Simpson’s column at the Globe and Mail, or his book Hot Air.

Until now this voting principle has meant a vote for the Green Party, which supports a tax shift from income to pollution. My expectation for this vote was not that the Green Party would gain any direct political power, but rather that their environmental plan would gain political profile and convince the Liberals and Conservatives to improve their plans. A carbon tax is now a central component of this year’s Liberal platform with the Green Shift. Both the Conservative Party and NDP support a limited cap-and-trade system covering portions of the economy, with the Conservatives backing dubious “intensity-based” targets.

Although I quite like the central components of the Green Shift, I’m not too keen on the distracting social engineering aspects of the plan. Furthermore, the Liberals have certainly failed to implement any of their previous climate change plans while in power. Nonetheless, I do think (hope?) they will follow through this time, and I prefer supporting a well-conceived plan that may not be implemented over a poor plan. Despite my support for this plan, I think the Liberals have done a rather poor job of explaining the Green Shift and have run a disappointing campaign.

In the end, my principle will hold. I’m voting for the Green Shift and, reluctantly, the Liberal Party of Canada.

A Map of the Limits of Statistics

In this article Nassim Nicholas Taleb applies his Black Swan idea to the current financial crisis and describes the strengths and weaknesses of econometrics.

For us the world is vastly simpler in some sense than the academy, vastly more complicated in another. So the central lesson from decision-making (as opposed to working with data on a computer or bickering about logical constructions) is the following: it is the exposure (or payoff) that creates the complexity —and the opportunities and dangers— not so much the knowledge (i.e., statistical distribution, model representation, etc.). In some situations, you can be extremely wrong and be fine, in others you can be slightly wrong and explode. If you are leveraged, errors blow you up; if you are not, you can enjoy life.

Via Arts and Letters Daily

Globe and Mail: Incremental man

A detailed and fascinating portrait of Stephen Harper. As the article points out:

The core of any government reflects the personality of the prime minister, because everyone in the system responds to his or her ways of thinking, personality traits, political ambitions and policy preferences. Know the prime minister; know the government.

Harper has been an enigma and learning more about his personal policies and approach to governance is very useful while thinking about the upcoming election.

A general summary of the article comes from near the end:

And the long-distance runner – bright, intense, strategic, cautious and confident in every stride – has certainly got things done, from merging two parties, to winning a minority government, to fulfilling most of his campaign promises.

He also has pursued two broad changes in the nature of the federal government: giving the provinces more running room by keeping Ottawa out of some of their affairs and giving individuals a bit more money in the form of tax reductions, credits and child-care cheques.

And yet, despite these policies that he assumed would be popular, despite all the problems on the Liberal side, despite raising far more money, despite governing in mostly excellent economic times, despite stroking Quebec, despite gearing up for elections, his Conservatives have yet to break through decisively.

Patrick Watson

Reading up on the upcoming Polaris Music Prize reminded me of Patrick Watson, last year’s winner of the prize. His “Close to Paradise” album is inventive, with intriguing lyrics, unique sounds, and an often driving piano track. Particular stand-out tracks are Luscious Life, Drifters, and The Great Escape. The album is well worth considering, and I’m looking forward to listening to the short-listed artists for this year’s prize.

Stuck in the middle

A recent press release from the federal government entitled “Making a Strong Canadian Economy Even Stronger” contains a sentence that struck me as odd.

As a result of actions taken in Budget 2007, Canada’s marginal effective tax rate (METR) on new business investment improved from third-highest in the G7 to third-lowest by 2011.

Fair enough, tax rates are projected to decline. But notice how they phrase the context of this reduction. Moving from third highest to third lowest is, in a list of seven countries, a change from third to fifth. Not a dramatic change – we were near the middle and we still are.

Creationists and their old tricks

TVO’s The Agenda had an interesting show on the debate between evolutionary biology and creationism. Jerry Coyne provided a great overview of evolution and a good defence during the debate.

The debate offered a great illustration of the intellectual vacuity that characterises creationism (aka intelligent design). Paul Nelson offers up an article by Doolittle and Bapteste as proof that Darwinism is unravelling. I suspect he hopes no one will read past the abstract to discover the reasonable debate scientists are having about the universality of a single tree of life. He certainly doesn’t want you to notice that the entire article is couched within evolutionary theory and not once does it claim that Darwinism has been falsified.

Here’s the hypothesis that Doolittle and Bapteste are evaluating:

“that there should be a universal TOL [tree of life], dichotomously branching all of the way down to a single root.” p2045

They then establish that gene transfer often occurs between lineages, particularly among prokaryotes, and that consequently this universal tree of life does not exist. Certainly this complicates the construction of molecular trees and shows the importance of pluralism of mechanism in biology. But they are careful to qualify the overall significance of this work.

“To be sure, much of evolution has been tree-like and is captured in hierarchical classifications.” p2048

“…it would be perverse to claim that Darwin’s TOL hypothesis has been falsified for animals (the taxon to which he primarily addressed himself) or that it is not an appropriate model for many taxa at many levels of analysis” p2048

And the crucial quote in this context:

“Holding onto this ladder of pattern […] should not be an essential element in our struggle against those who doubt the validity of evolutionary theory, who can take comfort from this challenge to the TOL only by a willful misunderstanding of its import.” p2048

Stikkit from the command line

Note – This post has been updated from 2007-03-20 to describe new installation instructions.

Overview

I’ve integrated Stikkit into most of my workflow and am quite happy with the results. However, one missing piece is quick access to Stikkit from the command line. In particular, a quick list of my undone todos would be quite useful without having to load up a web browser. To this end, I’ve written a Ruby script for interacting with Stikkit. As I mentioned, my real interest is in listing undone todos, but I decided to make the script more general, so you can ask for specific types of stikkits and restrict them with specific parameters. Also, since the Stikkit API is so easy to use, I added a method for creating new stikkits.

Usage

The general use of the script is to list stikkits of a particular type, filtered by a parameter. For example,

ruby stikkit.rb --list calendar dates=today

will show all of today’s calendar events. While,

ruby stikkit.rb -l todos done=0

lists all undone todos. The use of -l instead of --list is simply a standard convenience. Furthermore, since this last example comprises almost all of my use for this script, I added a convenience method to get all undone todos

ruby stikkit.rb -t

A good way to understand stikkit types and parameters is to keep an eye on the url while you interact with Stikkit in your browser. To create a new stikkit, use the --create flag,

ruby stikkit.rb -c 'Remember me.'

The text you pass to stikkit.rb will be processed as usual by Stikkit.

Installation

Grab the script from the Google Code project and put it somewhere convenient. Making the file executable and adding it to your path will cut down on the typing. The script reads from a .stikkit file in your home directory that contains your username and password. Modify this template and save it as ~/.stikkit


     ---
     username: me@domain.org 
     password: superSecret 

The script also requires the atom gem, which you can grab with

gem install atom

I’ve tried to include some flexibility in the processing of stikkits. So, if you don’t like using atom, you can switch to a different format provided by Stikkit. The text type requires no gems, but makes picking out pieces of the stikkits challenging.

Feedback

This script serves me well, but I’m interested in making it more useful. Feel free to pass along any comments or feature requests.