Political psychologists have long held that over-simplified “rational” models of voters do not accurately predict their actual behaviour. What most behavioural researchers have found is that decision-making (e.g., voting) often boils down to emotional, unconscious factors. So, in building our voting agents, we will need to at least:
include multiple issue perspectives, not just a simple evaluation of “left-right”;
include data for non-policy factors that could determine voting; and
not prescribe values to our agents beyond what we can empirically derive.
Given that we are unable to peek into voters’ minds (and remember: we are trying to avoid using polls[1]), we need data for (or proxies for) factors that might influence someone’s vote. So, we gathered (or created) and joined detailed data for the 2006, 2008, and 2011 Canadian federal elections (as well as the 2015 election, which will be used for predictions).
In a new paper, we discuss the influence that multiple factors – such as “leader likeability”, incumbency, “star” status, demographics, and policy platforms – may have on voting outcomes, and use these results to predict the upcoming federal election in Toronto ridings.
At a high level, we find that:
Almost all variables are statistically significant.
Being either a star candidate or an incumbent can boost a candidate’s share of the vote by 21%, but being both an incumbent and a star candidate does not give a candidate an incremental increase. Either effect is equivalent to that of belonging to a party (21%).
Each point of leader likeability is associated with a 0.3-point change in the proportion of votes received by a candidate. So, a leader who essentially polls the same as their party yields their Toronto-based candidates about 14 points.
The relationships between age, gender, family income, and the proportion of votes vary widely across the parties (as expected). For example, higher family income tends to increase support for the Conservatives (0.005/$10,000) while decreasing support for the other two major parties by roughly the same magnitude.
Policy matters, but only slightly, and overall only on economic and environmental issues.
With our empirical results, we can turn to predicting the 2015 federal election in Toronto ridings.
It turns out that our Toronto-wide results are fairly in line with recent Toronto-specific polling results (weighted by age and sample size) – though we’ll see how right we all are come election day – which means that there may be some inherent truth in the coefficients we have found.
Given that we haven’t used polls or included localized details or party platforms, these results are surprisingly good. The seeming shift from Liberal to Conservative is something that we’ll need to look into further. It is likely highlighting an issue with our data: namely, that we only have three years of detailed federal elections data, and these elections have seen some of the best showings for the Conservatives (and their predecessors) in Ontario since the end of the Second World War (the exceptions being the late 1950s with Diefenbaker, 1979 with Joe Clark, and 1984 with Brian Mulroney), along with some of the worst for the Liberals over the same time frame. That is, our variables are not picking up a (cyclical) reversion to the mean, but we might investigate the cycle itself.
Nonetheless, we set out to understand (both theoretically and empirically) how to predict an election while significantly limiting the use of polls, and it appears that we are at least on the right track.
[1] This is true for a number of reasons: first, we want to be able to simulate elections, and therefore would not always have access to polls; second, we are trying to do something fundamentally different by observing behaviour instead of asking people questions, which often leads to lying (e.g., social desirability biases: see the “Bradley effect”); third, while polls in aggregate are generally good at predicting outcomes, individual polls are highly volatile.
Given that we are unable to peek into voters’ minds (remember: we are trying to avoid using polls as much as possible), we need data (or proxies) for factors that might influence someone’s vote. We gathered (or created) and joined data for the 2006, 2008, and 2011 Canadian federal elections (as well as the 2015 election, which will be used for predictions) for Toronto ridings.
We’ll be explaining all this in more detail next week, but for now, here are some basics:
We’ve assigned leader “likeability” scores to the major party leaders in each election, using polls that ask questions about leadership characteristics and formulaically comparing them to party-level polls from around the same time. This provides a value for (or at least a proxy of) how much influence the party leader had on their party’s showing in the polls, and should account for much of the party-level variation that we see from year to year. (We also use party identifiers to identify a “base”.)
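To make the idea concrete, here is one plausible form such a comparison could take in R – the formula and all numbers below are invented for illustration, not the ones we actually use:

```r
# One plausible form of the comparison (invented, for illustration):
# the leader's rating on leadership-characteristic questions relative
# to the party's poll standing at around the same time.
leader_rating <- 33   # e.g., share rating the leader favourably
party_support <- 30   # party's poll standing at the same time

likeability <- 100 * leader_rating / party_support
likeability   # > 100 suggests the leader outpolls their party
```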
For all 366 candidates across the three elections, we identify two things: whether they are an incumbent, and whether they are a “star” candidate, by which we mean generally known outside of their riding. This yields 64 candidate-year incumbents (i.e., an individual could be an incumbent in all three elections) and 29 candidate-year stars.
Regressing these data against the proportion of votes received across ridings yields some interesting results. First: party, leader likeability, star candidate, and incumbency are all statistically significant (as is the interaction of star candidate and incumbency). This isn’t a surprise, given the literature around what it is that drives voters’ decisions. (Note that we haven’t yet included demographics or party platforms.)
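For readers following along in R, the model looks roughly like the sketch below, with a stand-in data frame (invented values) in place of our actual candidate-year dataset:

```r
# Stand-in data: one row per candidate-year (366 in total, matching
# the counts above); all values here are invented.
set.seed(1)
candidates <- data.frame(
  vote_share  = runif(366, 0, 60),
  party       = sample(c("CPC", "LPC", "NDP"), 366, replace = TRUE),
  likeability = rnorm(366, 100, 15),      # invented scale
  star        = rbinom(366, 1, 29 / 366),
  incumbent   = rbinom(366, 1, 64 / 366)
)

# Party, likeability, star status, incumbency, and the
# star-incumbency interaction, regressed on vote share.
fit <- lm(vote_share ~ party + likeability + star * incumbent,
          data = candidates)
summary(fit)
```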
Breaking down the results: Being a star candidate or an incumbent (but not both) adds about 20 points right off the top, so name recognition obviously matters a lot. Likeability matters too; a leader who essentially polls the same as their party yields candidates about 14 points. (As an example of what this means, Stephane Dion lost the average Liberal candidate in Toronto about 9 points relative to Paul Martin. Alternatively, in 2011, Jack Layton added about 16 points more to NDP candidates in Toronto than Michael Ignatieff did for equivalent Liberal candidates.) Finally, party base matters too: for example, being an average Liberal candidate in Toronto adds about 17 points over the equivalent NDP candidate. (We expect some of this will be explained by demographics and party platforms.)
To be clear, these are average results, so we can’t yet use them effectively for predicting individual riding-level races (that will come later). But, if we apply them to all 2015 races in Toronto and aggregate across the city, we would predict voting proportions very similar to the results of a recent poll by Mainstreet (if undecided voters split proportionally).
Given that we haven’t used polls or included localized details or party platforms, these results are amazing, and give us a lot of confidence that we’re making fantastic progress in understanding voter behaviour (at least in Toronto).
Analyzing the upcoming federal election requires collecting and integrating new data. This is often the most challenging part of any analysis, and we’ve committed significant effort to obtaining good data for federal elections in Toronto’s electoral districts.
Clearly, the first place to start was with Elections Canada and the results of previous general elections. These are available for download as collections of Excel files, which aren’t the most convenient format. So, our toVotes package has been updated to include results from the 2006, 2008, and 2011 federal elections for electoral districts in Toronto. The toFederalVotes data frame provides the candidate’s name, party, whether they were an incumbent, and the number of votes they received by electoral district and poll number. Across the three elections, this amounts to 82,314 observations.
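As a quick sketch of how these data might be used – note that the GitHub path and the column names below are assumptions for illustration, so check the package documentation:

```r
# Hypothetical usage; the install path and the column names
# (year, party, votes) are assumed, not confirmed.
# remotes::install_github("psephoanalytics/toVotes")
library(toVotes)

# Total votes by party and election year across Toronto districts
aggregate(votes ~ year + party, data = toFederalVotes, FUN = sum)
```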
Connecting these voting data with other characteristics requires knowing where each electoral district and poll are in Toronto. So, we created spatial joins among datasets to integrate them (e.g., combining demographics from census data with the vote results). Shapefiles for each of the three federal elections are available for download, but the location identifiers aren’t a clean match between the Excel and shapefiles. Thanks to some help from Elections Canada, we were able to translate the location identifiers and join the voting data to the election shapefiles. This gives us close to 4,000 poll locations across 23 electoral districts in each year. We then used the census shapefiles to aggregate these voting data into 579 census tracts. These tracts are relatively stable and give us a common geographical classification for all of our data.
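The aggregation step looks roughly like the following sketch, assuming the sf package and hypothetical file names (the Elections Canada identifier translation is omitted):

```r
library(sf)

# Hypothetical file names; the polls layer is assumed to carry
# party and votes columns after the identifier translation.
polls  <- st_read("federal_polls_2011.shp")
tracts <- st_read("census_tracts_2011.shp")

# Assign each poll to the census tract containing its centroid,
# then sum the votes within each tract and party.
polls_ct <- st_join(st_centroid(polls), tracts["CTUID"])
votes_ct <- aggregate(votes ~ CTUID + party,
                      data = st_drop_geometry(polls_ct), FUN = sum)
```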
This work is currently in the experimental fed-geo branch of the toVotes package and will be pulled into the main branch soon. Now, with votes aggregated into census tracts, we can use the census data for Toronto in our toCensus package to explore how demographics affect voting outcomes.
Getting the data to this point was more work than we expected, but well worth the effort. We’re excited to see what we can learn from these data and look forward to sharing the results with you.
A number of people have been asking whether we are going to analyze the upcoming federal election on October 19, as we did for the Toronto mayoral race last year. The truth is, we never stopped working after the mayoral race, and we are back with a vengeance for the next five weeks.
We have gathered tonnes of new data and refined our methodology. We have also established a new domain name: psephoanalytics.ca. You can still subscribe to email updates here, or follow us on Twitter @psephoanalytics. Finally, if you’d like to chat directly, please email us at psephoanalytics@gmail.com.
In the meantime, stay tuned for lots of updates over the coming weeks, culminating in predictions for Toronto ridings prior to October 19.
The first (and long) step in moving towards agent-based modeling is the creation of the agents themselves. While fictional, they must represent reality – meaning they need to behave like actual people. The main issue in voter modeling, however, is that since voting is private, we do not know how individuals behave, only how collections of voters behave – and we do not want our agents to all behave in exactly the same way. That is why one of the key elements of our work is the ability to create meaningful differences among our agents – particularly when it comes to issue positions and political engagement.
The obvious difficulty is how to do that. In our model, many of our agents’ characteristics are limited to values between 0 and 1 (e.g., political positions, weights on given issues). Many standard distributions, such as the normal, would be cut off at these extremes, creating unrealistic “spikes” of extreme behaviour. We also cannot use uniform distributions, as the likelihood of individuals in a group looking somewhat the same (i.e., more around an average) seems much more reasonable than them looking uniformly different.
Which brings us to the β distribution. In a new paper, we discuss applying this family of distributions to voter characteristics. While there is great diversity in the potential shapes of these distributions – granting us the flexibility we need – in (likely) very extreme cases the shape will not “look like” what we would expect. Therefore, one of our goals will be to somewhat constrain our selection of fixed values for α and β, based on as much empirical data as possible, to ensure we get this balance right.
A selection of α–β combinations generates “useful” distributions.
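These shapes are easy to explore in base R; the α–β pairs below are illustrative examples, not our fitted values:

```r
# Four illustrative shapes from the beta family on [0, 1].
curve(dbeta(x, 2, 2), from = 0, to = 1, ylab = "density")  # broad consensus
curve(dbeta(x, 2, 5), add = TRUE, lty = 2)  # skewed toward 0
curve(dbeta(x, 5, 2), add = TRUE, lty = 3)  # skewed toward 1
curve(dbeta(x, 9, 9), add = TRUE, lty = 4)  # tight consensus
```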
As the next Federal General Election gets closer, we’re turning our analytical attention to how the election might play out in Toronto. The first step, of course, is to gather data on prior elections. So, we’ve updated our toVotes data package to include the results of the 2008 and 2011 federal elections for electoral districts in Toronto.
This dataset includes the votes received by each candidate in each district and poll in Toronto. We also include the party affiliation of each candidate and whether they are an incumbent. These data are currently stored in a separate dataset from the mayoral results, since the geographies of the electoral districts and wards aren’t directly comparable. We’ll work on integrating these datasets more closely and adding further election results over the coming weeks.
Hopefully, the general availability of cleaned and integrated datasets such as this one will help generate more analytically grounded discussions of the upcoming election.
Turnout is often seen as an (at least easily measured) metric of the health of a democracy, as voting is a primary act of civic engagement. However, turnout rates continue to decline across many jurisdictions[i]. This is certainly true in Canada and Ontario.
From the PsephoAnalytics perspective, accurately predicting the results of elections (particularly when using an agent-based model (ABM) approach) requires understanding what drives the decision to vote at all, instead of simply staying home.
If this can be done, we would not only improve our estimates in an empirical (or at least heuristic) way, but might also be able to make normative statements about elections. That is, we hope to be able to suggest ways in which turnout could be improved, and whether (or how much) that would matter.
In a new paper we start to investigate the history of turnout in Canada and Ontario, and review what the literature says about the factors associated with turnout, in an effort to help “teach” our agents when and why they “want” to vote. More work will certainly be required here, but this provides a very good start.
We value constructive feedback and continuous improvement, so we’ve taken a careful look at how our predictions held up for the recent mayoral election in Toronto.
The full analysis is here. The summary is that our estimates weren’t too bad on average: the distribution of errors is centered on zero (i.e., not biased) with a small standard error. But, on-average estimates are not sufficient for the types of prediction we would like to make. At a ward-level, we find that we generally overestimated support for Tory, especially in areas where Ford received significant votes.
We understood that our simple agent-based approach wouldn’t be enough. Now we’re particularly motivated to gather up much more data to enrich our agents' behaviour and make better predictions.
The results are in, and our predictions performed reasonably well on average (we were off by an average of 4% per candidate). Ward-by-ward predictions were a little more mixed, though, with some wards being bang on (notably Tory’s results) and some being way off – such as in northern Scarborough and Etobicoke. (For what it’s worth, the polls were a ways off in this regard too.) This mostly comes down to our agents not being different enough from one another. We knew building the agents would be the hardest part, and now we have proof!
Regardless, we still think that the agent-based modeling approach is the most appropriate for this kind of work – but we obviously need a lot more data to teach our agents what they believe. So, we’re going to spend the next few months incorporating other datasets (e.g., historical federal and provincial elections, as well as councillor-level data from the 2014 Toronto election). The other piece that we need to focus on is turnout. We knew our turnout predictions were likely the minimum for this election, but aren’t yet able to model a more predictive metric, so we’ll be conducting a study into that as well.
Finally, we’ll provide detailed analysis of our predictions once all the detailed official results become available.
Our final predictions have John Tory winning the 2014 mayoral election in Toronto with a plurality of 46% of the votes, followed by Doug Ford (29%) and Olivia Chow (25%). We also predict turnout of at least 49% across the city, but there are differences in turnout among each candidate’s supporters (with Tory’s supporters being the most likely to vote by a significant margin – which is why our results are more in his favour than recent polls). We predict support for each candidate will come from different pockets of the city, as can be seen on the map below.
These predictions were generated by simulating the election ten times, each time sampling one million of our representative voters (whom we created) for their voting preferences and whether they intend to vote.
Each representative voter has demographic characteristics (e.g., age, sex, income) in accordance with local census data, and lives in a specific ‘neighbourhood’ (i.e., census tract). These attributes helped us assign them political beliefs – and therefore preferences for candidates – as well as political engagement scores that come from various studies of historical turnout (from the likes of Elections Canada). The latter allows us to estimate the likelihood of each specific agent actually casting a ballot.
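A stripped-down sketch of one such run in R, with invented preference shares and turnout probabilities standing in for our modelled values:

```r
# Stand-in agents; the preference shares and turnout probabilities
# here are invented, not our modelled values.
set.seed(2014)
agents <- data.frame(
  preference   = sample(c("Tory", "Ford", "Chow"), 1e4, replace = TRUE,
                        prob = c(0.46, 0.29, 0.25)),
  turnout_prob = runif(1e4, 0.3, 0.7)
)

one_run <- function(agents, n = 1e6) {
  drawn <- agents[sample(nrow(agents), n, replace = TRUE), ]
  voted <- drawn[runif(n) < drawn$turnout_prob, ]  # who casts a ballot
  prop.table(table(voted$preference))              # simulated vote shares
}

replicate(10, one_run(agents))  # ten simulated elections
```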
We’ll shortly also release a ward-by-ward summary of our predictions.
In the end, we hope this proof-of-concept proves to be more refined (and therefore more useful in the long term) than polling data. As the model becomes more sophisticated, we’ll be able to do scenario testing and study other aspects of campaigns.
As promised, here is a ward-by-ward breakdown of our final predictions for the 2014 mayoral election in Toronto. We have Tory garnering the most votes in 33 wards for sure, plus likely another 5 in close races. Six wards are “too close to call”, with three barely leaning to Tory (38, 39, and 40) and three barely leaning to Ford (8, 35, and 43). We’re not predicting Chow will win in any ward, but will come second in fourteen.
The first (and long) step in moving towards agent-based modeling is the creation of the agents themselves. While fictional, they must be representative of reality – meaning they need to behave like actual people might.
In developing a proof of concept of our simulation platform (which we’ll lay out in some detail soon), we’ve created 10,000 agents, drawn randomly from the 542 census tracts (CTs) that make up Toronto per the 2011 Census, proportional to the actual population by age and sex. (CTs are roughly “neighbourhoods”.) So, for example, if 0.001% of the population of Toronto are male, aged 43, living in a CT on the Danforth, then roughly 0.001% of our agents will have those same characteristics. Once the basic agents are selected, we assign (for now) the median household income from the CT to the agent.
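In R, the proportional draw looks roughly like this, with an invented census table standing in for the real CT-level counts:

```r
# An invented census table standing in for the real CT-level counts.
set.seed(1)
census <- expand.grid(ct = 1:542, sex = c("F", "M"), age = 18:90)
census$pop <- rpois(nrow(census), 40)

# Draw 10,000 agents with probability proportional to population,
# so agent demographics mirror the city's.
idx    <- sample(nrow(census), 10000, replace = TRUE,
                 prob = census$pop / sum(census$pop))
agents <- census[idx, c("ct", "sex", "age")]
```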
But what do these agents believe, politically? For that we take (again, for now) a weighted compilation of relatively recent polls (10 in total, having polled close to 15,000 people since Doug Ford entered the race), averaged by age/sex/income group/region combinations (420 in total). These give us average support for each of the three major candidates (plus “other”) by agent type, which we then randomly sample (by proportion of support) and use to assign a Left-Right score (0-100) as we did in our other modeling.
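For a single agent type, the assignment might look like the following sketch (the support shares and Left-Right centres are invented):

```r
# Invented support shares for one age/sex/income/region combination.
set.seed(1)
support <- c(Tory = 0.42, Ford = 0.31, Chow = 0.22, Other = 0.05)

# Sample a preferred candidate by proportion of support, then draw a
# Left-Right score (0-100) around an invented candidate centre.
pref    <- sample(names(support), 1, prob = support)
centres <- c(Tory = 60, Ford = 75, Chow = 30, Other = 50)
lr      <- min(100, max(0, rnorm(1, centres[pref], 8)))
```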
This is somewhat akin to polling, except we’re (randomly) assigning these agents what they believe rather than asking, such that it aggregates back to what the polls are saying, on average.
Next, we take the results of an Elections Canada study on turnout by age/sex that allows us to similarly assign “engagement” scores to the agents. That is, we assign (for now) the average turnout by age/sex group accordingly to each agent. This gives us a sense of likely turnout by CT (see map below).
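The engagement assignment is then a simple lookup, sketched here with invented rates standing in for the Elections Canada figures:

```r
# Invented average turnout rates by age group and sex.
avg_turnout <- data.frame(
  age_group = rep(c("18-24", "25-44", "45-64", "65+"), each = 2),
  sex       = rep(c("F", "M"), times = 4),
  turnout   = c(0.39, 0.36, 0.52, 0.49, 0.64, 0.61, 0.71, 0.69)
)

# Each agent inherits the average turnout of their age/sex group.
agent <- data.frame(age_group = "65+", sex = "F")
merge(agent, avg_turnout)$turnout   # this agent's engagement score
```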
There is much more to go here, but this forms the basis of our “voter” agents. Next, we’ll turn to “candidate” agents, and then on to “media” agents.
Our most recent analysis shows Tory still in the lead with 44% of the votes, followed by Doug Ford at 33% and Olivia Chow at 23%.
Our analytical approach allows us to take a closer, geographical look. Based on this, we see general support for Tory across the city, while Ford and Chow have more distinct areas of support.
This is still based on our original macro-level analysis, but gives a good sense of where our agents’ support would be (on average) at a local level.
Given the caveats we outlined regarding macro-level voting modeling, we’re moving on to a totally different approach. Using something called agent-based modeling (ABM), we’re hoping to reach a point where we can both predict elections and use the system to conduct studies on the effectiveness of various election models.
ABM can be defined simply as an individual-centric approach to model design, and has become widespread in multiple fields, from biology to economics. In such models, researchers define agents (e.g., voters, candidates, and media) each with various properties, and an environment in which such agents can behave and interact.
Examining systems through ABM seeks to answer four questions:
Empirical: What are the (causal) explanations for how systems evolve?
Heuristic: What are the outcomes of (even simple) shocks to the system?
Method: How can we advance our understanding of the system?
Normative: How can systems be designed better?
We’ll start to provide updates on the development of our system in the coming weeks.
Based on updated poll numbers (per Threehundredeight.com as of September 16) – where John Tory has a commanding lead – we’re predicting that the wards to watch in the upcoming Toronto mayoral election are clustered in two areas that are, surprisingly, traditional strongholds for Doug Ford and Olivia Chow.
The first set are Etobicoke North & Centre (wards 1-4), traditional Ford territory. The second are in the south-west portion of downtown, traditional NDP territory, specifically Parkdale-High Park, Davenport, Trinity-Spadina (x2), and Toronto Danforth (respectively wards 14, 18-20, and 30).
As the election gets closer, we’ll provide more detailed predictions.
As with any analytical project, we invested significant time in obtaining and integrating data for our neighbourhood-level modeling. The Toronto Open Data portal provides detailed election results for the 2003, 2006, and 2010 elections, which is a great resource. But, they are saved as Excel files with a separate worksheet for each ward. This is not an ideal format for working with R.
We’ve taken the Excel files for the mayoral-race results and converted them into a data package for R called toVotes. This package includes the votes received by ward and area for each mayoral candidate in each of the last three elections.
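For anyone curious about the wrangling involved, the conversion looks roughly like this sketch, assuming the readxl package and a hypothetical file with one worksheet per ward:

```r
library(readxl)

path  <- "2010_mayoral_results.xls"  # hypothetical file name
wards <- excel_sheets(path)

# Read every ward worksheet (assumed to share one layout) and stack
# them into a single data frame, tagging each row with its ward.
votes <- do.call(rbind, lapply(wards, function(w) {
  cbind(ward = w, read_excel(path, sheet = w))
}))
```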
If you’re interested in analyzing Toronto’s elections, we hope you find this package useful. We’re also happy to take suggestions (or code contributions) on the GitHub page.
In our first paper, we describe the results of some initial modeling - at a neighbourhood level - of which candidates voters are likely to support in the 2014 Toronto mayoral race. All of our data is based upon publicly available sources.
We use a combination of proximity voter theory and statistical techniques (linear regression and principal-component analyses) to undertake two streams of analysis:
Determining what issues have historically driven votes and what positions neighbourhoods have taken on those issues
Determining which neighbourhood characteristics might explain why people favour certain candidates
In both cases we use candidates’ currently stated positions on issues and assign them scores from 0 (‘extreme left’) to 100 (‘extreme right’). While certainly subjective, this scoring at least gives the modeling internal consistency.
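As a minimal illustration of proximity voting on this scale (candidate positions invented):

```r
# Invented candidate positions on the 0 ('extreme left') to
# 100 ('extreme right') scale.
positions <- c(Chow = 30, Tory = 55, Ford = 75)

# Under proximity voting, a neighbourhood supports the candidate
# whose position is closest to its own.
closest <- function(nbhd) names(which.min(abs(positions - nbhd)))
closest(48)  # "Tory"
```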
This work demonstrates that significant insights on the upcoming mayoral election in Toronto can be obtained from an analysis of publicly available data. In particular, we find that:
Voters will change their minds in response to issues. So, “getting out the vote” is not a sufficient strategy. Carefully chosen positions and persuasion are also important.
Despite this, the ‘voteability’ of candidates is clearly important, which includes voters’ assessments of a candidate’s ability to lead and how well they know the candidate’s positions.
The airport expansion and transportation have been the dominant issues across the city in the last three elections, though they may not be in 2014.
A combination of family size, mode of commuting, and home values (at the neighbourhood level) can partially predict voting patterns.
We are now moving on to something completely different, where we use an agent-based approach to simulate entire elections. We are actively working on this now and hope to share our progress soon.
Political campaigns have limited resources – both time and money – that should be spent on attracting voters who are more likely to support their candidates. Identifying these voters can be critical to the success of a candidate.
Given the privacy of voting and the lack of useful surveys, there are few options for identifying individual voter preferences:
Polling, which is large-scale, but does not identify individual voters
Voter databases, which identify individual voters, but are typically very small scale
In-depth analytical modeling, which is both large-scale and helps to ‘identify’ voters (at least at a neighbourhood level on average)
The goal of PsephoAnalytics* is to model voting behaviour in order to accurately explain campaigns (starting with the 2014 Toronto mayoral race). This means attempting to answer four key questions:
What are the (causal) explanations for how election campaigns evolve – and how well can we predict their outcomes?
What are the effects of (even simple) shocks to election campaigns?
How can we advance our understanding of election campaigns?
How can elections be better designed?
* Psephology (from the Greek psephos, for ‘pebble’, which the ancient Greeks used as ballots) deals with the analysis of elections.