Lately, our DistrictBuilder software, a tool that allows people to easily participate in creating election districts, has gotten some additional attention. We recently received an Outstanding Software Development Award from the American Political Science Association (given by the Information Technology & Politics Section) and a Data Innovation Award given by the O’Reilly Strata Conference (for data with social impact). And just last week, we had the opportunity to present our work to the government of Mexico at the invitation of the Instituto Federal Electoral, as part of their International Colloquium on Redistricting.
During this presentation, I was able to reflect on the interplay of algorithms and public participation, and it became even clearer to me that applications like DistrictBuilder exemplify the ability of information science to improve policy and politics.
Redistricting in Mexico is particularly interesting, since it relies heavily on facially neutral geo-demographic criteria and optimization algorithms, which represents a different sort of contribution from information science. This made it especially worthwhile to consider the interplay between algorithmic approaches to problem solving and “wisdom of crowds” approaches, especially for problems in the public sphere.
It’s clear that complex optimization algorithms are an advance in redistricting in Mexico, and have an important role in public policy. However, they also have a number of limitations:
- Algorithmic optimization solutions often depend on a choice of (theoretically arbitrary) ‘starting values’ from which the algorithm begins its search for a solution.
- Quality algorithmic solutions typically rely on accurate input data.
- Many optimization algorithms embed particular criteria or particular constraints into the algorithm itself.
- Even where optimization algorithms are nominally agnostic about the criteria being optimized, some criteria are more tractable than others, and some are more tractable for particular algorithms.
- In many cases, when an algorithm yields a solution, we don’t know exactly (or even approximately, in any formal sense) how good that solution is.
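The first and last of these limitations can be made concrete with a toy example. The sketch below is purely illustrative (it is not DistrictBuilder code): a greedy hill-climbing search over a small objective either gets stuck at a poor local peak or reaches the global one, depending entirely on its starting value, and nothing in its output signals which outcome occurred.

```python
def objective(x):
    # Toy objective with two peaks: a local maximum at x=2 (score 3)
    # and the global maximum at x=8 (score 10).
    scores = {0: 0, 1: 2, 2: 3, 3: 1, 4: 0, 5: 2, 6: 5, 7: 8, 8: 10, 9: 7, 10: 4}
    return scores.get(x, float("-inf"))

def hill_climb(start):
    """Greedy local search: move to a better neighbor until none exists."""
    x = start
    while True:
        best = max([x - 1, x, x + 1], key=objective)
        if best == x:
            return x
        x = best

print(hill_climb(1))  # -> 2  (stuck at the inferior local peak)
print(hill_climb(6))  # -> 8  (happens to find the global peak)
```

Both runs terminate with a "solution", but only the choice of starting value separates the good one from the bad one; this is why candidate starting values, and independent review of the results, matter.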
I argue that explicitly incorporating a human element is important for algorithmic solutions in the public sphere. In particular:
- Use open documentation and open (non-patented or open-licensed) methods to enable external replication of algorithms.
- Use open source to enable external verification of the implementation of particular algorithms.
- Incorporate public input to improve the data (especially data describing local communities and circumstances) used in algorithm-driven policies.
- Incorporate crowd-sourced solutions as candidate “starting values” for further algorithmic refinement.
- Subject algorithmic output to crowd-sourced public review to verify the quality of the solutions produced.
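The last two suggestions can be sketched together. In this toy example (every name here is illustrative, not a DistrictBuilder API), crowd-submitted district plans serve as starting values for a simple population-balancing refinement, and the refined results are scored so that the best can be surfaced for public review.

```python
def score_plan(plan):
    """Toy score: negative total deviation from equal district populations."""
    ideal = sum(plan) / len(plan)
    return -sum(abs(p - ideal) for p in plan)

def refine_plan(plan):
    """Toy refinement: shift one unit between adjacent districts while it helps
    (stands in for a real optimization pass over a submitted plan)."""
    plan = list(plan)
    improved = True
    while improved:
        improved = False
        for i in range(len(plan) - 1):
            for move in ((-1, 1), (1, -1)):
                trial = list(plan)
                trial[i] += move[0]
                trial[i + 1] += move[1]
                if score_plan(trial) > score_plan(plan):
                    plan, improved = trial, True
    return plan

# Hypothetical crowd-submitted plans (district populations) as starting values.
submitted_plans = [[120, 80, 100], [90, 110, 100], [60, 140, 100]]
refined = [refine_plan(p) for p in submitted_plans]
best = max(refined, key=score_plan)  # candidate for crowd-sourced public review
```

The pattern, not the toy scoring rule, is the point: human submissions seed the search, the algorithm refines them, and the outputs go back to the public for verification.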
You can see the slides, which include more detail and references, below. For more materials like these, see our PublicMapping project site.