Participative technology has succeeded beyond our expectations. The number of legally viable, publicly submitted plans has grown a hundredfold over the past decade. These plans demonstrate a qualitative difference in public participation and have produced many examples of better ways of redistricting.
Baker v. Carr’s elevation of new population equality criteria for redistricting over old geographic-based criteria reflected an evolution in how the courts and society understood the principles of representation. Twenty-first century principles of redistricting should reflect modern understandings of representation and good government—and also reflect the new opportunities and constraints made possible through advancing technology and data collection.
Gerrymandering is a form of political boundary delimitation, or redistricting, in which the boundaries are selected to produce an outcome that is improperly favorable to some group. The name “gerrymander” was first used by the Boston Gazette in 1812 to describe the shape of Massachusetts Governor Elbridge Gerry’s redistricting plan, in which one district was said to have resembled a salamander. In the United States, congressional and legislative redistricting occurs every 10 years, following the decennial census. The aim of redistricting is to assign voters to equipopulous geographical districts from which they will elect representatives, in order to reflect communities of interest and to improve representation.
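The equal-population ("equipopulous") criterion is typically quantified as the deviation of each district from the ideal district population, i.e., the state population divided by the number of districts. A minimal sketch of that calculation follows; `max_population_deviation` is a hypothetical helper introduced here for illustration, not a standard redistricting tool.

```python
def max_population_deviation(district_pops):
    """Return the maximum percent deviation of any district from the
    ideal (mean) district population -- a common equal-population
    measure. Hypothetical helper, for illustration only."""
    ideal = sum(district_pops) / len(district_pops)
    return max(abs(p - ideal) / ideal for p in district_pops) * 100

# Four districts drawn from a population of 4,000,000
# (ideal district population: 1,000,000):
deviation = max_population_deviation([1_010_000, 990_000, 1_005_000, 995_000])
print(round(deviation, 2))  # → 1.0
```

Courts have generally required congressional plans to keep this deviation very small, so plan-drawing software computes such measures routinely.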
Data quality criteria implied by the candidate frameworks are neither easily harmonized nor readily quantified, so a generalized, systematic approach to evaluating data quality seems unlikely to emerge soon. Fortunately, developing an effective approach to digital curation that respects data quality does not require a comprehensive definition of data quality. Instead, we can address “data quality” in curation by limiting our consideration to two narrower applied questions: Which aspects of data quality are (potentially) affected by (each stage of) digital curation activity? And how do we keep data quality properties invariant at each curation stage? Several approaches seem particularly likely to bear fruit:

- Incorporate portfolio diversification in selection and appraisal.
- Support validation of preservation quality attributes such as authenticity, integrity, organization, and chain of custody throughout long-term preservation and use, from ingest through delivery and the creation of derivative works.
- Apply semantic fingerprints for quality evaluation during ingest, format migration, and delivery.

These approaches have the advantage of being independent of the content subject area, the domain of measure, and the particular semantic content of objects and collections, so they are broadly applicable. By mitigating these broad-spectrum threats to quality, we can improve the overall quality of curated collections and their expected value to target communities.
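The semantic-fingerprint idea can be sketched briefly: fingerprint a canonical form of an object's content, so that the fingerprint survives format migration (which may reorder or re-encode format-level details) but changes if the semantic content itself is altered. The example below is a minimal sketch under simple assumptions; real curation systems would use richer, domain-aware canonicalization than JSON key sorting.

```python
import hashlib
import json

def semantic_fingerprint(record: dict) -> str:
    """Fingerprint the semantic content of a record, ignoring
    format-level details such as key order. (Illustrative
    canonicalization only: sorted-key compact JSON.)"""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# A record before and after a simulated format migration (keys
# reordered, as a different serializer might emit them) should
# fingerprint identically, while a content change should not.
original = {"title": "Survey responses", "rows": 1024}
migrated = {"rows": 1024, "title": "Survey responses"}
altered  = {"rows": 1023, "title": "Survey responses"}

assert semantic_fingerprint(original) == semantic_fingerprint(migrated)
assert semantic_fingerprint(original) != semantic_fingerprint(altered)
```

Comparing fingerprints computed at ingest, after each migration, and at delivery gives a content-independent invariance check of the kind the paragraph above describes.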