We show herein how to develop fundable proposals to support your research. Although the proposal strategy we discuss is commonly used in successful proposals, most junior faculty (and many senior scholars) in political science and other social sciences seem to be unaware of it. We dispel myths about funding and discuss how to find funders and target funding programs. We then outline how to write a proposal and detail how to respond to reviews.
This article comprises reflections on the changes to the Henry A. Murray Research Archive, catalyzed by involvement with the NDIIPP partnership, and the accompanying introduction of next-generation digital library software.
Digital libraries are collections of digital content and services selected by a curator for use by a particular user community. Digital libraries offer direct access to the content of a wide variety of intellectual works, including text, audio, video, and data; and may offer a variety of services supporting search, access, and collaboration. In the last decade digital libraries have rapidly become ubiquitous because they offer convenience, expanded access, and search capabilities not present in traditional libraries. This has greatly altered how library users find and access information, and has put pressure on traditional libraries to take on new roles. However, information professionals have raised compelling concerns regarding the sizeable gaps in the holdings of digital libraries, about the preservation of existing holdings, and about sustainable economic models.
This article discusses an algorithm (called "UNF") for verifying digital data matrices. This algorithm is now used in a number of software packages and digital library projects. We discuss the details of the algorithm, and offer an extension for normalization of time and duration data.
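The article itself specifies the UNF algorithm; as a rough illustration of its central idea, that each value is normalized to a canonical rounded representation before hashing, so that insignificant differences in storage format do not change the fingerprint, here is a minimal sketch. The rounding rule, digest choice (SHA-256), and base64 encoding below are assumptions for illustration only, not the published UNF specification.

```python
import base64
import hashlib

def canonicalize(value, digits=7):
    """Render a numeric value in a canonical exponential form, rounded
    to a fixed number of significant digits (illustrative only; the
    published UNF specification defines the exact rules)."""
    if value is None:
        return ""
    return f"{float(value):+.{digits - 1}e}"

def fingerprint(column, digits=7):
    """Hash the canonical representations of a data vector and return
    a short base64 digest, in the spirit of a UNF."""
    h = hashlib.sha256()
    for v in column:
        h.update(canonicalize(v, digits).encode("utf-8") + b"\x00")
    return base64.b64encode(h.digest()).decode("ascii")

# Values that agree to 7 significant digits yield the same fingerprint,
# even though their bit-level representations differ.
print(fingerprint([1.0, 2.5]))
print(fingerprint([1.0000000001, 2.5]))
```

Because verification compares canonical forms rather than raw bytes, the same fingerprint can be recomputed from the data after format migrations, which is what makes the approach useful for digital library preservation.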
In order to identify the open research questions related to information technology and politics, the ITP section convened its first working group. This working group drew on the hundreds of presentations at the annual meeting relating to technology and politics, as well as on previous surveys of information technology research questions (such as Altman and Klass, and Berman and Brady), in order to identify important open research questions in this rapidly evolving area. Together these questions illuminate a research agenda that explores the interaction of information technology with the core political science concerns of power, political deliberation, authority, legitimacy, security, democracy, and justice.

Most empirical social scientists are surprised to learn that low-level numerical issues in software can have deleterious effects on the estimation process. Statistical analyses that appear to be perfectly successful can be invalidated by concealed numerical problems. We have developed a set of tools, contained in accuracy, a package for R and S-Plus, to diagnose problems stemming from numerical and measurement error and to improve the accuracy of inferences. The package provides a framework for gauging the computational stability of model results, tools for comparing model results, optimization diagnostics, and tools for collecting entropy for true random number generation.
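The accuracy package is written for R and S-Plus; as a language-neutral illustration of one of its core ideas, gauging computational stability by re-estimating a model on slightly perturbed copies of the data and inspecting the spread of the resulting estimates, here is a minimal Python sketch. The function names, noise level, and simple-regression setting are invented for this example, not taken from the package.

```python
import random
import statistics

def ols_slope(x, y):
    """Closed-form slope of a simple linear regression of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def perturbation_spread(x, y, noise=1e-6, replicates=100, seed=1):
    """Re-estimate the slope on many slightly perturbed copies of the
    data; a wide spread of estimates signals that the result is
    numerically or measurement-error sensitive."""
    rng = random.Random(seed)
    slopes = [
        ols_slope([xi + rng.gauss(0, noise) for xi in x], y)
        for _ in range(replicates)
    ]
    return statistics.stdev(slopes)

x = [float(i) for i in range(20)]
y = [2.0 * xi + 1.0 for xi in x]
print(perturbation_spread(x, y))  # small spread: stable estimate
```

A well-conditioned problem, as here, shows a spread many orders of magnitude smaller than the coefficient itself; when the spread is comparable to the estimate, the reported result should not be trusted at face value.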
An essential aspect of science is a community of scholars cooperating and competing in the pursuit of common goals. A critical component of this community is the common language of and the universal standards for scholarly citation, credit attribution, and the location and retrieval of articles and books. We propose a similar universal standard for citing quantitative data that retains the advantages of print citations, adds other components made possible by, and needed due to, the digital form and systematic nature of quantitative data sets, and is consistent with most existing subfield-specific approaches. Although the digital library field includes numerous creative ideas, we limit ourselves to only those elements that appear ready for easy practical use by scientists, journal editors, publishers, librarians, and archivists.
Simply put, all current quantitative methods are deeply flawed: Threshold rules based on indicia that are hypothesized to be correlated with gerrymanders, such as compactness, margins of competition, and estimated electoral responsiveness, are at best effective only locally and at worst literally impossible to satisfy. Automatic maximization rules using these indicia or other automatable algorithms universally ignore the political context in which they are applied and thus yield politically biased results despite the appearance of neutrality. The most sophisticated methods, which use computationally intensive sampling from real districting populations, avoid these problems but suffer from intractable computational issues and (often) from implausible formulations of the "null" hypothesis. We place the evaluation of intent as a motive behind a redistricting plan into a formal quantitative microeconomic framework in order to evaluate existing and emerging methods, and we find that these methods are statistically flawed. In place of classical statistical tests, we formalize a method of revealed preferences that probes intent by comparing aspects of plans that were feasible but not selected. This method has been used in an informal, ad hoc manner in redistricting cases, but it is not well documented and has never been rigorously analyzed. Our method has five advantages. First, it is easily interpretable. Second, it can be applied using only the data available to the original planners and does not require estimating the outcomes of hypothetical elections. Third, absent sophisticated optimization technology, the basic method can be applied using hand-drawn maps. Fourth, it is more consistent with the knowledge that districters had than statistical methods are, because it does not implicitly assume that a districting authority was aware of all possible plans. Finally, it is the only quantitative method for determining intent, so far proposed, that is statistically sound.
We read with interest David C. Earnest's recent (July 2006) PS article about the pedagogical challenges surrounding statistical computation with pseudo-random number generators (PRNGs). We write to clarify some issues regarding the testing and setting of PRNG seeds, and to direct readers' attention to a set of resources for configuring computationally accurate simulations and statistical analyses.
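To illustrate the seed-setting point for readers, here is a minimal Python sketch (not drawn from either article) showing why explicitly seeding a PRNG matters: a fixed seed makes a simulation exactly reproducible, while different seeds yield different streams of draws.

```python
import random

def simulate_mean(seed, draws=10_000):
    """Monte Carlo estimate of the mean of Uniform(0, 1) draws, using
    an explicitly seeded generator so the run is bit-for-bit
    reproducible."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(draws)) / draws

# The same seed reproduces the simulation exactly...
assert simulate_mean(42) == simulate_mean(42)
# ...while a different seed gives a (slightly) different estimate.
print(simulate_mean(42), simulate_mean(43))
```

Using a local, explicitly seeded generator (rather than an implicitly seeded global one) also keeps separate simulations from silently sharing state, which is one common source of non-reproducible published results.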
Critical components of the scholarly and library community are the use of a common language and universal standards for scholarly citation and credit attribution, which enable the location and retrieval of articles and books. We present a proposal for a similar universal standard for citing quantitative data that retains the advantages of print citations, adds other components made possible by, and needed due to, the digital form and systematic nature of quantitative datasets, and is consistent with most existing subfield-specific approaches. Although the digital library field includes numerous creative ideas, we limit ourselves to only those elements that appear ready for easy practical use by scientists, journal editors, publishers, librarians, and archivists.
All statistical techniques place limitations on the types of data and the range of inferences that can be accommodated. All computational implementations of these statistical techniques impose further limitations due to algorithmic choices and low-level implementation details. Failure to understand these issues can lead to gross misperceptions and seriously incorrect inferences. In this work we examine the numerical accuracy of King's (1997) approach to ecological inference by using data perturbation, error analysis, and comparative reliability assessment.
Although researchers have yet to achieve consensus on the broad impact of information technology on our understanding of the practice of politics, the broad outlines of a research agenda are emerging. In this overview, we discuss the current work, and identify important research questions that remain to be addressed.
Following the most recent round of redistricting, observers across the political spectrum warned that computing technology had fundamentally changed redistricting, for the worse. They were concerned that computers enable the creation of finely crafted redistricting plans that promote partisan and career goals, to the detriment of electoral competition, and that ultimately thwart voters' ability to express their will through the ballot box. In this article, we provide an overview of the use of computers in redistricting, from the earliest reports of their use through today. We then report responses to our survey of state redistricting authorities' computer use in 1991 and 2001. With these data, we assess the use of computers in redistricting and the fundamental capabilities of computer redistricting systems.