Data citation is rapidly emerging as a key practice supporting data access, sharing and reuse, as well as sound and reproducible scholarship. Consensus data citation principles, articulated through the Joint Declaration of Data Citation Principles, represent an advance in the state of the practice and a new consensus on citation.
Data citation is rapidly emerging as a key practice in support of data access, sharing, reuse, and sound and reproducible scholarship. In this article we review the evolution of data citation standards and practices – to which Sue Dodd was an early contributor – and the core principles of data citation that have emerged through a collaborative synthesis. We then discuss an example of the current state of the practice and identify the remaining implementation challenges.
The 2014 National Agenda for Digital Stewardship highlights emerging technological trends, identifies gaps in digital stewardship capacity, and provides funders and decision-makers with insight into the work needed to ensure that today's valuable digital content remains accessible, useful, and comprehensible in the future, supporting a thriving economy, a robust democracy, and a rich cultural heritage. It is meant to inform, rather than replace, individual organizational efforts, planning, goals, or opinions. It offers inspiration and guidance, and suggests potential directions and areas of inquiry for research and future work in digital stewardship.
The importance of long-term access to and preservation of data for research and educational use is now widely recognized. In addition, the Federal Records Act covers data records created by federal agencies or their contractors, and requires a plan for their long-term disposition. Good practice is clear – data producers should plan early for data archiving, so that data are available for future research and policy analysis. The successes of the Data-PASS project reflect the importance of building a partnership that drew together experienced digital archives to identify, acquire, curate, and preserve a broad range of digital content. The partnership enabled us to agree on standards, work together on technology, and share responsibility for identifying, acquiring, and preserving the content in our field of activity. The tangible result is a significant amount of digital content preserved, which constitutes one of the core goals of the NDIIPP program. Perhaps more importantly, the partnership showed a way toward the future of digital preservation, which has been an even more fundamental goal of NDIIPP. Data-PASS demonstrated how to preserve an ever-larger share of digital social science data, and to do so in a structure that is sustainable for the very long term.
This article offers reflections on the changes to the Henry A. Murray Research Archive catalyzed by involvement with the NDIIPP partnership, and on the accompanying introduction of next-generation digital library software.
This article discusses an algorithm (called "UNF") for verifying digital data matrices. This algorithm is now used in a number of software packages and digital library projects. We discuss the details of the algorithm, and offer an extension for normalization of time and duration data.
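The core idea behind a fingerprint such as UNF – normalize each value to a canonical form so that semantically equal data always produce identical bytes, then hash those bytes – can be sketched as follows. This is a simplified illustration only, not the published UNF specification: the function names, the number of significant digits, and the choice of SHA-256 are assumptions for this sketch, and the real algorithm also canonicalizes character data, missing values, dates, and durations with precisely specified rounding and encoding rules.

```python
import hashlib


def canonicalize(value, digits=7):
    """Round a number to `digits` significant figures and render it in a
    canonical exponential form, so equal values always yield equal strings.
    (Simplified; a full spec also covers strings, missing values, dates.)"""
    if value == 0:
        return "+0.e+"
    # Format with digits-1 places after the point, e.g. '3.141593e+00'.
    s = "%.*e" % (digits - 1, value)
    mantissa, exp = s.split("e")
    sign = "+" if value >= 0 else ""      # negative mantissas keep their '-'
    mantissa = mantissa.rstrip("0")       # drop trailing zeros, keep the '.'
    exp_int = int(exp)
    return f"{sign}{mantissa}e{'+' if exp_int >= 0 else '-'}{abs(exp_int)}"


def fingerprint(values, digits=7):
    """Hash the canonical forms of a numeric vector (UNF-style sketch)."""
    h = hashlib.sha256()
    for v in values:
        # Terminate each canonical value so boundaries are unambiguous.
        h.update((canonicalize(v, digits) + "\n").encode("utf-8") + b"\x00")
    return h.hexdigest()
```

Because values are rounded before hashing, two matrices that differ only below the chosen precision (for example, after a round-trip through different statistical packages) receive the same fingerprint, which is what makes such a digest useful for verifying that a cited dataset is the one actually analyzed.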
Digital libraries are collections of digital content and services selected by a curator for use by a particular user community. Digital libraries offer direct access to the content of a wide variety of intellectual works, including text, audio, video, and data; and may offer a variety of services supporting search, access, and collaboration. In the last decade digital libraries have rapidly become ubiquitous because they offer convenience, expanded access, and search capabilities not present in traditional libraries. This has greatly altered how library users find and access information, and has put pressure on traditional libraries to take on new roles. However, information professionals have raised compelling concerns about the sizeable gaps in the holdings of digital libraries, the preservation of existing holdings, and sustainable economic models.
The Virtual Data Center software is an open-source digital library system for quantitative data. The authors discuss what the software does, how it provides an infrastructure for the management and dissemination of distributed collections of quantitative data, and how it supports the replication of results derived from these data.