Apr 21, 7:23pm

Lucy Taylor, a graduate researcher in the program, reflects on software curation at the recent LibrePlanet conference:

LibrePlanet 2016, Software Curation and Preservation

This year’s LibrePlanet conference, organized by the Free Software Foundation, touched on a number of themes that relate to research on software curation and preservation taking place at MIT’s Program on Information Science.

The two-day conference, hosted at MIT, aimed to “examine how free software creates the opportunity of a new path for its users, allows developers to fight the restrictions of a system dominated by proprietary software by creating free replacements, and is the foundation of a philosophy of freedom, sharing, and change.” In a similar way, at the MIT Program on Information Science, we are investigating the ways in which sustainable software might positively impact academic communities and shape future scholarly research practices. This was a great opportunity to compare and contrast the concerns and goals of the Free Software movement with those of people who use software in research.

A number of recurring themes emerged over the course of the weekend that could inform research on software curation. The event kicked off with a conversation between Edward Snowden and Daniel Kahn Gillmor. They tackled privacy and security, and spoke at length about how current digital infrastructures limit our freedoms. Interestingly, they also touched on how to expand the Free Software community and raise awareness among non-technical folks about the need to create, and use, Free Software. A lack of incentives for “newbies” inhibits the growth of the Free Software movement; Free Software needs to compete with proprietary software’s low barriers to entry and polished user experience. Similarly, the growth of sustainable, reusable academic software through better documentation, storage, and visibility is inhibited by a lack of incentives for researchers and libraries to improve software development practices and create curation services.

The talks “Copyleft for the next decade: a comprehensive plan” by Bradley Kuhn and “Will there be a next great Copyright Act?” by Peter Higgins both examined the ways in which licensing and copyright are affecting the Free Software movement. The future seems somewhat bleak for GPL licensing and copyleft, with developers being discouraged from using this license and instead putting their work under more permissive licenses, which then allow companies to use and profit from others’ software. Similarly, research gateways like NanoHub and HUBzero encounter the same difficulties in encouraging researchers to make their software freely available for others to use and modify. As both speakers touched on, the general lack of understanding, and also fear, surrounding copyright needs to be remedied. Sci-Hub was also mentioned as an example of a tool that, whilst breaking copyright law, is also revolutionary in nature, in that no library has ever aggregated more scientific literature on one platform. How can we create technologies that make scholarly communication more open in the future? Will the curation of software contribute to these aims? Within wider discussions on open access, it is also worthwhile to think about how software can often be a research object in its own right that merits the same curation and concern as journal papers and datasets.

The ideas discussed in the session “Getting the academy to support free software and open science” had many parallels to the research being carried out here at the MIT Program on Information Science. The three speakers spoke about Free Software activities within their home institutions and the barriers created by the heavy use of proprietary software at universities. Not only does the continued use of this software result in high costs and the perpetuation of the “centralized web” that relies on companies like Google, Microsoft, and Apple, it also encourages students to think passively about the technologies they use. Instead, how can we encourage students to think of software as something they can build on and modify through the use of Free Software? Can we develop more engaged academic communities, who think about and use software critically, through the development of software curation services and sustainable software practices? This was a really interesting discussion that explored problematic infrastructures in higher education.

Finally, Alison Macrina and Nima Fatemi’s talk on the “Library Freedom Project: the long overdue partnership between libraries and free software” put the library front and centre in the role of engaging the wider community in Free Software and advocating for better privacy and more freedom. The Library Freedom Project not only educates librarians and patrons on internet privacy but has also rolled out Tor browsers in a few public libraries. What can academic libraries do to build on this important work and to increase awareness about online freedom within our communities?

The conference was a great way to gain insight into the wider activities of the software community and to talk with others from a multitude of different disciplines. It was interesting to think about how research on software curation services could be informed by these broader discussions on the future of Free Software. Academic librarians should also think about how they can advocate for Free Software in their institutions to encourage better understanding of privacy and to foster environments in which software is critically evaluated to meet user needs. Can libraries embrace the Free Software movement as they have the Open Access movement?

Mar 18, 8:24am

Ophir Frieder, who holds the Robert L. McDevitt, K.S.G., K.C.H.S. and Catherine H. McDevitt L.C.H.S. Chair in Computer Science and Information Processing at Georgetown University and is Professor of Biostatistics, Bioinformatics, and Biomathematics at the Georgetown University Medical Center, gave this talk on Searching in Harsh Environments as part of the Program on Information Science Brown Bag Series.

In the talk, illustrated by the slides below, Ophir rebuts the myth that “Google has solved search” and discusses the challenges of searching for complex objects, through hidden collections, and in harsh environments.

In his abstract, Ophir summarizes as follows:

Many consider “searching” a solved problem, and for digital text processing, this belief is factually based. The problem is that many “real world” search applications involve “complex documents”, and such applications are far from solved. Complex documents, or less formally, “real world documents”, comprise a mixture of images, text, signatures, tables, etc., and are often available only in scanned hardcopy formats. Some of these documents are corrupted. Some of these documents, particularly those of a historical nature, contain multiple languages. Accurate search systems for such document collections are currently unavailable.

The talk discussed three projects. The first project involved developing methods to search collections of complex digitized documents which varied in format, length, genre, and digitization quality; contained diverse fonts, graphical elements, and handwritten annotations; and were subject to errors due to document deterioration and from the digitization process. A second project involved developing methods to enable searchers who arrive with sparse, fragmentary, error-ridden clues about places and people to successfully find relevant connected information in the Archives Section of the United States Holocaust Memorial Museum. A third project involved monitoring Twitter for public health events without relying on a prespecified hypothesis.

Across these projects, Frieder raised a number of themes:

  • Searching on complex objects is very different from searching the web. Substantial portions of complex objects are invisible to current search. And current search engines do not understand the semantics of relationships within and among objects, making the right answers hard to find.
  • Searching across most online content now depends on proprietary algorithms, indices, and logs.
  • Researchers need to be able to search collections of content that may never be made available publicly online by Google or other companies.

Despite the increasing amount of born-digital material, I speculate that these issues will become more salient to research, and that libraries have a role to play in addressing them.

While much of the “scholarly record” is currently produced in the form of PDFs, which are amenable to the Google search approach, much web-based content is dynamically generated and customized, and scholarly publications are increasingly incorporating dynamic and interactive features. Searching these effectively will require engaging with scientific output as complex objects.

Further, some areas of science, such as the social sciences, increasingly rely on proprietary collections of big data from commercial sources. Much of this growing evidence base is currently accessible only through proprietary APIs. To meet the heightened requirements for transparency and reproducibility, stewards are needed for these data who can ensure nondiscriminatory long-term research access.

More generally, it is increasingly well recognized that the evidence base of science includes not only published articles and community datasets (and benchmarks), but may also extend to scientific software, replication data, workflows, and even electronic lab notebooks. The article produced at the end is simply a summary description of one pathway through the evidence reflected in these scientific objects. Validating, reproducing, and building on science may increasingly require access to, search over, and understanding of this entire complex set.

Mar 04, 4:17pm

Julia Flanders, who is the Director of the Digital Scholarship Group in the Northeastern University Library, and a Professor of Practice in Northeastern’s English Department, gave a talk on Jobs, Roles, Skills, Tools: Working in the Digital Academy as part of the Program on Information Science Brown Bag Series.

In the talk, illustrated by the slides below, Julia discusses the evolving landscape of digital humanities (and digital scholarship more broadly) and considers the relationship between technology, tool development, and professional roles.

In her abstract, Julia summarizes as follows:

Twenty-five years ago, jobs in humanities computing were largely interstitial: located in fortuitous, anomalous corners and annexes where specific people with idiosyncratic skill profiles happened to find a niche. One couldn’t train for such jobs, let alone locate them in a market. The emergence of the field of “digital humanities” since that time may appear to be a disciplinary and methodological phenomenon, but it also has to do with labor: with establishing a new set of jobs for which people can be trained and hired, and which define the contours of the work we define as “scholarship.”

In the research described in her talk Julia identifies seven different roles involved in digital humanities scholarship: developer, administrator, manager, scholar, analyst, data creator, and information manager. She then describes the various skills and metaknowledge required for each and how these roles interact.

(I will note here that the libraries and press have conducted complementary research and engaged in standardization around describing contributorship roles. For more information on this see the Project CREDIT site.)

The talk notes the tensions that develop when these roles are out of balance in a project, and particularly the need for balance among the scholar, developer, and analyst roles. Julia notes that a combination of scholar, developer, and analyst in a single person is very productive but rare. More typically, early career researchers start as data creators/coders, learn a particular tool set, and evolve into scholars. In the absence of a strong analyst role, this creates “a peculiar relationship with tools: a kind of distance (on the scholar’s part) and on the other hand an intensive proximity (on the coder’s part) that may not yet have critical distance or meta-knowledge: the awareness needed to use the tools in a fully knowing way.”

  Observing commercial and research software development projects over thirty years, I have found that one of the most common causes of catastrophic failure is the gap between the developer’s understanding of the problem being solved and the customer’s understanding of the same problem. A good analyst (often holding a “product manager” title in the corporate world) has the skills to understand both the business and technical domains well enough to probe for these misunderstandings and to ensure that discussion converges on a common understanding. In addition, the analyst helps abstract both the technical and domain problems so that the eventual software solution not only meets the needs of the small number of customers in the loop, but is broad enough for a target community. Moreover, librarians often have knowledge of components of the technical domain and of the subject domain, which can give libraries a particular competitive advantage in developing people for these critical bridge roles.

Feb 25, 2:54pm

Chaoqun Ni, who is an Assistant Professor in the School of Library Science at Simmons, presented a talk on Transformative Interactions in the Scientific Workplace as part of the Program on Information Science Brown Bag Series.

In the talk, illustrated by the slides below, Chaoqun uses bibliometric data to analyze the sociality, equality, and dynamicity of the scientific workforce.

In her abstract, Chaoqun describes her argument as follows:

I argue that, for a country to be scientifically competitive, it must maximize its human intellectual capital-base and support this workforce equitably and efficiently. I propose here a large-scale and heterogeneous analysis of the sociality, equality, and dynamicity of the scientific workforce through novel computational models for understanding and predicting the career trajectory of scientists based on their transformative interactions, gender, and levels of funding. This analysis will be able to isolate factors that contribute to the health and well-being of the scientific workforce. The computational models will quantify the impact of those transformative events and interactions and provide models to predict the career trajectory of scientists based on their gender, the size and position of the social network, and other demographic factors.

According to the talk, there are three types of events that are particularly likely to transform scholarly careers: being mentored, publishing, and receiving grants. Of these, mentoring occurs earliest in a scholar’s career and has a persistent effect on publication and grants. The relationship is not simple and automatic: mentees do not automatically inherit their mentors’ success in publication and grant funding. Instead, the mentoring relationship is mediated by the transfer of knowledge, norms, advice, and connections. And gender disparities are persistent and visible.

This talk resonated with a number of areas in which the Program and Library engage:

First, diversity is a core library value, and this research suggests ways in which libraries can support a more diverse academic community. The success of early career scholars depends in part on developing a substantial number of specialized career skills that are not part of a specific scientific discipline, including, among many other things (see, for example, these slides on reputation and communication), navigating the scholarly publishing process, writing grant proposals, managing bibliographies, and curating data. Much of this knowledge is tacit: it is not explicitly taught but instead transferred through personal mentoring. Libraries are one of the rare parts of the university that are able to successfully capture this tacit knowledge and make it more widely available across the community. The Libraries’ IAP courses are an excellent example of this.

Second, most of the data used for this research is based on library-mediated collections: citations drawn from journal collections and metadata from dissertation collections. Further, as there is increasing pressure on universities for quantitative evaluation, and an increasing desire to actively catalyze collaboration and productivity, there is an increasing need for rich access to library collections as data, for guidance on tools and approaches (see, for an overview, our class on citation analysis), and for expert assistance. Since few researchers have methodological or domain expertise related to bibliometric and scientometric data, this presents an unusual opportunity for libraries to be entrepreneurial in collaborating on new research.

Third, during this talk, Chaoqun noted that the most laborious and time-consuming phase of the research was data cleaning and linking, particularly dealing with name disambiguation. ORCID, in which the Libraries play a leadership role (and which MIT has adopted), aims to eliminate this problem. ORCID has spread widely; just within this month, over a dozen major publishers announced their intent to require ORCID iDs for journal submissions.
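The disambiguation problem Chaoqun described is easy to illustrate. The sketch below (my own illustration, using invented publication records, not data from the talk) shows why matching on author-name strings conflates distinct people, while grouping on a persistent identifier such as an ORCID iD makes the linkage trivial:

```python
# Hypothetical records: two distinct people who both publish as "J. Smith".
records = [
    {"author": "J. Smith",   "orcid": "0000-0002-1825-0097", "paper": "A"},
    {"author": "John Smith", "orcid": "0000-0002-1825-0097", "paper": "B"},
    {"author": "J. Smith",   "orcid": "0000-0001-5109-3700", "paper": "C"},  # a different J. Smith
]

# Naive disambiguation by name string conflates the two people...
by_name = {}
for r in records:
    key = r["author"].split()[-1].lower()  # crude: group by surname
    by_name.setdefault(key, []).append(r["paper"])

# ...while grouping by ORCID iD separates them correctly.
by_orcid = {}
for r in records:
    by_orcid.setdefault(r["orcid"], []).append(r["paper"])

print(by_name)   # {'smith': ['A', 'B', 'C']}: three papers wrongly merged
print(by_orcid)  # two identifiers, papers correctly partitioned
```

Real bibliometric cleaning is far messier (initials, transliterations, affiliation changes), which is exactly why a registry of persistent identifiers saves so much labor.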


Jul 06, 9:48am

Kim Dulin, who is Director of the Harvard Library Innovation Lab and Associate Director for Collection Development and Digitization for the Harvard Law School Library, presented a talk on Taking on Link Rot: Harvard Innovation Lab’s Perma.cc as part of the Program on Information Science Brown Bag Series.

In the talk, illustrated by the slides below, Kim discusses how libraries can mitigate link rot in legal scholarship by coordinating on digital preservation.

In her abstract, Kim describes her talk as follows: Perma.cc is a web archiving platform and service developed by the Harvard Library Innovation Lab (LIL) to help combat link rot. Link rot occurs when links to websites point to web pages whose content has changed or disappeared. Perma.cc allows authors and editors to create permanent links for citations to web sources that will not rot. Upon direction from an author, Perma.cc will retrieve and save the contents of a cited web page and assign it a permanent link. The link is then included in the author’s references. When users later follow those references, they will have the option of proceeding to the website as it currently exists or viewing the cached version of the website as the creator of the link saw it. Regardless of what happens to the website in the future, the content will forever be accessible for scholarly and educational purposes via Perma.cc.

According to the talk, link rot in law publications is very high: approximately fifty percent of links in Supreme Court of the United States opinions are rotten, and the situation is worse in law journals. Perma.cc has been successful in part because durability is a very important selling point for attorneys. A signal of this is that the latest edition of the official editorial manual for law publications (the “Bluebook”) now recommends that links included in legal publications be archived.
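For readers curious what measuring link rot involves in practice, here is a minimal, hypothetical sketch in Python. It only checks whether cited URLs still resolve, which captures the simplest form of rot; surveys like the one cited in the talk also check whether surviving pages still contain the referenced content, and Perma.cc itself works differently, by proactively archiving pages rather than auditing them afterward:

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def looks_rotten(status):
    """Treat client/server errors and unreachable hosts as rot; 2xx/3xx as alive."""
    return status is None or status >= 400

def check_url(url, timeout=10):
    """Return the final HTTP status for a URL, or None if unreachable."""
    try:
        req = Request(url, method="HEAD", headers={"User-Agent": "linkrot-check"})
        with urlopen(req, timeout=timeout) as resp:
            return resp.status  # redirects are followed automatically
    except HTTPError as e:
        return e.code          # e.g. 404 Not Found, 410 Gone
    except URLError:
        return None            # DNS failure, refused connection, etc.

def rot_rate(urls):
    """Fraction of a citation list whose links appear rotten."""
    statuses = [check_url(u) for u in urls]
    return sum(looks_rotten(s) for s in statuses) / len(statuses)
```

Running `rot_rate` over the URLs extracted from a set of opinions or articles would give a rough lower bound on the rot figures quoted above, since content drift on still-resolving pages goes undetected.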

Perma.cc provides a workable solution, centered on libraries, to a problem of real concern. In her talk Kim focuses on the diverse roles that libraries play: libraries act as gatekeepers for the content to be preserved; as long-term custodians of the content (and, technically, as mirrors); and as direct access points.

(I will also note that libraries are critical in conducting research and developing standards in this area. The MIT Libraries are engaged in developing practices for collaborative stewardship as a member of the National Digital Stewardship Alliance, and the Program is engaged in research on data management and stewardship.)

The talk discusses a number of new directions for Perma.cc, including link plugins for Word and WordPress; an API for the service; a private LOCKSS network to replicate archival content; and a formal structure of governance, archival policies, and sustainability (funding and resources).

These directions resonate with me. Perma.cc is currently a project that has been very successful at approaching the very general problem of link rot within a specific community of practice. The success of the project has in part to do with knowledge of, connections with, and adaptation to that community. It will be interesting to see how governance and sustainability evolve to enable the transition from a project to community-supported infrastructure.

Jun 18, 9:04am

It is an old saw that science is founded on reproducibility. However, the truth is that reproducibility has always been more difficult than generally assumed, even where the underlying phenomena are robust. Since Ioannidis’s PLOS article in 2005, there has been increasing attention in medical research to the issue of reproducibility; and attention has been unprecedented in the last two years, with even the New York Times commenting on “jarring” instances of irreproducible, unreliable, or fraudulent research results.

Scientific reproducibility is most often viewed through a methodological or statistical lens, and increasingly through a computational lens. (See, for example, our book on reproducible statistical computation.) Over the last several years, I’ve taken part in collaborations that approach reproducibility from the perspective of informatics: as a flow of information across a lifecycle that spans collection, analysis, publication, and reuse.

I had the opportunity to present a sketch of this approach at a recent workshop on reproducibility at the National Academy of Sciences, and at one of our Program on Information Science brown bag talks.

The slides from the brown bag talk discuss some definitions of reproducibility and outline a model for understanding reproducibility as an information flow:

(Also see these videos from the workshop on informatics approaches, and other definitions of reproducibility.)

The talk shows how reproducibility claims, as generally discussed in science, are not crisply defined: the same reproducibility terminology is used to refer to very different sorts of assertions about the world, experiments, and systems. I outline an approach which takes each type of reproducibility claim and asks: What are the use cases involving this claim? What does each type of claim imply for information properties, flows, and systems? What are proposed or potential interventions in information systems that would strengthen the claims?
For example, a set of reproducibility issues is associated with validation of results. There are several distinct use cases and claims embedded in this — one of which I label as “fact-checking” because of its similarities to the eponymous journalistic use case:
  • Use Case: Post-publication reviewer wants to establish that published claims correspond to analysis method performed.
  • Reproducibility claim: Given public data identifier & analysis algorithm, an independent application of the algorithm yields a new estimate that is within the originally reported uncertainty.
  • Some potential supporting informatics claims:
    1. The instance of data retrieved via the identifier is semantically equivalent to the instance of data used to support the published claim
    2. The analysis algorithm is robust to the choice of reasonable alternative implementations
    3. The implementation of the algorithm is robust to reasonable choices of execution details and context
    4. Published direct claims about the data are semantically equivalent to a subset of claims produced by the author’s previous application of the analysis
  • Some potential informatic interventions:
    • In support of claim 1:
      • Detailed provenance history for data from collection through analysis and deposition.
      • Automatic replication of direct data claims from deposited source
      • Cryptographic evidence
        (e.g., a cryptographically signed bundle of {analysis output, including a cryptographic hash of the data}, checked against {a cryptographic hash of the data retrieved via the identifier})
    • In support of claim 2:
      • Standard implementation, subject to community review
      • Report of results of application of implementation on standard testbed
      • Availability of implementation for inspection
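As a concrete illustration of the “cryptographic evidence” intervention listed above, the following Python sketch (my own illustration, not a system described in the talk) bundles analysis output with a hash of the deposited data and a signature, so that data later retrieved via the identifier can be checked for bit-level fixity (a stricter check than the semantic equivalence claim 1 strictly requires). A real system would use public-key signatures; an HMAC stands in here for simplicity:

```python
import hashlib
import hmac
import json

def fingerprint(data: bytes) -> str:
    """A cryptographic hash of the deposited data."""
    return hashlib.sha256(data).hexdigest()

def deposit(data: bytes, results: dict, signing_key: bytes) -> dict:
    """Bundle analysis output with a signed hash of the data it used."""
    record = {"data_sha256": fingerprint(data), "results": results}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return record

def verify(record: dict, retrieved_data: bytes, signing_key: bytes) -> bool:
    """Check the signature, and that data retrieved via the identifier
    is bit-for-bit identical to the data hashed at deposit time."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(record["signature"], expected)
            and record["data_sha256"] == fingerprint(retrieved_data))
```

A post-publication reviewer holding the record and the retrieval identifier can then detect either tampering with the record or substitution of the underlying data.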
Overall, my conjecture is that if we wish to support reproducibility broadly in information systems, there are a number of properties and design principles for information systems that will enhance reproducibility. Within information systems, I conjecture that we should design to maintain the properties of transparency, auditability, provenance, fixity, identification, durability, integrity, repeatability, non-repudiation, and self-documentation. When designing the policies, incentives, and human interactions with these systems, we should consider barriers to entry, ease of use, support for intellectual communities of practice, personalization, credit and attribution, security, performance, sustainability, cost, and trust engineering.

Apr 29, 12:55pm

David Weinberger, who is a Shorenstein Fellow at Harvard University and former co-director of the Harvard Library Innovation Lab, presented a talk on Libraries as Platforms: Enabling Libraries to Become Community Centers of Meaning as part of the Program on Information Science Brown Bag Series.

In the talk, illustrated by the slides below, David discusses how libraries can increase their relevance in a networked world by creating information platforms that enable communities to locate, create, and discuss contextually relevant connections among information resources.

In his abstract, David describes his talk as follows:

Libraries are in a unique position to reflect a community back to itself, enabling us to see what matters, and to use that information so that the community learns from itself. This is one of the primary use cases for developing and widely deploying library platforms. But becoming a community center of meaning can easily turn into creating an echo chamber. The key is developing interoperable systems that let communities learn from one another. We’ll look at one proposal for a relatively straightforward way of doing so that’s so dumb that it just might work.

David describes libraries as a “black hole on the Net”: the knowledge and culture that only libraries have been entrusted with is generally not available on the web. He claims that the core institutional advantage of libraries is not only access, but an understanding of what matters to specific communities, paired with incentives that are fully aligned with those communities.

His talk argues that meaning comprises a set of connections that are important to a community. Libraries have always been aligned with user communities and have helped them discover and make sense of meaningful information. And changes in internet and communication technology create an opportunity for libraries to help communities create, and reflect back, community meaning.

The talk suggested that libraries can move toward this by creating APIs that enable open access to their open content and to metadata (broadly defined) related both to content and to the local use of that content; it conjectured that linked-data approaches are necessary for integrating platforms and metadata at scale.
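To make the “open platform” idea concrete, here is a small, hypothetical example of the kind of record such an API might serve: item metadata expressed as JSON-LD using schema.org vocabulary, including an aggregated local-usage signal of the sort discussed below. All identifiers and values here are invented for illustration:

```python
import json

# A hypothetical linked-data record for one library item. The @context and
# @id make the record mergeable with records from other libraries' platforms.
record = {
    "@context": "https://schema.org",
    "@type": "Book",
    "@id": "https://library.example.edu/items/12345",
    "name": "Example Title",
    "author": {"@type": "Person", "name": "A. Author"},
    # An aggregated, anonymized local-use signal: the kind of community
    # metadata the talk argues libraries could share.
    "interactionStatistic": {
        "@type": "InteractionCounter",
        "interactionType": "https://schema.org/BorrowAction",
        "userInteractionCount": 42,
    },
}

print(json.dumps(record, indent=2))
```

Because the vocabulary is shared and the identifiers are dereferenceable URLs, tools outside the library can aggregate such records across institutions without bespoke integration work, which is the interoperability David's proposal depends on.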

David discussed StackLife as an example. StackLife uses circulation metadata to provide a privacy-protecting, normalized measure of physical book usage in several libraries. Because the measure is aggregated, it is publicly shareable, allowing for comparisons across libraries.

David did not discuss privacy in detail but noted it as a key issue: to build a successful platform for creating meaning, libraries will need to rethink their approach to patron privacy. Rather than discarding information on patron behaviors and use of services, we need to collect it and use it in service of the community. In this vision, libraries provide the infrastructure, and new tools using this infrastructure will be written by people outside of libraries (as well as within them).

I will note that the Program is engaged in research toward creating a modern approach to privacy concepts and controls. I will also note that maintaining a platform will require digital sustainability and organizational sustainability. Realizing the former will require designing systems with a view toward supporting long-term access. Realizing the latter will require identifying stakeholders that have mutually reinforcing incentives to create digital stuff, use digital stuff created by others, and maintain platforms for such stuff. (Typically, in the sciences, such stakeholders are clustered around sets of domain problems…)

A recurring theme of David’s talk was that “libraries won’t invent their own future”: Libraries can now see and participate in the cultural appropriation by their communities of the work entrusted to libraries. And open platforms will enable the world to integrate library knowledge into sites, tools, and services that libraries on their own might not have envisioned or had the resources to develop.

This resonates with me, and I will add that any successful platform will almost certainly require using tools and infrastructure neither built by nor for the libraries. It will also require us to collaborate with organizations far beyond our boundaries. 

Apr 23, 7:45am

Kasey Sheridan, a graduate researcher in the program, reflects on active learning at the recent NERCOMP conference:

 “We’re ‘In IT’ Together:” Active Learning at NERCOMP ‘15

NERCOMP ‘15 was held in Providence, Rhode Island, at the Rhode Island Convention Center, from March 30th until April 1st. There were over 700 attendees present (including presenters and exhibitors) and over 50 vendors with representation from companies like Adobe, Microsoft, and McGraw-Hill Education.

The theme of this year’s NERCOMP (NorthEast Regional Computing Program) conference was “We’re ‘In IT’ Together” — when we come together, no matter our titles, “we can do transformative things.”

One of the most memorable sessions for me was “Building a Disco: Active Learning in a Library Discovery and Collaboration Space.” My interest was piqued during this presentation, as I realized that active learning and collaboration spaces would be a central theme of the day’s proceedings.

Elizabeth and Patricia dazzled audience members with the story of their library renovation that took place between May and August of 2014. They began with an underutilized space in their library that was intended for collaborative and active learning, but was not designed to be optimal for either of those purposes.

We were then shown the finished product: their beautiful new library space, named the Brian J. Flynn Discovery & Collaboration Space (or the “DisCo”; there’s even a disco ball hanging from the ceiling). The new room has features like movable furniture, to facilitate an adaptable environment conducive to collaboration and active learning, and movable whiteboards, which are used for collaboration and to define space (students immediately began using them to section off areas of the room to work on projects).

The most impactful change in the DisCo is all of the new technology that was purchased. The room is outfitted with projection capabilities using Crestron AirMedia technology, which offers wireless presentation functionality from any device. Twenty-six HP Elite tablets (complete with keyboards and mice) and four HDTVs (that have AirMedia as well as cable TV capabilities) were also purchased.

The theme of collaborative and active learning spaces and the technology that we use within them reverberated throughout the day: from the gamification of the student work environment at the Nease Library to the 3D printer in the exhibit hall (I won a 3D printed compass charm — NERCOMP’s logo), library spaces were definitely on my mind.

NERCOMP left me with a few questions: How can we best use new technologies in active learning spaces? In the DisCo, several technology issues arose immediately after the space was opened, and a LibGuide was created in response to those issues. This has some implications for launching an active learning space: perhaps faculty and student training sessions should take place before [or soon after] launch, and as much of the technology as possible should be tested during the planning stage (this was a direct recommendation from Elizabeth and Patricia, and something they wish they had known beforehand).

What will active learning in libraries look like in the future? There is a wide range of new technologies that can be appropriated for active learning use. They can be designed to support a myriad of library services and needs: for example, tablets can serve as personal computing devices, for collaboration purposes, or for presentations. Spaces equipped with networking, projection, and appropriate seating can also support students and faculty in conducting presentations, and in collaborating both formally and informally.

Further, these spaces can support active learning: for example, a librarian could more easily conduct an information literacy workshop in the space using projection equipment and tablets to allow the students to share their answers simultaneously.

Will active learning spaces become a staple in academic libraries? I believe so, but the challenge is to design spaces that support specific, substantial needs so that they are used regularly for their intended purpose. In my opinion, one way to meet this challenge is to engage in a participatory design process. If patrons are heavily involved in the process from the beginning, they will come away enthusiastic about using the space and will advocate for it.

These are only a few considerations; each library will have its own unique populations to serve and obstacles to overcome. I believe that if we remain collaborative when discussing and designing active learning spaces, we will succeed in providing them for our patrons.



[1]  The Program on Information Science is engaged in a number of research efforts related to active learning — including investigations of Makerspaces, and measurements of attention in massive online education systems. The program website links to classes, resources, and publications in these and other areas.

Apr 09, 8:49am

Caren Torrey,  who is a Graduate Research Intern in the program, reflects on the recent ACRL Conference:

ACRL 2015 – Gut Churn

Jad Abumrad, host and creator of RadioLab, gave a fantastic keynote speech at the ACRL conference on Thursday afternoon titled “Gut Churn.”  He used this term to describe the moment when creativity goes into a dark space: when you lose your perspective, maybe give up a little hope, when you are not sure of yourself, when your creative process fills you with anxiety.  Gut churn is Abumrad’s term for uncertainty.  Abumrad feels that this part of the creative process is key to overcoming hurdles and breaking through to an innovative answer.

Gut churn echoed throughout the ACRL conference (held March 25th-28th in Portland, OR).  Academic librarians embraced his speech; the term was repeated throughout the event.  This spirit of creativity and of embracing challenges fit well with the conference theme of sustaining community.

I was rejuvenated after hearing Abumrad’s speech.  Not only was I surprised that the keynote was like a private version of RadioLab — just for us (!) — but I was also relieved to hear that even the most successful, creative people are apprehensive when trying new approaches to old concepts.  As I navigate my path into the professional library world, I am embracing my own feelings of gut churn.

What resonated with me most about this conference were the specific challenges that libraries are currently facing: embracing new technology, outreach to faculty and students, education and information literacy, and demonstrating value.  Each of these issues was discussed in the context of the academic library.  In each case, there is a need for innovation and creativity that can only be achieved by pushing through uncertainty.  The uncertainties that libraries face include funding, the use of and lack of space, accelerated advances in technology, and the evolving role of librarians.

Many of the sessions that I attended discussed bringing new applications and e-resources to the library and the implementation of open educational resources. Librarians seemed excited about these changes.  They want to incorporate new ways of presenting, accessing and finding information.

I could also sense the gut churn in the room during these presentations.  The questions asked were about implementation and training: How do you get new technologies into your library?  How do you fund technology?  How do you keep up?  The anxiety and excitement expressed come hand-in-hand with bringing innovation into the workplace.

As a profession, librarians are excited about information.  We enjoy the feeling of wonder, the search for information, and the joy of finding the perfect answer.  Librarians should embrace our collective “gut churn” to seek out new paths for finding solutions to our environmental challenges.  Creative marketing and outreach to faculty and students can be approached as a collaborative exercise for all.  Using new methods of interactive technology will be vital to accelerating education and information literacy.  Our biggest challenge is demonstrating our value to our communities; perhaps we can incorporate open data and open resources to track our impact.

As a graduate research intern at MIT’s Program on Information Science, I am conducting research on early career scientists.  This includes investigating the ways in which researchers advance their scholarly reputation early in their careers.  I am exploring various methods of, and technologies for, sharing research and presenting oneself and one’s professional life via scholarly communication and social media.  This research has been extremely valuable to me as an incoming academic librarian.  Although the challenges vary by profession, building a name for yourself and your research is vital to a lasting, satisfying career.

The sessions for students and new professionals that I attended also echoed the overall feeling of the conference.  The gut churn and excitement in these sessions mirrored my own feelings: where do we fit in in this moment of change?  How do we effectively lead change as we enter the workplace?  Is the millennial generation of librarians really that different from the current professionals?  Am I going to get a job?

Overall, the ACRL conference felt like a success.  I learned that in order to make truly effective change, you have to embrace your uncertainty and learn from it.  After all, there are only two outcomes: success and failure.  Failure isn’t the end; it is a new beginning.

Feb 13, 10:39am

Kendra Albert,  who has served as research associate at the Harvard Law School; as an intern at the Electronic Frontier Foundation; as a fellow at the Berkman Center for Internet & Society; and is  now completing her J.D. at Harvard Law,  presented this talk  as part of the Program on Information Science Brown Bag Series.

Kendra brings a fresh perspective developed through collaborating with librarians and archivists on projects such as EFF’s response to DMCA 1201 and our PrivacyTools project.

In her talk, Kendra discusses the intersection of law, librarianship, and advocacy, focusing on the following question:

Archival institutions and libraries are often on the front lines of battles over ownership of digital content and the legality of ensuring copies are preserved. How can institutions devoted to preservation use their expertise to advocate for users? 


A number of themes ran through Kendra’s presentation:

  • Libraries have substantial potential to affect law and policy by advocating for legal change
  • Libraries enjoy a position of trust as an information source, and as an authority on long-term access for posterity
  • Intellectual property law that is created for the purpose of limiting present use may have substantial unintended consequences for long-term access and cultural heritage.

Reflecting on Kendra’s talk, and on the subsequent discussions…

The courts have sometimes recognized preservation as having value — explicitly, in formulating DMCA exceptions, and implicitly elsewhere. But the gap between the private value of content to its controller in the short term and its value to the public in the long term is both a strength and a weakness for preservation efforts.

For example, Kendra’s talk noted that the lack of a market for older games is an important factor in determining that distribution of that content is fair use — which works in favor of preservation. The talk also mentioned that the game companies’ short-term focus on the next release was a barrier to collaborating on preservation activities. These points seem to me connected — the companies would become interested if there were a market… but this would, in turn, weaken the fair-use consideration. Effective public preservation efforts must walk a tightrope — supporting access and use that is of value, while neither impinging on private value in the short term nor creating so much of a market for access that there is political pressure to re-privatize it.

Furthermore, it is well recognized that institutional legal counsel tends to be conservative … both to minimize risks to the institution as a whole, and to avoid the risk of setting precedent with bad cases.  It is clear from Kendra’s talk that librarians dealing with projects that use intellectual property in new ways should both engage with their institution’s legal counsel early in the process, and have some independent legal expertise on the library team in order to generate possible new approaches.

For more information you can see some of the outputs of Kendra’s work here: