Feb 26, 12:51pm

Alex Chassanoff is a CLIR/DLF Postdoctoral Fellow in the Program on Information Science, working to identify, understand, and describe baseline characteristics of software creation, use, and reuse in research libraries and archives, grounded in cases found across MIT.

Below is a (growing) compendium of resources related to software curation for collecting institutions.

What’s missing? Email me here!

I. Collecting/Acquiring/Appraising Software

Data Management, Planning & Policies

Cornell's Guide to Writing "Readme" Style Metadata: Templates, best practices, and guidance for creating "readme" files to accompany data sets and software.

Data Management Planning Tool (2011-present): An online application that helps researchers create data management plans.

Depsy (2015-present): Depsy helps users investigate impact metrics for scientific software, tracking research software packages hosted on CRAN (software repo for R programming language) or PyPI (software repo for Python-language software).

GNU Ethical Repository Criteria: Criteria for "hosting parts of the GNU operating system"; these can also be used to evaluate other repositories hosting free source code (and optionally executable programs too).

1st IEEE Workshop on Future of Research Curation and Research Reproducibility (2016): Summarizes workshop discussions and recommendations related to curation of research data, software, and related artifacts.

IFLA Key Issues for E-Resources Collection Development: A Guide for Libraries (2012): Overview for libraries that addresses some key issues in collecting “e-resources.”

Springer Nature Research Data Policies (2016): FAQ for researchers about data policies, data repositories, and sharing data.

Guidelines & Tools

Collecting Software: A New Challenge for Archives and Museums

Guidelines for Transparency and Openness Promotion in Journal Policies: "Established by the Open Science Framework, the TOP Guidelines provide a template to enhance transparency in the science that journals publish. With minor adaptation of the text, funders can adopt these guidelines for research that they fund."

How to Appraise and Select Research Data for Curation (2010): Discussion of appraisal concepts; geared towards research data but provides insight into practices for appraising software.

Media Stability Ratings (2018): Assigns a "media stability rating" to different media formats in an attempt to mitigate loss.

Stewardship of E-Manuscripts (2009): Compilation of tools that can be used in the acquisition and stewardship of born-digital materials.

Timbus Debian Software Extractor (2015): Tool to extract metadata for Debian software packages, developed as part of the Timbus Context Project.

II. Describing Data/Software/Environments

Descriptive Standards & Definitions

Asset Description Metadata Schema for Software: A metadata schema and vocabulary to describe software, making it possible to more easily explore, find, and link software on the Web.

Best Practices for Cataloging Video Games Using RDA and MARC21 (2015).

DataCite (2016-present): A metadata schema for the publication and citation of research data.

Data Documentation Initiative (2011-present): Standard to describe the data produced by surveys and other observational methods in the social, behavioral, economic, and health sciences.

DDI-RDF Discovery Vocabulary (2013): RDF vocabulary to support the discovery of micro-data sets (aka "raw data") and related metadata using RDF technologies.

Force 11 Software Citation Principles (2016): A consolidated set of citation principles that may encourage broad adoption of a consistent policy for software citation across disciplines and venues.

Software Ontology (2011): A resource for describing software tools, their types, tasks, versions, provenance, and associated data.

Trove Software Map: Classifies software by attributes including development status, environment, intended audience, name, natural language, operating system, programming language, and topic.

User Studies

Software Search is Not a Science, Even Among Scientists (2016): Survey of how researchers search for software, including the criteria they use to evaluate search results (e.g., how easy the software is to learn).

Examples of Cataloged Software/Data Sets/Repositories

JHU's Data Archive: Data and software associated with Seviour et al.

Computer History Museum's source code for the FORTRAN II compiler

re3data: Registry of research data repositories

III. Preserving Software

Case Studies & Reports

A Case Study in Preserving a High Energy Physics Application with Parrot (2015): Describes the development of Parrot, an application dependency capture program for complex environments.

Exploring Curation-Ready Software (2017): Report 1 by the Curation-Readiness Working Group at the Software Preservation Network.

Heritage.exe (2016): Cross-comparison case study of software preservation strategies at three US institutions.

Improving Curation-Readiness (2017): Report 2 by the Curation-Readiness Working Group at the Software Preservation Network.

Preserving and Emulating Digital Art Objects (2015): Reports on the results of an NEH-funded research project "to create contemporary emulation environments for artworks selected from the archive, to classify works according to type and document research discoveries regarding the preservation effort."

Preserving Virtual Worlds I, II (2007-2010; 2011-2013): The Preserving Virtual Worlds projects I and II explore methods for preserving digital games and interactive fiction.

Preserving.Exe: Toward a National Strategy for Software Preservation (2013): A report from the National Digital Information Infrastructure and Preservation Program of the Library of Congress, focused on identifying valuable and at-risk software.

SPN Metadata Survey (2017): Survey results on how institutions with digital preservation programs are using metadata to aid in preserving software.

Research Initiatives

The Digital Curation Sustainability Model (DCSM) (2015): JISC-funded project to highlight the key concepts, relationships, and decision points for planning how to sustain digital assets into the future.

National Software Reference Library (NSRL): The NSRL is designed to collect software from various sources and incorporate file profiles computed from this software into a Reference Data Set (RDS) of information.

PERSIST (2012-present): UNESCO-hosted initiative to "ensure long-term access to the World’s Digital Heritage by facilitating development of effective policies, sustainable technical approaches, and best preservation practices."

Software Preservation Network (SPN) (2013-present): Community of practitioners and researchers working to address the problems of how to preserve software.

Software Heritage (2016-present): Initiative whose goal is "to collect all publicly available software in source code form, replicate it massively to ensure its preservation, and make it available to everyone who needs it."

Tools, Applications, Best Practices & Standards

Library of Congress Recommended Format Statement for Software: "Identifies hierarchies of the physical and technical characteristics of software which will best meet the needs of all concerned, maximizing the chances for survival and continued accessibility of creative content well into the future."

National Archives' Strategy for Preserving Digital Archival Materials (2017): Overview of strategies used by NARA to preserve digital materials.

Obsolescence Ratings (2018): "This list categorizes the ease with which a range of formats that have been, or are, in common use in their fields can be read, in terms of the equipment available to do so."

Pericles Extraction Tool (2015-present): Extracts significant environment information from live environments to better support object use and reuse in the scope of long-term data preservation.

Preservation Quality Tool (2016-present): "This tool will provide for reuse of preserved software applications, improve technical infrastructure, and build on existing data preservation services."

Software Independent Archival of Relational Databases (SIARD) (2007): An open file format developed by the Swiss Federal Archives for the long-term archiving of relational databases; data can be stored long-term independently of the original software.

Feb 06, 10:35am

Alex Chassanoff is a CLIR/DLF Postdoctoral Fellow in the Program on Information Science. She has been conducting interviews with scholars across MIT’s campus who create, use, and/or reuse software to understand more about their scholarly practices. Below are snippets from an interview with Nick Montfort, a professor of digital media in the Comparative Media Studies and Writing section at MIT. Nick is also an interactive fiction writer, computational poet, and code studies scholar.

On Reconstructing Code

“So software or creative computing programs or research programs….these are the areas I work in.  There are different sorts of outcomes and some of them are important software produced at MIT, like Joseph Weizenbaum’s Eliza which is a very frequently cited research system and highly influential – Janet Murray named it the first “computer character.” It’s a simulated parody of a Rogerian psychotherapist….asking for you to speak about yourself, and then reflecting that back for you to hear.

One of the interesting things about this system from my perspective is that the original code doesn’t exist, but there’s a paper that describes its function in great detail.  So there are many, many re-implementations of it.  You can run it on the Commodore 64 and BASIC – there are programs to implement an Eliza-like system for that. So there’s not really a canonical Eliza in the way that there is a canonical Adventure.   The lack of preservation for software doesn’t always mean that – if you don’t have the original code or object – it doesn’t always mean that it’s not influential, important, able to be cited, able to be part of the intellectual discourse.  Of course, it presumably doesn’t HURT to have access to those works in any case.”
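The reflect-it-back pattern Montfort describes is simple enough that a toy version fits in a few lines, which helps explain why so many re-implementations of Eliza exist. The sketch below is illustrative only: the keyword pattern and reflection table are invented for this post and are not Weizenbaum’s original DOCTOR script.

```python
import re

# Toy Eliza-like exchange (illustrative only; not Weizenbaum's original
# MAD-SLIP program or his DOCTOR script).
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "your": "my", "you": "I", "yours": "mine"}

def reflect(text):
    """Swap first- and second-person words so a statement can be echoed back."""
    words = re.findall(r"[\w']+", text.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement):
    """Turn the user's statement into a Rogerian-style prompt."""
    match = re.search(r"\bi (?:am|feel|think) (.+)", statement.lower())
    if match:
        return "Why do you say that you are {}?".format(reflect(match.group(1)))
    return "Tell me more about {}.".format(reflect(statement))

if __name__ == "__main__":
    print(respond("I am worried about my research"))
    # -> Why do you say that you are worried about your research?
```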

On Emulation as Software Preservation

“An emulator is a software version of a computer. Some people find it very distasteful that the emulator is not the authentic hardware which is interesting to note….the way we see it, you can think about it as a particular edition OF a computer. In fact, the Commodore 64 that’s over there (points)  running that program right now is one edition but there are different editions of the C64 with different hardware.  So for example, there’s been a ROM revision to the Commodore 64, so it behaves a little bit differently depending upon which ROM revision you have.  So, in fact even when say, ‘the hardware,  it’s running on the hardware’…there’s more than one ‘the hardware!’ I think that’s even more obvious today.   So, for example, when you have a PlayStation 3 that is supposed to be compatible with a PlayStation 1 or 2 initially, but then that feature is dropped as they refine the production of it….”

A Close Reading of a Commodore 64 Keyboard


“You can see a lot about the layout of the keyboard which is different from modern keyboards.  So if you tried to type in this program that I initially typed in, one thing that you might find funny about it is that if you press shift plus…you need the shift to type plus on a modern typewriter…you get this large cross symbol that doesn’t work – it’s not a plus sign – it’s a special graphical character… the keyboard layout is different in several ways…you have a pi symbol on the keyboard, you don’t have curly braces, you have the arrow keys are in the bottom right and you need to press shift to move up and shift to move left….so maybe these are all curiosities, but when you start to use the system, they change your experience of it.  The other thing is that these graphical characters, including the ones you see on here, are characters you can just type, along with other graphical characters.  You can type them into a program or directly at the BASIC interpreter – you can deal with it quite easily…

The thing about the hardware version then is just from the standpoint of the keyboard, you can see the keyboard is different.  It wasn’t standardized in the way that our Mac and PC keyboards are today, but it also provided these extra facilities like the curious character set of the Commodore 64 was exposed to you because it was actually visible on the keyboard – you could see what the different characters were.  And when you work in an emulator….well, first of all you have to figure out how you want your key mappings to be.  For example, if you’re a Commodore 64 touch typist, you might want your keyboard to be set up in the same physical layout as the C64, but mostly people chose a logical layout where, for instance, if you press shift plus on your keyboard it’s going to correspond to the plus sign on the Commodore 64.  So, you have these issues with setting up the keyboard – that’s one of the reasons why emulation is better suited for joystick games, where it’s a pretty straightforward mapping, rather than using the keyboard in elaborate ways.  On the other hand, if you do want to use an emulator, it provides these extra facilities.  So, you can save the full state of the machine at any point.  So if you look at something more intricate and wanted to show how a word processor or GEOS (the Macintosh-like operating system for the C64) or an elaborate game that has a lot of state….if you want to show how these things worked, then you probably want to save a particular point and you might not always have the capability for doing this within the software itself, but the emulator would allow you to say, ‘Ok, we’ll just take the full machine state’ and will allow a classroom working together or students individually or scholars to come back to that.”

On Temporality and Games

“I don’t go very often to play old games, actually…I fear I’m more of a collector (laughs) although I am interested in the ability for people to use these, rather than for their preciousness and economic value.  When people came to play A Mind Forever Voyaging, we did some videography.  It’s a 1985 Infocom game and it’s very easily played on modern-day computers.  But what I did is I set up for a group of four people the first official Infocom edition of the game to run on the Apple IIc.  And then over on this large screen, I connected a computer with the most recent (although it’s pretty old) official Infocom release.  Activision released this Masterpieces of Infocom for MS-DOS / Windows 3.1 / Windows 95 at some point in the late 90s.  And I had this running in DOSBox essentially.  So they had their choice between playing these…or both of these…and the group decided they wanted to play on the Apple II and they remarked on some specific material differences there.

One of the things that’s interesting is that the pace of play is different – you don’t have a multi-tasking machine, it’s not connected to the internet, you can’t go and look for hints…you can go and look on your phone, of course, but you don’t have it easily available to you. Additionally, you don’t have the same very rapid pace.  I watched students playing interactive fiction recently and not stopping to read the text outputs, just sort of powering through typing commands.   On the Apple 2 when you type a command, there would be a little pause before you get a response.  If you type something that’s completely not understood or not useful, you would get a response back fairly quickly.  And then if you did something interesting that changed the state of the game or required disk access, then there would be a longer pause — the disk would spin up, and for players, what I remember and what people report is that there is this moment of anticipation – like ‘Oh Something Is Going to Happen Now! It’s So Exciting!’ So the material qualities of the system there make some sort of difference in play.  I think it’s also why people would play interactive fiction pieces that took maybe ten or twelve hours to work through in the 1980s.  People spend that much time playing games, but interactive fiction specifically is much more abbreviated in comparison to that.  Now people make 2 hour 15 minute games that are for briefer play – people still enjoy engaging with the form – eighty games were released at the IF Competition this year.”

On Authenticity and Networked Everything

“At a classic gaming expo, there was this setup with a big wood-grained cathode ray tube television, and like a really ugly 1970s couch with Atari cartridges on a coffee table and a system in front… and of course it’s in the middle of a convention center, not in someone’s house and you could sit down and play the games in this reconstructed sort of context. So people can always build more or less context around things, to give different sorts of ideas.  We can’t reconstruct even the 70s or 80s in great detail and certainly as you go further back in the history of material texts or literary or gaming or cultural history, it’s very tough to do.  So I think that there are certain things that people are going to encounter because of historical interest and as scholars.  Their engagement with it might be limited and that’s fine, they also might bring ideas back into the mainstream. So for instance, one of my points in showing people the Commodore 64 is that you can turn it on, you can write a one-line program like this… it’s not just historical curiosity about the Commodore 64.  There are a bunch of reasons for this.  It didn’t come with a disk drive, you needed to purchase it separately, which allowed for the up-selling of it.  And it allowed for lower cost of that one unit that didn’t have moving parts and so forth.  But it did have BASIC built in, which was the case with essentially all home computers at the time and that programming language did facilitate this immediate exploration of what you could do with computing, being able to do very small scale programs.  Some people would type in pages-long programs from magazines or books and not have any way to save them! So when you turned off your computer it was gone! But it took a long time to type this in, and you might make mistakes and have to go correct it, and then you could play the game afterwards, but as soon as you turned the computer off it was gone but the whole process of doing this engaged you with programming and computing in ways that aren’t as possible now.

Of course, there are people who did engage with the early World Wide Web that way, they went to ‘view source’, they looked at how HTML was put together and that’s how they learned.  There’s no view source in the App store…there was ‘view source’ in the 90s, there still is, and this ability to turn something on and immediately type in a short program and make changes to it, work with it, is not something that I bring up…when people come in and sometimes students say I’d like to take your course and it says no programming experience is required but I’m worried that I don’t have programming experience, and I say, ‘Well, sit down at the Commodore 64 and let’s program some.’ And in fact it’s not that much of a challenge when it’s posed that way.  So, it’s still something that is useful today and it’s still also useful as a design critique of current computers. While we’ve added a lot of capabilities, certainly the Commodore 64 is not better at accessing social networks, video editing, etc… but we’ve lost some of the ability to work with computation in direct and useful and powerful ways.  And I’m not sure that an emulator accomplishes that – I think sitting down at a Commodore 64 accomplishes that in a different way, because by the time you have installed the emulator and opened it up and your keyboard doesn’t match etc., you now have made things into a much harder problem than they originally were.”

On Curating Software-Driven Works: Autofolio Babel 

“This is Autofolio Babel or Portfolio Babel, you could also say, it’s based on Jorge Luis Borges’ Library of Babel – there are a lot of computational projects on this.  One of the things about this piece is that Borges defines quite specifically how the books are supposed to look: that they are 80 characters wide and 40 characters tall, arranged in a square… and Borges specifies a 24-character alphabet with some punctuation symbols. Instead of using this alphabet, I used a unigram distribution of Borges’ story itself in Spanish.  So the most likely thing that one would see coming up on the screen would be a page from Borges’ story, and if you look closely you can probably see, because of accent marks maybe if you study it for a while,  you can tell that it’s Spanish language text in its origin.
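To make the generative logic Montfort describes concrete, here is a rough sketch (in Python, for illustration only; the actual piece is a web page) that builds a unigram, or single-character, distribution from a source string and samples an 80-by-40 page from it. The source string below is an invented stand-in, not the Spanish text of Borges’ story.

```python
import random
from collections import Counter

# Illustrative sketch of the logic described above: build a unigram
# (single-character) distribution from a source text and sample an
# 80-by-40 "page" from it. The source string is a placeholder, not the
# actual Spanish text of Borges' story used by the piece.
SOURCE = "una pagina de babel una biblioteca de paginas posibles"

counts = Counter(SOURCE)
chars = list(counts)
weights = [counts[c] for c in chars]

def page(width=80, height=40):
    """Return one randomly sampled page as a list of lines."""
    return ["".join(random.choices(chars, weights=weights, k=width))
            for _ in range(height)]

if __name__ == "__main__":
    print("\n".join(page()))
```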



Screenshot, Una página de Babel

So there’s a piece of software, each of these screens is driven by a Raspberry Pi Zero and this is just a program, it goes much slower than if it runs on a standard, much larger computer – I’ve rotated the screen at the HTML level – the material aspects of this are a bit different – we have a folio here (two screens), and here (two computers), it’s one folio that generates another folio, although this folio is powering this folio down here. They really generate each other.

One of the ways in which this work might be presented is on a table, possibly in front of a chair, or at a lectern, in a way that is suitable to its nature as a book object rather than some other type of screen.  So it would be similar to the kind of curation that people do with video art and to have that kind of care with a piece like this.  There are elements of these pieces that will wear out.  And thinking about if you were curating [Nam June Paik’s] Electronic Superhighway – it has like 170 CRTs and you can’t just say I’ll throw in a flat panel if one of them goes out…most people can, but not people who curate video art.

It’s not really a software concern at this point, but rather a system concern for a system that includes software.  And having Babel as the software component work – that’s more or less a subset.  I wouldn’t want someone to take video of this and put that video out as a ‘preservation method.’  This needs to be a functioning computing machine for this to work, so the software preservation would be part of it from my standpoint.

So I would want the ability to actively compute and recombine…and then one could do various things…in the same way that if your book wears out, you have some type of manuscript or print codex that is damaged or something, you can think about how you would restore this if it were a book? So you can obviously rebind books, in this example, maybe it would be the opposite of binding — maybe you replace the screens, but keep the casing and power apparatus if there were some problem there.  Certainly, if you needed to replace capacitors, most people wouldn’t say that would be problematic. It sort of gets into being a Ship of Theseus problem… of how much replacement effaces the original. This is an interesting case, but it’s something I would consider within book arts/art curation.  I would say librarians and special collections have a particular perspective on it, and art curators would have another.”  

Describing Autofolio Babel (currently in the Trope Tank at MIT)

“Autofolio Babel consists of these two Dell displays. They are the same model, logos in the front are covered with gaffers tape, these are salvaged…everything here is salvaged…I bought the Raspberry Pis at some point but not for the purpose of making this particular piece. So this is a type of bricolage maybe…one of the ways you could describe the media of the piece is reused electronics.  These have two monitors that are detachable from these stands, but they are both on the stands that come with them. There are two Mini-HDMI to HDMI male to male cables. There are two micro USB to USB male to male cables. There are two Raspberry Pi Zeros – a very early model.  There’s 8 GB SD cards, two of those.  There’s two of everything because it’s a folio.  And these are bound together with two wire twist ties – and there are two power cords which go from the monitors to a standard 125 volt power supply.  So the SD card has a Raspberry Pi image and that’s an image that is set up to automatically start.  It’s a fairly standard image, but there are a few important changes that are made so it starts a browser.  In this case, it starts Chromium in a particular mode where it doesn’t pester and ask you about unlocking your password and stuff; and it sets it to full screen and runs. It also turns off screen blanking, power saving, and screen saving.  So this will run as long as this is on and then the piece itself that’s in there is a free software piece – it’s a single webpage that is almost the same as the one that’s online at – the change really is just rotating this page.

If I were to sell this to a collector, for instance, they would….I’m trying to think of what the licensing situation would be…there is a slight customization I’ve made to a free software piece, but there’s nothing that the collector would be able to do that would restrict the basic software from being freely available as it is now…and also able to be modified.  People can make their own versions, they can make their own work out of it, as has happened at least once.  So I’ll just show you…..this is just an operating system, that’s Chromium… I haven’t hooked up a mouse, just hooked up a keyboard, but in fact you don’t really need a mouse because you can get to most things on the keyboard here.  So this doesn’t have networking – it’s not on the network and this particular piece is to be read in a certain way for certain values of reading.  This is easier to manage since this is not a networked artifact – it doesn’t receive updates – there are not security issues with it – you can go in and mount this card read only and go through the whole image if you wanted and get the information you wanted, or copy it and go through it.”

On Authorship and Code Modification

“For my dissertation, I created a research interactive fiction system called Curveship with its own domain – so you could do everything you expect to do with interactive fiction, but it wasn’t deployable.  You couldn’t make a game you could give to other people.  So for that reason or other reasons, it never took off for people to use.  But that’s a larger system with thousands of lines of code – in theory it would be a platform for work.  Most of my work is considerably smaller- a page or line of code – these are online for people to use and modify.   Taroko Gorge is an example of something I wrote in Python when I was in Taiwan years ago, and after that made a JavaScript version of it.  And people began to modify that JavaScript version and put in their own words, without having a lot of expertise as programmers or identifying as programmers.  And they started to make their own “remixes” of that work, so there’s dozens of those that are available online.   To me, they don’t really threaten the integrity of the original work. I suppose there’s a possibility that someone could be confused that someone’s later modification might be something I did somehow.  But given the whole context of computing, the real concern is that people are intimidated and don’t think that things are open to modification – I see that it’s much more urgent to make that work available.  

I have a project called Memory Slam which is a slowly-growing collection of classic systems – classic and simple versions that I’ve re-implemented.  So, I’ve made Python versions and I’ve made JavaScript versions…there’s six of those pieces now.  I created this so that people could study and modify these systems but they are not close material re-makings of the systems.  So David Link took an exhibit on tour where he rebuilt Ferranti Mark 1 (the world’s first commercially-available electronic computer) and had things functioning very much like the original Christopher Strachey Loveletter Generator and for the people who got to go to that exhibit, great, but there’s another experience to be able to study and modify the way that code like that functions.  So, for example, could you make a love-letter generator into something that expresses dislike or hatred of someone? Could you make a love-letter generator about food? To what extent are the formal properties of the system susceptible to various changes?  
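The kind of modification Montfort invites can be illustrated with a toy combinatory generator: keep the formal structure, swap the word lists. This sketch is not Strachey’s program or Montfort’s Memory Slam re-implementation; the template and vocabularies are invented to show how little must change to move from a love letter to a letter about food.

```python
import random

# Toy combinatory letter generator (invented word lists and template; not
# Strachey's 1952 program or Montfort's re-implementation). Swapping the
# word lists is all it takes to change the subject of the output.
LOVE = {
    "adjectives": ["darling", "tender", "ardent"],
    "nouns": ["longing", "devotion", "heart"],
    "verbs": ["cherishes", "adores", "yearns for"],
}
FOOD = {
    "adjectives": ["buttery", "piping-hot", "fragrant"],
    "nouns": ["dumpling", "crust", "broth"],
    "verbs": ["savors", "craves", "devours"],
}

def letter(words, sentences=3):
    """Fill the same sentence template repeatedly from the given word lists."""
    lines = []
    for _ in range(sentences):
        lines.append("My {adj} {noun} {verb} your {adj2} {noun2}.".format(
            adj=random.choice(words["adjectives"]),
            noun=random.choice(words["nouns"]),
            verb=random.choice(words["verbs"]),
            adj2=random.choice(words["adjectives"]),
            noun2=random.choice(words["nouns"]),
        ))
    return " ".join(lines)

if __name__ == "__main__":
    print(letter(LOVE))   # sentimental register
    print(letter(FOOD))   # same formal structure, different subject
```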


Screenshot, Love Letter Generator

So when I redid these, the point was mainly to make them available for that type of study, modification, play…I think they are good formal models of those original systems, but they are not capturing all the material qualities.  And the reason I mention all this about Memory Slam is that probably it would make sense to put new versions of that code up – I have Python 2 code – and it might be useful to add Python 3 or somehow find something that could work in both versions.  I could make cleaner HTML and JavaScript versions. And if I do this – is there a point to keeping the original version and how would that be kept?”
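For small text generators like the Memory Slam pieces, one common way to get a single file running under both Python 2.7 and Python 3 is to lean on __future__ imports. A minimal sketch of that pattern (with placeholder vocabulary, not Montfort’s actual code):

```python
# Minimal pattern for a small text-generation script that runs unchanged
# under both Python 2.7 and Python 3 (a general technique, not Montfort's
# actual Memory Slam code).
from __future__ import print_function, unicode_literals, division

import random

WORDS = ["forest", "mist", "ridge", "stone"]  # placeholder vocabulary

def line():
    """Return one generated line of text."""
    return "the {0} meets the {1}".format(random.choice(WORDS),
                                          random.choice(WORDS))

if __name__ == "__main__":
    for _ in range(4):
        print(line())
```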

Dear Reader, I Was Hoping He Would Tell Me

“So one thing I could do is include the git repository in the directory itself that’s available to anyone – so if you really care to know the history of it…you can review that.  When I worked on Curveship, I used Subversion.  Sometimes, it’s rather heavy and sometimes you don’t know whether you will be done with something in 30 minutes or whether it might be a project of several weeks. And you don’t know with a small scale work, do you want to create a branch where you are exploring that you might merge in? This version control perspective is often quite elaborate for very small scale projects.”

On Distributional Poetics

“This 10-print program, which is a random maze generator, is an example of a particular type of distributional poetics, where you see there’s two symbols and in this case, picking from them is equally likely… and that’s a concrete poem or visual art piece that’s made that way. You can make things with words or with lines or syntactically with phrases as well.  There is a shift both as a reader or appreciator of this work from an aesthetic perspective, and as a maker of this work.  It’s that both perspectives need to be…it’s only meaningful if they are attuned to the distributional nature of the work.
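The two-symbol, equal-probability distribution Montfort describes is the entire logic of the 10 PRINT maze program. A Python analogue, with Unicode diagonals standing in for the Commodore 64’s PETSCII graphics characters, makes that distributional structure explicit:

```python
import random

# Python analogue of the one-line Commodore 64 BASIC maze program:
# repeatedly emit one of two diagonal characters, each with probability 0.5.
# Unicode box-drawing diagonals stand in for the PETSCII graphics characters.
def maze(width=40, rows=20):
    for _ in range(rows):
        print("".join(random.choice("╱╲") for _ in range(width)))

if __name__ == "__main__":
    maze()
```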

So Borges’ description of the Library of Babel is one in which you have an exhaustive library and some pages might be ripped but there’s always a page that is one character different somewhere else in the library, right? So the idea of an exhaustive library in which every possible page like this, every volume containing these pages is represented, and this is a distribution of analog…it’s important also that even though you don’t see this in the work, on the web it makes more sense but these are pages…they are web-pages…so that is something that metaphorically connects through the web to Borges’ idea. So if you come to this thinking ‘it’s a loop of video’ rather than ‘it’s producing every possible arrangement of these letters’ then I don’t see how your aesthetic perspective on it would be particularly useful – or would allow you the fullest appreciation of it.  I think there are ways in which we are readers of distributions and ways in which we are writers of distributions, and this is keeping things fairly simple, because if you start with existing stores of text and process them, that’s something else. But here we are just talking about a simple distribution system and just processing them right? So the poetics question is – how do we present this in such a way and how do we make this in such a way that it has the inter-textual connections and the metaphorical connections? It is a page, it connects to the description, it implements the specification of Borges’ story in one way but not in another way…and so forth.

So the poetics of this piece have to do with the physical organization of it, what’s shown to someone who is viewing it.  There are certain things… it has a title that evokes something about book arts, for instance, and so a person who knows something about digital media art and something about book arts might know there are things that appear on screens that aren’t videos and might be more aesthetically prepared to receive this.” 

Preservation as Play Back?

“So there’s also the ability to document things.  Compared to documenting a play, it would be significantly harder to have video documentation of a play in part because when you get video documentation it interferes with the production of the play – with the actors putting it on. Here you can just go and take video of this and see what the piece looks like, pretty much, as documentation, but you are not preserving the object any more than taking a good photograph of a painting is preserving a painting.  The archival perspective is often coming from record keeping…in this case, the informational content or the record content is maybe not the main thing going on.”

What is the Scholarly Object? What Should we Preserve? 

“Let’s make a distinction between traditional scholarship and creative practice – so in this piece (Una página de Babel) the software component is referred to by Álvaro Seiça in his PhD and some of his work was actually modifying this piece. So from that standpoint, it enters traditional scholarship, just as there has been practice-based scholarship with other pieces of mine. So in order to follow the arguments that Álvaro makes, in order to follow the discussions in the “great conversation” – what types of software preservation should be done….well, this goes back to Joseph Weizenbaum. The version we have for his system is a LISP implementation that some people call the original, but he didn’t write it in LISP, he wrote it in Michigan Algorithm Decoder, this system called MAD, the code may be around….it might be in the archives….but the core of what was needed was his representation of how that system worked in his paper.  Now could we learn more about the specifics of this — the type of implementation he did, what his process of development was–  if we had that code….yes, of course, that would be very useful.  And we have snippets of example interactions.  But at some point there were lots of these and they were on Teletypes so they were actually in a medium that, if that hadn’t been discarded…there could be a box of transcripts with Eliza that is sitting in the Institute Archives right now.”


Dec 22, 12:39pm

Ada van Tine is a Graduate Research Intern in the Program on Information Science, researching the area of library privacy.


Our Libraries and Neurodiversity

By Ada van Tine

Andover-Harvard Theological Library Stacks by Ada van Tine

It is a quiet day at the library where you work, and you find it peaceful. But that is not the case for everyone. One of your patrons, Anna, is an 18-year-old woman who falls on the autism spectrum. She needs to do research for her college final paper on W.E.B. Du Bois. She lives with her parents near the school and library, but their house is noisy and full of visiting relatives right now. Anna, however, does not consider the library a calm alternative; she is very nervous about going there because the fluorescent lights irritate her intensely, their buzzing endlessly permeating her brain and causing nausea. To cope with this she often makes repetitive movements with her hands. In the past, librarians and other patrons have been awkward with her because of her hand movements and her reaction to the lights. But she really needs these books for her paper. What will you do as a librarian to help meet this patron's needs? For individuals who are members of a neurominority, libraries can be extremely stressful, upsetting, and in the worst cases traumatic.

In libraries, we understand that we need to accommodate people who are different, but the problem is that sometimes we are not aware of who we might be failing to serve and why. If Anna gives feedback about the library in a suggestion box, you might well schedule a replacement of the fluorescent lights as part of the library's renovations. That is a small step toward progress; however, we should not wait for an invitation to make our libraries more bearable, leaving the chance that some patrons might be suffering in silence in the meantime. Librarians need to be radically proactive so as not to make their spaces welcoming only to the part of the population with neurotypical leanings. The solution, however, is not merely a focus on those who are "different" and need some kind of special accommodation.

Rather, the researchers and advocates who talk about neurodiversity now stress that neurodiversity is "the idea that neurological differences like autism and ADHD are the result of normal, natural variation in the human genome" (Robison, What is Neurodiversity?). Simply said: all humans fall on neurological spectra of traits, and all of us have our own variances from the norm. For each person in the world there exists a different way of perceiving and interacting with other people and information. For instance, people with dyslexia, people with autism, people with ADHD, and people who have not had a good night's sleep all perceive the world and the library differently. The concept of neurodiversity is another way to recognize that.

Furthermore, new research is continually helping us to evolve our ideas about neurodiversity. Therefore, libraries should stay abreast of advancements in technology for the neurodiverse population because they will benefit every patron. “Actively engaging with neurodiversity is not a question of favoring particular personal or political beliefs; rather, such engagement is an extension of librarians’ professional duties insofar as it enables the provision of equitable information services” (Lawrence, Loud Hands in the Library, 106-107). Librarians are called through the ALA Core Values of Access and Diversity to make all information equitably available to all patrons. To not recognize the existence of neurodiversity would be to ignore a segment of the whole society which we are called to serve.

There are immediate ways that your library can better serve a larger portion of the neurodiverse population. For example, below are some relatively low cost interventions:

  • For dyslexic individuals, have a small reading screen available. Research has shown that those with dyslexia can read more easily and quickly off of smaller screens with small amounts of text per page (Schneps).
  • Audiobooks, text-to-speech, and devices that can show text in a color gradient also help dyslexic patrons with their information needs.
  • For people who are on the autism spectrum, replace the older fluorescent lights in the library, and don't focus solely on open collaborative spaces in the library layout (Lawrence, Loud Hands, 105). Also train yourself and your employees to recognize and know how to react appropriately to autistic individuals who may express non-verbal body language such as repetitive movements (Lawrence, Loud Hands, 105).
  • For people with ADHD, have quiet private rooms available so they can better concentrate at the library, as well as audiobooks and text-to-speech programs so that they can listen to their research and reading while doing other things (Hills, Campbell, 462).
  • Train staff never to touch a person who is on the autism spectrum without their explicit permission, to be aware of their sensory needs, and to hold interviews in a quiet place with no background noise (such as an office fountain) and no fluorescent lights. Some people on the autism spectrum are also smell-sensitive, so ask staff to refrain from wearing perfume.

New technologies and findings in cognitive science are being developed to better adapt to those individuals who are members of a neurominority. For example, a new reading program being developed by Dr. Matthew Schneps combines a reading acceleration program with compressed text-to-speech and visual modifications, and has so far been shown to substantially increase the reading speed of dyslexic and non-dyslexic readers alike (Schneps). There are many studies on the ways in which modern technology can be used to better communicate with and educate autistic students. The future is hopeful.

Addressing neurodiversity in our libraries and in our societies is not a solved problem. For example, there is research and development being done to reframe digital programs as an ever-growing ecosystem, never in stasis, so that they may better adapt to every user's needs, and to be transparent about the metadata of programs so that users can know which parts of the system are enabling or disabling their assistive technology (Johnson, 4). There are many steps that can help make the library more friendly to a neurodiverse audience, but the most important thing to keep in mind is that we must all plan to change and adapt, now and over time, to make our society a better, more livable place for everyone. Then, when Anna comes to do her research, the library and its staff will be prepared to be a little more welcoming than she expected, and maybe she'll even want to come back.

What to do next:


You may feel overwhelmed by the vast and complicated nature of this important task. The first step is always to educate yourself and get a grounding in basic literature about a subject. Many resources are included in the next section to aid in this discovery process.

You may wish to start off by learning about neurodiversity in general (What is Neurodiversity?, Definition of Neurodiversity). If you've identified a specific population need in your community, you may want to dig deeper with resources specific to that neurominority; here are a few (Autism Spectrum, ADHD, Dyslexia).

There are some good books and articles specifically about neurodiversity and libraries included in the resources (Library Services for Youth with Autism Spectrum Disorders, Programming for Children and Teens with Autism Spectrum Disorder, Loud Hands in the Library, Neurodiversity in the Library).

As it turns out, there is a lack of literature relating to best practices and programming in libraries in reference to neurodiversity. However, to understand and engage with this topic and community, librarians should consider attending events and workshops; a number held by advocacy and research organizations are included below (ADHD, Dyslexia, The A11Y project, International Society for Augmentative and Alternative Communication, The Center for AAC and Autism).



Reference List

The American Association of People with Disabilities. Retrieved from

Autistic Self Advocacy Network. Retrieved from

The A11Y project. Retrieved from

Campbell, I., & Hills, K. (2011). College Programs and Services. In M. DeVries, S. Goldstein, & J. Naglieri (Eds.), Learning and Attention Disorders in Adolescence and Adulthood (pp. 457-466). Hoboken, New Jersey: John Wiley & Sons, Inc.

The Center for AAC and Autism. Retrieved from

Children and Adults with Attention-Deficit/Hyperactivity Disorder (CHADD). Retrieved from

Eng, A. (2017). Neurodiversity in the Library: One Librarian’s Experience. In The Library With The Lead Pipe, 1.

Farmer, L. S. J. (2013). Library Services for Youth with Autism Spectrum Disorder. Chicago: American Library Association.

How Educators Can Help Autistic People by Sensory Accommodations. Retrieved from

International Dyslexia Association. Retrieved from

International Society for Augmentative and Alternative Communication. Retrieved from

Johnson, Rick. (2017, Sept 25). Accessibility: Ensuring that Edtech Systems Work Together to Serve All Students. Educause Review. Retrieved from


Klipper, B. (2014). Programming for Children and Teens with Autism Spectrum Disorder. Chicago: American Library Association.

Lawrence, E. (2013). Loud Hands in the Library. Progressive Librarian, (41), 98-109.

Neurodiversity. Retrieved from

Ploog, B. O., Scharf, A., Nelson, D., & Brooks, P. J. (2013). Use of computer-assisted technologies (CAT) to enhance social, communicative, and language development in children with autism spectrum disorders. Journal of Autism and Developmental Disorders, (2), 301. doi:10.1007/s10803-012-1571-3

Robison, John Elder. (2013, Oct 7). What is Neurodiversity? Psychology Today. Retrieved from

Schneps, Matthew H. (2015). Using Technology to Break the Speed Barrier of Reading. Scientific American. Retrieved from

Dec 01, 9:09am

A History of the Internet : Commentary on Scott Bradner’s Program on Information Science Talk

Scott Bradner is a Berkman Center affiliate who worked for 50 years at Harvard in the areas of computer programming, system management, networking, IT security, and identity management. He was involved in the design, operation, and use of data networks at Harvard University from the early days of the ARPANET and served in many leadership roles in the IETF. He presented the talk recorded below, entitled, A History of the Internet — as part of the Program on Information Science Brown Bag Series:

Bradner abstracted his talk as follows:

In a way the Russians caused the Internet. This talk will describe how that happened (hint it was not actually the Bomb) and follow the path that has led to the current Internet of (unpatchable) Things (the IoT) and the Surveillance Economy.

The talk contained a rich array of historical details — far too many to summarize here. Much more detail on these projects can be found in the slides and video above, in his publications, and in his IETF talks. (And for those interested in recent Program on Information Science research on related issues of open information governance, see our published reports.)

Bradner describes how the space race, exemplified by the launch of Sputnik, spurred national investments in research and technology — and how the arms race created the need for a communication network that was decentralized and robust enough to survive a nuclear first-strike.

Bradner argues that the internet has been revolutionary, in part because of its end-to-end design. The internet as a whole was designed so that most of the "intelligence" is encapsulated at host endpoints, connected by a "stupid" network carrier that just transports packets. As a result, Bradner argues, the carrier cannot own the customer, which, critically, enables customers to innovate without permission.

ARPANET, as originally conceived, was focused on solving what was then a grand challenge in digital communications research: To develop techniques and obtain experience on interconnecting computers in such a way that a very broad class of interactions are possible, and to improve and increase computer research productivity through resource sharing.

Bradner argues that the internet succeeded because, despite the scope of the problem, solutions were allowed to evolve chaotically: ARPA was successful in innovating because it required no peer review. The large incumbent corporations in the computing and networking field ignored the internet because they believed it couldn't succeed (and they believed it couldn't succeed because its design did not allow for the level of control and reliability that the incumbents believed necessary to make communications work). And since the internet was viewed as irrelevant, there were no efforts to regulate it. It was not until after the internet achieved success and catalyzed disruptive innovation that policymakers deemed it "too important to leave to the people that know how it works."

Our upcoming Summit, supported by a generous grant from the Mellon Foundation, will probe for grand challenge questions in scholarly discovery, digital curation and preservation, and open scholarship. Is it possible that the ideas that could catalyze innovation in these areas are, like the early Internet, currently viewed as impractical or irrelevant?

Nov 07, 8:55am

Jefferson Bailey is Director of Web Archiving at Internet Archive. Jefferson joined Internet Archive in Summer 2014 and manages Internet Archive’s web archiving services including Archive-It, used by over 500 institutions to preserve the web. He also oversees contract and domain-scale web archiving services for national libraries and archives around the world. He works closely with partner institutions on collaborative technology development, digital preservation, data research services, educational partnerships, and other programs. He presented the talk recorded below, entitled, Safety Nets: Rescue And Revival For Endangered Born-digital Records — as part of Program on Information Science Brown Bag Series:

Bailey abstracted his talk as follows:

The web is now firmly established as the primary communication and publication platform for sharing and accessing social and cultural materials. This networked world has created both opportunities and pitfalls for libraries and archives in their mission to preserve and provide ongoing access to knowledge. How can the affordances of the web be leveraged to drastically extend the plurality of representation in the archive? What challenges are imposed by the intrinsic ephemerality and mutability of online information? What methodological reorientations are demanded by the scale and dynamism of machine-generated cultural artifacts? This talk will explore the interplay of the web, contemporary historical records, and the programs, technologies, and approaches by which libraries and archives are working to extend their mission to preserve and provide access to the evidence of human activity in a world distinguished by the ubiquity of born-digital materials.

Bailey eloquently stated the importance of web archiving: “No future scholarship can study our era without considering materials published (only) on the web.” Further, he emphasized the importance of web archiving for social justice: Traditional archives disproportionately reflect social architectures of power, and the lived experiences of the advantaged. Web crawls capture a much broader (although not nearly complete) picture of the human experience.

The talk ranged over an impressively wide portfolio of initiatives — far too many to do justice to in a single blog post. Much more detail on these projects can be found in the slides and video above, in Bailey's professional writings, and on the Archive blog, experiments page, and Archive-It blog.

A unified argument ran through Bailey's presentation. At the risk of oversimplifying, I'll restate the premises of the argument here:

  1. Understanding our era will require research, using large portions of the web, linked across time.
  2. The web is big — but not too big to collect (a substantial portion of) it.
  3. Providing simple access (e.g. retrieval, linking) is more expansive than collection; enabling discovery (e.g. search) is much harder than simple access; and supporting computational research (which requires analysis at web-scale, and over time) is much, much harder than discovery.
  4. Research libraries should help with this (hardest) part.

I find the first three parts of the argument largely convincing. Increasingly, new discoveries in social science are based on analysis of massive collections of data generated as a result of people's public communications, and depend on tracing these actions and their consequences over time. The Internet Archive's success to date establishes that much of these public communications can be collected and retained over time. And the history of database design (as well as my and my colleagues' experiences in archiving and digital libraries) testifies to the challenges of effective discovery and access at scale.

I hope that we, as research libraries, will step up to the challenges of enabling large-scale, long-term research over content such as this. Research libraries already have a stake in this problem because most of the core ideas and fundamental methods (although not the operational platforms) for analysis of data at this scale come from research institutions with which we are affiliated. Moreover, if libraries lead the design of these platforms, participation in research will be far more open and equitable than if these platforms are ceded entirely to commercial actors.

For this among other reasons, we are convening a Summit on Grand Challenges in Information Science & Scholarly Communication, supported by a generous grant from the Mellon Foundation. During this summit we will develop community research agendas in the areas of scholarly discovery at scale, digital curation and preservation, and open scholarship. For those interested in these questions and related areas, we have published Program on Information Science reports and blog posts on some of the challenges of digital preservation at scale.

Oct 06, 9:17am

Labor And Reward In Science: Commentary on Cassidy Sugimoto’s Program on Information Science Talk

Cassidy Sugimoto is Associate Professor in the School of Informatics and Computing, Indiana University Bloomington, where she researches within the domain of scholarly communication and scientometrics, examining the formal and informal ways in which knowledge producers consume and disseminate scholarship. She presented this talk, entitled Labor And Reward In Science: Do Women Have An Equal Voice In Scholarly Communication? A Brown Bag With Cassidy Sugimoto, as part of the Program on Information Science Brown Bag Series.

In her talk, illustrated by the slides below, Sugimoto highlights the roots of gender disparities in science.


Sugimoto abstracted her talk as follows:

Despite progress, gender disparities in science persist. Women remain underrepresented in the scientific workforce and under rewarded for their contributions. This talk will examine multiple layers of gender disparities in science, triangulating data from scientometrics, surveys, and social media to provide a broader perspective on the gendered nature of scientific communication. The extent of gender disparities and the ways in which new media are changing these patterns will be discussed. The talk will end with a discussion of interventions, with a particular focus on the roles of libraries, publishers, and other actors in the scholarly ecosystem.

In her talk, Sugimoto stressed a number of patterns in scientific publication:

  • The demise of single authorship complicates notions of credit, rewards, labor, and responsibility.
  • There are distinct patterns of gender disparity in scientific publications: male-authored publications predominate in most fields (with a few exceptions such as Library Science); women collaborate more domestically than internationally on publications; and woman-authored publications tend to be cited less (even within the same tier of journals).
  • Looking across categories of contribution, the most isolated is performing the experiment, and women are most likely to fill this role. Further, comparing male-led and female-led teams, the distribution of work across these teams varies dramatically.
  • When surveying teams, women tended to value all forms of contribution more than men, with one exception: women judge technical work, which is more likely to be conducted by women, as less valuable.
  • The composition of authorship has consequences for what is studied. Women's research focuses more often than men's on areas relevant to both genders or to women.

Sugimoto notes that these findings are consistent with pervasive gender discrimination. Further, women as well as men frequently discriminate against other women — for example, in evaluations of professionalism, evaluations of work, and salary offers.

Much more detail on these points can be found in Sugimoto's professional writings.

Sugimoto's talk drew on a variety of sources: publication data in the Web of Science and acknowledgement and authorship statements in PLOS journals. Open bibliometric data, such as that produced by PLOS, the Initiative for Open Citations, and various badging initiatives, can help us more readily bring disparities to light.

At the conclusion of her talk, Sugimoto suggested the following roles for librarians:

  • Use and promote open access in training sessions
  • Provide programming that lessens barriers to participation for women and minorities
  • Advocate for contributorship models which recognize the diversity of knowledge production
  • Approach new metrics with productive skepticism
  • Encourage engagement between students and scholars
  • Evaluate and contribute to the development of new tools

Reflecting the themes of Sugimoto’s talk, the research we conduct here in the Program on Information Science is strongly motivated by issues of diversity and inclusion, particularly approaches to bias-reducing systems design. Our previous work in participative mapping aimed at increasing broad public participation in electoral processes. Our current NSF-supported work in educational research focuses on using eye-tracking and other biological signals to track fine-grained learning across populations of neurologically diverse learners. And, under a recently awarded IMLS grant, we will be hosting a workshop to develop principles for supporting diversity and inclusion through information architecture in information systems. For those interested in these and other projects, we have published blog posts and reports in these areas.

Sep 27, 3:51pm

Alex Chassanoff is a CLIR/DLF Postdoctoral Fellow in the Program on Information Science and continues a series of posts on software curation.

In this blog post, I am going to reflect upon potential strategies that institutions can adopt for making legacy software curation-ready.  The notion of being “curation-ready” was first articulated by the Curation Ready Working Group, which formed in 2016 as part of the newly emerging Software Preservation Network (SPN).  The goal of the group was to “articulate a set of characteristics of curation-ready software, as well as activities and responsibilities of various stakeholders in addressing those characteristics, across a variety of different scenarios”[1].  Drawing on inventories at our own institutions, the working group explored different strategies and criteria that would make software “curation-ready” for representative use cases.  In my use case, I looked specifically at the GRAPPLE software program and wrote about particular uses and users for the materials.

This work complements the ongoing research I’ve been doing as a Software Curation Fellow at MIT Libraries [2] to envision curation strategies for software.  Over the past six months, I have conducted an informal assessment of representative types of software in an effort to identify baseline characteristics of materials, including functions and uses.

Below, I briefly characterize the state of legacy software at MIT.

  • Legacy software often exists among hybrid collections of materials, and can be spread across different domains.
  • Different components (e.g., software dependencies, hardware) may or may not be co-located.
  • Legacy software may or may not be accessible on original media. Materials are stored in various locations, ranging from climate-controlled storage to departmental closets.
  • Legacy software may exist in multiple states with multiple contributors over multiple years.
  • Different entities (e.g., MIT Museum, Computer Science and Artificial Intelligence Laboratory, Institute Archives & Special Collections) may have administrative purview over legacy software with no centralized inventory available.
  • Collected materials may contain multiple versions of source code housed in different formats (e.g., paper printouts, multiple diskettes) and may or may not include user manuals, requirements definitions, data dictionaries, etc.
  • Legacy software has a wide range of possible scholarly uses and users. These may include the following: research on institutional histories (e.g., government-funded academic computing research programs), biographies (e.g., notable developers and/or contributors of software), socio-technical inquiries (e.g., extinct programming languages, implementation of novel algorithms), and educational endeavors (e.g., reconstruction of software).

We define curation-ready legacy software as having the following characteristics: being discoverable, usable/reusable, interpretable, citable, and accessible.  Our approach views curation as an active, nonlinear, iterative process undertaken throughout the life (and lives) of a software artifact.

Steps to increase curation-readiness for legacy software

Below, I briefly describe some of the strategies we are exploring as potential steps in making legacy software curation-ready.  Each of these strategies should be treated as suggestive rather than prescriptive at this stage in our exploration.

Identify appraisal criteria. Establishing appraisal criteria is an important first step that can be used to guide decisions about selection of relevant materials for long-term access and retention. As David Bearman writes, “Framing a software collecting policy begins with the definition of a schema which adequately depicts the universe of software in which the collection is to be a subset.”[3]  It is important to note that for legacy software, determining appraisal criteria will necessarily involve making decisions about both the level of access and the level of preservation desired.  Decision-making should be guided by an institutional understanding of what constitutes a fully-formed collection object. In other words, what components of software should be made accessible? What will be preserved? Does the software need to be executable? What levels of risk assessment should be conducted throughout the lifecycle?  Making these decisions institutionally will in turn help guide the identification of appropriate preservation strategies (e.g., emulation, migration, etc.) based on desired outcomes.

Identify, assemble, and document relevant materials. A significant challenge with legacy software lies in assembling relevant materials to provide the necessary context for meaningful access and use.  Locating and inventorying related materials (e.g., memos, technical requirements, user manuals) is an initial starting point. In some cases, meaningful materials may be spread across the web in different locations.  While it remains a controversial method in archival practice, documentation strategy may provide useful framing guidance on principles of documentation [4].

Identify stakeholders. Identifying the various stakeholders, either inside or outside of the institution, can help ensure proper transfer and long-term care of materials, along with managing potential rights issues where applicable.  Here we draw on Carlson’s work developing the Data Curation Profiles Toolkit and define stakeholders as any groups, organizations, individuals, or others having an investment in the software whom you would feel the need to consult regarding access, care, use, and reuse of the software [5].

Describe and catalog materials. Curation-readiness can be increased by thoroughly describing and cataloging selected materials, with an emphasis on preserving relationships among entities. In some cases, this may mean describing aspects of the computing environment and the relationships between hardware, software, dependencies, and/or versions. Although the software itself may not be accessible, adequately describing related materials (e.g., printouts of source code, technical requirements documentation) can provide important points of access. It may be useful to consider the different conceptual models of software that have been developed in the digital preservation literature and decide which perspective aligns best with your institutional needs [6].
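To make the idea of preserving relationships concrete, here is a minimal, hypothetical sketch of a descriptive record for a legacy software collection such as GRAPPLE. The field names are illustrative only and are not drawn from any particular metadata standard; in practice they would be mapped to whatever schema the institution adopts.

```python
import json

# Hypothetical descriptive record: field names are illustrative, not a standard schema.
grapple_record = {
    "title": "GRAPPLE graphical programming system",
    "creator": "MIT Laboratory for Computer Science",
    "dates": "ca. 1981-1984",
    "components": [
        {"type": "source code printout", "format": "paper", "extent": "ca. 40 pages"},
        {"type": "user manual", "format": "paper"},
        {"type": "computer tape", "format": "magnetic tape", "currently_accessible": False},
    ],
    # Relationships worth recording explicitly: environment, language, dependencies.
    "environment": {
        "programming_language": "MDL",
        "related_hardware": "not specified in collection documentation",
    },
    "related_materials": [
        "GRAPPLE interim user manual",
        "GRAPPLE final technical report",
        "correspondence with the Department of Defense",
    ],
}

print(json.dumps(grapple_record, indent=2))
```

Even a simple structured record like this makes the links between code, environment, and documentation explicit, which is the part most easily lost when materials are dispersed.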

Digitize and OCR paper materials. Paper printouts of source code and related documentation can be digitized according to established best-practice workflows [7].  The use of optical character recognition (OCR) programs produces machine-readable output, enabling full-text indexing to enhance discoverability and/or textual transcription.  The latter can make historical source code more portable for use in simulations or reconstructions of software.
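As an illustration of what this step might look like in practice, the sketch below uses the open-source Tesseract engine (via the pytesseract Python wrapper) to convert scanned page images into plain-text files. The directory names are hypothetical, and OCR output for historical source code typically needs manual review before it can support any reconstruction.

```python
from pathlib import Path

from PIL import Image      # pip install pillow
import pytesseract         # pip install pytesseract; requires the Tesseract binary

# Hypothetical locations for digitized page images and their text transcripts.
scans_dir = Path("scans/source_code_printouts")
out_dir = Path("transcripts")
out_dir.mkdir(parents=True, exist_ok=True)

for page in sorted(scans_dir.glob("*.tif")):
    # Run OCR on each scanned page and write the recognized text alongside it.
    text = pytesseract.image_to_string(Image.open(page))
    (out_dir / f"{page.stem}.txt").write_text(text, encoding="utf-8")
    print(f"Transcribed {page.name}")
```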

Migrate media. Legacy software often resides on unstable media such as floppy disks or magnetic tape. In cases where access to the software itself is desirable, migrating and/or extracting media contents (where possible) to a more stable medium is recommended [8].
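Whatever imaging or extraction tools are used, recording fixity information for the resulting files is a routine complement to migration. The sketch below is one minimal way to do this in Python; the image file name is hypothetical.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 digest of a file, read in chunks so large disk images fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical disk image produced by an imaging workflow (e.g., from a diskette or tape).
image = Path("legacy_tape_01.img")
print(image.name, sha256_of(image))
```

Checksums recorded at the time of migration can then be re-verified periodically to detect corruption of the migrated copies.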


As an active practice, software curation means anticipating future use and uses of resources from the past. Recalling an earlier blog post, our research aims to produce software curation strategies that embrace Reagan Moore’s theoretical view of digital preservation, whereby “information generated in the past is sent into the future”[9]. As the born-digital record increases in scope and volume, libraries will necessarily have to address significant changes in the ways in which we use and make use of new kinds of resources.  Technological quandaries of storage and access will likely prove less burdensome than the social, cultural, and organizational challenges of adapting to new forms of knowledge-making. Legacy software represents this problem space for libraries/archives today.  Devising curation strategies for software helps us to learn more about how knowledge-embedded practices are changing and gives us new opportunities for building healthy infrastructures [10].


[1] Rios, F., Almas, B., Contaxis, N., Jabloner, P., & Kelly, H. (2017). Exploring curation-ready software: use cases. doi:10.17605/OSF.IO/8RZ9E

[2] These are some of the open research questions being addressed by the initial cohort of CLIR/DLF Software Curation Fellows in different institutions across the country.

[3] Bearman, D. (1985). Collecting software: a new challenge for archives & museums. Archives & Museum Informatics, Pittsburgh, PA.

[4] Documentation strategy approaches archival practice as collaborative work among record creators, archivists, and users.  It often traverses institutions and represents an alternative approach by prompting extensive documentation organized around an “ongoing issue or activity or geographic area.” See: Samuels, H. (1991). “Improving our disposition: Documentation strategy,” Archivaria 33.

[5] Carlson, J. (2010). “The Data Curation Profiles Toolkit: Interviewer’s manual.”

[6] The results of two applied research projects provide examples from the digital preservation literature.  In 2002, the Agency to Researcher digital preservation project at the National Archives of Australia developed a conceptual model based on software performance as a measure of the effectiveness of digital preservation strategies. See: Heslop, H., Davis, S., & Wilson, A. (2002). “An approach to the preservation of digital records.” National Archives of Australia. In their 2008 JISC report, the authors proposed a composite view of software with the following four entities: package, version, variant, and download. See: Matthews, B., McIlwrath, B., Giaretta, D., & Conway, E. (2008). “The significant properties of software: A study.”

[7] Technical guidelines for digitizing archival materials for electronic access: Creation of production master files–raster images. (2005). Washington, D.C.: Digital Library Federation.

[8] For a good overview of storage recommendations for magnetic tape, and of the process of reformatting analog media, see: Pennington, S., & Rehberger, D. (2012). The preservation of analog video through digitization. In D. Boyd, S. Cohen, B. Rakerd, & D. Rehberger (Eds.), Oral history in the digital age. Institute of Library and Museum Services.

[9] Moore, R. (2008). “Towards a theory of digital preservation”, International Journal of Digital Curation 3(1).

[10] Thinking about software as infrastructure provides a useful framing for envisioning strategies for curation.  Infrastructure perspectives advocate “adopting a long term rather than immediate timeframe and thinking about infrastructure not only in terms of human versus technological components but in terms of a set of interrelated social, organizational, and technical components or systems (whether the data will be shared, systems interoperable, standards proprietary, or maintenance and redesign factored in).”  See: Bowker, G.C., Baker, K., Millerand, F., & Ribes, D. (2010). “Toward information infrastructure studies: Ways of knowing in a networked environment.” In J. Hunsinger, L. Klastrup, & M. Allen (Eds.), International handbook of Internet research. Dordrecht: Springer, 97-117.


Aug 05, 8:21pm

Margaret Purdy is a Graduate Research Intern in the Program on Information Science, researching the area of library privacy.


Building Trust: A Primer on Privacy for Librarians

Privacy Protections Build Mutual Trust Between Patrons and Librarians

Librarians have accepted privacy as a central tenet of their professional ethics and responsibilities for nearly eight decades. In 2017, however, privacy as a human right is being simultaneously strengthened and reaffirmed, defended and rebuffed; yet rarely do we as librarians take the time to step back and ask why privacy truly matters and what we can do to protect it.

The American Library Association and the International Federation of Library Associations have both asserted that patrons have the right to privacy while seeking information.1 The ALA in particular connects privacy to intellectual freedom: the ability of patrons to consume information knowing they will not face repercussions, such as punishment or judgment, based on what they read. Librarians are in the business of disseminating information in order to stimulate the growth of knowledge. One major stimulus for such growth is mutual trust between the library and the patron – trust that the patron will not use the knowledge in a destructive way, and trust that the library will not judge the patron for their information interests. Ensuring patron privacy is one way for the library to demonstrate that trust. Similarly, the IFLA2 emphasizes the right to privacy in its ethics documentation. In addition to the rights of patron privacy that the ALA ensures, the IFLA also calls for as much transparency as possible into “public bodies, private sector companies and all other institutions whose activities effect [sic] the lives of individuals and society as a whole.” This is yet another way to establish trust between the library and its patrons, ultimately ensuring intellectual freedom and the growth of knowledge.

Globally, internet privacy and surveillance are also receiving much more notice and debate, and government regulations, such as the EU General Data Protection Regulation (GDPR)3, are working to strengthen individuals’ abilities to control their own data and ensure it does not end up being used against them. The GDPR is slated to go into effect in 2018 and will broadly protect the data privacy rights of EU citizens. It will certainly be a policy to watch, especially as a litmus test for how effective major legislation can be in asserting privacy protections. More practically, the GDPR protects EU citizens even when the party collecting data is outside the EU. This will potentially affect many libraries across the United States and the world at large, as an added level of awareness is required to ensure that any collaboration with, or service to, EU citizens is properly protected.

Libraries Face a Double-Barreled Threat from Government Surveillance and Corporate Tracking

In addition to the ALA and IFLA codes of ethics, which commit librarians to protecting patrons’ right to privacy, multiple governmental codes deal with the right to information privacy. In the United States, the Fourth Amendment protects against unreasonable searches and seizures and has often been cited as a protection of privacy. Similarly, federal legislation such as FERPA, which protects the privacy rights of students, and HIPAA, which protects medical records, has reasserted that privacy is a vital right. Essentially every US state also has some provisions about privacy, many of which directly relate to the right to privacy in library records.4

However, in recent years, many of the federal government’s protections have begun to slip away. Immediately after 9/11, the USA PATRIOT Act passed, giving the government much broader abilities to track patron library records. More recently, as digital information became easier to track, programs such as PRISM and other government surveillance efforts arose. These programs directly threaten the ability of library patrons to conduct research and seek information in private.

Businesses have also learned ways of tracking their users’ behaviors online and using that data for practices such as targeted advertising. While the vast majority of this data is encrypted and could not easily be linked back to personally identifiable information, it is still personal data that is not necessarily stored in the most secure way possible. Breaches do happen, but even without them it is not out of the question for an experienced party to reconstruct an individual from the data collected, and to learn not only that individual’s browsing history and location but also, potentially, information such as health conditions, bank details, or other sensitive information.

While this information is often used for simple outreach, including Customer Relationship Marketing (CRM), where a company recommends new products based on previous purchases, it can also be used in more invasive ways. In 2012, Target sent a promotional mailing containing deals on baby products to a teenage girl.5 Based on the data it had tracked about her purchases, its algorithm had determined, correctly, that she was highly likely to be pregnant. While this story received extensive media attention, businesses of all types, including retailers, hotels, and even healthcare systems, participate in similar practices, using data to personalize the experience. When stored irresponsibly, however, this data can lead to unintentional and unwanted sharing of information – potentially including embarrassing web browsing or shopping habits, dates that homes will be empty for thieves, medical conditions that could increase insurance rates, and more.

Growing Public Concern

One of the most pressing risks to privacy protections currently is user behavior and expectations. With the information industry becoming much more digital, information is becoming easier to access, spread, and consume. The tradeoff, however, is that users, and the information they view, are much easier to track, by both corporate and government entities, friendly or malicious. Moreover, because much of this tracking and surrendering of privacy, including saved passwords, CRM, targeted algorithms, and more, makes browsing the internet more convenient, many patrons willingly give up the right to privacy in favor of convenience.

A recent poll6 showed that between 70% and 80% of internet users are aware that practices such as saving passwords, agreeing to privacy policies and terms of use without reading them, and accepting free information in exchange for advertising or surrendered data are risks to privacy. However, a large majority of users still engage in those practices. There are several theories as to why users agree to forgo privacy, including the idea that accepting the risks makes browsing the internet much more convenient, and users are hesitant to give up that convenience. Another theory is that there really is no alternative to accepting the risks: many sites will not allow use without acceptance of the terms of use and/or privacy policy. A 2008 study7 calculated how much time users would spend if they actually read all of the privacy policies they encounter, and found that, on average, a user would spend nearly two weeks a year just reading policies, not to mention the time needed to fully understand the legalese and its complicated implications.

Another similar poll8 shows that more than half of Americans are concerned about privacy risks, and over 80% have taken some precautionary action. However, most of that 80% are unaware of anything more they can do to protect themselves. This is true for both government surveillance and corporate tracking: the public has similar levels of awareness and concern about both, but is unsure how to better protect itself, and is thus more likely to let the tracking happen.

Best Practices for Librarians


Given the increasing public concern and awareness, as well as librarians’ longstanding focus on privacy, librarians have a perfect opportunity to intervene, to re-establish users’ trust that their information will not be shared, and to meet the profession’s ethical commitment to protecting privacy. There are nearly endless resources that outline in great detail what librarians can do to defend their patrons against attacks on privacy, whether from government surveillance or corporate tracking. Some of these involve systematic evaluations of all touchpoints in the library and recommendations for implementing best practices. Such resources exist even for areas that do not seem like obvious vectors for privacy violations, such as anti-theft surveillance on surrounding buildings or third-party content vendors.

By dedicating library resources to systematically checking privacy practices, librarians can take some of the burden of inconvenience off of the individual patron. Many of these best practices involve taking the time to change computer settings, read and understand privacy policies, and negotiate with vendors, which few, if any, individuals would do on their own. With the muscle of the library behind the work, though, patrons still benefit without needing to dedicate the same amount of time. This serves a dual function as well: in addition to taking concrete steps to protect patrons, librarians can also serve as an educational resource to help patrons learn simple steps for protecting their personal systems.

Some examples of protective moves are policies on library computers that ensure as little information from user sessions as possible is saved. There are several simple steps that, while they reduce convenience slightly, give users a safer and more private experience. These include settings that clear cookies, the cache, and user details after each session (the equivalent of browsing in “incognito mode”), or clearing patron checkout records once a book is returned.
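As a concrete illustration, the sketch below shows one way a library might automate this kind of session reset on a public workstation: the working browser profile is discarded and restored from a known-clean template between patrons, so cookies, cache, history, and saved credentials do not persist. The paths and the template-profile approach are assumptions for illustration, not a specific vendor’s procedure, and would need to be adapted to the deployed browser and operating system.

```python
import shutil
from pathlib import Path

# Hypothetical locations: a pristine browser profile prepared by staff, and the
# working profile the browser is launched with for each patron session.
CLEAN_TEMPLATE = Path("/opt/library-kiosk/profile-template")
WORKING_PROFILE = Path("/opt/library-kiosk/profile-current")

def reset_profile() -> None:
    """Discard the working profile and restore it from the clean template."""
    if WORKING_PROFILE.exists():
        shutil.rmtree(WORKING_PROFILE)   # removes cookies, cache, history, saved logins
    shutil.copytree(CLEAN_TEMPLATE, WORKING_PROFILE)

if __name__ == "__main__":
    reset_profile()
```

Run at logout or on a timer, such a script keeps per-patron cleanup from depending on each user remembering to clear their own data.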

In addition to those tweaks, the ALA and LITA offer checklists of privacy best practices to implement systematically in libraries. These cover everything from data exchanges to OPACs and patron borrowing records to protections for children, in great detail. NISO also provides overarching design principles for approaching library privacy in a digital age. Additionally, there are recommended security audits, many of which Bruce Shuman describes in his book, Library Security and Safety Handbook: Prevention, Policies, and Procedures.

Additionally, the library, already known for educational programs and community-oriented programming, could serve as a place to educate the public about the real risks of tracking and surveillance. There is a definite gap between the public’s awareness of the risks and the public’s action to mitigate those risks. While librarians cannot force behavior, and most would not want to, offering patrons trustworthy information about the risks and how to avoid them in their personal browsing helps re-establish privacy as a core value and gives patrons a reason to trust the library. A recent post from Nate Lord at Digital Guardian offers simple and more in-depth steps that patrons can take to ensure their digital information is secure. If a library offered some of these in a training course or as a takeaway, it could serve as a valuable resource in narrowing the gap between patron awareness and action.

Ultimately, privacy is often one of those words that many people pay lip service to, but without fully understanding the risks and consequences, the motivation to give up convenience in order to protect privacy is not always there. However, we as librarians, who hold privacy as one of the profession’s core tenets, have a real opportunity to help protect patrons’ data against these threats. Resources such as the aforementioned privacy checklists and audit guides exist to help librarians ensure their library is in compliance with current best practices. The threats against privacy are growing, and librarians are well suited to intervene and ensure patron protection.

Recommended Resources



1. ALA Code of Ethics. (1939).

2. IFLA Code of Ethics.

3. GDPR Portal (2016).

4. Adams, H., et al. (2005). Privacy in the 21st century. Westport, Conn.: Libraries Unlimited.

5. Hill, K. (2012). How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did.

6. Ayala, D. (2017). Security and Privacy for Libraries in 2017. Online Searcher, 41(3).

7. Cranor, L. (2008). The Cost of Reading Privacy Policies. I/S: A Journal Of Law And Policy For The Information Society.

8. Rainie, L. (2017). The state of privacy in post-Snowden America. Pew Research Center.

Jul 17, 10:16pm

Alex Chassanoff is a CLIR/DLF Postdoctoral Fellow in the Program on Information Science and continues a series of posts on software curation.

As I described in my first post, an initial challenge at MIT Libraries was to align our research questions with the long-term collecting goals of the institution. As it happens, MIT Libraries had spent the last year working on a task force report to begin to formulate answers to just these sorts of questions. In short, the task force envisions MIT Libraries as a global platform for scholarly knowledge discovery, acquisition, and use. Such goals may at first appear lofty. However, the acquisition of knowledge through public access to resources has been a central organizing principle of libraries since their inception. In his opening statement at the first national conference of librarians in 1853, Charles Coffin Jewett proclaimed, “We meet to provide for the diffusion of a knowledge of good books and for enlarging the means of public access to them.” [1]

Archivists and professionals working in special collections have long been focused on providing access to, and preservation of, local resources at their institutions. What is perhaps most unique about the past decade is the broadened institutional focus on locally-created content. This shift in perspective towards looking inwards is a trend noted by Lorcan Dempsey, who describes it thusly:

In the inside-out model, by contrast, the university, and the library, supports resources which may be unique to an institution, and the audience is both local and external. The institution’s unique intellectual products include archives and special collections, or newly generated research and learning materials (e-prints, research data, courseware, digital scholarly resources, etc.), or such things as expertise or researcher profiles. Often, the goal is to share these materials with potential users outside the institution. [2]

Arguably, this shift in emphasis can be attributed to the affordances of the contemporary networked research environment, which has broadened access to both resources and tools. Archival collections previously considered “hidden” have been made more accessible for historical research through digitization. Scholars are also able to ask new kinds of historical questions using aggregate data, and answer historical questions in new kinds of ways.

This raises the question: what unique and/or interesting content do we, as an institution with a rich history of technology and innovation, already have in our possession?

Exploring Software in MIT Collections

MIT has of course played a foundational role in the development and history of computing. Since the 1940s, the Institute has excelled in the creation and production of software and software-based artifacts. Project Whirlwind, Sketchpad, and Project MAC are just a few of the monumental research computing projects conducted here. As such, the Institute Archives & Special Collections has over time acquired a significant number of materials related to software developed at MIT.

In our quest to understand how software may be used (and made useful) as an institutional asset, we engaged in a two-pronged approach. First, we aimed to identify the types of software that MIT may consider providing access to: What are the different functions and purposes that software at MIT is created, used, and reused for? Second, we aimed to understand more about the active practices of researchers creating, using, and/or reusing software. We anticipated that this combined approach would help us develop a robust understanding of existing practices and potential user needs. At the same time, we recognized that identifying and exposing potential pain points could guide and inform future curation strategies. After an initial period of exploratory work, we identified representative software cases found in various pockets across the MIT campus.

Collection #1: The JCR Licklider Papers and the GRAPPLE software

Materials in the collection were first acquired by the Institute Archives & Special Collections in 1996. Licklider was a psychologist and renowned computer scientist who came to MIT in 1950. He is widely hailed as an influential figure for his visionary ideas about personal computing and human-computer interaction.

In my exploration of archival materials, I looked specifically at boxes 13-18 in the collection, which contained documentation about GRAPPLE, a dynamic graphical programming system developed while Licklider was at the MIT Laboratory for Computer Science. According to the user manual, GRAPPLE focused on “the development of a graphical form of a language that already exists as a symbolic programming language.” [3] Programs could be written using computer-generated icons and then monitored by an interpreter.


Figure 1. Folder view, box 16, J.C.R. Licklider Papers, 1938-1995 (MC 499), Institute Archives and Special Collections, MIT Libraries, Cambridge, Massachusetts.

Materials in the collection related to GRAPPLE include:

  • Printouts of GRAPPLE source code
  • GRAPPLE program description
  • GRAPPLE interim user manual
  • GRAPPLE user manual
  • GRAPPLE final technical report
  • Undated and unidentified computer tapes
  • Assorted correspondence between Licklider and the Department of Defense

Each of the documents has multiple versions included in the collection, typically distinguished by date and filename (where visible). The printouts of GRAPPLE source code totaled around forty pages. The computer tapes have not yet been formatted for access.

While the software may be cumbersome to access on existing media, the materials in the collection contain substantial amounts of useful information about the function and nature of software in the early 1980s. Considering the documentation related to GRAPPLE in different social contexts helped to illuminate the value of the collection in relationship to the history of early personal computing.

Historians of programming languages would likely be interested in studying the evolution of the coding syntax contained in the collection. The GRAPPLE team used the now-defunct programming language MDL (which stands for “More Datatypes than Lisp”); the extensive documentation provides examples of MDL “in action” through printouts of code packages.


Figure 2. Computer file printout, “eraser.mud.1”, 31 May 1983, box 14, J.C.R. Licklider Papers, 1938-1995 (MC 499), Institute Archives and Special Collections, MIT Libraries, Cambridge, Massachusetts.

The challenges facing the GRAPPLE team at the time of coding and development would be interesting to revisit today. One obstacle to successful implementation that the team noted was the limitation of existing graphical display environments. In their final technical report on the project from 1984, the GRAPPLE team notes the potential of desktop icons for identifying objects and their representational qualities.

Our conclusion is that icons have very significant potential advantages over symbols but that a large investment in learning is required of each person who would try to exploit the advantages fully. As a practical matter, symbols that people already know are going to win out in the short term over icons that people have to learn in applications that require more than a few hundred identifiers. Eventually, new generations of users will come along and learn iconic languages instead of or in addition to symbolic languages, and the intrinsic advantages of icons as identifiers (including even dynamic or kinematic icons) will be exploited. [4]

Despite technological advancement, some fundamental dynamics in human-computer interaction remain relatively unchanged; namely, the powerful relationship between representational symbols and the production of knowledge and knowledge structures. What might it look like to bring to life today software that was conceived in the early days of personal computing? Such aspirations are certainly possible. Consider the journey of the Apollo 11 source code, which was transcribed from digitized code printouts and then put onto GitHub. One can even simulate the Apollo missions using a virtual Apollo Guidance Computer (AGC).

Other collection materials also offer interesting documentation of early conceptions of personal computing while also providing clear evidence that computer scientists such as Licklider regarded abstraction as an essential part of successful computer design. A pamphlet entitled “User Friendliness–And All That” notes the “problem” of mediating between “immediate end users” and “professional computer people” to successfully aid in a “reductionist understanding of computers.”

Figure 3. Pamphlet, “User friendliness-And All That”, undated, box 16, J.C.R. Licklider Papers, 1938-1995 (MC 499), Institute Archives and Special Collections, MIT Libraries, Cambridge, Massachusetts.

These descriptions are useful for illuminating how software was conceived and designed to be a functional abstraction. Such revelations may be particularly relevant in the current climate, where debates over algorithmic decision-making are rampant. As the new media scholar Wendy Chun asks, “What is software if not the very effort of making something intangible visible, while at the same time rendering the visible (such as the machine) invisible?” [5]


Building capacity for collecting software as an institutional asset is difficult work. Expanding collecting strategies presents conceptual, social, and technical challenges that crystallize once scenarios for access and use are envisioned. For example, when is software considered an artifact ready to be “archived and made preservable”? What about research software developed and continually modified over the years in the course of ongoing departmental work? What about printouts of source code – is that software? How do code repositories like GitHub fit into the picture? Should software only be considered as such in its active state of execution? Interesting ontological questions surface when we consider the boundaries of software as a collection object.

Archivists and research libraries are poised to meet the challenges of collecting software. By exploring what makes software useful and meaningful in different contexts, we can more fully envision potential future access and use scenarios. Effectively characterizing software in its dual role as both artifact and active producer of artifacts remains an essential piece of understanding its complex value.



[1] “Opening Address of the President.” Norton’s Literary Register And Book Buyers Almanac, Volume 2. New York: Charles B. Norton, 1854.

[2] Dempsey, Lorcan. “Library Collections in the Life of the User: Two Directions.” LIBER Quarterly 26, no. 4 (2016): 338–359.

[3]  GRAPPLE Interim User Manual, 11 October 1981, box 14, J.C.R. Licklider Papers, 1938-1995 (MC 499), Institute Archives and Special Collections, MIT Libraries, Cambridge, Massachusetts.

[4] Licklider, J.C.R. Graphical Programming and Monitoring Final Technical Report, U.S. Government Printing Office, 1988, 17.

[5] Chun, Wendy Hui Kyong. “On Software, or the Persistence of Visual Knowledge.” Grey Room 18 (Winter 2004): 26-51.

Jun 27, 10:11am

Matt Bernhardt is a web developer in the MIT Libraries and a collaborator in our program. He presented this talk, entitled Reality Bytes – Utilizing VR and AR in The Library Space, as part of the Program on Information Science Brown Bag Series.

In his talk, illustrated by the slides below, Bernhardt reviews technologies newly available to libraries that enhance the human-computer interface:

Bernhardt abstracted his talk as follows:

Terms like “virtual reality” and “augmented reality” have existed for a long time. In recent years, thanks to products like Google Cardboard and games like Pokemon Go, an increasing number of people have gained first-hand experience with these once-exotic technologies. The MIT Libraries are no exception to this trend. The Program on Information Science has conducted enough experimentation that we would like to share what we have learned, and solicit ideas for further investigation.

Several themes run through Matt’s talk:

  • VR should be thought of broadly as an engrossing representation of physically mediated space. Such a definition encompasses not only VR, AR, and ‘mixed’ reality — but also virtual worlds like Second Life, and a range of games from first-person shooters (e.g. Halo) to textual games that simulate physical space (e.g. “Zork”).
  • A variety of new technologies are now available at a price-point that is accessible for libraries and experimentation — including tools for rich information visualization (e.g. stereoscopic headsets), physical interactions (e.g. body-in-space tracking), and environmental sensing/scanning (e.g. Sense).
  • To avoid getting lost in technical choices, consider the ways in which technologies have the potential to enhance the user-interface experience, and the circumstances in which the costs and barriers to use are justified by potential gains. For example, expensive, bulky VR platforms may be most useful to simulate experiences that would in real life be expensive, dangerous, rare, or impossible.

A substantial part of the research agenda of the Program on Information Science is focused on developing theory and practice to make information discovery and use more inclusive and accessible to all. From my perspective, the talk above naturally raises questions about how the affordances of these new technologies may be applied in libraries to increase inclusion and access: How could VR-induced immersion be used to increase engagement and attention by conveying the sense of place of being in an historic archive? How could realistic avatars be used to enhance social communication, and lower the barriers to those seeking library instruction and reference? How could physical mechanisms for navigating information spaces, such as eye tracking, support seamless interaction with library collections, and enhance discovery?

For those interested in these and other topics, you may wish to read some of the blog posts and reports we have published in these areas. Further, we welcome library staff and researchers who are interested in collaborating with us in research and practice. To support collaboration, we offer access to fabrication, interface, and visualization technology through our lab.