CHI and the Future of Mobile UX

Katie Montgomery

CHI is an annual international conference that focuses on human factors in computing systems, also known as human-computer interaction (HCI). At first glance this may not sound like an exciting topic, until you realize that you are the human factor in computing systems and you use interfaces and structures created by HCI research all the time. Asking your phone how to get to the nearest pizza joint? That’s HCI. Celebrating your step count with your Fitbit? HCI again. Typing? Definitely HCI. We rarely spend a day without using the myriad forms of interaction circumscribed by human factors in computing systems. It just doesn’t have a sexy name.

As forces for civic and cultural improvement through learning, libraries have an opportunity, and perhaps a responsibility, to discover and invent novel ways for people to interact with information. If we can leverage our access to knowledge in collaboration with technical giants (Google comes to mind), we may be able to open up new avenues to reach our patrons and improve their lives. That’s the point after all.

This year (2018) was my first time attending CHI, and since I’m coming at it from a library background the entire experience was an eye-opener. The schedule alone was 95 pages long (without abstracts), and contained topics ranging from interactivity in autonomous vehicles to bio design and existence. There were dozens of concurrent sessions, and choosing between “Gender-Inclusive Design: Sense of Belonging and Bias in Web Interfaces” and “Evaluating the Disruptiveness of Mobile Interactions: A Mixed-Method Approach” was no simple task. Instead I skipped the anguish of session indecision and took the easier route: attending a few pre-designed 2-4 hour courses over the week, diving deep into topics and interacting with my fellow conference-goers to brainstorm questions and solutions and learn about each other’s backgrounds.

One course was especially rewarding. “Mobile UX–The Next Ten Years?”, taught by Simon Robinson, Jennifer Pearson, and Matt Jones, encouraged us to extend our minds beyond the flat, dark, glassy rectangle that mobile devices seem to be stuck in and to explore our other senses within the mobile context. [1] Matt likened our present experience with mobile devices to the story of Narcissus: a beautiful man finds a perfectly still pool of water that mirrors his face and falls in love with his own reflection, eventually wasting away from lack of food and water because he refuses to leave the flawless image he has found.

An edited painting by Caravaggio:

Much in the same way that Narcissus was entranced by an idealized self, we are entranced by our phones, diving into them and rarely coming up for air. Matt posited an idea: what if our phones got us to put down our phones? No, not just some kind of alert saying that you’ve spent too much time on YouTube (although we discussed those ideas too), but actual apps whose intention is to get us to interact with the real world.

Matt told us a story about his daughter. When she was six or so they had purchased a small GPS driving device. On a trip his daughter, holding the device, piped up from the back, asking “Daddy, where are the bears?”. A little baffled, Matt told her he didn’t know. A few minutes later, after peering out the window for a while, she asked again: “Daddy, where are the bears?”. This time he asked why she thought there should be bears, and she explained, “It says in half a mile bear right!”. Sure, the interaction is cute, but Matt used it to create a game: every time the GPS told them that there was a “bear” on the right or left, he and his daughter had to find something outside the car, such as a bird, a stone, or a tree: something in the real world.

Interaction and creation define much of what it means to be alive, but mobile devices often isolate us from the real world and encourage consumption over creation. [2] So the question remains: how do we change that status quo? Mobile devices are ubiquitous, and convincing people to simply use them less is unrealistic. So how can libraries take a leading role in redirecting energy and time towards experience and action? In a purely digital context we could include local clubs and activity suggestions pertaining to subjects in topic guides. In the more focused area of mobile devices we could encourage and participate in the development of apps that recognize geographic location and ping the user with information relating to local ecology, history, or culture. Something along the lines of “You’re near Thoreau’s cabin, would you like to take a detour to see it?”, or “The woods you’re in may have lady slippers (a rare native orchid), keep an eye out! This is what they look like:”

Photo by Debbi Griffin

Even better, if the app could include crowd-sourced data people would be able to create content and expand the digital way-signs redirecting to the real world. The app could include preference settings so that the user would only be given notifications about nearby natural phenomena or historical monuments, depending on their interests. Somebody start making this, I want to use it.
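The proximity check at the heart of such an app is simple to sketch. Everything below is invented for illustration (the point-of-interest records, field names, and the 500-meter radius are all assumptions, not an existing library app); a real app would pull crowd-sourced records from a shared database and respect the user's preference settings:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = radians(lat1), radians(lat2)
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def nearby_notices(user_pos, points, radius_m=500, interests=None):
    """Return notification texts for points of interest within radius_m
    of user_pos, filtered by the user's interest categories (if any)."""
    lat, lon = user_pos
    hits = []
    for p in points:
        if interests and p["category"] not in interests:
            continue  # honor the user's preference settings
        if haversine_m(lat, lon, p["lat"], p["lon"]) <= radius_m:
            hits.append(f"You're near {p['name']}: {p['blurb']}")
    return hits
```

A user who has opted into "history" notifications and is walking near Walden Pond would be pinged about the cabin site but hear nothing about, say, nearby orchids.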

Libraries have a pressing need to take HCI into explicit account. Historically, librarians have been gatekeepers to information, but with the advent of the online public access catalog (OPAC) we threw open the doors to knowledge and invited the world to search for it on their own terms. Except we didn’t. The way that resources are organized within a library is a fairly closed system that requires training to navigate, and while we have made great strides in improving our OPACs and websites so that they are more intuitive for our users, there is still work to be done. In order to empower our users to find, evaluate, and use the resources we put at their disposal, we need to examine the way that they interact with our systems and modify those systems to improve usability. It’s not enough for the library catalog to know that a book exists. The patron needs to know too.

If the ideas raised in this post have set your imagination alight and you want to incorporate apps into your library consider looking at Nicole Hennig’s work on the subject. Her books Apps for Librarians: Using the Best Mobile Technology to Educate, Create, and Engage and Mobile Learning Trends: Accessibility, Ecosystems, Content Creation are a good place to start. For a more recent survey of the current technologies as they apply to academic libraries try reading Mobile Technology and Academic Libraries: Innovative Services for Research and Learning by Robin Canuel and Chad Crichton.


1. For more on a creative outlook for the future of mobile devices read “A Brief Rant on the Future of Interactive Design” by Bret Victor.

2. Sherry Turkle’s “Alone Together: Why We Expect More from Technology and Less From Each Other” goes into this phenomenon in depth.

Investigating the Evolving Information Needs of Entrepreneurs: Integrating Pedagogy, Practice & Research


Nicholas Albaugh & Micah Altman

Innovation-driven entrepreneurship is indispensable in the race to solve the world’s major challenges, especially in the areas of health, information technology, agriculture, and energy. MIT is a global leader in this type of entrepreneurship: a 2015 report from the Institute’s Sloan School of Management estimated that active companies founded by MIT alumni produce annual revenues of $1.9 trillion, equivalent to the world’s tenth-largest economy. As for the curriculum, over sixty courses in entrepreneurship were taught at MIT during the 2016-2017 academic year.

Discovering, accessing, and integrating information is critical to the success of innovation-driven entrepreneurship, and it is part of the Libraries’ core role to improve the foundations for discovery, access, and integration. The presence of a vibrant community of entrepreneurs provides an opportunity to delineate and understand the information skills, needs, and challenges of students and researchers engaged in entrepreneurial ventures. This understanding can inform strategies and methods to address these challenges and aid in the design of innovative methods of library instruction that move beyond small-group lectures.

In this blog post, we report on the background and preliminary results of a project designed to answer these questions. There were three stages to this project: background research to identify the information-related skills of entrepreneurs, the design of a survey instrument, and a survey of MIT’s delta v accelerator program.

Initial Steps & Background Research

This was a group effort. Nicholas Albaugh (Librarian for Innovation and Entrepreneurship) did most of the heavy lifting: performing the ‘bench’ work of identifying what was known about information use in entrepreneurship, interacting with the students and the class, and creating a first draft of communications. Micah Altman (Director of Research) provided overall scientific guidance, co-led the conceptualization, developed the research design and methodology, performed the quantitative analysis, and provided critical review. Shikha Sharma, Business and Management Librarian, and Karrie Peterson, Head of Liaison, Instruction, and Reference Services, contributed to the conceptualization of the project and provided critical review.

During the first few months of the project, the four of us met roughly once a month to develop a prospectus outlining the research questions, methods, desired outcomes, and key outputs.

After this prospectus was completed, we wanted to build on previous work by identifying existing frameworks outlining the information skills necessary for entrepreneurial success and entrepreneurial competencies more broadly.

To identify these frameworks, we conducted background research in the business and library literature using three databases: Business Source Complete, ABI/INFORM Complete, and Library, Information Science and Technology Abstracts.

The primary article for identifying key information-related skills for entrepreneurs was “21st Century Knowledge, Skills, and Abilities and Entrepreneurial Competencies: A Model for Undergraduate Entrepreneurship Education” by Trish Boyles. It delineates three broad categories of entrepreneurial competencies: cognitive, social, and action-oriented. The key information-related skills fall in the cognitive category, in particular:

  • A habit of actively searching for information
  • The ability to conduct searches systematically
  • The ability to recognize opportunities when not actively looking for them by recognizing connections between seemingly unconnected things

In addition to a framework for the information-related skills of entrepreneurs, we wanted a more general framework for entrepreneurial competencies. The premier text for this is Bill Aulet’s Disciplined Entrepreneurship: 24 Steps to a Successful Startup. It is the textbook for the delta v program, and its author is the Managing Director of the Martin Trust Center for MIT Entrepreneurship, one of the key parts of MIT’s entrepreneurial ecosystem. Outside MIT, it has been translated into eighteen languages and serves as the text for three web-based edX courses taken by hundreds of thousands of people in countries all over the world.

MIT delta v

We decided to survey MIT’s delta v accelerator program, as it is widely considered the capstone entrepreneurial experience for students here on campus. Participants in the program work full time over the course of the summer on the following goals:

  • Defining and refining their target market
  • Conducting primary market research about their customers and users
  • Running experiments to validate or invalidate hypotheses regarding potential customers
  • Building and nurturing their founding team


The goal of the survey was to identify which stage of the information-gathering phase of the delta v program was most time consuming and which part of that process was most challenging. We were also interested in learning what resources and tools participants used during these stages and what tools they would have preferred to use. Finally, we sought to identify the specific information needs of those participating in the delta v program in order to inform solutions going forward.

Our survey consisted of six multiple-choice questions and five open-ended questions. The multiple-choice questions addressed the following points:

  • Time spent on market analysis vs. business model development and the most challenging part of each process
  • The relative challenge of identifying and evaluating sources and extracting and analyzing information
  • Resources, tools, and methods used to locate, extract, and collect information

The open-ended questions addressed:

  • The most useful tools they used when seeking, collecting, and analyzing information and why
  • What existing tools would have been useful to them
  • The biggest surprises they encountered during this process



We launched a pilot version of this survey at the conclusion of the program in September 2017, in which six students participated.

Some suggestive patterns emerged: All of the entrepreneurs surveyed reported that market analysis was the most time-consuming phase involving seeking, collecting and analyzing information; and all of them used a library resource in their search for information. Further, nearly all of the entrepreneurs found evaluating sources of information, and summarizing, analyzing and mining those sources challenging or very challenging — and almost all relied on manual copying and pasting to extract or collect information they discovered.


We plan to survey a larger group of MIT delta v students during the upcoming summer 2018 cohort of the program. This larger data set will allow us to draw more generalizable conclusions regarding the information-related skills necessary for entrepreneurial success.

We hope these preliminary results will prompt other universities to investigate the specific information needs of entrepreneurs, particularly students in non-traditional settings like accelerators, incubators, and competitions as opposed to the classroom. Once these particular information needs are better understood, librarians can better address them through targeted workshops and instruction.

Guest Post: Graduate Research Intern, Katherine Montgomery, on the inaugural CHI Science Jam

Katie Montgomery is a Graduate Research Intern in the Program on Information Science, researching the areas of usability and accessibility.


by Katherine Montgomery

Research libraries are catalysts for interaction with and creation of knowledge. As information and interactions with it become increasingly digital, librarians are increasingly concerned with the way that computers and humans interact. [1]

The ACM Special Interest Group on Computer-Human Interaction (SIGCHI) is a community of professionals devoted to studying these interactions. Their annual conference, CHI, is a place where people share the state of the art and learn the state of the practice. CHI itself isn’t a standard library conference, but it addresses many of the concerns of librarians in a broader context. For example, focal points include digital privacy (which libraries work to protect), improving UX in virtual and physical realms, gamifying learning interactions, and addressing the pitfalls of automation. The conference is also packed with the people the library serves, i.e., academics.

A ‘jam’ or a ‘hackathon’ is distinguished by teams of relative strangers coming together to tackle specific problems in a focused and creative way within a limited time frame. The event fosters personal connections, concrete learning, and pride in the product, and has the potential to generate real-life changes. Libraries aim to nurture precisely these elements and would do well to look to hackathons and jams, adapting their structure to empower patrons. Here at the MIT Libraries, we aim to create and inspire hacks in the great MIT tradition of using ingenuity and teamwork to create something remarkable.

Attending the Science Jam is a great way to start CHI, especially if you’re coming from a library background. The Science Jam enables you to interact with your prototypical patrons on problems that interest both of you and in a fashion that familiarizes you with patron needs. The Science Jam itself is a way to hack the conference. [2]

This is the first year they’ve run the program, and if you’ve never heard of a Science Jam before, here’s the lowdown: it’s essentially a hackathon for scientists. You form teams, come up with a problem, pose a question, create a hypothesis, design a test, run the test, analyze your results, and present your study, all in 36 hours. About 60 people attended this year’s jam. We formed ten teams, broke into two rooms (so we could use each other as test subjects the next day without contaminating our sample with knowledge of the study), and began the stimulating and occasionally frantic process.

My team tackled privacy. Our initial problem? People share other people’s data without thinking about it or even realizing it. Our question was, how could we change this behavior? In order to create something testable we quickly honed the question to a much more specific issue and hypothesis. When people attend large conferences, festivals, concerts, or other public events, they often take pictures that focus on a screen, or a float, or a stage, but include strangers in the foreground or to the sides. They then upload those pictures to their social media accounts where, even if they aren’t tagged, those strangers are vulnerable to facial analysis software and the eyes of the public. We hypothesized that, if given cues that they are sharing the faces of strangers, people might change their behavior by altering the photo to obscure those faces.

Our initial hope was to create a digital interface, but time and tech constraints limited us to a paper prototype. We took photographs which contained bystanders but were focused on a different element, in this case a sign or a presenter with slides. We gave our participants the choice of selecting one of these photos to hypothetically upload to their social media account (we asked the participants to imagine that these were pictures they had taken). After selecting the photo, they were presented with an upload interface with the option to go back and select another photo, crop the image, or upload the photo. However, this interface was given to three different groups under three different conditions. The first group was given no textual cues as to the presence of potential bystanders in the photo (our control). The second group was given textual cues that there were potential bystanders in the picture, e.g., “this photo may contain two people, inside, standing up”. The third group was given visual cues that there were potential bystanders: blown-up images of the faces beneath the main image.

These images were used with the express permission of the people they depict

For the most part, people uploaded the pictures anyway, not bothering to crop out the bystanders and not expressing concern for privacy in the follow-up questionnaire. The cues didn’t make a significant difference in behavior across the three groups, but we were surprised that such a technologically enlightened group didn’t take more measures to protect people’s privacy. Of course, our test group only contained 15 people (five per scenario), our prototype was on paper, and there were a number of other potential issues with our methodology, but the question and premise remain sound. How can we help people be aware of the fact that they may be violating other people’s privacy when uploading photographs to social media? And how do we help them alter that behavior?
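With groups this small, even a large behavioral difference would be hard to detect statistically. As a hedged illustration (the counts below are hypothetical, not our actual results), a one-sided Fisher exact test, which is the standard choice for 2x2 tables with tiny samples, shows how weak the evidence from five-person groups is:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact test for a 2x2 table [[a, b], [c, d]].

    Returns the probability, with both margins fixed under the null
    hypothesis of independence, of a count in cell (0, 0) at least
    as large as `a` (hypergeometric upper tail).
    """
    row1, col1, n = a + b, a + c, a + b + c + d

    def p_exact(k):
        # Hypergeometric probability of exactly k in cell (0, 0)
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)

    k_max = min(row1, col1)
    return sum(p_exact(k) for k in range(a, k_max + 1))
```

Even if, say, 2 of 10 cued participants had cropped the photo versus 0 of 5 controls, the one-sided p-value would be about 0.43, nowhere near significance. A follow-up with a working digital prototype would need substantially larger groups.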

The next day I attended a presentation given by Roberto Hoyle about his work testing the efficacy of various photo alterations in protecting privacy. Afterwards, we got to talking and posited an idea. What if Facebook added a feature to their image upload interface that asked a simple question: “Do you want to protect the privacy of the people you don’t know in this picture?”. If the person said yes then Facebook could auto-blur the faces it didn’t recognize as friends. The blur feature could be removed or modified, but it would bring the issue to the attention of the user and make it easy (and hopefully aesthetically pleasing, or at least acceptable), to obscure the faces of strangers.
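The obscuring step itself is easy to sketch. Face detection would have to come from a vision library or the platform’s existing friend-recognition model; the function below takes the bounding boxes of unrecognized faces as given and shows only the blur, using a plain box blur on a grayscale image represented as a list of lists. All names and parameters here are illustrative, not any platform’s actual API:

```python
def blur_regions(image, boxes, radius=1):
    """Return a copy of `image` (a 2D list of grayscale ints) with a
    simple box blur applied inside each (top, left, bottom, right)
    rectangle. Rectangles are half-open: bottom/right are excluded.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # copy so the input is untouched
    for top, left, bottom, right in boxes:
        for y in range(top, bottom):
            for x in range(left, right):
                # Average the neighborhood, clamping the window at edges.
                ys = range(max(0, y - radius), min(h, y + radius + 1))
                xs = range(max(0, x - radius), min(w, x + radius + 1))
                vals = [image[yy][xx] for yy in ys for xx in xs]
                out[y][x] = sum(vals) // len(vals)
    return out
```

In the imagined upload flow, the interface would run this over the boxes of faces not matched to friends, then let the user remove or adjust the blur before posting.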

While we agreed it was probably a moonshot, I decided to go down to the exhibition hall and talk with the Facebook folks at their booth. I was met with a combination of skepticism and interest. Since then I’ve been in touch with a couple of people at Facebook advocating for the idea. If your Facebook interface changes you’ll know it’s been a success. If not? Then the benefits are exclusively mine.

Because of the Science Jam I had the opportunity to meet and work with people I would otherwise never have known, pursue meaningful ideas, improve my teamwork, practice scientific testing and analysis on a tight deadline, exercise my presentation skills, and make friends ahead of the conference itself. Libraries could benefit from implementing a similar model ahead of extended programming. Doing a week of events on graphic novels? Include a Cartoon Jam where people can come in, team up, generate ideas, produce some sketches and storylines, and share them with each other! Running a summer of gardening programs? Engage a couple of professionals in your area and encourage patrons to bring in photographs of their trouble gardens (lots of shade, rocky, hot, snow spill), form groups, hit the books, and pick each other’s minds for solutions. Trying to get the library more involved with the school letterpress? Collaborate with the experts there and run a Book Jam [3], challenging your students to connect e-readers and the early practice of printing. There are any number of ways that libraries can take advantage of the jam/hackathon model to engage their patrons and further the goal of becoming hubs for creation, not just consumption.

Excited? Inspired? Ready to work up a plan for your own hackathon or Jam? Take a look at the resources below and get cooking.


  1. Current research in the Program on Information Science focuses on how measures of attention and emotion could be integrated into these interactions.  
  2. CHI will be in beautiful Scotland next year. Attend the Science Jam. You won’t regret it.  Oh, and if you want to check out some of the documentation from this year’s Science Jam take a look at #ScienceJam #CHI2018 on Twitter.
  3. The very cool Codex Hackathon is already taken

Crosspost: How Big Data Challenges Privacy, and How Science Can Help

This originally appeared in the May 8th edition of the Washington DC 100. It was co-written with Alexandra Wood, at the Berkman Klein Center, and briefly summarizes our joint paper:

Micah Altman, Alexandra Wood, David R. O’Brien, and Urs Gasser, “Practical approaches to big data privacy over time,” International Data Privacy Law, Vol. 8, No. 1 (2018).


The collection of personal information has become broader and more threatening than anyone could have imagined. Our research finds traditional approaches to safeguarding privacy are stretched to the limit as thousands of data points are collected about us every day and maintained indefinitely by a host of technology platforms.

We can do better. Privacy is not the inevitable price of technology. Computer science research provides new methods that protect privacy much more effectively than traditional approaches.

And research practices in the health and social sciences show that it is possible to strike a good balance between individual privacy and beneficial public knowledge.


GUEST POST: Resources for Software Curation

Alex Chassanoff is a CLIR/DLF Postdoctoral Fellow in the Program, working to identify, understand, and describe baseline characteristics of software creation, use, and reuse in research libraries and archives, grounded in cases found across MIT.

Below is a (growing) compendium of resources related to software curation for collecting institutions.

What’s missing? Email me here!

I. Collecting/Acquiring/Appraising Software

Data Management, Planning & Policies

Cornell's Guide to Writing "Readme" Style Metadata: Templates/best practice/guidance for creating "readme" files to accompany data sets/software.

Data Management Planning Tool (2011-present): An online application that helps researchers create data management plans.

Depsy (2015-present): Depsy helps users investigate impact metrics for scientific software, tracking research software packages hosted on CRAN (software repo for R programming language) or PyPI (software repo for Python-language software).

GNU Ethical Repository Criteria: Criteria for "hosting parts of the GNU operating system"; can also be used to evaluate other repositories hosting free source code (and optionally executable programs too).

1st IEEE Workshop on Future of Research Curation and Research Reproducibility (2016): Summarizes workshop discussions and recommendations related to curation of research data, software, and related artifacts.

IFLA Key Issues for E-Resources Collection Development: A Guide for Libraries (2012): Overview for libraries that addresses some key issues in collecting “e-resources.”

Springer Nature Research Data Policies (2016): FAQ by researchers about data policies, data repositories, and sharing data.

Guidelines & Tools

Collecting Software: A New Challenge for Archives and Museums

Guidelines for Transparency and Openness Promotion in Journal Policies: "Established by the Open Science Framework, the TOP Guidelines provide a template to enhance transparency in the science that journals publish. With minor adaptation of the text, funders can adopt these guidelines for research that they fund."

How to Appraise and Select Research Data for Curation (2010): Discussion of appraisal concepts; geared towards research data but provides insight into practices for appraising software.

Media Stability Ratings (2018): Assigns a "media stability rating" to different media formats, in an attempt to mitigate loss.

Stewardship of E-Manuscripts (2009): Compilation of tools that can be used in acquisition & stewarding of born-digital materials.

Timbus Debian Software Extractor (2015): Tool to extract metadata for Debian software packages, developed as part of the Timbus Context Project.

II. Describing Data/Software/Environments

Descriptive Standards & Definitions

Asset Description Metadata Schema for Software: A metadata schema and vocabulary to describe software making it possible to more easily explore, find, and link software on the Web.

Best Practices for Cataloging Video Games using RDA & Marc21 (2015):

DataCite (2016-present): A metadata schema for the publication and citation of research data.

Data Documentation Initiative (2011-present): Standard to describe the data produced by surveys and other observational methods in the social, behavioral, economic, and health sciences.

DDI-RDF Discovery Vocabulary (2013): RDF vocabulary to support the discovery of micro-data sets (aka "raw data") and related metadata using RDF technologies.

Force 11 Software Citation Principles (2016): A consolidated set of citation principles that may encourage broad adoption of a consistent policy for software citation across disciplines and venues.

Software Ontology (2011): A resource for describing software tools, their types, tasks, versions, provenance and data associated.

Trove Software Map: Classifies software by the following 9 attributes: development status, environment, intended audience, name, natural language, operating system, programming language, and topic.

User Studies

Software Search is Not a Science, Even Among Scientists (2016): Survey of how researchers search for software, including criteria they use to evaluate software results (e.g., how easy the software is to learn).

Examples of Cataloged Software/Data Sets/Repositories

JHU's Data Archive: Data and Software associated with Seviour et al

Computer History Museum's Source Code for FORTRAN II compiler

re3data: Registry of research data repositories

III. Preserving Software

Case Studies & Reports

A Case Study in Preserving a High Energy Physics Application with Parrot (2015): Describes the development of Parrot, an application dependency capture program for complex environments.

Exploring Curation-Ready Software (2017): Report 1 by the Curation-Readiness Working Group at the Software Preservation Network.

Heritage.exe (2016): Cross-comparison case study of software preservation strategies at three US institutions.

Improving Curation-Readiness (2017): Report 2 by the Curation-Readiness Working Group at the Software Preservation Network.

Preserving and Emulating Digital Art Objects (2015): Reports on the results of an NEH-funded research project "to create contemporary emulation environments for artworks selected from the archive, to classify works according to type and document research discoveries regarding the preservation effort."

Preserving Virtual Worlds I, II (2007-2010; 2011-2013): The Preserving Virtual Worlds projects I and II explore methods for preserving digital games and interactive fiction.

Preserving.Exe: Toward a National Strategy for Software Preservation (2013): A report from the National Digital Information Infrastructure and Preservation Program of the Library of Congress, focused on identifying valuable and at-risk software.

SPN Metadata Survey (2017): Survey results on how institutions with digital preservation programs are using metadata to aid in preserving software.

Research Initiatives

The Digital Curation Sustainability Model (DCSM) (2015): JISC-funded project to highlight the key concepts, relationships, and decision points for planning how to sustain digital assets into the future.

National Software Reference Library (NSRL): The NSRL is designed to collect software from various sources and incorporate file profiles computed from this software into a Reference Data Set (RDS) of information.

PERSIST (2012-present): UNESCO hosted initiative to "ensure long-term access to the World’s Digital Heritage by facilitating development of effective policies, sustainable technical approaches, and best preservation practices."

Software Preservation Network (SPN) (2013-present): Community of practitioners and researchers, working to address the problems of how to preserve software.

Software Heritage Network (2016-present): "The goal of the SHN is to collect all publicly available software in source code form, replicate it massively to ensure its preservation, and make it available to everyone who needs it."

Tools, Applications, Best Practices & Standards

Library of Congress Recommended Format Statement for Software: "Identifies hierarchies of the physical and technical characteristics of software which will best meet the needs of all concerned, maximizing the chances for survival and continued accessibility of creative content well into the future."

National Archives' Strategy for Preserving Digital Archival Materials (2017): Overview of strategies used by NARA to preserve digital materials.

Obsolescence Ratings (2018): "This list categorizes the ease with which a range of formats that have been, or are, in common use in their fields can be read, in terms of the equipment available to do so."

Pericles Extraction Tool (2015-present): Extraction of significant environment information from live environments, to better support object use and reuse, in the scope of long term preservation of data.

Preservation Quality Tool (2016-present): "This tool will provide for reuse of preserved software applications, improve technical infrastructure, and build on existing data preservation services."

Software Independent Archival of Relational Databases (SIARD) (2007): An open file format developed by the Swiss Federal Archives for the long-term archiving of relational databases; data can be stored long-term independently of the original software.

Guest Post: Scholar Profile of Nick Montfort

Alex Chassanoff is a CLIR/DLF Postdoctoral Fellow in the Program. She has been conducting interviews with scholars across MIT's campus who create, use, and/or reuse software to understand more about their scholarly practices. Below are snippets from an interview with Nick Montfort, a professor of digital media in the Comparative Media Studies and Writing section at MIT. Nick is also an interactive fiction writer, computational poet, and code studies scholar.

On Reconstructing Code

“So software or creative computing programs or research programs….these are the areas I work in.  There are different sorts of outcomes and some of them are important software produced at MIT, like Joseph Weizenbaum’s Eliza which is a very frequently cited research system and highly influential – Janet Murray named it the first “computer character.” It’s a simulated parody of a Rogerian psychotherapist….asking for you to speak about yourself, and then reflecting that back for you to hear.

One of the interesting things about this system from my perspective is that the original code doesn’t exist, but there’s a paper that describes its function in great detail. So there are many, many re-implementations of it. You can run it on the Commodore 64 in BASIC – there are programs to implement an Eliza-like system for that. So there’s not really a canonical Eliza in the way that there is a canonical Adventure. The lack of preservation for software doesn’t always mean that – if you don’t have the original code or object – it doesn’t always mean that it’s not influential, important, able to be cited, able to be part of the intellectual discourse. Of course, it presumably doesn’t HURT to have access to those works in any case.”

On Emulation as Software Preservation

“An emulator is a software version of a computer. Some people find it very distasteful that the emulator is not the authentic hardware, which is interesting to note…the way we see it, you can think about it as a particular edition OF a computer. In fact, the Commodore 64 that’s over there (points) running that program right now is one edition, but there are different editions of the C64 with different hardware. So for example, there’s been a ROM revision to the Commodore 64, so it behaves a little bit differently depending upon which ROM revision you have. So, in fact, even when you say, ‘the hardware, it’s running on the hardware’…there’s more than one ‘the hardware’! I think that’s even more obvious today. So, for example, you have a PlayStation 3 that is supposed to be compatible with a PlayStation 1 or 2 initially, but then that feature is dropped as they refine the production of it….”

A Close Reading of a Commodore 64 Keyboard


“You can see a lot about the layout of the keyboard which is different from modern keyboards. So if you tried to type in this program that I initially typed in, one thing that you might find funny about it is that if you press shift plus…you need the shift to type plus on a modern keyboard…you get this large cross symbol that doesn’t work – it’s not a plus sign – it’s a special graphical character…the keyboard layout is different in several ways…you have a pi symbol on the keyboard, you don’t have curly braces, the arrow keys are in the bottom right, and you need to press shift to move up and shift to move left…so maybe these are all curiosities, but when you start to use the system, they change your experience of it. The other thing is that these graphical characters, including the ones you see on here, are characters you can just type, along with other graphical characters. You can type them into a program or directly at the BASIC interpreter – you can deal with it quite easily…

The thing about the hardware version, then, just from the standpoint of the keyboard: you can see the keyboard is different. It wasn’t standardized in the way that our Mac and PC keyboards are today, but it also provided these extra facilities – the curious character set of the Commodore 64 was exposed to you because it was actually visible on the keyboard – you could see what the different characters were. And when you work in an emulator…well, first of all you have to figure out how you want your key mappings to be. For example, if you’re a Commodore 64 touch typist, you might want your keyboard to be set up in the same physical layout as the C64, but mostly people choose a logical layout where, for instance, if you press shift plus on your keyboard it’s going to correspond to the plus sign on the Commodore 64. So, you have these issues with setting up the keyboard – that’s one of the reasons why emulation is better suited for joystick games, where there’s a pretty straightforward mapping, than for using the keyboard in elaborate ways. On the other hand, if you do want to use an emulator, it provides these extra facilities. So, you can save the full state of the machine at any point. So if you looked at something more intricate and wanted to show how a word processor or GEOS (the Macintosh-like operating system for the C64) or an elaborate game that has a lot of state…if you want to show how these things worked, then you probably want to save a particular point, and you might not always have the capability for doing this within the software itself, but the emulator would allow you to say, ‘Ok, we’ll just take the full machine state,’ and will allow a classroom working together or students individually or scholars to come back to that.”

On Temporality and Games

“I don’t go very often to play old games, actually…I fear I’m more of a collector (laughs), although I am interested in the ability for people to use these, rather than in their preciousness and economic value. When people came to play A Mind Forever Voyaging, we did some videography. It’s a 1985 Infocom game and it’s very easily played on modern-day computers. But what I did is I set up, for a group of four people, the first official Infocom edition of the game to run on the Apple IIc. And then over on this large screen, I connected a computer with the most recent (although it’s pretty old) official Infocom release. Activision released this Masterpieces of Infocom for MS-DOS and Windows 3.1/Windows 95 at some point in the late 90s. And I had this running in DOSBox, essentially. So they had their choice between playing these…or both of these…and the group decided they wanted to play on the Apple II, and they remarked on some specific material differences there.

One of the things that’s interesting is that the pace of play is different – you don’t have a multi-tasking machine, it’s not connected to the internet, you can’t go and look for hints…you can go and look on your phone, of course, but you don’t have it easily available to you. Additionally, you don’t have the same very rapid pace. I watched students playing interactive fiction recently and not stopping to read the text outputs, just sort of powering through typing commands. On the Apple II, when you type a command, there would be a little pause before you get a response. If you type something that’s completely not understood or not useful, you would get a response back fairly quickly. And then if you did something interesting that changed the state of the game or required disk access, then there would be a longer pause — the disk would spin up, and for players, what I remember and what people report is that there is this moment of anticipation – like ‘Oh Something Is Going to Happen Now! It’s So Exciting!’ So the material qualities of the system there make some sort of difference in play. I think it’s also why people would play interactive fiction pieces that took maybe ten or twelve hours to work through in the 1980s. People spend that much time playing games, but interactive fiction specifically is much more abbreviated in comparison to that. Now people make 2 hour 15 minute games that are for briefer play – people still enjoy engaging with the form – eighty games were released in the IF Competition this year.”

On Authenticity and Networked Everything

“At a classic gaming expo, there was this setup with a big wood-grained cathode ray tube television, and like a really ugly 1970s couch with Atari cartridges on a coffee table and a system in front…and of course it’s in the middle of a convention center, not in someone’s house, and you could sit down and play the games in this reconstructed sort of context. So people can always build more or less context around things, to give different sorts of ideas. We can’t reconstruct even the 70s or 80s in great detail, and certainly as you go further back in the history of material texts or literary or gaming or cultural history, it’s very tough to do. So I think that there are certain things that people are going to encounter because of historical interest and as scholars. Their engagement with it might be limited and that’s fine; they also might bring ideas back into the mainstream. So for instance, one of my points in showing people the Commodore 64 is that you can turn it on, you can write a one-line program like this…it’s not just historical curiosity about the Commodore 64. There are a bunch of reasons for this. It didn’t come with a disk drive; you needed to purchase it separately, which allowed for the up-selling of it. And it allowed for lower cost of that one unit that didn’t have moving parts and so forth. But it did have BASIC built in, which was the case with essentially all home computers at the time, and that programming language did facilitate this immediate exploration of what you could do with computing, being able to do very small scale programs. Some people would type in pages-long programs from magazines or books and not have any way to save them! So when you turned off your computer it was gone!
But it took a long time to type this in, and you might make mistakes and have to go correct it, and then you could play the game afterwards; but as soon as you turned the computer off it was gone, and the whole process of doing this engaged you with programming and computing in ways that aren’t as possible now.

Of course, there are people who did engage with the early World Wide Web that way: they went to ‘view source,’ they looked at how HTML was put together, and that’s how they learned. There’s no ‘view source’ in the App Store…there was ‘view source’ in the 90s, and there still is, but this ability to turn something on and immediately type in a short program and make changes to it, work with it, is not something you get now. I bring this up when people come in – sometimes students say, ‘I’d like to take your course, and it says no programming experience is required, but I’m worried that I don’t have programming experience,’ and I say, ‘Well, sit down at the Commodore 64 and let’s program some.’ And in fact it’s not that much of a challenge when it’s posed that way. So, it’s still something that is useful today, and it’s still also useful as a design critique of current computers. While we’ve added a lot of capabilities – certainly the Commodore 64 is not better at accessing social networks, video editing, etc. – we’ve lost some of the ability to work with computation in direct and useful and powerful ways. And I’m not sure that an emulator accomplishes that – I think sitting down at a Commodore 64 accomplishes that in a different way, because by the time you have installed the emulator and opened it up and your keyboard doesn’t match, etc., you have made things into a much harder problem than they originally were.”

On Curating Software-Driven Works: Autofolio Babel 

“This is Autofolio Babel or Portfolio Babel, you could also say, it’s based on Jorge Luis Borges’ Library of Babel – there are a lot of computational projects on this.  One of the things about this piece is that Borges defines quite specifically how the books are supposed to look: that they are 80 characters wide and 40 characters tall, arranged in a square… and Borges specifies a 24-character alphabet with some punctuation symbols. Instead of using this alphabet, I used a unigram distribution of Borges’ story itself in Spanish.  So the most likely thing that one would see coming up on the screen would be a page from Borges’ story, and if you look closely you can probably see, because of accent marks maybe if you study it for a while,  you can tell that it’s Spanish language text in its origin.
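The unigram approach Montfort describes can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual code behind Una página de Babel; the short `source_text` below stands in for the full Spanish text of Borges' story, and the 80×40 page dimensions follow the description in the interview.

```python
import random
from collections import Counter

# Stand-in for the Spanish text of Borges' story (hypothetical sample;
# the real piece uses the full text of "La biblioteca de Babel").
source_text = "el universo que otros llaman la biblioteca se compone de un numero indefinido "

# Build the unigram (single-character) frequency distribution of the source.
counts = Counter(source_text)
chars = list(counts.keys())
weights = list(counts.values())

def babel_page(width=80, height=40):
    """Sample one page: each character is drawn independently from the
    unigram distribution, so the single most probable page is one whose
    character mix matches the source text itself."""
    rows = []
    for _ in range(height):
        rows.append("".join(random.choices(chars, weights=weights, k=width)))
    return "\n".join(rows)

page = babel_page()
```

Because each position is sampled independently, a viewer who studies the output long enough can recognize the character frequencies (and accent marks) of Spanish, exactly as Montfort suggests.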



Screenshot, Una página de Babel

So there’s a piece of software; each of these screens is driven by a Raspberry Pi Zero, and this is just a program – it goes much slower than if it runs on a standard, much larger computer. I’ve rotated the screen at the HTML level. The material aspects of this are a bit different: we have a folio here (two screens), and here (two computers) – it’s one folio that generates another folio, although this folio is powering this folio down here. They really generate each other.

One of the ways in which this work might be presented is on a table, possibly in front of a chair, or at a lectern, in a way that is suitable to its nature as a book object rather than some other type of screen.  So it would be similar to the kind of curation that people do with video art and to have that kind of care with a piece like this.  There are elements of these pieces that will wear out.  And thinking about if you were curating [Nam June Paik’s] Electronic Superhighway – it has like 170 CRTs and you can’t just say I’ll throw in a flat panel if one of them goes out…most people can, but not people who curate video art.

It’s not really a software concern at this point, but rather a system concern for a system that includes software.  And having Babel as the software component work – that’s more or less a subset.  I wouldn’t want someone to take video of this and put that video out as a ‘preservation method.’  This needs to be a functioning computing machine for this to work, so the software preservation would be part of it from my standpoint.

So I would want the ability to actively compute and recombine…and then one could do various things…in the same way that if your book wears out – you have some type of manuscript or print codex that is damaged or something – you can think about how you would restore it if it were a book. So you can obviously rebind books; in this example, maybe it would be the opposite of binding – maybe you replace the screens but keep the casing and power apparatus if there were some problem there. Certainly, if you needed to replace capacitors, most people wouldn’t say that would be problematic. It sort of gets into being a Ship of Theseus problem…of how much replacement effaces the original. This is an interesting case, but it’s something I would consider within book arts/art curation. I would say librarians and special collections have a particular perspective on it, and art curators would have another.”

Describing Autofolio Babel (currently in the Trope Tank at MIT)

“Autofolio Babel consists of these two Dell displays. They are the same model, logos in the front are covered with gaffer’s tape, these are salvaged…everything here is salvaged…I bought the Raspberry Pis at some point but not for the purpose of making this particular piece. So this is a type of bricolage, maybe…one of the ways you could describe the media of the piece is reused electronics. These have two monitors that are detachable from these stands, but they are both on the stands that come with them. There are two Mini-HDMI to HDMI male to male cables. There are two micro USB to USB male to male cables. There are two Raspberry Pi Zeros – a very early model. There are 8 GB SD cards, two of those. There’s two of everything because it’s a folio. And these are bound together with two wire twist ties – and there are two power cords which go from the monitors to a standard 125 volt power supply. So the SD card has a Raspberry Pi image, and that’s an image that is set up to automatically start. It’s a fairly standard image, but there are a few important changes that are made so it starts a browser. In this case, it starts Chromium in a particular mode where it doesn’t pester and ask you about unlocking your password and stuff; and it sets it to full screen and runs. It also turns off screen blanking, power saving, and screen saving. So this will run as long as this is on, and then the piece itself that’s in there is a free software piece – it’s a single webpage that is almost the same as the one that’s online at –the change really is just rotating this page.

If I were to sell this to a collector, for instance, they would….I’m trying to think of what the licensing situation would be…there is a slight customization I’ve made to a free software piece, but there’s nothing that the collector would be able to do that would restrict the basic software from being freely available as it is now…and also able to be modified. People can make their own versions, they can make their own work out of it, as has happened at least once. So I’ll just show you…this is just an operating system, that’s Chromium…I haven’t hooked up a mouse, just hooked up a keyboard, but in fact you don’t really need a mouse because you can get to most things on the keyboard here. So this doesn’t have networking – it’s not on the network, and this particular piece is to be read in a certain way, for certain values of reading. This is easier to manage since this is not a networked artifact – it doesn’t receive updates – there are not security issues with it – you can go in and mount this card read-only and go through the whole image if you wanted and get the information you wanted, or copy it and go through it.”

On Authorship and Code Modification

“For my dissertation, I created a research interactive fiction system called Curveship with its own domain – so you could do everything you expect to do with interactive fiction, but it wasn’t deployable.  You couldn’t make a game you could give to other people.  So for that reason or other reasons, it never took off for people to use.  But that’s a larger system with thousands of lines of code – in theory it would be a platform for work.  Most of my work is considerably smaller- a page or line of code – these are online for people to use and modify.   Taroko Gorge is an example of something I wrote in Python when I was in Taiwan years ago, and after that made a JavaScript version of it.  And people began to modify that JavaScript version and put in their own words, without having a lot of expertise as programmers or identifying as programmers.  And they started to make their own “remixes” of that work, so there’s dozens of those that are available online.   To me, they don’t really threaten the integrity of the original work. I suppose there’s a possibility that someone could be confused that someone’s later modification might be something I did somehow.  But given the whole context of computing, the real concern is that people are intimidated and don’t think that things are open to modification – I see that it’s much more urgent to make that work available.  

I have a project called Memory Slam which is a slowly-growing collection of classic systems – classic and simple versions that I’ve re-implemented.  So, I’ve made Python versions and I’ve made JavaScript versions…there’s six of those pieces now.  I created this so that people could study and modify these systems but they are not close material re-makings of the systems.  So David Link took an exhibit on tour where he rebuilt Ferranti Mark 1 (the world’s first commercially-available electronic computer) and had things functioning very much like the original Christopher Strachey Loveletter Generator and for the people who got to go to that exhibit, great, but there’s another experience to be able to study and modify the way that code like that functions.  So, for example, could you make a love-letter generator into something that expresses dislike or hatred of someone? Could you make a love-letter generator about food? To what extent are the formal properties of the system susceptible to various changes?  


Screenshot, Love Letter Generator

So when I redid these, the point was mainly to make them available for that type of study, modification, play…I think they are good formal models of those original systems, but they are not capturing all the material qualities. And the reason I mention all this about Memory Slam is that probably it would make sense to put new versions of that code up – I have Python 2 code – and it might be useful to add Python 3 or somehow find something that could work in both versions. I could make cleaner HTML and JavaScript versions. And if I do this – is there a point to keeping the original version, and how would that be kept?”
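One conventional route to the "something that could work in both versions" Montfort wonders about is the `__future__` module. This is a generic sketch of the idiom, not Memory Slam's actual code:

```python
# A generic sketch of Python 2/3-compatible code (not from Memory Slam).
# These imports are no-ops on Python 3 but change Python 2's behavior
# to match: print becomes a function, and / becomes true division.
from __future__ import print_function, division

def halve(n):
    # True division under both interpreters: 7 / 2 is 3.5, not 3
    return n / 2

if __name__ == "__main__":
    print(halve(7))
```

For small generator pieces like these, this keeps a single source file runnable under either interpreter, sidestepping the question of maintaining two parallel versions.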

Dear Reader, I Was Hoping He Would Tell Me

“So one thing I could do is include the git repository in the directory itself that’s available to anyone – so if you really care to know the history of it…you can review that.  When I worked on Curveship, I used Subversion.  Sometimes, it’s rather heavy and sometimes you don’t know whether you will be done with something in 30 minutes or whether it might be a project of several weeks. And you don’t know with a small scale work, do you want to create a branch where you are exploring that you might merge in? This version control perspective is often quite elaborate for very small scale projects.”

On Distributional Poetics

“This 10-print program, which is a random maze generator, is an example of a particular type of distributional poetics, where you see there’s two symbols and in this case, picking from them is equally likely… and that’s a concrete poem or visual art piece that’s made that way. You can make things with words or with lines or syntactically with phrases as well.  There is a shift both as a reader or appreciator of this work from an aesthetic perspective, and as a maker of this work.  It’s that both perspectives need to be…it’s only meaningful if they are attuned to the distributional nature of the work.
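The distributional idea is easy to see in miniature. The Commodore 64 one-liner 10 PRINT CHR$(205.5+RND(1)); : GOTO 10 picks one of two PETSCII diagonal-line characters with equal probability; a rough Python analogue (using ASCII slashes as stand-ins for the PETSCII diagonals) might look like:

```python
import random

def ten_print(width=40, height=10, seed=None):
    """Sketch of the 10 PRINT maze: each cell is one of two symbols,
    chosen with equal probability, emitted as a continuous stream
    wrapped to the screen width."""
    rng = random.Random(seed)
    lines = []
    for _ in range(height):
        lines.append("".join(rng.choice("\\/") for _ in range(width)))
    return "\n".join(lines)

print(ten_print(seed=1))
```

The uniform two-symbol distribution is the whole "text" of the piece; reading it as a maze, a weave, or a concrete poem is the aesthetic work the viewer does on top of that distribution.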

So Borges’ description of the Library of Babel is one in which you have an exhaustive library and some pages might be ripped but there’s always a page that is one character different somewhere else in the library, right? So the idea of an exhaustive library in which every possible page like this, every volume containing these pages is represented, and this is a distribution of analog…it’s important also that even though you don’t see this in the work, on the web it makes more sense but these are pages…they are web-pages…so that is something that metaphorically connects through the web to Borges’ idea. So if you come to this thinking ‘it’s a loop of video’ rather than ‘its producing every possible arrangement of these letters’ then I don’t see how your aesthetic perspective on it would be particularly useful – or would allow you the fullest appreciation of it.  I think there are ways in which we are readers of distributions and ways in which we are writers of distributions, and this is keeping things fairly simple, because if you start with existing stores of text and process them, that’s something else. But here we are just talking about a simple distribution system and just processing them right? So the poetics question is – how do we present this in such a way and how do we make this in such a way that it has the inter-textual connections and the metaphorical connections? It is a page, it connects to the description, it implements the specification of Borges’ story in one way but not in another way…and so forth.

So the poetics of this piece have to do with the physical organization of it, what’s shown to someone who is viewing it.  There are certain things… it has a title that evokes something about book arts, for instance, and so a person who knows something about digital media art and something about book arts might know there are things that appear on screens that aren’t videos and might be more aesthetically prepared to receive this.” 

Preservation as Play Back?

“So there’s also the ability to document things.  Compared to documenting a play, it would be significantly harder to have video documentation of a play in part because when you get video documentation it interferes with the production of the play – with the actors putting it on. Here you can just go and take video of this and see what the piece looks like, pretty much, as documentation, but you are not preserving the object any more than taking a good photograph of a painting is preserving a painting.  The archival perspective is often coming from record keeping…in this case, the informational content or the record content is maybe not the main thing going on.”

What is the Scholarly Object? What Should we Preserve? 

“Let’s make a distinction between traditional scholarship and creative practice – so in this piece (Una página de Babel) the software component is referred to by Álvaro Seiça in his PhD and some of his work was actually modifying this piece. So from that standpoint, it enters traditional scholarship, just as there has been practice-based scholarship with other pieces of mine. So in order to follow the arguments that Álvaro makes, in order to follow the discussions in the “great conversation” – what types of software preservation should be done….well, this goes back to Joseph Weizenbaum. The version we have for his system is a LISP implementation that some people call the original, but he didn’t write it in LISP, he wrote it in Michigan Algorithm Decoder, this system called MAD, the code may be around….it might be in the archives….but the core of what was needed was his representation of how that system worked in his paper.  Now could we learn more about the specifics of this — the type of implementation he did, what his process of development was–  if we had that code….yes, of course, that would be very useful.  And we have snippets of example interactions.  But at some point there were lots of these and they were on Teletypes so they were actually in a medium that, if that hadn’t been discarded…there could be a box of transcripts with Eliza that is sitting in the Institute Archives right now.”


Guest Post: Graduate Research Intern, Ada van Tine, on Libraries & Neurodiversity

Ada van Tine is a Graduate Research Intern in the Program on Information Science, researching the area of library privacy.


Our Libraries and Neurodiversity

By Ada van Tine

Andover-Harvard Theological Library Stacks by Ada van Tine

It is a quiet day at the library where you work; you find it peaceful. But that is not the case for everyone. One of your patrons, Anna, is an 18-year-old woman on the autism spectrum. She needs to do research for her college final paper on W.E.B. Du Bois. She lives with her parents near the school and library, but their house is noisy and full of visiting relatives right now. Anna doesn't consider the library a calm alternative, though; she is very nervous about going because the fluorescent lights intensely irritate her, their buzzing endlessly permeating her brain and causing nausea. To cope with this, she often makes repetitive movements with her hands. In the past, librarians and other patrons have reacted awkwardly to her hand movements and her response to the lights. But she really needs these books for her paper. What will you do as a librarian to help this patron meet her needs? For individuals who are members of a neurominority, libraries can be extremely stressful, upsetting, and in the worst cases traumatic.

In libraries, we understand that we need to accommodate people who are different, but the problem is that sometimes we are not aware of who we might be failing to serve and why. If Anna gives feedback about the library in a suggestion box, you might well schedule a replacement of the fluorescent lights as part of the library's renovations. That is a small step toward progress; however, we should not wait for an invitation to make our libraries more bearable, leaving the chance that some patrons are suffering in silence in the meantime. Librarians need to be radically proactive so as not to make their spaces welcoming only to the part of the population with neurotypical leanings. The solution, however, is not merely a focus on those who are “different” and need some kind of special accommodation.

Rather, the researchers and advocates who talk about neurodiversity now stress that neurodiversity is “the idea that neurological differences like autism and ADHD are the result of normal, natural variation in the human genome.” (Robinson, What is Neurodiversity?) Simply put: all humans fall on neurological spectra of traits, and all of us have our own variances from the norm. For each person in the world there exists a different way of perceiving and interacting with other people and information. For instance, people with dyslexia, people with autism, people with ADHD, and people who have not had a good night's sleep all perceive the world and the library differently. The concept of neurodiversity is another way to recognize that.

Furthermore, new research is continually helping us to evolve our ideas about neurodiversity. Therefore, libraries should stay abreast of advancements in technology for the neurodiverse population because they will benefit every patron. “Actively engaging with neurodiversity is not a question of favoring particular personal or political beliefs; rather, such engagement is an extension of librarians’ professional duties insofar as it enables the provision of equitable information services” (Lawrence, Loud Hands in the Library, 106-107). Librarians are called through the ALA Core Values of Access and Diversity to make all information equitably available to all patrons. To not recognize the existence of neurodiversity would be to ignore a segment of the whole society which we are called to serve.

There are immediate ways that your library can better serve a larger portion of the neurodiverse population. For example, below are some relatively low cost interventions:

  • For dyslexic individuals, have a small reading screen available. Research has shown that those with dyslexia can read more easily and quickly off smaller screens with small amounts of text per page (Schneps).
  • Audiobooks, text-to-speech, and devices that can show text in a color gradient also help dyslexic patrons with their information needs.
  • For people who are on the autism spectrum, replace the older fluorescent lights in the library, and don't focus solely on open collaborative spaces in the library layout (Lawrence, Loud Hands, 105). Also train yourself and your employees to recognize and know how to react properly to autistic individuals who may express nonverbal body language such as repetitive movements (Lawrence, Loud Hands, 105).
  • For people with ADHD, have quiet private rooms available so they can better concentrate at the library as well as audio books and text-to-speech programs so that they can listen to their research and reading while doing other things (Hills, Campbell, 462).
  • Train staff never to touch a person who is on the autism spectrum without their explicit permission, to be aware of their sensory needs, and to hold the interview in a quiet place with no background noise (such as an office fountain) and no fluorescent lights. Some people on the autism spectrum are also smell-sensitive, so ask staff to refrain from wearing perfume.

New technologies and findings in cognitive science are being developed to better adapt to those individuals who are members of a neurominority. For example, a new reading program being developed by Dr. Matthew Schneps combines a reading acceleration program with compressed text-to-speech and visual modifications, and has so far been shown to drastically increase the speed of dyslexic and non-dyslexic readers alike (Schneps). There are many studies on the ways in which modern technology can be used to better communicate with and educate autistic students. The future is hopeful.

Addressing neurodiversity in our libraries and in our societies is not a solved problem. For example, research and development is underway to reframe digital programs as an ever-growing ecosystem, never in stasis, so that they may better adapt to every user’s needs, and to be transparent about program metadata so that users can know which parts of a system are enabling or disabling their assistive technology (Johnson, 4). There are many steps we can take to make the library friendlier to a neurodiverse audience, but the most important thing to keep in mind is that we must all plan to change and adapt, now and over time, to make our society a better, more livable place for everyone. Then, when Anna comes to do her research, the library and staff will be prepared to be a little more welcoming than she expected, and maybe she’ll even want to come back.

What to do next:


You may feel overwhelmed by the vast and complicated nature of this important task. The first step is always to educate yourself and get a grounding in basic literature about a subject. Many resources are included in the next section to aid in this discovery process.

You may wish to start off by learning about neurodiversity in general (What is Neurodiversity?, Definition of Neurodiversity). If you’ve identified a specific population need in your community, you may want to dig deeper with resources specific to that neurominority; a few are listed below (Autism Spectrum, ADHD, Dyslexia).

There are some good books and articles specifically about neurodiversity and libraries included in the resources (Library Services for Youth with Autism Spectrum Disorders, Programming for Children and Teens with Autism Spectrum Disorder, Loud Hands in the Library, Neurodiversity in the Library).

As it turns out, there is a lack of literature on best practices and programming for neurodiversity in libraries. However, to understand and engage with this topic and community, librarians should consider attending events and workshops; a number held by advocacy and research organizations are included below (ADHD, Dyslexia, The A11Y Project, International Society for Augmentative and Alternative Communication, The Center for AAC and Autism).



Reference List

The American Association of People with Disabilities. Retrieved from

Autistic Self Advocacy Network. Retrieved from

The A11Y project. Retrieved from

Campbell, I., & Hills, K. (2011). College Programs and Services. In M. DeVries, S. Goldstein, & J. Naglieri (Eds.), Learning and Attention Disorders in Adolescence and Adulthood (pp. 457-466). Hoboken, New Jersey: John Wiley & Sons, Inc.

The Center for AAC and Autism. Retrieved from

Children and Adults with Attention-Deficit/Hyperactivity Disorder. Retrieved from

Eng, A. (2017). Neurodiversity in the Library: One Librarian’s Experience. In The Library With The Lead Pipe, 1.

Farmer, L. S. J. (2013). Library Services for Youth with Autism Spectrum Disorder. Chicago: American Library Association.

How Educators Can Help Autistic People by Sensory Accommodations. Retrieved from

International Dyslexia Association. Retrieved from

International Society for Augmentative and Alternative Communication. Retrieved from

Johnson, Rick. (2017, Sept 25). Accessibility: Ensuring that Edtech Systems Work Together to Serve All Students. Educause Review. Retrieved from


Klipper, B. (2014). Programming for Children and Teens with Autism Spectrum Disorder. Chicago: American Library Association.

Lawrence, E. (2013). Loud Hands in the Library. Progressive Librarian, (41), 98-109.

Neurodiversity. Retrieved from

Ploog, B. O., Scharf, A., Nelson, D., & Brooks, P. J. (2013). Use of computer-assisted technologies (CAT) to enhance social, communicative, and language development in children with autism spectrum disorders. Journal of Autism and Developmental Disorders, (2), 301. doi:10.1007/s10803-012-1571-3

Robison, John Elder. (2013, Oct 7). What is Neurodiversity? Psychology Today. Retrieved from

Schneps, Matthew H. (2015). Using Technology to Break the Speed Barrier of Reading. Scientific American. Retrieved from

A History of the Internet : Commentary on Scott Bradner’s Program on Information Science Talk


Scott Bradner is a Berkman Center affiliate who worked for 50 years at Harvard in the areas of computer programming, system management, networking, IT security, and identity management. He has been involved in the design, operation, and use of data networks at Harvard University since the early days of the ARPANET, and has served in many leadership roles in the IETF. He presented the talk recorded below, entitled A History of the Internet, as part of the Program on Information Science Brown Bag Series:

Bradner abstracted his talk as follows:

In a way the Russians caused the Internet. This talk will describe how that happened (hint it was not actually the Bomb) and follow the path that has led to the current Internet of (unpatchable) Things (the IoT) and the Surveillance Economy.

The talk contained a rich array of historical details, far too many to summarize here. Much more detail on these projects can be found in the slides and video above, in his publications, and in his IETF talks. (And for those interested in recent Program on Information Science research on related issues of open information governance, see our published reports.)

Bradner describes how the space race, exemplified by the launch of Sputnik, spurred national investments in research and technology — and how the arms race created the need for a communication network that was decentralized and robust enough to survive a nuclear first-strike.

Bradner argues that the internet has been a far-reaching revolution, in part because of its end-to-end design. The internet as a whole was designed so that most of the “intelligence” is encapsulated at host endpoints, connected by a “stupid” network carrier that just transports packets. As a result, Bradner argues, the carrier cannot own the customer, which, critically, enables customers to innovate without permission.
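The end-to-end idea is easy to see in code. In the minimal sketch below, the network layer (here, plain UDP sockets) only moves opaque packets; the "intelligence", in this case a trivial uppercasing service, lives entirely in the endpoint applications. The service itself is purely illustrative, not anything from Bradner's talk.

```python
# A minimal sketch of the end-to-end principle: the network (UDP here)
# just moves bytes; any interpretation of those bytes happens in the
# endpoint applications. The uppercasing "service" is hypothetical.

import socket
import threading

def endpoint_server(sock):
    """Endpoint logic: interpret the bytes and decide how to reply."""
    data, addr = sock.recvfrom(1024)
    # Application-level behavior -- the network knows nothing about it.
    sock.sendto(data.upper(), addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))  # let the OS pick a free port
threading.Thread(target=endpoint_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello, stupid network", server.getsockname())
reply, _ = client.recvfrom(1024)
print(reply.decode())  # the transformation happened at an endpoint
```

Nothing in the carrier had to change to deploy this new "service"; that is exactly the permissionless innovation Bradner credits for the internet's success.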

ARPANET, as originally conceived, was focused on solving what was then a grand challenge in digital communications research: To develop techniques and obtain experience on interconnecting computers in such a way that a very broad class of interactions are possible, and to improve and increase computer research productivity through resource sharing.

Bradner argues that the internet succeeded because, despite the scope of the problem, solutions were allowed to evolve chaotically: ARPA was successful in innovating because it required no peer review. The large incumbent corporations in the computing and networking field ignored the internet because they believed it couldn’t succeed (and they believed it couldn’t succeed because its design did not allow for the level of control and reliability that the incumbents believed necessary to make communications work). And since the Internet was viewed as irrelevant, there were no efforts to regulate it. It was not until after the Internet achieved success and catalyzed disruptive innovation that policymakers deemed it “too important to leave to the people that know how it works.”

Our upcoming Summit, supported by a generous grant from the Mellon Foundation, will probe for grand challenge questions in scholarly discovery, digital curation and preservation, and open scholarship. Is it possible that the ideas that could catalyze innovation in these areas are, like the early Internet, currently viewed as impractical or irrelevant?

Safety Nets (for information): Commentary on Jefferson Bailey’s Program on Information Science Talk

Jefferson Bailey is Director of Web Archiving at Internet Archive. Jefferson joined Internet Archive in Summer 2014 and manages Internet Archive’s web archiving services, including Archive-It, used by over 500 institutions to preserve the web. He also oversees contract and domain-scale web archiving services for national libraries and archives around the world. He works closely with partner institutions on collaborative technology development, digital preservation, data research services, educational partnerships, and other programs. He presented the talk recorded below, entitled Safety Nets: Rescue and Revival for Endangered Born-Digital Records, as part of the Program on Information Science Brown Bag Series:

Bailey abstracted his talk as follows:

The web is now firmly established as the primary communication and publication platform for sharing and accessing social and cultural materials. This networked world has created both opportunities and pitfalls for libraries and archives in their mission to preserve and provide ongoing access to knowledge. How can the affordances of the web be leveraged to drastically extend the plurality of representation in the archive? What challenges are imposed by the intrinsic ephemerality and mutability of online information? What methodological reorientations are demanded by the scale and dynamism of machine-generated cultural artifacts? This talk will explore the interplay of the web, contemporary historical records, and the programs, technologies, and approaches by which libraries and archives are working to extend their mission to preserve and provide access to the evidence of human activity in a world distinguished by the ubiquity of born-digital materials.

Bailey eloquently stated the importance of web archiving: “No future scholarship can study our era without considering materials published (only) on the web.” Further, he emphasized the importance of web archiving for social justice: Traditional archives disproportionately reflect social architectures of power, and the lived experiences of the advantaged. Web crawls capture a much broader (although not nearly complete) picture of the human experience.

The talk ranged over an impressively wide portfolio of initiatives, far too many to do justice to in a single blog post. Much more detail on these projects can be found in the slides and video above, in Bailey’s professional writings, and in the Internet Archive’s blog, experiments page, and Archive-It blog.

A unified argument ran through Bailey’s presentation. At the risk of oversimplifying, I’ll restate the premises of the argument here:

  1. Understanding our era will require research, using large portions of the web, linked across time.
  2. The web is big — but not too big to collect (a substantial portion of) it. [1]
  3. Providing simple access (e.g. retrieval, linking) is more expensive than collection;
    enabling discovery (e.g. search) is much harder than simple access;
    and supporting computational research (which requires analysis at web-scale, and over time) —
    is much, much harder than discovery.
  4. Research libraries should help with this (hardest) part.

I find the first three parts of the argument largely convincing. Increasingly, new discoveries in social science are based on analysis of massive collections of data generated by people’s public communications, and depend on tracing these actions and their consequences over time. The Internet Archive’s success to date establishes that much of this public communication can be collected and retained over time. And the history of database design (as well as my own and my colleagues’ experiences in archiving and digital libraries) testifies to the challenges of effective discovery and access at scale.

I hope that we, as research libraries, will step up to the challenge of enabling large-scale, long-term research over content such as this. Research libraries already have a stake in this problem because most of the core ideas and fundamental methods (although not the operational platforms) for analysis of data at this scale come from research institutions with which we are affiliated. Moreover, if libraries lead the design of these platforms, participation in research will be far more open and equitable than if these platforms are ceded entirely to commercial actors.

For this among other reasons, we are convening a Summit on Grand Challenges in Information Science & Scholarly Communication, supported by a generous grant from the Mellon Foundation. During this summit we will develop community research agendas in the areas of scholarly discovery at scale; digital curation and preservation; and open scholarship. For those interested in these questions and related areas, we have published Program on Information Science reports and blog posts on some of the challenges of digital preservation at scale.


[1] The Internet Archive currently holds 35 petabytes of information, roughly equivalent to the text of 7 million long novels, or to the amount of new information produced across the globe every 45 minutes.

Labor And Reward In Science: Commentary on Cassidy Sugimoto’s Program on Information Science Talk


Cassidy Sugimoto is Associate Professor in the School of Informatics and Computing, Indiana University Bloomington, where she researches scholarly communication and scientometrics, examining the formal and informal ways in which knowledge producers consume and disseminate scholarship. She presented this talk, entitled Labor and Reward in Science: Do Women Have an Equal Voice in Scholarly Communication?, as part of the Program on Information Science Brown Bag Series.

In her talk, illustrated by the slides below, Sugimoto highlights the roots of gender disparities in science.


Sugimoto abstracted her talk as follows:

Despite progress, gender disparities in science persist. Women remain underrepresented in the scientific workforce and under rewarded for their contributions. This talk will examine multiple layers of gender disparities in science, triangulating data from scientometrics, surveys, and social media to provide a broader perspective on the gendered nature of scientific communication. The extent of gender disparities and the ways in which new media are changing these patterns will be discussed. The talk will end with a discussion of interventions, with a particular focus on the roles of libraries, publishers, and other actors in the scholarly ecosystem.

In her talk, Sugimoto stressed a number of patterns in scientific publication:

  • The demise of single authorship complicates notions of credit, rewards, labor, and responsibility.
  • There are distinct patterns of gender disparity in scientific publications: male-authored publications predominate in most fields (with a few exceptions, such as library science); women collaborate more domestically than internationally; and women-authored publications tend to be cited less (even within the same tier of journals).
  • Looking across categories of contribution, the most isolated is performing the experiment, and women are most likely to fill this role. Further, comparing male-led and female-led teams, the distribution of work across these teams varies dramatically.
  • When teams were surveyed, women tended to value all forms of contribution more than men did, with one exception: women judge technical work, which is more likely to be conducted by women, as less valuable.
  • The composition of authorship has consequences for what is studied. Women’s research focuses more often than men’s on areas relevant to both genders or to women.
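Findings like those above come from scientometric tallies over large bibliographic datasets. As a toy illustration of the shape of such an analysis, the sketch below groups papers by lead-author gender and journal tier and compares mean citation counts within each tier; the records are entirely invented, and the field names are my own, not Sugimoto's data.

```python
# A toy illustration (with invented data) of a scientometric tally:
# comparing mean citations for women- vs. men-led papers within the
# same journal tier, so tier effects don't mask the gender gap.

from collections import defaultdict
from statistics import mean

# Hypothetical records: (lead-author gender, journal tier, citations)
papers = [
    ("w", 1, 12), ("w", 1, 15), ("m", 1, 20), ("m", 1, 18),
    ("w", 2, 5),  ("m", 2, 9),  ("m", 2, 7),  ("w", 2, 4),
]

by_group = defaultdict(list)
for gender, tier, cites in papers:
    by_group[(gender, tier)].append(cites)

for (gender, tier), cites in sorted(by_group.items()):
    print(f"tier {tier}, {gender}-led: mean citations {mean(cites):.1f}")
```

Real studies of this kind work from millions of Web of Science or PLOS records, infer author roles from contribution statements, and control for many more confounds, but the core comparison is the same within-tier grouping shown here.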

Sugimoto notes that these findings are consistent with pervasive gender discrimination. Further, women as well as men frequently discriminate against other women, for example in evaluations of professionalism, evaluations of work, and salary offers.

Much more detail on these points can be found in Sugimoto’s professional writings.

Sugimoto’s talk drew on a variety of sources, including publication data in the Web of Science and acknowledgement and authorship statements in PLOS journals. Open bibliometric data, such as that produced by PLOS, the Initiative for Open Citations, and various badging initiatives, can help us bring disparities to light more readily.

At the conclusion of her talk, Sugimoto suggested the following roles for librarians:


  • Use and promote open access in training sessions
  • Provide programming that lessens barriers to participation for women and minorities
  • Advocate for contributorship models which recognize the diversity of knowledge production
  • Approach new metrics with productive skepticism
  • Encourage engagement between students and scholars
  • Evaluate and contribute to the development of new tools

Reflecting the themes of Sugimoto’s talk, the research we conduct here in the Program on Information Science is strongly motivated by issues of diversity and inclusion, particularly approaches to bias-reducing systems design. Our previous work in participative mapping aimed at increasing broad public participation in electoral processes. Our current NSF-supported work in educational research focuses on using eye-tracking and other biological signals to track fine-grained learning across populations of neurologically diverse learners. And, under a recently awarded IMLS award, we will be hosting a workshop to develop principles for supporting diversity and inclusion through information architecture in information systems. For those interested in these and other projects, we have published blog posts and reports in these areas.