In his talk, illustrated by the slides below, Bernhardt reviews technologies newly available to libraries that enhance the human-computer interface:
Bernhardt abstracted his talk as follows:
Terms like “virtual reality” and “augmented reality” have existed for a long time. In recent years, thanks to products like Google Cardboard and games like Pokemon Go, an increasing number of people have gained first-hand experience with these once-exotic technologies. The MIT Libraries are no exception to this trend. The Program on Information Science has conducted enough experimentation that we would like to share what we have learned, and solicit ideas for further investigation.
Several themes run through Bernhardt's talk:
VR should be thought of broadly as an engrossing representation of physically mediated space. Such a definition encompasses not only VR, AR, and "mixed" reality, but also virtual worlds like Second Life, and a range of games from first-person shooters (e.g. Halo) to textual games that simulate physical space (e.g. "Zork").
A variety of new technologies are now available at a price-point that is accessible for libraries and experimentation — including tools for rich information visualization (e.g. stereoscopic headsets), physical interactions (e.g. body-in-space tracking), and environmental sensing/scanning (e.g. Sense).
To avoid getting lost in technical choices, consider the ways in which technologies have the potential to enhance the user-interface experience, and the circumstances in which the costs and barriers to use are justified by potential gains. For example, expensive, bulky VR platforms may be most useful to simulate experiences that would in real life be expensive, dangerous, rare, or impossible.
A substantial part of the research agenda of the Program on Information Science is focused on developing theory and practice to make information discovery and use more inclusive and accessible to all. From my perspective, the talk above naturally raises questions about how the affordances of these new technologies may be applied in libraries to increase inclusion and access: How could VR-induced immersion be used to increase engagement and attention by conveying the sense of place of being in an historic archive? How could realistic avatars be used to enhance social communication, and lower the barriers to those seeking library instruction and reference? How could physical mechanisms for navigating information spaces, such as eye tracking, support seamless interaction with library collections, and enhance discovery?
Catherine D’Ignazio is an Assistant Professor of Civic Media and Data Visualization at Emerson College, a principal investigator at the Engagement Lab, and a research affiliate at the MIT Media Lab/Center for Civic Media. She presented this talk, entitled "Creative Data Literacy: Bridging the Gap Between Data-Haves and Have-Nots," as part of the Program on Information Science Brown Bag Series.
In her talk, illustrated by the slides below, D’Ignazio points to the gap between those people who collect and use data, and those people who are the subjects of data collection.
D’Ignazio abstracted her talk as follows:
Communities, governments, libraries and organizations are swimming in data—demographic data, participation data, government data, social media data—but very few understand what to do with it. Though governments and foundations are creating open data portals and corporations are creating APIs, these rarely focus on use, usability, building community or creating impact. So although there is an explosion of data, there is a significant lag in data literacy at the scale of communities and citizens. This creates a situation of data-haves and have-nots which is troubling for an open data movement that seeks to empower people with data. But there are emerging technocultural practices that combine participation, creativity, and context to connect data to everyday life. These include data journalism, citizen science, emerging forms for documenting and publishing metadata, novel public engagement in government processes, and participatory data art. This talk surveys these practices both lovingly and critically, including their aspirations and the challenges they face in creating citizens that are truly empowered with data.
In her talk, D’Ignazio makes five recommendations on how to help people learn data literacy:
Many data tutorials use abstract or standardized examples about cars (or widgets), which do not connect with most audiences. Ground your curriculum in community-centered problems and examples.
Frequently, people encounter data "in the wild" without the metadata or other context that is needed for constructing meaning with it. To address this, have learners create data biographies, which explain who collected the data, how it was collected and used, and its purposes, impacts, and limitations.
Data is messy, and learners should not always be introduced to it through a clean, static data set but through encountering the complex process of collection.
Design tools that are learner-centric: focused, guided, inviting, and expandable.
People like monsters better than they like bar charts — so favor creative community-centered outputs over abstract purity.
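The "data biography" recommendation above can be pictured as a simple structured record attached to a dataset. The sketch below is illustrative only; the class and field names are my assumptions, not a specification from D’Ignazio's talk.

```python
from dataclasses import dataclass, field

@dataclass
class DataBiography:
    """A hypothetical record capturing the context a dataset often lacks."""
    dataset_name: str
    collected_by: str               # who gathered the data
    collection_method: str          # how it was gathered
    purpose: str                    # why it was collected
    known_uses: list = field(default_factory=list)
    limitations: list = field(default_factory=list)

    def summary(self) -> str:
        """One-sentence provenance summary for display alongside the data."""
        return (f"{self.dataset_name}: collected by {self.collected_by} "
                f"via {self.collection_method}, for {self.purpose}.")

# Example: a biography for a (hypothetical) municipal 311 dataset.
bio = DataBiography(
    dataset_name="311 service requests",
    collected_by="city call center",
    collection_method="phone and web intake forms",
    purpose="tracking municipal service delivery",
    limitations=["undercounts residents without phone or internet access"],
)
print(bio.summary())
```

Even this minimal structure makes visible the questions a data biography is meant to answer: who, how, why, and with what limitations.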
Much more detail on these recommendations can be found in D’Ignazio’s professional writings.
D’Ignazio’s talk illustrated two more general tensions. One is between a narrow conception of data literacy as coding, spreadsheets, and statistics; and a broader conception that is not yet crisply defined but is distinct from statistical, information, IT, media, and visual literacies. This resonates with work done by our program’s research intern Zach Lizee on digital literacy and digital citizenship, in which he argues for a form of literacy that prepares learners to engage with the evolving role of information in the world, and to use that engagement to advocate for policies and standards that enact their values.
D’Ignazio’s talk also highlights a broad general tension that currently exists between the aspiration of open data and data journalism to empower the broader public, and the structural inequalities in our systems of data collection, sharing, analysis, and meaning-making. This tension is very much in play with respect to libraries’ and universities’ approaches to open access.
Much of academia, and many policy-makers, have embraced the potential value of open access to content. The MIT Libraries’ vision also embraces the challenge of building an open source platform to enable global discovery and access to this content. Following the themes of D’Ignazio’s talk and based on our research, I conjecture that library open platforms could be of tremendous worth, but not for the reasons one usually expects.
The worth of software, and of information and communication technology systems and platforms generally, is typically measured by how much it is used, what functions it provides, and what content and data it enables one to use. However, the importance of library participation in the development of open information platforms goes beyond this. Libraries have not distinguished themselves from the Googles, Twitters, and Facebooks of the world in making open content discoverable, or in the functionality that their platforms provide to create, annotate, share, and make meaning from this content: the commercial sector has both the capacity and the incentive to do this, because it is profitable.
The worth of a library open platform is in the core library values that it enacts: broad inclusion and participation, long-term (intergenerational) persistence, transparency, and privacy. These are not values that current commercial platforms support, because the commercial sector lacks the incentives to create them. To go beyond open access to equity in participation in the creation and understanding of knowledge, libraries, museums, archives, and others that share these values must lead in creating open platforms.