Synchronicity: Merging Text with Audio/Video Components of Oral History Online

January 1, 2010

Oral history is a complex information package that has not yet fully realized its potential for internet access. Content management systems still generally treat the components of an oral history as separate entities: you can search the text, or you can listen to or watch the interview, but the components are rarely integrated. The Louie B. Nunn Center for Oral History at the University of Kentucky Libraries, in partnership with the Kentuckiana Digital Library, has designed a web interface that presents oral histories online more intelligently and efficiently by enabling users to search at the word level and link from the text to the moment in the audio where the corresponding words occur. This capability is made possible by digitally preparing the interview's audio, transcript, and metadata in a web-based application we have designed called OHMS. Built to mimic the workflow of a video game, OHMS encodes transcripts inexpensively and efficiently for online delivery of time-coded transcripts and audio files; one hour of interview can be marked up and submitted in a matter of minutes.

We launched the interface last year and have been filling it with synchronized transcripts since. We are currently planning to explore applying this technology more broadly, including compatibility with other content management systems such as ContentDM (among others), and, more immediately, working with streaming video. The front-end interface can be accessed at the Kentuckiana Digital Library.
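The linking mechanism described above — search for a word, then jump to the corresponding moment in the audio — can be sketched as a keyword lookup over time-coded transcript segments. This is an illustrative sketch only; the post does not describe OHMS's internal format, and the segment structure, function name, and sample data here are all hypothetical:

```python
# Hypothetical sketch: a time-coded transcript modeled as (start_offset, text)
# pairs, where start_offset is the number of seconds into the audio file at
# which the segment begins. A keyword search returns the offsets an audio
# player would seek to.

def find_offsets(segments, keyword):
    """Return the start offsets (seconds) of segments containing keyword."""
    keyword = keyword.lower()
    return [start for start, text in segments if keyword in text.lower()]

# Sample (invented) interview segments:
transcript = [
    (0,   "My family settled in Harlan County before the mines opened."),
    (95,  "The coal camps changed everything about daily life."),
    (210, "After the war, many families left the mines for the cities."),
]

print(find_offsets(transcript, "mines"))  # [0, 210]
```

A front end built on this idea only needs each match's offset to seek the audio player to that point, which is what makes word-level entry into a long recording practical.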

I want to explore this idea of staging and accessing oral history online more effectively, weigh the advantages and disadvantages of our chosen method, and discuss the challenges we face in the process.


6 Responses to Synchronicity: Merging Text with Audio/Video Components of Oral History Online

  1. Katie Holt on January 5, 2010 at 4:33 pm

    I’m very interested in hearing more about your project. At the College of Wooster, we’re discussing a format where our students could construct public history websites to present their senior independent study research projects. Many of our students create oral histories as part of their research, so I’m eager to hear about your experiences. I have done some work with Omeka, so I was drawn to the websites that use this platform to share oral histories, including the Bracero History Archive (braceroarchive.org/).

  2. Erin Bell on January 5, 2010 at 5:06 pm

    I took a look at the time-synced oral histories and am very impressed! We are always looking for ways to increase the usability of long oral history sound files (we have a collection of about 500 oral histories at CSU) and have done some experimenting with different approaches but have never had the time or resources for the kind of development you have done here. I am really excited by the multiple access points, with researchers being able to read the transcript, search full text, and jump to specific points in the transcript and/or file based on search/browse results. I cannot think of anything missing from that equation.

    I’ll be looking forward to hearing more about the technology, processing, formats, etc. Any chance you can hook us a up with a guest pass to experiment with/tour the OHMS web tool?

  3. Boone Gorges on January 12, 2010 at 1:32 pm

    I recently stumbled on www.snapstream.com/, a commercial company trying to make TV broadcasts searchable in a similar way to what you are doing: with time-stamped transcripts. I’m curious to learn whether you’ve explored some of the ways that your work might dovetail with or piggyback on efforts to commercialize speech-to-text technology.

  4. Doug Boyd on January 12, 2010 at 2:17 pm

    Sorry, no guest passes. I tried. I will demo it onsite, however.

  5. William G. Cowan on January 14, 2010 at 9:26 am

    The Ethnographic Video for Instruction and Analysis project at Indiana University developed a set of tools to segment and annotate digital video from the field videos of ethnographers. One feature of this annotation tool (the Annotator’s Workbench, AWB) is the ability to create a transcript of a song or spoken text. We do this in a most rudimentary way and have been searching for a better approach. This project seems to have come a lot further than we have, and I am very interested in seeing your presentation.

  6. Mike Christel on February 1, 2011 at 3:23 pm

    I had the pleasure of hearing Doug Boyd speak at the Oral History Assoc. (OHA) meeting in 2010. I strongly believe that synchronized metadata helps promote access to oral history collections. At OHA 2008 and OHA 2010, automated speech alignment was demonstrated tying transcripts to spoken narrative, and showing where matches occur after text or map queries in the interviews. That work builds from Carnegie Mellon’s NSF-funded Informedia digital video library work, and is itself funded by NSF. See www.idvl.org for more details and current instances using two oral history collections: The HistoryMakers African American oral history digital archive, and the Highmark Blue Shield Living Legacy Series celebrating the 150th birthday of Harrisburg, Pennsylvania. Enjoy the improving interfaces into oral history collections!
