Amy Rudersdorf Joins the AVPreserve Team as Senior Consultant
8 February 2016
Exactly: A New Tool for Digital File Acquisitions
13 January 2016
AVPreserve and the Louie B. Nunn Center for Oral History at the University of Kentucky Libraries are excited to announce the release of a new tool for born-digital acquisition and delivery.
Library of Congress Releases AVPreserve’s New BIBFRAME Report On Technical Metadata For Audiovisual Resources
8 January 2016
AVPreserve is happy to announce the release of the results of our recent study, conducted on behalf of the Library of Congress, of technical metadata for audiovisual resources in the context of BIBFRAME.
BIBFRAME AV Assessment: Technical, Structural, And Preservation Metadata
8 January 2016
This report presents the findings of a study conducted by Bertram Lyons and Kara Van Malssen of AVPreserve, on behalf of the Library of Congress, to evaluate the existing state of technical, structural, and preservation metadata for audiovisual resources in the bibliographic environment in light of existing standards for audiovisual metadata, and to make recommendations about how BIBFRAME can support the expression of such information. This study follows our May 2014 report, “BIBFRAME AV Modeling Study: Defining a Flexible Model for Description of Audiovisual Resources,” also commissioned by the Library of Congress, which explored and provided high-level recommendations on a flexible data model for audiovisual resources.
BIBFRAME AV Modeling Study: Defining A Flexible Model For Description Of Audiovisual Resources
8 January 2016
Led by Kara Van Malssen, AVPreserve completed this report, commissioned by the BIBFRAME team within the Network Development and Standards Office at the Library of Congress, to evaluate the content description needs of the moving image and recorded sound communities and to specify how those requirements can be met within a semantic bibliographic data model designed generically to support all content types found in libraries.
The Creator and The Archivist
6 January 2016
A significant portion of an archivist’s job is processing collections — the work of arranging and describing materials that have been deposited with an archive. At times this is simple; more often it is difficult. Consider your own paper and digital files, and imagine someone who doesn’t know you personally sifting through those files, your computer desktop, your download history, and so on, trying to make sense of what is there and what is important.
AVP Holiday Card – 2015
31 December 2015
Artwork by Stephanie Housley from Coral & Tusk
AVPreserve Brings on Kevin Ford as New Senior Consultant
11 December 2015
We are very excited to announce the addition of our newest team member at AVPreserve, Kevin Ford. Adding to our locations in NYC and Madison, Kevin is the founding member of our Chicago office and brings extensive experience in data management and software development. His work has been highlighted by an impressive track record in the development and application of systems utilizing linked data. This includes his work at the Library of Congress enhancing and greatly expanding id.loc.gov and his deep involvement with the Library’s Bibliographic Framework Initiative (BIBFRAME). He also brings his experience developing and implementing large-scale data management systems, not just at LC, but also for MarkLogic as a Senior Consultant. Aside from adding to a deep bench of talent at AVPreserve, Kevin brings expertise to the team that opens up exciting new possibilities for AVPreserve and our clients. Read more about Kevin here.
My Digitization Vendor Always Sends .md5 Files As Part Of Their Deliverables
5 December 2015
A file’s checksum is a specially generated hash based on a computation over all of the individual bytes that make up the file. A variety of algorithms can be used to perform this computation, e.g., MD5, SHA-1, or SHA-256. For example, an MD5 checksum is a 128-bit value, typically rendered as a 32-character hexadecimal string that may look something like this: ff839faf604272ba094741b62c7e4254. Because the computation takes into account every byte of a given file, if any of those bytes change, the computation will produce a different hash. Only an identical set of bytes will produce the same hash repeatedly when processed through the checksum algorithm in use.
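As a minimal sketch of how this verification works in practice, the following Python snippet uses the standard hashlib module to compute a file’s MD5 checksum and compare it against the value in a vendor-supplied .md5 sidecar file. The file names are hypothetical placeholders.

```python
import hashlib

def md5_of(path, chunk_size=1024 * 1024):
    """Compute the MD5 hash of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()  # 32-character hexadecimal string

# Hypothetical deliverable and its sidecar checksum file
computed = md5_of("interview_001.wav")
with open("interview_001.wav.md5") as f:
    expected = f.read().split()[0].lower()

print("match" if computed == expected else "MISMATCH - file may have changed or been corrupted")
```

If the computed and expected values differ, the file received is not byte-for-byte identical to the file the vendor hashed, which is exactly the condition the .md5 deliverable is meant to catch.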
Exiftool Tutorial Series
1 December 2015
This four-part series of video tutorials, created by Kathryn Gronsbell, focuses on Exiftool, a command-line application that can read, write, and edit embedded metadata in files. The tutorial series provides detailed support to users looking for an approachable and practical introduction to Exiftool.
Featured exercises have wide-ranging applications but trend towards improving digital preservation workflows through a step-by-step exploration of Exiftool’s basic features and functions.
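As a small illustration of the kind of workflow the tutorials cover, the sketch below calls Exiftool from Python and reads back a file’s embedded metadata using Exiftool’s JSON output. It assumes the exiftool executable is installed and on the PATH; the file name and the specific metadata tags printed are hypothetical examples.

```python
import json
import subprocess

def read_embedded_metadata(path):
    """Return a file's embedded metadata as a dict, via exiftool's -json output."""
    result = subprocess.run(
        ["exiftool", "-json", path],
        capture_output=True, text=True, check=True,
    )
    # exiftool returns a JSON array with one object per input file
    return json.loads(result.stdout)[0]

metadata = read_embedded_metadata("interview_001.wav")
print(metadata.get("SampleRate"), metadata.get("Duration"))
```

The same pattern can be extended to batch inspection or to logging embedded metadata as part of a digital preservation workflow.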