The Digital Transformation of Holocaust Testimony Archives

25 April 2024

Preserving and accessing Holocaust testimonies is crucial in today’s world. As misinformation and historical revisionism continue to spread, archives play an ever more essential role, and Yale’s Fortunoff Video Archive for Holocaust Testimonies serves as a beacon, reshaping how we engage with survivors’ stories. This blog covers the archive’s evolution, challenges, and significance, highlighting the vital efforts made to safeguard these memories for future generations.

The Origins of the Fortunoff Video Archive

The Fortunoff Video Archive began in New Haven in 1979 as a grassroots initiative. Survivors and their families founded it to record and preserve Holocaust oral histories. This effort aimed to ensure that survivors’ voices would never be forgotten.

Initially, the archive operated as a nonprofit, using analog video equipment to record testimonies. Survivors often participated as both interviewers and interviewees, thereby creating an intimate setting. As the project gained support and funding, it eventually became affiliated with Yale University in 1981.

Transitioning to the Digital Age

Digital technology sparked a major transformation for the archive. Between 2010 and 2015, it transitioned from analog to digital formats, digitizing over 10,000 hours of testimony. This process required meticulous planning and skilled technicians to maintain the recordings’ original quality.

Notably, Frank Clifford, a dedicated video engineer, played a pivotal role in this transition. His expertise ensured that the digitization maintained the authenticity of the archive’s materials, allowing the archive to move seamlessly into the digital age.

Enhancing Accessibility Through Technology

Moreover, the Fortunoff Archive embraced technology to improve accessibility. The development of the Aviary platform marked a key milestone, enabling users to search and access testimonies online. This platform uses advanced indexing systems, which help users navigate the extensive collection efficiently.

In addition to video testimonies, the archive has also developed transcripts and indexes for research purposes. Although these indexes were originally handwritten, they have since been digitized and synchronized with the video content, aiding researchers in locating specific topics within testimonies.

Ethical Considerations in Archival Practices

Importantly, the archive operates with strong ethical guidelines, always prioritizing survivors’ well-being. Each testimony includes a release form, giving survivors control over their narratives. This ethical focus extends to how access is managed, ensuring the archive remains sensitive to the subject matter.

Furthermore, over 200 affiliated institutions worldwide now offer access, enabling researchers to engage with testimonies while maintaining survivor confidentiality. This approach reflects the archive’s deep respect for the individuals whose stories are being preserved.

Engaging the Public: Outreach and Education

In addition to preservation, the Fortunoff Archive actively engages the public through various initiatives. For instance, the podcast “Those Who Were There” shares testimonies in an engaging audio format, making survivor stories more accessible to a broader audience.

The archive also offers educational programs, film series, and fellowships. As a result, these initiatives promote a deeper understanding of the Holocaust, encouraging empathy and awareness among future generations.

Challenges in the Digital Landscape

While the digital transformation has increased accessibility, it has also brought challenges. Misinformation and Holocaust denial, for example, threaten historical narratives. Consequently, as digital manipulation becomes more prevalent, verifying sources has become more important than ever.

The archive also faces challenges related to copyright and ownership, ensuring survivor rights are protected. Therefore, balancing accessibility with ethical responsibility remains a central concern.

The Future of Holocaust Testimony Archives

Looking ahead, the future of the Fortunoff Archive lies in collaboration and innovation. As technology continues to advance, integrating Holocaust testimony collections across platforms becomes possible, and efforts toward a centralized search across institutions would enhance research and collaboration.

Furthermore, as fewer witnesses remain, preserving these testimonies becomes even more urgent. The Fortunoff Archive remains dedicated to ensuring survivors’ voices are heard in our collective memory.

Conclusion: The Importance of Remembering

In conclusion, the Fortunoff Video Archive’s work is vital in preserving survivors’ stories from one of history’s darkest times. By combining technology, ethical practices, and public engagement, the archive honors these memories. As we look ahead, it is our responsibility to carry these stories with us, ensuring the past’s lessons continue to guide our present and future actions.

Transcript

Chris Lacinak: 00:00

Hello, thanks so much for joining me on the DAM Right Podcast.

To set up our guest today, I want to first set the stage with two important items.

I founded AVP back in 2006.

Actually, April 21st was our 18th anniversary, so happy birthday to AVP.

Anyhow over the past 18 years, I’ve had the privilege of working across a number of verticals.

Anyone who has worked in a number of places within their career will know that one of the big and important parts of onboarding and becoming a productive part of a new company is learning and using the terminology.

Each organization has its unique terms and the distinct way that they use those terms.

So you’ll understand when I say that the thing that has differed the most in working across verticals has been the terminology.

Our corporate clients talk about DAM, our libraries and archives clients talk about digital preservation, our government clients talk about digital collection management, and so on.

In truth, there is a great deal of overlap in the skills and expertise necessary to effectively tackle any of these domains.

Of course, there is nuance that is important and distinct, which is mostly about understanding purpose, mission, context, and history.

This is akin to learning the terminology of a given workplace and coming to understand the things that make each workplace unique.

Like anywhere, the use of a terminology is a signal to people about which tribe you are part of.

Just as words have meaning, how you use those words has meaning.

For years, this reality has caused a great deal of consternation for us at AVP.

Why?

Because we have always worked with an array of customers, we have always had to make sure to be careful and precise in our use of terminology.

With an individual customer, this is easy.

With a website, this is very difficult.

On a website, you have to choose the terms that will resonate with your target audience and let them know that, when they land on your page, they are with their people.

We didn’t want people who talk about DAM to see us talking about collection management and vice versa, thinking that they were not with their people.

But in wanting to avoid offending anyone, we failed to talk effectively to everyone.

In 2021, we decided to standardize on the term digital asset management across all of our verticals, at the risk of alienating some of them.

Since then, I’ve been relieved to find that 1) we have offended very few of them, and 2) these verticals have also started to embrace the term digital asset management themselves.

Even more, these verticals have started to embrace technologies that use the DAM label.

And conversely, technologies that use the DAM label have started to represent the interests and needs of people who consider themselves to practice digital collection management and digital preservation.

I say all this as a backdrop because the focus of today’s episode is on an archive of video Holocaust testimonies.

It almost feels wrong to refer to these testimonies as “digital assets.”

But even though my guest does not use any technology that refers to itself as a DAM, the practices and skills that are used are digital asset management practices and skills.

A common refrain for digital assets is that they are not digital assets until you have the rights and the metadata to be able to find them, use them, and derive value from them.

Historically, in the distinctions that have existed between the use of the terms digital asset management and digital collection management, one of them is the definition of value.

In DAM conferences 20 years ago, if you talked about digital assets and value, you could be certain that 90% or more of the people in the room were thinking dollar signs.

And if you were at an archive conference and you talked about digital collection management and value, you could be certain that 90% or more of the people in the room were thinking of cultural and historical value.

And while I think this is becoming less true over time, it feels important to say that in this podcast episode, and in the podcast in general, when we talk about digital assets and their value, that we mean any and all of the above.

It is very true to say that a file without rights and metadata has no value of any sort financially, culturally, historically, or otherwise.

If you cannot find it, if you cannot use it, it has no value.

So in this episode, I want to assure our listeners that there is a great deal of meaning and relevancy in calling these Holocaust testimonies digital assets.

They are truly assets that have a great deal of value in the most holistic and meaningful of ways.

Having said that, and with Holocaust Remembrance Day coming up on May 6th, I am privileged to have the Director of Yale’s Fortunoff Video Archive for Holocaust Testimonies, Stephen Naron, with me today.

Prior to becoming the Director, Stephen was an employee at the Fortunoff Archive where he worked extensively on this collection of materials and helped guide it into the digital age.

Since becoming the Director of the Fortunoff Archive, Stephen has been prolific and innovative in his work to make these testimonies available to the public and to proactively use the materials in the archive to create compelling experiences for people to discover and engage with these testimonies.

This has included collaborating on the development of a software platform, launching a podcast, releasing an album, running a fellowship program, and running both a speaker and a film series.

And that’s not even all of it.

I’m so thrilled to have Stephen Naron on the DAM Right Podcast with me today and to introduce him to the DAM Right audience.

Remember, DAM Right, because it’s too important to get wrong.

Stephen Naron, welcome to the DAM Right Podcast.

I’m super excited to have you today.

Very glad to be talking with you about all kinds of topics around DAM and this amazing collection and archive that you’re the Director of.

Thank you for joining me.

Stephen Naron: 05:40

Oh, it’s a pleasure to be here, Chris.

Thanks.

Chris Lacinak: 05:42

I wonder if we could start with you just giving us a background about your background, your history and kind of how you came to be where you are today.

Stephen Naron: 05:51

I’ve been working with the Fortunoff Video Archive for Holocaust Testimonies, on and off now, since 2003.

So it really was my first professional job as a librarian and archivist.

But obviously, I’ve always had a deep interest in Jewish history and Jewish culture and Jewish languages.

And I studied abroad, learned Hebrew and Yiddish and German.

And while I was in Germany as a graduate student, I was lucky enough to get a position in an archive at the Centrum Judaicum as a student worker.

And it was the [speaking in foreign language] and this is a sort of general archive for all of the Jewish communities in Germany.

And I worked with that collection for over a year as a student worker.

And that’s when I really was bitten with this sort of bug, this interest in archives in general.

And so that’s when I decided to sort of turn towards the field of archives and libraries.

And when I got my degree, I focused on archives at UT Austin, which was a great program.

I learned a lot.

And then right out of library school, I found the position at the Fortunoff Video Archive.

And so it really was the first professional experience I had.

And I just loved working with this collection.

It’s a collection that’s exclusively audio visual testimonies of Holocaust survivors and witnesses of the Holocaust.

And yeah, so that’s a little bit about my academic background and how I became interested in working in particular with audio visual collections.

Chris Lacinak: 07:45

Wow.

So you’ve been at the archive for quite a while now.

When was that that you started there?

Stephen Naron: 07:51

In 2003.

And then I moved to Europe with my wife and we were in Sweden.

We were there for a couple of years and then came back in 2015.

When I came back, I worked alongside the longtime archivist, who had run the Fortunoff Video Archive from 1984.

And so I had a wonderful opportunity to have her as a mentor and to learn from the individuals who helped build the collection over the last 45 years.

Chris Lacinak: 08:46

And has the archive always been under the auspices of Yale University or did it start independent from Yale?

Stephen Naron: 08:53

Well, that’s one of the most interesting things about this collection is that it actually started in New Haven as a grassroots effort of volunteers and children of survivors, survivors, fellow travelers who formed a nonprofit organization in New Haven to record testimonies of Holocaust survivors and witnesses.

So it didn’t come out of the university; that was in 1979.

The first tapings were in May of 1979.

And it really was very much an effort from the ground up.

Survivors were in the leadership of the organization, the nonprofit, president of the nonprofit was a man named William Rosenberg, who was a survivor from Częstochowa, Poland.

Survivors would hold meetings in their homes to organize the tapings.

They’d fund the rental of what was at the time quite expensive video equipment to do this professional broadcast, professional standard recordings.

And of course, survivors served as interviewers and as interviewees.

So they were on both sides of the camera.

And so that’s the early days; it starts in ’79.

One of the survivors who was recorded in 1979 was a woman named Renee.

And Renee happened to be married to a professor at Yale, Geoffrey Hartman, who was a professor of comparative literature.

And so Geoffrey became involved in this sort of local project, community project, very early on.

And he, as an academic, knew how to write grants.

And so he wrote a number of successful grants to help increase the funding of the project.

And he was then really responsible for bringing the collection and giving it a permanent home at Yale.

So it was deposited at Yale in 1981.

And at that time, there were about 183 testimonies that had been recorded by the Video Archive’s predecessor organization.

This organization was called the Holocaust Survivors Film Project.

So this project then became the Video Archive for Holocaust Testimonies.

And there were about 183 testimonies at the time, and it’s now grown to over 4,300 testimonies.

It’s 10,000, more than 10,000 hours of recorded material.

It was recorded in North America, South America, across Europe, in Israel, in over 20 different languages, in over a dozen different countries, with the help of what we call affiliated projects, which are independent projects that form a collaborative agreement with the Fortunoff Video Archive.

And so it has just grown exponentially.

And ever since ’82, we’ve been serving the research community.

They come to Yale, use the collection there, hundreds of researchers every year.

And then, in about 2016, we started making the collection available remotely through access sites.

And so these access sites are all over the world.

There are over 200 of them.

And usually institutions of higher learning or research institutes.

So the collection has been, not only has it, did it grow from a small grassroots effort into a sort of a global documentation project, but it’s now readily accessible all over the world.

Chris Lacinak: 12:49

You’ve hinted at several things that I just want to kind of put on the table so listeners understand, but the Fortunoff Video Archive for Holocaust Testimonies is all video recordings.

Is that right?

Stephen Naron: 13:01

Yeah, right.

It’s exclusively video recordings.

And in fact, it was this HSFP, the Holocaust Survivors Film Project, was the first project of its kind to begin recording video interviews with survivors on any sort of extended basis.

So we really are the first sustained project of its kind.

And by sustained, I mean really sustained.

We recorded our most recent interview in 2023.

So we’re talking about over 40, almost 45 years of documentation.

And so that provides quite a unique longitudinal perspective of this whole genre of Holocaust testimony.

There’ve been many other projects that followed in our wake.

But most of them rise and fall fairly quickly.

This is a project that’s really withstood the sort of test of time.

We still have some of the original interviewers, who were recording in the 1980s.

So when we get a call from a survivor who wants to give testimony and who hasn’t given testimony before, we pull in some of the most experienced interviewers there are who have done this type of work.

Chris Lacinak: 14:29

You mentioned that these were originally recorded, many of them, you’re still recording them, so you’re not recording them on analog videotape today.

But originally they were recorded on what was considered broadcast quality analog videotape.

You talked about there being a digitization process of everything in your collection, I believe at some point along the way.

Could you just tell us about like, what are some of the other, I assume there’s transcripts and other aspects.

Can you tell us a little bit about just what does the collection look like and kind of what are some of the salient steps that you’ve taken to make it usable, preservable, accessible?

Stephen Naron: 15:07

There is a story there.

Because this archive has had such a long history, it’s gone through, and it’s from the very beginning been an archive that is, let’s say, I don’t want to say groundbreaking, but certainly forward thinking in its use of technology from the very beginning.

Just the embrace of broadcast video alone was sort of at the time a revolutionary step.

But beyond that, the Fortunoff Video Archive has always been sort of a step ahead, at least in the larger library system at Yale, in thinking about how to make the collection accessible, embracing digital tools, cataloging through RLIN and other central, online searchable databases.

We were one of the first collections on campus, if not the first collection on campus, to have its own website.

So we’ve always embraced technology, at least for the benefits that it can bring in terms of making this collection more accessible and more available to the research community.

But as far as what other content or what other layers of information that we’ve had to sort of transform from an analog to a digital world, yeah, we’ve had the videos themselves.

And that took over five years, where we had an incredible video engineer named Frank Clifford, who used to work at Yale Broadcast, who then came over to the Fortunoff Video Archive and by hand, using SAMMA Solos and a fleet of U-Matic and Betacam decks, digitized all 10,000+ hours of video in real time, day after day after day for years.

Sadly, he passed away.

But really, he did just an incredible work.

And as you know, as someone who’s worked hands-on with analog legacy video, he kept those machines running by all means necessary.

He was dealing with shedding tape that was from 1979 onward.

And so, that’s just one step, right?

But then we have all these analog indexes that were handwritten, handwritten notes that describe the content of each interview that then became typed indexes.

And those indexes were in WordPerfect and various versions of Word and OpenOffice.

And so, we have this whole other effort of standardizing and migrating the indexes from one format to another.

We eventually moved everything into OHMS.

So now, all those indexes have been OHMSed, and we’ve connected, of course, the OHMS indexes with the video.

And so, that was a huge effort.

Chris Lacinak: 18:21

Let me stop you just for a second, because I think there’s probably many people that don’t know what indexes are, or at least how you define them, and OHMS.

So maybe let’s just drill down a little bit on that.

What’s an index?

What’s it look like?

How does it work?

And what is OHMS?

Stephen Naron: 18:36

Okay.

So, the indexes are a little idiosyncratic for us, right?

So, we call them indexes.

We used to call them finding aids, which is a lot more in tune with the kind of archival world.

But they weren’t really finding aids per se, either, although they did allow us to find things.

What they are is detailed notes, in the first person, in English, regardless of what the language of the testimony is.

So, first person notes written by students who had the native language of the testimony they were watching, and they’re very, very summarized.

So, they read kind of like transcripts, but they aren’t transcripts.

They’re not word for word.

The goal was to capture the most salient details of the testimony in as terse a form as possible.

And every five minutes, the student would put a time code from the video, a visible time code, so that researchers could then use these indexes or notes or finding aids to find specific speech events in the testimony.

This was long before you had SRT and WebVTT kind of transcripts, right?

You’d use this paper.

So, they’d get this paper indexed.

They’d take it with them.

They’d have the video, VHS use copy, sitting in manuscripts and archives in Sterling Memorial Library, and they’d be looking through the notes and trying to find the section of the testimony that was most relevant for their research.

And so, those notes exist, those indexes, those notes, those finding aids, they exist in a number of different forms.

And even more confounding, the notes, the indexes were created from the use copies, and the use copies had visible time code, and that visible time code did not refer, was not the same time code as the original master tapes, because the VHS use copies, of course, don’t start and stop at the same time as the master tapes.

So, there was this discrepancy between the time code on the notes and the time code on the master tapes, so we couldn’t use the indexes properly with the digital master videos.

So, that’s why we had to come up with something else; there was no programmatic way to just mathematically transform the index timing to the master tape timing.

That’s when we found OHMS, which stands for Oral History Metadata Synchronizer, and we saw that it was just an ideal system. It’s a free tool that you can use to synchronize text-based data, so indexes, finding aids, transcripts, with the digital audio or video.

And so, we did that with the entire collection, which also took us years. But now all the indexes are full-text searchable in Aviary, which allows researchers an enormous amount of flexibility in terms of locating specific topics and events within a testimony or across all the testimonies.
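To make the synchronization idea concrete, here is a minimal Python sketch of five-minute index points that a player or search layer can resolve back to positions in the video. The entries and field names are invented for illustration; this is not the actual OHMS data model.

    # Minimal sketch: index points synchronized with a video timeline.
    # The entries below are hypothetical, not from a real testimony.
    index_points = [
        {"seconds": 0,   "summary": "Witness introduces herself; earliest childhood memories."},
        {"seconds": 300, "summary": "Describes family life before the war."},
        {"seconds": 600, "summary": "Recalls the occupation of her town."},
    ]

    def to_timestamp(seconds: int) -> str:
        """Render seconds as an HH:MM:SS cue time."""
        h, rem = divmod(seconds, 3600)
        m, s = divmod(rem, 60)
        return f"{h:02d}:{m:02d}:{s:02d}"

    # A search hit on a summary resolves to a timestamp, so the player
    # can jump straight to that point in the testimony.
    for point in index_points:
        print(to_timestamp(point["seconds"]), point["summary"])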

Chris Lacinak: 21:40

And you created indexes which were not transcripts, was that because of the amount of time it took, was that because that’s what current best practice was?

Why did you take that route instead of transcripts at the time?

Stephen Naron: 21:53

That’s also a really good question.

Well, actually, there is a practical side: it simply was too time-consuming and expensive to create full transcripts. And this is a collection that grew very slowly and has had limited resources its entire existence, so we had to be cautious about where we put our resources.

And so, these indexes seemed like the quickest, most cost-effective way to gain intellectual access to the collection.

And the archivists used these indexes to create catalog records, regular old MARC catalog records, so every testimony was cataloged almost like a book.

And you could then search across those catalog records.
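As a rough illustration of cataloging a testimony “almost like a book,” here is a minimal, invented record sketched in Python. The tags are standard MARC fields (245 title, 520 summary note, 650 topical subject), and the subject heading shown is a real Library of Congress term, but the record itself is hypothetical.

    # Hypothetical, simplified MARC-style record for one testimony.
    record = {
        "245": "Stephen N. Holocaust testimony [videorecording]",
        "520": "Summary drawn from the English-language index of the interview.",
        "650": "Holocaust, Jewish (1939-1945) -- Personal narratives.",
    }

    # Because the 650 heading follows a shared vocabulary (LCSH), the same
    # subject search can match records from entirely different archives.
    matches = [tag for tag, value in record.items() if "Holocaust" in value]
    print(matches)  # ['245', '650']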

But beyond the practical side, there was also an ethical and I think intellectual reason not to go the path of transcripts.

One was that no transcript, no textual transcript can truly capture the richness of an audio-visual document.

You cannot capture gestures, you cannot capture tone, you cannot capture pauses that are very meaningful in a recording like a video testimony of a survivor, the look in the eyes.

I mean, these are things that cannot adequately be captured in a transcript.

And so, the thought was if you can’t make an accurate transcript, we have to really push the viewer to watch the recording.

And again, that’s also part of the ethos of the archive is that we want you to watch.

We want you to witness the witness, right?

We want you to be present, entirely present.

And if you provide transcripts to researchers, as we all know, the researchers will go straight to the transcripts and use the transcripts and might not even watch the video.

And that’s big, you know, some researchers are lazy like that.

But we felt that that was an ethically unsound use of video testimony.

And so, we sort of pressure, let’s say, or coerce the researcher to watch the video, and to watch it in its entirety.

And I think that’s an obligation.

There’s an ethical obligation there that needs to be followed.

Chris Lacinak: 24:28

Yeah, that’s really interesting.

So, you didn’t want to mediate, it sounds like, you didn’t want there to be a mediation between the person that was watching or using these materials and the original testimony.

That’s super interesting.

It makes a lot of sense.

Stephen Naron: 24:40

Well, I mean, it also, it does make sense because, I mean, think about it.

If something is spoken by the survivor with an ironic tone of voice, how are you supposed to understand from a transcript that there’s irony or sarcasm?

You have to listen and watch in order to truly grasp what’s happening.

Otherwise, researchers will quite simply make mistakes.

They will misquote and misinterpret.

Chris Lacinak: 25:09

So I sidetracked you there.

You were kind of on a path talking about the various elements that you have in the archive.

You were talking about indexes and ohms when I stopped you.

And were there other things that you wanted to talk about there?

Stephen Naron: 25:22

Well, there are a couple of other things that are interesting and we’re still trying to figure out how to integrate them.

One is we conducted something called a pre-interview.

So the process that we follow when we record testimonies is that there’s contact with the survivor a week or several weeks before the interview. The interviewers who are going to be at the session, one of them at least, call the witness and inform them about how it’s going to work: that it’s a very open-ended interview process, that they’re going to introduce themselves at the beginning and tell us their story, starting from their earliest childhood memories all the way up to the present, and that there aren’t set questions.

But they then also ask them a series of questions, mostly biographical questions.

Where were you born?

When were you born?

What did your parents do?

Did you have any siblings?

And so they gather all this information prior to the actual interview so that they can then go back to the library and do research about this person’s life.

So the town they’re from, learning about the town they’re from, learning about the camps and the ghettos that they might have been in, you know, really diving into this person’s life so that when they show up in the actual recording, the interviewers are already well informed about this person’s life.

They know the names of the siblings and the parents and what they did and they don’t have to ask these questions because they know it.

And then they can just serve as sort of guides or assist the witness as they really tell their life story in as open a manner as possible.

So those pre-interview forms are really interesting.

Also because the interview, once they get into the recording studio, there’s a lot of unknowns.

So sometimes the information that’s on the pre-interview doesn’t make it into the interview because the interview has a kind of life of its own.

But we need to find a way to make the data in those pre-interview forms more accessible to the researchers because there’s some interesting information there.

And then the other piece is we’re creating transcripts now.

So as I mentioned, those indexes, those finding aids, they’re always in English no matter what the original language is, which can be really frustrating for researchers who know these languages and then have to search in English, let’s say, to find information in a Slovak testimony or a Hebrew testimony or a Yiddish testimony.

So we’re now in the process of transcribing the entire collection in the original languages so that native speakers and researchers can search across testimonies in their language. This is in a way a compromise, a move away from what I said earlier: if we provide transcripts, the risk is that people will just use the transcripts and not watch the video.

But we felt this was a necessary step in this day and age to provide further intellectual access.

Chris Lacinak: 28:39

Well, it also seems that there’s been a major technological leap, whereas today I know the way that you provide access to transcripts is synchronized with the testimony.

So I mean, that’s a very different experience than maybe 15 years ago where someone would have just gotten that transcript and may have never watched the testimony, right?

That seems like that’s a very different experience and stays true to what you said about why it was important not to do that at the time.

Stephen Naron: 29:05

Yeah, absolutely.

And also, we’ve been approaching transcription with another motivation.

And that is that obviously people who are hearing impaired can’t take advantage of an audiovisual testimony in the same way that a hearing person can.

So to be able to provide the transcript and subtitles for testimonies is also really valuable.

The other thing is that many of these testimonies can be extremely difficult to understand, because the survivors are often speaking in a language that isn’t their native tongue.

And so there’s a lot of heavily accented testimonies.

And so having transcripts as subtitles can be really valuable for everyone.

Chris Lacinak: 29:56

Speaking of the technological leap, some of the things you were talking about, right?

Writing indexes down on paper; before digitization, it was all videotape.

When did you do the digitization work again?

What year or years?

Stephen Naron: 30:07

So I would say 2010 to 2015.

And even when we launched Aviary, the vast majority of the digitization work had already been done, so we could make the testimonies accessible at access sites.

Chris Lacinak: 30:28

Two points about that.

One is that it sounds so archaic in this day and age, right?

Writing indexes down on paper.

And I believe there’s probably many modern practitioners that think that that sounds absurd.

But two points: one, that wasn’t that long ago, and that was not unusual.

That was pretty typical of what you’d find in a lot of people that were managing collections, especially of analog materials.

Two, yours is just one archive out of many in the world, and it’s worth thinking about how many haven’t done what you’ve done, the digitization work, the transcription work. As you said, you’ve embraced technology, and that’s not to put anybody down that hasn’t.

It’s just to get a glimpse into how many things out in the world, created not that long ago and for decades prior, may still not be accessible in some way.

Stephen Naron: 31:33

Yeah.

And I mean, also like if you think about a traditional, you know, many traditional oral history, oral history projects, they would often record on tape or video and then create the transcript and then hand the transcript to the interviewee who would then, you know, sign that this transcript is an accurate depiction of my statement, right?

And then they’d actually get rid of the original tapes because the transcript then becomes kind of the document.

So yeah, that’s, we’re very different.

We’ve approached this very differently than a lot of oral history projects.

And yeah, absolutely, we’re really lucky. As I said, this collection is still very small in terms of the human resources who work with it, but we’ve been lucky to have the longevity that we have and to have the support from Yale University Library, which really allows us to focus exclusively on this collection, right?

So from the beginning, there has been this laser focus on making this as intellectually accessible and usable and standard, right?

So we’ve used, you know, standard library and archival practice to make this collection accessible using, you know, terminologies and taxonomies like Library of Congress subject headings and things like that, that make it very easy to share our metadata with others to search across collections.

And so yeah, I think we’ve been very lucky to be a part of a research library from the very beginning, which helped us to go down that path of description and description upon description upon description.

Chris Lacinak: 33:22

Thanks for listening to the DAM Right podcast.

If you have ideas on topics you want to hear about, people you’d like to hear interviewed or events that you’d like to see covered, drop us a line at [email protected] and let us know.

We would love your feedback.

Speaking of feedback, please give us a rating on your platform of choice.

And while you’re at it, make sure that you don’t miss an episode by following or subscribing.

You can also stay up to date with me and the DAM Right podcast by following me on LinkedIn at linkedin.com/in/clacinak.

And finally, go and find some really amazing and free resources from the best DAM consultants in the business at weareavp.com/free-resources.

You’ll find things like our DAM Strategy Canvas, DAM Health Scorecard, and the Get Your DAM Budget slide deck template.

Each resource has a free accompanying guide to help you put it to use.

So go and get them now.

And I guess you also have the benefit that, although the archive is large in absolute terms, in relative terms it’s fairly small.

So that gives you an advantage to be able to really dive deep and do a lot of great work around, you know, compared to an archive that might have hundreds of thousands of recordings or millions of recordings.

Stephen Naron: 34:35

Yes, for sure.

Chris Lacinak: 34:37

The Fortunoff Video Archive for Holocaust Testimonies is not the only archive of Holocaust testimonies in the country or in the world.

And each of those has had to make decisions about where, when, and how to give access.

And my understanding is that different decisions have been made about how to provide access to testimonies.

I wonder if you could just give us a sense of the, what’s the landscape?

You know, are there a few or are there dozens of archives of Holocaust testimonies?

And help us understand, and I’m not trying to say that anybody’s right or wrong or anything like that, some of the considerations these archives have had to navigate in thinking about how to provide access to Holocaust testimonies.

Stephen Naron: 35:29

There are many, many collections all over the world.

Many of them came after us, after we started in 1979.

And they do indeed have very different approaches to making the collections accessible.

And we have to remember that in 1979, the US Holocaust Memorial Museum didn’t exist yet; it wasn’t really established until 1993.

And prior to that, there weren’t a lot of other organizations doing this work in the United States or in North America.

But the US Holocaust Memorial Museum is a national institution.

It’s a government-funded institution.

And so, for the materials that they create, the testimonies that they’ve created and collected over the years, they have been given a sort of broad permission to make those as accessible as possible.

And I think that’s in part because they see their mission as a sort of general, you know, educational effort, right?

The general public to educate the American people about the history of the Holocaust.

In order to do that best, they have to make their sources as accessible as possible.

That includes testimony.

So their testimonies, of which they have thousands, are all digitized and accessible in their collection search online.

So there really are no barriers at all to the average citizen researcher who wants to go in and watch as much unedited testimony as he or she desires.

So that’s a very open model.

And I think it has a lot of, there’s a lot of benefits to that.

I do sometimes wonder how much of the general public is really interested in watching an unedited 10-hour testimony of a Holocaust survivor, how many really do that.

But for the research community, certainly, it’s an enormous advantage.

There are other institutions on a national level, like Yad Vashem in Israel. Yad Vashem has an enormous collection of testimonies, both that they’ve created themselves and that they’ve collected over the years.

Some of those, many of those are available online, but many, I would say the vast majority, are only available to researchers who are then on site.

So they have a slightly more restrictive approach.

But their aim has been to collect as much of the source material as possible, either in original form or as digital copies.

So they’re a little bit more restrictive in a sense.

And then you have another major collection, the USC Shoah Foundation, which was started by Steven Spielberg after the release of Schindler’s List in ’94.

And he and the organization that grew out of this initial impulse collected something like 50,000 testimonies of survivors in a very short period of time, I think less than 10 years.

And they’re now at USC, but they weren’t at USC originally.

They were on the Universal Studios backlot, I think.

And so they had a very different approach to this work, almost outside of the traditional world of academia and libraries.

And for a long time, their collection was only accessible through a subscription model, and it still is for the most part.

And so they have this enormous, incredible collection, but it’s only accessible at universities and research centers that have the resources to pay that subscription fee.

And so that’s another model that is a little bit more restrictive.

At the same time, they have free tools for high schools and for educational use, something called IWitness that has something like 3,000 unedited testimonies that are openly available.

So, they still provide thousands of complete unedited testimonies, but the vast majority of the collection is behind a paywall.

And then there’s the Fortunoff Video Archive, which has digitized its entire collection now.

But for decades, its collection was only accessible at Sterling Memorial Library in the Manuscripts and Archives Department in the reading room at Yale University.

So you’d have to make the pilgrimage to New Haven to work with this material.

And so that’s also in a sense, very restrictive.

Not all researchers can afford to make the trip to New Haven to do that type of work.

But there’s no costs involved with using the collection.

So in a sense, it’s open to everyone.

And that’s how it worked at Yale.

And so that was a bit restrictive.

But now that the collection is digital, we’ve also opened up, making it available at these access sites, more than 200 of them, as I already said. Still, it’s not like we’ve thrown it all up online like the US Holocaust Memorial Museum.

It’s still kind of like a closed fist that’s kind of slowly opening, right?

And it’s only accessible at these access sites.

It’s free, so the access sites don’t have to pay a subscription fee, but they still have to sign a memorandum of agreement with us.

It’s only accessible on IP ranges that are associated with those institutions, so at various universities and research centers.

So there’s still a certain amount of restriction on who can see it when and where.

And we just have a very different model.

And that model of how to use a collection like this comes from, I think, the fact that we were started by survivors themselves and children of survivors.

This organization from the very beginning was very concerned about the well-being of the survivors before, during, and after the interview has been given.

All of the witnesses sign release forms, and in these release forms, it clearly states that Yale University owns copyright to the recording.

We can do, theoretically, legally, whatever we’d like, but that doesn’t mean we should.

And there was always a sense that the survivors, although they quite clearly wanted to share their story with us and in a very public manner by giving testimony, they still deserve some modicum of privacy and anonymity.

And so we’ve been fairly restrictive in terms of not making it widely accessible online.

They signed release forms, et cetera, but could survivors in 1981 have imagined a world in which anyone, anywhere, at any time, could watch their testimony?

That’s what the internet is.

And that feels like a step too far without any kind of mediation for us.

Chris Lacinak: 43:53

You also talked about 200 access sites.

Could you tell us what are those?

Who are they?

How do they work?

What does that look like?

Stephen Naron: 44:01

Yeah.

I mean, I did want to say actually something else about some of the ways in which we place certain restrictions on access that might seem a little strange or idiosyncratic.

Another example that I forgot to mention was, yeah, so things are slightly locked down, in the sense that they’re only available at access sites.

But another thing that’s really unusual about this collection is we also truncate the last names of the survivors.

So if you were to search the metadata, if you were to go to Aviary and search, you would see very quickly that the titles of the testimonies are, you know, “Stephen N. Holocaust Testimony,” “Chris L. Holocaust Testimony.”

The last names are hidden from view.

And obviously once you’re at an access site and you’re watching the testimony and the person introduces themselves, you hear their name, you hear their last name.

And in the transcript, if they say their last name, it’s transcribed there, but you don’t see the transcript unless you’re at an access site either.

And the reason behind this was that, in the early days, one of the survivors’ full names appeared in a documentary film that was screened on television.

And the survivor received threatening phone calls after the film was screened.

And after that, they decided that this was a risk they were unwilling to take, and pushed to truncate the last names in order to protect the survivors’ anonymity.
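A minimal sketch of that naming convention, assuming titles are generated from a full name; the helper below is illustrative, not the archive’s actual code.

    def testimony_title(first_name: str, last_name: str) -> str:
        """Truncate the surname to an initial, as in the public metadata."""
        return f"{first_name} {last_name[0]}. Holocaust Testimony"

    print(testimony_title("Stephen", "Naron"))  # Stephen N. Holocaust Testimony
    print(testimony_title("Chris", "Lacinak"))  # Chris L. Holocaust Testimony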

Of course, if you do research, it’s not foolproof.

If you make the effort to come and do the research, you can find out all this information, personal information.

But the idea was to put up a basic hurdle that would offer some protection.

And as you can imagine, that’s served its purpose well, but it also complicates the research process for the research community.

If you’re a researcher and you’re looking for a very specific person who you know gave testimony, it’s much harder to locate them.

Can’t just search for their last name and find them.

So that’s an example of things that might seem sort of counterintuitive.

We did this, though, to protect the survivors.

That’s what we saw as our first ethical obligation.

And then we have the obligation to the research community, which comes second.

And that’s also a little bit unusual for an organization such as ours.

But you had a question about beyond this sort of access, what the access sites were or how they worked.

So the access sites are mostly universities and research institutions.

So Holocaust museums all over the world, South America, North America, Israel, Europe.

We even have an open access site in Japan.

And the access sites sign a memorandum of agreement that clearly states what they will and what they can and cannot do with the collection.

They provide us with their IP ranges.

So we restrict the collection to an IP whitelist of all of the IP ranges at these institutions.

So you either have to be on campus to watch the testimonies or you have to use a VPN that only students and faculty will have.

Everyone has to register in Aviary, our access and discovery system.

When we helped develop Aviary, that was one of our major requirements: that we would have some ability to control who sees what, when, where, and how.

And so we force everyone to register in our collection and ask for permission to view testimonies before they’re given sort of free access to everything.

And so it’s a very protective model.

In some ways, I would guess, that’s in tension with the way a lot of other libraries and archives work, where the anonymity of the user is just as important as the materials that they’re using.

But because we have such sensitive materials in this collection, we felt we needed some extra level of control and protection.
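For the technically curious, here is a minimal sketch of IP-range gating using Python’s standard ipaddress module. The ranges and the policy check are hypothetical stand-ins, not Aviary’s actual implementation.

    import ipaddress

    # Hypothetical whitelist of CIDR ranges supplied by access sites.
    ACCESS_SITE_RANGES = [
        ipaddress.ip_network("130.132.0.0/16"),  # e.g., a university campus
        ipaddress.ip_network("192.0.2.0/24"),    # placeholder documentation range
    ]

    def on_access_site(client_ip: str) -> bool:
        """True if the request originates inside a registered access site."""
        ip = ipaddress.ip_address(client_ip)
        return any(ip in network for network in ACCESS_SITE_RANGES)

    # A request from campus (or its VPN) passes; a home connection does not.
    print(on_access_site("130.132.10.5"))  # True
    print(on_access_site("203.0.113.9"))   # False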

Chris Lacinak: 48:41

Relative to what you described earlier, folks had to come to New Haven.

I mean, it’s hugely opened things up.

That’s been a major transformation in that regard, it sounds like.

Among the users, you have proactively been a big user.

You’ve been extraordinarily prolific.

I mean, not just co-creating Aviary, but you’ve also created a podcast, I believe, from the collections.

You’ve done an album, which you pressed on vinyl, which was not from the testimonies, but was related to the testimonies.

You have these fellowship programs, you do speaker series, you do film series, you do all sorts of stuff.

Can you talk a little bit about, maybe, you know, there’s a lot to talk about there.

You don’t have to go through each one, but maybe tell us about the podcast and the album that you did.

I’d love to hear a little bit more about that, or if there’s any other of those that you’d really like to highlight.

Stephen Naron: 49:35

Interest in various historical topics always has a kind of ebb and flow, right?

And so, I think, to a certain extent, there can be a sort of complacency about, well, this is an amazing collection, without a doubt.

Researchers will come to us.

But I think that times have changed, and that the research community now expects you to sort of come to them.

And that’s a real fundamental shift in the way we think.

And yeah, as you mentioned, we have the fellowship program.

We have a film grant project where we provide a grant to a filmmaker in residence who then creates a short edited program based on testimonies from the collection.

We have a lot of events and conferences that we support that are designed to sort of lift up the collection in both the public eye, but also among the research community.

We’ve done our own productions based on the testimony.

So the podcast series is already in its third season.

We’re planning to do a fourth season.

And this podcast series is really just, again, like I said, we’ve always sort of embraced new methods and new technologies.

And this really just seemed like the ideal way to bring audiovisual material to a new listenership, to the non-research listenership.

I’m obviously a big fan of podcasts, and I’ve been listening to a number of podcasts that were based on oral history collections.

And there’s one in particular that I stumbled upon called Making Gay History, which is based on the oral histories that Eric Marcus recorded with leading figures in the LGBTQ community.

And I don’t know a lot about this topic.

This is not an area that I know a lot about.

And I found it one of the most compelling podcasts I’ve ever heard based on these archival recordings.

And I said, “Okay, well, we should do something like this.”

And so I asked Eric Marcus if he’d be willing to help produce a series for us.

And he also just happens to be a nice Jewish boy from New York.

And so he agreed and found a team to support him, another co-producer, Nahanni Rous.

And they’ve been producing edited versions of the testimonies in podcast form now for three seasons with quite a bit of success.

You know, over 100,000 downloads and streams on Spotify.

And so these are listeners that would probably never stumble into Aviary at an access site and use the collection that way.

They might find some of our edited programs on our website or on YouTube, but this is just another way to push these voices out into the public.

Chris Lacinak: 52:59

And that podcast for listeners is “Those Who Were There” is the name of that podcast, right?

Stephen Naron: 53:03

Yeah, “Those Who Were There.”

The latest…

And if you go to our website or Google “Those Who Were There,” you’ll find it.

You can listen to it on the website as well as on all your podcast apps.

But the website has a lot of other additional information, including episode notes for each episode that are written by a renowned scholar, Professor Sam Kassow, who provides additional context about each episode, which is really valuable, and further readings.

Plus photos that we’ve gotten from the families, scanned images from family archives.

So it’s a really…

I think it’s, you know, on the one hand, it’s a little strange because you’re taking a video testimony and removing the video and making it into audio in order to do this.

So it feels like you’re losing something, obviously, in this transformation.

But you also gain something because as you know, if you listen to podcasts, you know, when it’s just you and a pair of headphones and you’re walking down the street listening to a podcast, you just sort of disappear into your head and it’s very intimate as well.

So I think it’s appropriate, although there is something lost and something gained.

And then you said the songs project.

So that’s called “Songs from Testimonies.”

It’s also available on our website.

And that’s really a…

It really started as a kind of traditional research project.

So one of our fellows, Sarah Garibova, discovered some really unusual songs that were sung in a testimony that we’d never heard before when she was creating her critical edition.

And we found the song so compelling that we asked a local ethnomusicologist in New York and a musician himself to come and perform the songs at a conference as a sort of act of commemoration.

And we were just blown away by the results and thought that we need to do more of this.

And so it became both an ethnomusicological research project, but also a performative project.

So Zisl Slepovitch is our musician in residence, and he’s moved through the collection, locating testimonies with song, sometimes fragmentary songs that were interwar songs, religious songs, songs that were written in ghettos and camps that may be very well-known, but may also be completely unknown.

And he’s done the research, and then he’s performed these songs.

He’s created his own notation or his own composition for each of the songs and performed them.

And we’ve recorded them with an ensemble, and they’re now available for listening.

And there’s been concerts.

We’ve performed the songs several times in concert with the context, showing excerpts from the testimonies.

Where does the song come from?

Explaining how the song emerges and the meaning of the lyrics.

And yeah, so it’s a research project.

It’s a performance project.

It’s a commemorative project.

It’s also a really valuable learning tool.

It’s a way for the general public to enter into a difficult topic and learn a lot about testimony.

So it’s been a pretty rewarding project.

Chris Lacinak: 56:30

Such a beautiful story.

I love that.

And I also know that you pressed it on vinyl as well, didn’t you?

Stephen Naron: 56:39

Yeah, well, because I’m a music nerd.

So this was…

Well, and I mean, also, I’m an archivist, and vinyl lasts a really long time.

So my thought was that if we press it on vinyl, it will last longer than if we do it on CD.

We also did it on CD, and it’s available on all the streaming services as well.

But it is a work of art.

We had a local letterpress artist, Jeff Mueller, who runs Dexterity Press.

He printed each of the sleeves by hand.

And they were designed by this incredible Belarusian artist, Yulia Ruditskaya.

And she did all the design work.

She actually created an animated film around one of the songs as well.

There’s more information on our website as well; she was one of our Vlock fellows, through the Filmmaker in Residence Fellowship.

So yeah, it’s a really interesting project.

And I’ve learned a lot about the value of music as a historical source through this effort.

But also the music itself is just quite beautiful.

These are world-class musicians performing these pieces.

It’s really something to listen to.

Chris Lacinak: 58:01

So I’d like to circle back to the discussion around the other Holocaust testimony archives and collections that exist out in the world.

You gave us some good insights into what some of the variances and variables are there. But to someone who’s an outsider to the nitty-gritty details of all that, it would seem that, as a naive user interested in researching Holocaust testimonies, I should be able to go to a single place and search across all of these various collections, or at least a number of them.

Does that exist?

Is that in the works?

Are there discussions amongst the various entities that hold and manage collections?

Stephen Naron: 58:48

Well, what I would say to that naive researcher is, there absolutely should be something like that.

And it is a shanda that there isn’t.

And yeah, there are discussions about how to make that possible.

And there have been some small attempts.

But at this point, I think my description as well of the different organizations and their different sort of policies around access also point to the underlying problem here, which is that all of these organizations are unique individual organizations with policies and procedures and politics that can prevent them from playing nicely with one another.

And I certainly include the Fortunoff Video Archive.

We’re not any– I’m not excluding us from this, right?

So it’s not about the technology.

The technology is very much there to make it possible for a sort of single search across testimony collections that would reveal results for the research community.

And I think it absolutely has to be the next step.

And not just for the research community, but for the families.

One of the most infuriating things, I think, for children and grandchildren of survivors is they don’t know where their grandparents’ testimony is.

Which archive is it in?

They have no simple way to find it.

And that seems to me to be a major disservice to the families of the survivors who, at great emotional risk, gave us their testimony.

So we really need to find a way to do that.

And we need to work together across organizations to make that happen.

US Holocaust Memorial Museum has also made some really important inroads in this regard.

They have something called Collection Search, where they’ve added the USC Shoah Foundation’s metadata and the Fortunoff Video Archive’s metadata, since they have access to our collection on site at USHMM, into their collection search.

So that’s the first search engine I’ve seen where you can actually search across USHMM, USC, and Fortunoff and find testimonies that are related.

And we’re also doing it in Aviary to a certain extent.

So in Aviary, we’ve got a couple of different organizations with testimonies that have joined together to create what’s called in Aviary a Flock.

And so it’s a way to search across.

It’s like a portal that can search across different collections in Aviary.

That includes recordings of survivors recorded in 1946.

And a number of other organizations that have audio and video testimonies in Aviary, and you can search across those as a collective.

And so there are plenty of examples of this working.

We’ve also got a, we formed a digital humanities project that brought together transcripts over a three, I think 3,000 testimony transcripts of survivor testimony from Fortunoff, USHMM, and USC Shoah Foundation, and a project called Let Them Speak.

And you can search across the transcripts of all those collections.

And that’s pretty, that’s also a step, again, another example of what would be possible.

Imagine a world in which everybody just finally shared their testimonies.

So we have a lot of examples of how this works and of its benefits, but it’s almost like we need an umbrella organization that would pull all of these disparate groups together and get them to agree on how to share metadata in a way that everyone can have access to it.
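
As an editorial aside, the single search Stephen is calling for is, at its core, a metadata normalization problem. Here is a minimal, hypothetical Python sketch of the idea: each archive's records are mapped into one shared shape and queried together. All field names (hvt_number, rg_number, and so on) are invented for illustration and are not the archives' real schemas.

```python
# Toy illustration of cross-archive testimony search: each archive exposes
# metadata in its own schema, a thin mapping layer normalizes the records,
# and one query runs across the merged index. All field names are invented.
from dataclasses import dataclass

@dataclass
class Testimony:
    archive: str        # holding institution
    identifier: str     # archive-local ID
    survivor_name: str
    summary: str

def normalize_fortunoff(rec: dict) -> Testimony:
    return Testimony("Fortunoff", rec["hvt_number"], rec["interviewee"], rec["abstract"])

def normalize_ushmm(rec: dict) -> Testimony:
    return Testimony("USHMM", rec["rg_number"], rec["name"], rec["scope_note"])

def search(index: list[Testimony], query: str) -> list[Testimony]:
    q = query.lower()
    return [t for t in index
            if q in t.survivor_name.lower() or q in t.summary.lower()]

index = [
    normalize_fortunoff({"hvt_number": "HVT-0001", "interviewee": "Jane Doe",
                         "abstract": "Recalls hiding during the occupation."}),
    normalize_ushmm({"rg_number": "RG-50.030.0001", "name": "John Roe",
                     "scope_note": "Describes liberation of the camp."}),
]
for hit in search(index, "liberation"):
    print(hit.archive, hit.identifier, hit.survivor_name)
```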

Chris Lacinak: 62:51

Right.

Stephen Naron: 62:52

We’re not there yet.

Chris Lacinak: 62:53

Yeah.

Okay.

So some glimmers of hope, but not quite there yet.

Stephen Naron: 62:56

Yeah.

Chris Lacinak: 62:57

Switching gears, I want to ask a question.

I recently had Bert Lyons on the show and we talked about content authenticity.

And I guess I wonder, I mean, this is an issue for every archive, but given the focused efforts around Holocaust denial and things like that, how are you thinking about the prospect of fakes and forgeries in the age of AI?

It’s not a new issue; fakes and forgeries have been problems for archives for as long as archives have existed. But the ability and capability people now have to create content that supports false narratives can cause issues for archives like yours.

I wonder, is that something that’s getting talked about within Holocaust testimony circles or is that still on the horizon?

Stephen Naron: 63:54

As technology improves and changes and these AI tools become more sophisticated, yes, certainly that’s a new risk, but there are also new technologies and tools to identify things that are fake.

So the technology brings with it new types of artifacts and new ways of testing the authenticity of a digital object.

I’m sure that’s way beyond my, I can’t really talk about that because that’s beyond my field of expertise.

But in my area, really, the more dangerous thing is not outright denial, which has always existed but is really limited to the margins. It’s something you’ve seen more and more of: not outright denial, but a kind of half-truth or willful manipulation of the facts. It’s like denial lite.

It’s bad history being sort of marketed as authentic history in order to pursue a particular ideological or political end, right?

So you see this a lot in, not to pick on anyone in particular, certain regimes in Europe that have taken a more populist, authoritarian turn. There have been quite obvious attempts to replace traditional independent scholarship with scholars who are controlled, funded, and supported by the state and the government, and who are asked to willfully misrepresent the truth, right?

So they still cite historical sources, but they cite them in a way that isn’t even an attempt at objective historical writing, right?

They do it in order to tell a story that is inaccurate: let’s say, that Polish citizens were not complicit in the Holocaust, and that every Polish village was filled with individuals willing to hide and save Jews from extermination.

These types of exaggerations and misrepresentations of sources are becoming a much greater threat than outright denial.

Also because it’s difficult, because the way it’s shaped, it looks like scholarship, looks like research, it’s presented from official organizations that just happen to be corrupted.

And so that becomes a much more difficult thing to push back on. But you can, and scholars do; that’s exactly what good scholars do, they push back on this stuff.

But yeah, as for AI, considering the Fortunoff Archive is exclusively an audiovisual collection, it seems pretty frightening what would be possible.

Chris Lacinak: 67:45

Right.

Well, first point well taken.

I mean, it sounds like we shouldn’t focus too much on the nitty-gritty of AI at the expense of recognizing the larger issues, which are much broader than that.

So I really hear what you say there and appreciate those comments.

Here’s one of the things I think about, I mean, the kind of quick scenario you threw out was like someone creates something fake and their tools to identify things as fake.

And that’s true.

I think what’s almost more worrisome for me, and I think that every archive will need to kind of arm themselves with, and there are technologies to do this, at least if not today, then on the near horizon, but is to be able to combat claims of things that are authentic, that are held within an archive, which people claim are fake, and they have to prove that they’re authentic.

Right?

That is, when people start to cast doubt on authentic things by claiming they’re fake, that’s almost more worrisome to me than someone creating something fake and us having to prove that it’s fake.

Stephen Naron: 68:58

Yeah, absolutely.

And that reminds me of the same kind of bad history I was trying to describe: the willful manipulation of sources that exist, claiming they’re inauthentic, or misrepresenting, misquoting, or selectively quoting them in order to make an argument that’s unsound.

I mean, that’s absolutely true.

That seems like a tactic that could be used.

I mean, at the Fortunoff Video Archive, we can at least point to a provenance chain that takes us all the way back to the original master recordings, which are still in cold storage in New Haven, right?

So actually I think they’re in Hamden at our storage facility there.

Chris Lacinak: 69:55

For those New Haven geography buffs.

Stephen Naron: 69:59

Yeah, I didn’t want anyone to, it’s not fair.

It’s in Hamden.

But yeah, so we have a chain through which we can show the authentic steps that were taken.

And even in the digitization process, great care was taken; the SAMMA systems document the whole digitization process.

So you can see what’s happening as the signal changes over time.

And you also have a record of the actual transfer and can show whether or not there were interruptions, things like that.

So that’s a pretty detailed level of authenticity control.

Chris Lacinak: 70:43

So Stephen, one of the things that I want to do with this podcast is to back up out of the weeds and reflect on why the work that we do is important, to remind ourselves of the purpose and meaning of this work and to rejuvenate around it.

And with that in mind, I wonder if you could reflect on the importance and the why behind the Fortunoff Video Archive’s work.

Why is it important?

Stephen Naron: 71:10

Well I think that it’s important for a couple of reasons.

I’ll just give you three.

Well first of all, the Holocaust is quite possibly the greatest crime committed in the 20th century and one of the greatest crimes in history.

And as such, the brutality of the Holocaust has really impacted our society on so many levels.

So from a kind of universal perspective, we’re still very much living in a world that was shaped by the impact of the Holocaust and the Second World War.

Our belief in these ideas of universal human rights, etc., and of course our inability to always adequately support the regime of human rights internationally, this is directly related to the events of the Holocaust.

And so if you really want to understand the world in which we’re living today, you cannot do so without approaching the history of the Holocaust.

And the history of the Holocaust needs to be approached by every generation in a new way.

And working and engaging with an archive such as this is really one of the best ways to approach this topic.

It’s also important, and the work we do is important, because I think the archive is something of a living memorial to those who did not survive, right?

So the survivors themselves are really the anomalies.

They’re the lucky ones.

And the vast majority of European Jewry was murdered, 6 million men, women, and children.

And so I really see this archive as a sort of living memorial to both the survivors and those who did not survive, their families who did not make it.

And so the archive can serve as a bridge between the living, us, and the dead.

And in fact, as time progresses and we begin to reach an era where there will no longer be any living witnesses of the Holocaust, simply due to demographic change, archives like this one, of testimonies of Holocaust survivors, will only become that much more important.

It will be the only way in which we can really engage with personal stories of the witnesses.

Only diaries and memoirs and testimonies like this can give us access to what it felt like to be there in the war, in the camps, in the ghettos, and to have survived.

And then I think the work we do is important, first of all, as an act of solidarity with the survivors and witnesses themselves.

And as an act of solidarity, it really has served as a model for what I would call an ethical and empathic approach to documenting the history of mass violence from the perspective of those who were there, the witnesses, right?

So a bottom-up perspective.

And it has served as a model, and it continues to serve as a model for lots of organizations who do the type of important work of documenting human rights and civil rights abuses.

So yeah, those are just three ways in which I think the collection continues to have an impact, and why this is really an important organization.

Chris Lacinak: 75:04

Stephen, thanks so much for joining me today.

It’s been extraordinarily enlightening.

I want to thank you for the work that you do; it’s been amazing to hear about the journey of this incredible collection and archive.

So thank you for sharing with us today.

In closing, I want to ask you a question that I ask all of my guests on the DAM Right podcast, totally separate from anything we’ve talked about so far today: what’s the last song you added to your favorites playlist or liked?

Stephen Naron: 75:39

The last song I added to my playlist.

Well, I guess I have to stay true to the archives, and maybe not be entirely honest, and say that one of the last songs I put on my playlist was from Volume Three of our Songs from Testimonies project, which is called “Shotns,” or “Shadows.”

And it would be the title track, “Shotns,” which is a Yiddish song.

That’s in my playlist.

And I hope you all listen to it too.

Chris Lacinak: 76:17

Okay, we’ll share the links to that in the show notes.

Can you tell us what the actual last song you put in your playlist was?

Stephen Naron: 76:24

It’s actually, you know, usually it’s whole albums.

I put whole albums in my playlist.

It was a Greek avant-garde musician named Savina Yannatou, whom I stumbled upon.

Yeah, the song is called something in Greek, which I will not mispronounce for your audience.

Chris Lacinak: 76:48

I’ll get the link from you so we can share it with everybody.

Wonderful.

All right, well, Stephen, thank you so much.

You’ve been extremely generous with your time and all your insights.

Thank you very much.

I appreciate you taking the time.

Stephen Naron: 76:59

No problem.

Thank you, Chris.

Chris Lacinak: 77:00

Thanks for listening to the DAM Right podcast.

If you have ideas on topics you want to hear about, people you’d like to hear interviewed, or events that you’d like to see covered, drop us a line at [email protected] and let us know.

We would love your feedback.

Speaking of feedback, please give us a rating on your platform of choice.

And while you’re at it, make sure that you don’t miss an episode by following or subscribing.

You can also stay up to date with me and the DAM Right podcast by following me on LinkedIn at linkedin.com/in/clacinak.

And finally, go and find some really amazing and free resources from the best DAM consultants in the business at weareavp.com/free-resources.

You’ll find things like our DAM Strategy Canvas, DAM Health Scorecard, and the Get Your DAM Budget slide deck template.

Each resource has a free accompanying guide to help you put it to use.

So go and get them now.

The Critical Role of Content Authenticity in Digital Asset Management

11 April 2024

The question of content authenticity has never been more urgent. Digital media has proliferated, and advanced technologies like AI have emerged. Distinguishing genuine content from manipulated material is now crucial in many industries. This blog examines content authenticity, its importance in Digital Asset Management (DAM), and current initiatives addressing these challenges.

Understanding Content Authenticity

Content authenticity means verifying that digital content is genuine and unaltered. This issue isn’t new, but modern technology has intensified the challenges. For example, the FBI seized over twenty-five paintings from the Orlando Museum of Art, demonstrating the difficulty of authenticating artworks. Historical cases, like the fabricated “Protocols of the Elders of Zion,” reveal the severe consequences of misinformation. Digital content’s ubiquity makes it vital for organizations to verify authenticity. Without proper measures, content may remain untrustworthy.

The Emergence of New Challenges

Digital content production has skyrocketed in the last decade. Social media rapidly disseminates information, often without verification. Generative AI tools create highly realistic synthetic content, complicating the line between reality and fabrication. Deepfakes can simulate real people, raising serious concerns about misinformation. Organizations must combine technology with human oversight to navigate this complex environment.

The Role of Technology in Content Authenticity

Technology provides tools to detect and address authenticity challenges. Yet, technology alone isn’t enough. Human expertise must complement these solutions. The Content Authenticity Initiative (CAI), led by Adobe, is one effort creating standards for embedding provenance data in digital content. The Coalition for Content Provenance and Authenticity (C2PA) also works to embed trust signals into digital files. These efforts enhance content verification and authenticity.

Practical Applications of Content Authenticity in DAM

For organizations managing digital assets, content authenticity is crucial. DAM systems benefit from integrating authenticity protocols. Several practical applications include the following (a brief code sketch of the quality-control case follows the list):

  • Collection Development: Authentication techniques help evaluate incoming digital assets.
  • Quality Control: Authenticity measures verify file integrity during digitization projects.
  • Preservation: Provenance data embedded in files ensures long-term reliability.
  • Copyright Protection: Content credentials protect assets when shared externally.
  • Efficiency Gains: Automating authenticity data reduces manual errors.
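
As a minimal sketch of the quality-control case above (the paths and manifest format here are hypothetical), fixity checking can be as simple as recording a SHA-256 value at ingest and re-verifying it later:

```python
# Record a SHA-256 fixity value when an asset is ingested, then re-verify
# it later to confirm the file is bit-for-bit unchanged.
import hashlib
from pathlib import Path

def file_sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def record_fixity(asset: Path, manifest: dict[str, str]) -> None:
    manifest[str(asset)] = file_sha256(asset)

def verify_fixity(asset: Path, manifest: dict[str, str]) -> bool:
    return manifest.get(str(asset)) == file_sha256(asset)

manifest: dict[str, str] = {}
asset = Path("masters/tape_0001.mov")   # hypothetical path
if asset.exists():
    record_fixity(asset, manifest)
    print("fixity ok:", verify_fixity(asset, manifest))
```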

The Risks of Neglecting Content Authenticity

Neglecting content authenticity poses significant risks. Misinformation spreads quickly, damaging brands and eroding public trust. Sharing manipulated content can lead to legal issues and financial losses. Ignoring authenticity can have severe consequences, including reputational and legal liabilities.

Collaboration and the Future of Content Authenticity

Collaboration is vital for achieving content authenticity. Organizations, technology providers, and stakeholders must develop best practices together. The rapidly evolving digital landscape demands ongoing innovation. Investing in authenticity technologies and frameworks will become essential.

Case Studies: Content Authenticity in Action

Organizations are already implementing successful authenticity measures. Media outlets verify user-generated videos and images with specialized tools. Human rights organizations embed authenticity data into witness-captured files, ensuring credibility in court. Museums and archives verify digital assets’ provenance, preserving their integrity.

Conclusion: The Imperative for Content Authenticity

Content authenticity is a societal necessity, not just a technical issue. As digital content grows, verifying authenticity will be vital for maintaining trust. Organizations that prioritize content authenticity will navigate the digital age more effectively. Collaboration and technology will ensure digital assets remain credible, trustworthy, and protected.

Transcript

Chris Lacinak: 00:00

Hello, welcome to DAM Right, Winning at Digital Asset Management. I’m your host, Chris Lacinak, CEO of the digital asset management consulting firm AVP. In the summer of 2022, the FBI seized more than 25 paintings from the Orlando Museum of Art based on a complex, still unclear scheme to legitimize these supposedly lost and then found paintings as the works of Basquiat. In 1903, the Protocols of the Elders of Zion was published, purporting to detail a series of meetings that exposed a Jewish conspiracy to dominate the world. It was used in Nazi Germany, and is used by anti-Semites worldwide to this day, as a factual basis to promote and rationalize anti-Semitism. Of the many problematic things about this text, one of the biggest is that it was a complete work of fiction. In 2005, an investigation conducted by the UK National Archives identified a number of forged documents, interspersed with authentic documents, posing as papers created by members of the British government and armed services and tying them to leading Nazi figures. No one was convicted, but three books by the author Martin Allen cited these forged documents, and documentation shows that he had access to these specific documents. In 1844, an organized gang was convicted in London for creating forged wills and registering fictitious deaths of bank account holders whom the gang had identified as having dormant accounts, so that they could collect the remaining funds. As this sampling of incidents demonstrates, content authenticity is not a new problem. It is, however, a growing problem. The proliferation of tools for creating and altering digital content has amplified the authenticity dilemma to unprecedented levels. In parallel, we are seeing the rapid growth and deployment of toolsets for detecting fake and forged content. As is highlighted in this conversation, the line between real and fabricated lies in the intent and context of a work’s creation and presentation. This conundrum signals that technology alone cannot bear the weight of discerning truth from fiction. It can merely offer data points on a file’s provenance and anomalies. As this hyperspeed game of cat and mouse continues into the foreseeable future, it’s also clear from this conversation that addressing the challenge in any truly effective way requires an integrated and interoperable ecosystem consisting of both people and technology. The stakes are high, touching every industry and corner of society. The ability to assert and verify the authenticity of digital content is on the horizon as a cornerstone of digital asset management, as well as a social imperative. Amidst this complex landscape of authenticity, integrity, and technological chase, I am excited to welcome a vanguard in the field, Bertram Lyons, to our discussion. As Co-Founder and CEO of Medex Forensics and a luminary in content authenticity, Bert brings extraordinarily valuable insights. His journey from a Digital Archivist at the American Folklife Center at the Library of Congress to spearheading innovations at Medex Forensics underscores his deep engagement with the evolving challenges of digital veracity. Bert’s involvement in the Content Authenticity Initiative and the C2PA Working Group, coupled with his active roles in the American Academy of Forensic Sciences and the Scientific Working Group on Digital Evidence, highlights his commitment to shaping a future where digital authenticity is not just pursued, but attained.
Join us as we explore the intricate world of content authenticity, guided by one of its esteemed experts.

Bertram Lyons, welcome to DAM Right. I’m so excited to have you here today. I’m particularly excited at this moment in time, because I feel like the expertise and experience you bring will be a breath of fresh air, giving us a deeper dive into the nuance and details of a topic, content authenticity, that I think is most frequently experienced as headlines around kind of bombastic AI sorts of things. I think you’ll bring a lot of clarity to the conversation. So thank you so much for being willing to talk with us today. I appreciate it.

Bertram Lyons: 04:27

Thanks Chris. 

Chris Lacinak: 04:28

I’d like to start off with just talking a little bit about your background. I think it’s fair to say that you didn’t come to forensics and content authenticity with the most typical background. I’d love to hear a bit about how you arrived here and how the journey, uh, kind of informed what your approach is today. 

Bertram Lyons: 04:47

To give you a sense of where I am today: I work in the world of authenticating digital information, specifically video and images. And how I got there, you know, I spent 20-plus years working in the archives industry. That was really what I spent my time doing up until a few years ago. I started at various different kinds of archives. One exciting place where I worked for a number of years, when I first started out, was the Alan Lomax Archive. That was a really cool audiovisual archive. It had tons of formats, from the start of recording technology up until the time that that particular individual, Alan Lomax, stopped recording, which spanned from the 1920s through the 1990s. So, you know, really a lot of cool recording technology. And I did a lot of A-to-D, analog-to-digital, conversion at that time. That led me down a path of ultimately working on the digital side of archives and ending up at the Library of Congress in D.C., where my job was specifically as a Digital Archivist. My job there was to learn and understand how historical evidence existed in digital form, to document that, and to establish strategies and policies for keeping that digital information alive as long as possible, both the bits on one side and the information itself on the other, ensuring that we can reverse engineer information as needed as time goes on, so we don’t lose the information in our historical collections. So it was many years with that, and then, you know, I jumped ship from LC and started working with you at AVP for a number of years. That was an exciting ride where I was able to apply a lot of my experience to our customers and clients and colleagues there. But ultimately, the thing that brought me into the digital evidence world where I work now was a relationship that we developed with the FBI and their Forensic Audio, Video and Image Analysis Unit in Quantico, where we were tasked to increase capabilities: to help that team, who were challenged with establishing the authenticity of evidence for court, increase their ability to do that, both manually, using their knowledge about digital file formats, and ultimately in an automated way. Because, unfortunately and fortunately, digital video, image, and audio are just everywhere. There’s just so much video, image, and audio data around that it becomes the core of almost every investigation that’s happening. Any question about what happened in the past, we turn to multimedia.

Chris Lacinak: 07:43

I think back to you sitting at the American Folklife Center and Library of Congress. Did you ever have any inkling that one day you’d be working in the forensics field? Was that something you were interested in at the time, or was it kind of a surprise to you that you ended up where you did?

Bertram Lyons: 07:57

on my mind in that when I, in 2000

Chris Lacinak: 10:22

Transitioning a bit now away from your personal experience: in preparing for this conversation, it dawned on me that content authenticity is not a new problem, right? There have been forgeries in archives and in museums and in law enforcement and legal situations for centuries. But it does seem very new in its characteristics, and I wonder if you could talk a bit about what’s happened in the past decade that makes this a much more urgent problem now, one that deserves the attention it’s getting.

Bertram Lyons: 10:57

I think, you know, you say the past decade; there are a few things I would put on the table there. One would be just, entirely, the boom, which is more than a decade old, but the boom in social media: how fast I can put information out into the world and how quickly you will receive it, right? Wherever you are. So it’s the ability for information to spread. And information being, whether it’s media like image or audio or video, or whether it’s, you know, what I’m saying in text, those are different things too, right? So just to scope it for this conversation, let’s think about the creative or documentary sharing of image, video, and audio. It’s a little bit different, probably, when we talk about misinformation on the text side. But when we talk about content authenticity with media, you know, it can go out so quickly, so easily, from so many people. That’s a huge shift from years past, where we were worried about the authenticity of a photograph in a museum, right? The reach and the immediacy of it are significantly different in today’s world. And then I would add to that, and this is more of the last decade, the ease with which individuals have access to creatively manipulate or creatively generate new media that can be confused with actual documentary evidence. So, you know, the content’s the same whether I create a video of myself, you know, climbing a tree or whatever. That’s content, and I could create a creative version of that, of something that may never have happened. And that’s for fun, and that’s great. We love creativity, and we like to see creative imagery and video and audio. Or I could create something that’s trying to be documentary: Bert climbed this tree and he fell out of it, and that really happened. I think the challenge is that the world of creating digital content is blending such that you wouldn’t be able to tell whether I was doing that from a creative perspective or from a documentary perspective. And then, you know, I have the ability to share it and claim one or the other, right? And so those who receive it, out in the social media world and the regular media world, have to make a decision: how do I interpret it?

Chris Lacinak: 13:31

Yeah

Bertram Lyons: 13:31

But I think the core challenge we face on the authentication side is still one of intent by the individual who’s creating and sharing the content. The tools have always been around to do anything you really want to digital content, whether it’s a human doing it or asking a machine to do it. In either scenario, what’s problematic is the intent of the person or group of people creating it, and how they’re going to use it.

Chris Lacinak: 14:04

What do you think people misunderstand most about the topic of content authenticity? Is there something that you see repeatedly there?

Bertram Lyons: 14:11

From the way the media addresses it generally, I think one of the biggest misinterpretations is that synthetic media is inherently bad in some way, that we have to detect it because it’s inherently bad, right? You get this narrative, and it is not true. It’s a creation process, and it inherently doesn’t have a bad or a good to it, right? It comes back to that question of intent. Synthetic media, or the generative AI that’s creating synthetic media, is really just a new toolset for creating what you want to create. We’ve been looking at CGI movies for years, and how much of that is ever real? Very little of it, but it’s beautiful and we love it. It’s entertaining. And it comes back to the intent. On the flip side, another really big misunderstanding here comes down to people’s understanding of how files work and how they move through the ecosystems they’re stuck in. Files themselves don’t live except within these computing ecosystems. They move around, they get re-encoded, and as they follow that lifecycle, they get interacted with by all kinds of things, like encoders that are changing the resolution, for example, or encoders that are just changing the packaging. Those changes, which are invisible to the average person, are actually extremely detrimental to the ability to detect synthetic media, or anything else you want to detect about that content. As the content moves through, it’s being normalized, it’s being laundered, if you will, into something that’s very basic. And as that laundering happens, that particular content and that particular packaging of the file become in some ways useless from a forensic perspective. And I think the average person doesn’t get that yet, though that information is available to them. If you want to detect whether something is synthetic and it’s sitting on your Facebook feed, well, it’s too late. Facebook had the chance on the way in, and they didn’t do it, or they did do it. And now we’re stuck with network analysis stuff. Who posted that? Now we’re going back to the person. Who posted that? Where were they? What was their behavior pattern? Can we trust them? Versus having any ability to apply trust analysis, unless it’s a blatantly visual issue, to that particular file.

Chris Lacinak: 16:45

Can you give us some insights into what are some of the major organizations or initiatives that are out there that are focused on the issue of content authenticity? What’s the landscape look like? 

Bertram Lyons: 16:55

From the content authenticity perspective, a lot of it is being led by major technology companies who trade in content. That could be Adobe, who trades in content creation; it could be Google, who trades in content distribution and search; and everybody in between. Microsoft, Sony, you know, organizations who are either creating content, whose tools allow humans and computers to create content, or organizations who really trade in the distribution of that content. There’s an organization composed of a lot of these groups called the Content Authenticity Initiative. That organization is really heavily led by Adobe, but it has a lot of other partners involved with it. And it has sort of become an umbrella for, I’d say, an ecosystem-based perspective on content authenticity that’s really focused on the ability to embed what they’re calling content credentials, but ultimately to embed signals of some sort, whether it’s actual text-based cryptographic signatures, whether it’s watermarking, or other kinds of approaches, but ultimately to embed signatures or signals in digital content. Such that, as it moves through this ecosystem that I mentioned earlier, from creation on the computer, to upload to a particular website, to display on the web through a browser, can we map the lifecycle of a particular piece of content? Can we somehow attach signals to it such that, as it works its way through, those signals can be read, displayed, and evaluated, and then ultimately a human can determine how much they trust that content?

Chris Lacinak: 19:00

If I’ve got it right, I think the Content Authenticity Initiative are the folks that are creating what’s commonly referred to as C2PA or the coalition for content provenance and authenticity. Is that right? 

Bertram Lyons: 19:12

That’s right. Yeah, that’s like the schema, 

Chris Lacinak: 19:15

Okay. 

Bertram Lyons: 19:15

technical schema. 

Chris Lacinak: 19:16

And in my reading of that schema, and you said this, but I’ll just reiterate and try to recap: it looks to primarily identify who created something. It really focuses on this concept of trusted entities. And it does offer, as you said, provenance data that it will automatically and/or systematically embed into the files it’s creating. And this starts at the creation process and goes through the post-production and editing process, through the publishing process. Is that a fair characterization? Is there anything salient that I missed about how you think about or describe that schema?

Bertram Lyons: 20:03

I think that’s fair. I think the only thing I would change in the way you just presented it is that the C2PA is a schema and not software. So it will never embed anything and do any of the work for you. It will allow you to create software that can do what you just said. C2PA itself is purely like a set of instructions for how to do it. And then if you, or if you, uh, you know, want to implement that, you can. If Adobe wants to implement that, they actually already implemented it in Photoshop. If you create something and extract it, you will have C2PA data in it, um, in that file. So it’s really creating a specification that can then be picked up by, um, anybody, any who generates software to read or write, uh, video or images or audio. Actually, it’s really built to be pretty broad, you know. They define ways to package the C2PA data sets into PDFs, into PNGs, into WAVs, you know, generally, um, trying to provide support across a variety of format types. 

Chris Lacinak: 21:03

And the provenance data that’s there, or the specification for embedding and creating provenance information, is optional, right? Someone doesn’t have to do it. Is that true?

Bertram Lyons: 21:16

Let me come at it a different way. 

Chris Lacinak: 21:18

Okay 

Bertram Lyons: 21:18

It depends on what you use. If you use Adobe tools, it will not be optional for you, right? If you use a tool to do your editing that hasn’t implemented C2PA, it won’t just be optional; it won’t even be available to you. That’s why I talk about ecosystem. The tools you’re using have to adopt and implement this kind of technology in order for the files you export to contain that kind of data in them, right? So it’s optional in that you choose how you’re going to create your content, and you have the choice to buy into that ecosystem, or actually to select yourself out of that ecosystem.

Chris Lacinak:

This reminds me of the early days of embedded metadata generally, before everyone had the ability to edit metadata in Word documents and PDF documents and audio files and video files and all that. It was a bit of a black box that would hold some evidence, and there were cases where folks claimed that they did something on such-and-such a date, but the embedded metadata proved otherwise. Today that feels naive, because it’s so readily accessible to everybody. So in the same way that there was a time and place when not everybody could access, view, or edit the embedded metadata in files, this sounds similar: the toolset and the ecosystem, as you say, have to support those sorts of actions.

Bertram Lyons:

Yeah, you’ll have to support it. And just so somebody listening doesn’t get the wrong idea, the C2PA spec is much stronger than the concept of embedded metadata, in that it’s cryptographically signed. Up until C2PA existed, anybody could go into a file and change the metadata, then just re-save the file, and no one would ever know, potentially. The goal of C2PA actually is to make embedded metadata stronger. It generates this package of a manifest. It says: inside of this file, there are going to be some assertions that were made by the toolsets that created the file, and maybe by the humans that were involved with those toolsets. They’re going to make some assertions about its history, and then they’re going to sign everything they said with a cryptographic signature, such that if anything changes, the signature will no longer be valid, right? So the goal is really to lock down, inside the file, the information that was stated about the file when it was created, and to bind that to a hash of the content itself. So if I have a picture of me, all the pixels that go into that picture get hashed to create a single value, what we call a checksum. That checksum is then bound to the statements made about the picture. Adobe Photoshop would make a statement about what I did to create it: it was created by Photoshop, these edits were done; that’s an assertion. And then I might say Bert Lyons created it, that’s the author; that’s an assertion. Those assertions are then bound to the checksum of the image itself, right, and locked in. And if that data sticks around in the file as it goes through its ecosystem, and someone picks it up at the end of the pathway, they can then check:

Bert says he created this on this date, using Photoshop. Photoshop said he did X, Y, and Z. The signature matches; nothing’s been changed. Now I have a trust signal, and it’s still going to be up to the human to say: do I trust that? Is C2PA strong? Are the cryptography and the trust framework strong enough that nobody really could have changed it?
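
As an editorial aside, the bind-and-sign idea Bert describes can be sketched in a few lines of Python. The real C2PA specification uses CBOR/JUMBF packaging and X.509 trust chains; this stripped-down illustration, which assumes the third-party cryptography package is installed, shows only the principle of binding assertions to a content hash and signing them.

```python
# Hash the content, attach assertions, sign the bundle, and verify that
# nothing changed. This is a toy illustration of the principle, not the
# actual C2PA serialization. Requires: pip install cryptography
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

content = b"<image pixel data>"
manifest = {
    "content_sha256": hashlib.sha256(content).hexdigest(),
    "assertions": [
        {"action": "created", "tool": "Example Editor 1.0"},  # hypothetical tool
        {"author": "Bert Lyons"},
    ],
}
payload = json.dumps(manifest, sort_keys=True).encode()

key = Ed25519PrivateKey.generate()
signature = key.sign(payload)

def validate(content: bytes, manifest: dict, signature: bytes) -> bool:
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        key.public_key().verify(signature, payload)   # is the signature intact?
    except InvalidSignature:
        return False
    # Does the signed hash still match the content itself?
    return manifest["content_sha256"] == hashlib.sha256(content).hexdigest()

print(validate(content, manifest, signature))    # True
manifest["assertions"][1]["author"] = "Chris"    # the tamper Bert imagines
print(validate(content, manifest, signature))    # False: signature no longer valid
```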

Chris Lacinak: 25:16

So this C2PA spec then brings this entity trust level, who created this thing, but it also carries this robust, cryptographically signed provenance data that tells exactly what happened. And it sounds like that data is editable, deletable, and creatable, but within the ecosystem it lives in, there are protection mechanisms that mitigate the risk of intentional augmentation for malicious purposes.

Bertram Lyons: 25:56

Yeah, I mean, think about it like this. It doesn’t take away my ability to just go in and remove all the C2PA data from the file. I just did that with a file I created in Adobe, right? I needed to create a file of my colleague Brandon, and I wanted to put a fun fake generative background behind him. So I created it, put the fake background behind him, and exported it as a PNG. And I looked in there, out of curiosity, because I know how, and I was like, oh look, here’s the C2PA manifest for this particular file. I just removed it. Nothing stops me from doing that. I re-saved the file and moved on. Now, the way C2PA works, this file no longer has C2PA data. It can go about its life like any other file. And if someone ever wanted to evaluate its authenticity, they’re going to have to evaluate it without that data in it. They’re going to look at the metadata, where it was posted, where they accessed it, what was said about it, all of that, the same way we do for everything we interact with today. If that C2PA data had stayed in the file, and I was just wanting to make sure, I’m always testing C2PA, you know, does the file still work if I remove this, et cetera, but if it had stayed in there, it likely would have been removed when I posted it to LinkedIn, for example, because the file would have been reprocessed by LinkedIn. But if LinkedIn was C2PA-aware, which maybe one day it will be, and if I left the C2PA data in it and submitted it, then LinkedIn would be able to say: oh, look, I see C2PA data, let me validate it. It would validate it and then give me a report that said: there’s data in here, and I validated the checksum and the signature from C2PA. And now it could display that provenance data for me: it was created by Bert in Photoshop. Again, it all comes around to communicating back to the end user about the file. Now, it still doesn’t stop me from making a malicious change. If, instead of removing the C2PA data, I went in and tried to change something, what would happen? Maybe I change who created it from Bert to Chris. If LinkedIn was C2PA-aware, when that hit LinkedIn, LinkedIn would say: this has a manifest in it, but it’s not valid. So it would alert people to something being different in the metadata, in the C2PA manifest, from when it was originally created. It doesn’t keep me from doing it, but now I’m sending a signal to LinkedIn, where they’re going to be able to say there’s something invalid about the manifest. That’s the kind of behavioral pattern that happens. So again, it comes back to the human. I went through that example just to show you that no matter what we implement, the human has decisions to make on the creation side, on the sharing side, and on the interpretation side.

Chris Lacinak: 29:04

Right.

Bertram Lyons: 29:04

Nothing’s really even at this most advanced technological state, which I think C2PA is probably the strongest effort that’s been put, put forward so far. You know, if I, if I want to be a bad actor, I’m going to, I can get around it. You know, I could just, well, I can opt out of it. That’s where it comes down. So the ecosystem is what’s really important about that approach is that the more systems that require it, then, and the less I have to opt out of it, the better. Right? So we’re creating this tool for it to work. It’s about, really about the technological community, buying in and locking it down such that you can’t share a file on Facebook if you don’t, if it doesn’t have C2PA data in it. If LinkedIn said you can’t share something here if it doesn’t have C2PA data, then once I remove the data, I wouldn’t be able to share it on LinkedIn. 

Chris Lacinak: 29:54

Right.

Bertram Lyons: 29:55

That’s what’s missing so far. 

Chris Lacinak: 29:57

Thanks for listening to the DAM Right podcast. If you have ideas on topics you want to hear about, people you’d like to hear interviewed, or events that you’d like to see covered, drop us a line at [email protected] and let us know. We would love your feedback. Speaking of feedback, please give us a rating on your platform of choice. And while you’re at it, make sure to follow or subscribe so you don’t miss an episode. If you’re listening to the audio version of this, you can find the video version on YouTube at @DAMRightPodcast and on Aviary at damright.aviaryplatform.com. You can also stay up to date with me and the DAM Right podcast by following me on LinkedIn at linkedin.com/in/clacinak. And finally, go and find some really amazing and free DAM resources from the best DAM consultants in the business at weareavp.com/free-resources. You’ll find things like our DAM Strategy Canvas, DAM Health Scorecard, and the “Get Your DAM Budget” slide deck template. Each resource has a free accompanying guide to help you put it to use. So go and get them now. Let’s move on from C2PA. That sounds like it covers some elements of content authenticity at the organizational level and the provenance documentation level, with signatures and cryptographic protections. You’re the CEO and Founder of a company that also does forensics work, as you mentioned: Medex Forensics. Could you tell us what Medex Forensics does? What does the technology do, and how does it fit into the ecosystem of tools that focus on content authenticity?

Bertram Lyons: 31:43

The way we approach it, and the contribution we try to make to the forensics field, is from a file format forensics perspective. If we know how video file formats work, we can accept a video file, parse it, and extract all the data from it and all the different structures and internal sequencing, ultimately to describe the object as an object, as a piece of evidence, like you would if you were handling 3D evidence. Look at it from all the different angles, make sure we’ve evaluated its chemistry, really understand every single component that goes into making up this information object called a file. And once we do that, we can describe how it came to be in that state. How did it come to be as it is right now? If the question was, hey, is this thing an actual original from a camera? Was it filmed on a camera and not edited? Then we’re going to evaluate it, and we’re not going to say real or fake, true or false. We’re going to say: based on the internal construction of this file, it is consistent with what we would expect from an iPhone 13 camera-original file, right? That’s the kind of response we would give back. And that goes back into the interpretation. If the expectation was that this was an iPhone 13, we’re going to give them a result that matches their expectation. If their expectation was that this came from a Samsung Galaxy, and we say it’s consistent with an iPhone 13, that’s going to change their interpretation. They’re going to have to ask more questions. So that’s what we do. We have built a methodology that can track and understand how encoders create video files, and we use that knowledge to automatically match the internal sequencing of a file to what we’ve seen in the past and report that data back. So that’s where we play in that world. I’ll point out just a couple of things. We call that non-content authentication. You would also want to employ content-based authentication. Maybe critical viewing, just watching it; that’s the standard approach, right? The critical viewing approach. Or analytics on the pixels, with quantification: are there any cut-and-pastes? Are there any pixel values that jump in ways they shouldn’t jump? There are a lot of algorithms that really focus on the quantification side of the pixels in the image. People do analysis based purely on audio, right? Audio frequencies, looking for cuts and splices and things like that. So there are a lot of ways people approach content authenticity that, used together, can create a pretty strong approach. It takes a lot of knowledge to learn the different techniques and to understand the pros and cons and how to interpret the data, and that’s why there’s probably not a single tool out there right now: the domain knowledge required is quite large. So that’s the kind of tool we are.
Just to tie in where we sit within the question of content credentials and C2PA: we would be a tool that, if we were analyzing your file, would read the C2PA data in it and say, oh, there’s a C2PA manifest in that file. We would validate it and then report back: there’s a valid C2PA manifest, and here’s what the manifest says. So we would also play in that ecosystem on the side of analysis, not creation. We don’t get involved with creating C2PA data, but we recognize, read, and validate it in a file, for example. We’re looking at all the signals, but that would be one signal we might evaluate in an authentication exam.
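
As an editorial aside, the structural analysis Bert describes can be hinted at with a small sketch that walks the top-level box (atom) sequence of an MP4 file and compares it against known encoder patterns. The signature table below is invented for illustration; tools like Medex parse far deeper structure than this.

```python
# Minimal sketch of file-format forensics on an MP4: walk the top-level
# box (atom) sequence and compare it against known encoder patterns.
import struct
from pathlib import Path

def top_level_boxes(path: Path) -> list[str]:
    boxes = []
    with path.open("rb") as f:
        while header := f.read(8):
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)  # 32-bit size + 4-char type
            boxes.append(box_type.decode("latin-1"))
            if size == 1:                       # 64-bit "largesize" follows the header
                size = struct.unpack(">Q", f.read(8))[0]
                f.seek(size - 16, 1)
            elif size == 0:                     # box runs to end of file
                break
            else:
                f.seek(size - 8, 1)             # skip the box body
    return boxes

# Invented example patterns; real signature databases are built from large
# corpora of known camera and editor output.
KNOWN_PATTERNS = {
    ("ftyp", "wide", "mdat", "moov"): "consistent with a camera-original file",
    ("ftyp", "moov", "mdat"): "consistent with re-encoded or exported output",
}

video = Path("evidence/clip.mp4")               # hypothetical path
if video.exists():
    seq = tuple(top_level_boxes(video))
    print(seq, "->", KNOWN_PATTERNS.get(seq, "no match in signature table"))
```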

Chris Lacinak: 35:28

You said Medex won’t tell you if something is real or fake, but just to bring this all together, tying into C2PA, let me say what I think my understanding is of how this might work, and you correct me where I get it wrong. It seems that C2PA may, for instance, say: this thing was created on this camera, it was edited in this software on this date by this person, and so on. Medex can say what created it and whether it’s been edited or not. So for instance, if the C2PA data said this was created in an Adobe product, but Medex reported that it was created in Sora, just throwing anything out there, it wouldn’t tell you this is real or fake, but it would give you some data points to help the human interpret and understand what they were looking at, and make some judgment calls about the veracity of it. Does that sound right?

Bertram Lyons: 36:28

Yeah, that’s right. And I’d say the human and or the, the, uh, the workflow algorithm that’s taking data in and out that, you know, that, from a, think about more like moderation pipeline, you know. C2PA says X, Medex says Y. They conflict, flag it. Or, they don’t conflict, they match. Send it through. You can think about it that way too, from like an automation perspective. Um, but also from a human perspective.

Chris Lacinak: 36:54

For the listeners of this podcast, who are largely DAM practitioners and people leveraging digital asset management in their organizations, I’d love to bring this back up to the level of why. Why should a Walt Disney or a Library of Congress or National Geographic or the Museum of Modern Art, why should organizations practicing digital asset management with collections of digital files, care about this? We delved into legal disputes and social media, but why should an organization that isn’t involved in any of that care? How does content authenticity play into the digital asset management landscape? Can you give us some insights into that?

Bertram Lyons: 37:37

Yeah, that’s a great question, that’s near and dear to my heart. And we, we probably need hours to talk about all the reasons why, but let’s try to tee up a couple and then you can help me get to it. You know, there’s, I’ll, I’m gonna, I’m gonna list, list a set and then we’ll, we’ll hit some of them. But so, you know, let’s think about collection development, right? So just on the collection development side, we want to know what we have, what we’re collecting, what’s coming in. And we want to apply and we do this as much, as best we can as today, um, in that community with triage tools like, like um, I’ll name one, Siegfried is a good example, built off of the UK’s National Archives PRONOM database. It really focuses on identifying file formats. So, you know, to date, we want to know what file, like as, as, when we’re doing collection development, we want to know what file formats are coming in. Um, but furthermore, actually when we’re doing collection development, you know, I’m speaking of organizations like, like MoMA and Library of Congress, who are collecting organizations. We’re going to get to National Geographic and, uh, Disney and et cetera shortly. You know, on that side, we need collection development tools to make us, make sure we know what we have, right? It goes back to your earlier fakes question. We don’t want to let something in that’s different than what we think it is. And authentication techniques are not present, uh, in those organizations today. It’s a tool that purely metadata, metadata analysis is happening. Just extracting metadata, reviewing the metadata, uh, reviewing the file format based on, based on format, uh, these quote unquote signatures that the UK, Um, National Archives has, has produced and with, with the community over the years, which are great. You know, they’re really good at quickly saying this is a doc, Word doc. This is a PDF. This is a, you know, you know, they identify the type of file. They don’t authenticate the content in any way. So that’s one side of it. Did, um, quality control on big digitization projects is another great way to do this. And start to incorporate this. And of course we kind of do that with metadata techniques still. We’re looking for metadata. We don’t look at file structure, for example, and those kinds of, uh, we don’t know exactly what happened to the file. We know what’s in the file, but we don’t always know what happened to the file. Authentication techniques are focused on that. Um, so I think there’s just ways that that could be added to the current pipelines in those communities. Um, then we think about the file, the content that we’re now storing on the preservation side. We don’t want to necessarily change the hash of files, right? When you’re thinking about libraries and museums and archives. So there’s, there’s probably not a, not a play there to embed C2PA metadata, for example. At least not in the original. There’s probably a play to embed it in the, in the derivatives that are created for access or, or etc. That’s something to discuss. Um, on the create, creation side, you think about companies or organizations like Disney or National Geographic. Content credentials are an excellent mechanism, you know, that and watermarking, which is all, which is all part of the same conversation, um, and moving on, and, and, and this is moving beyond visual watermarking to, uh, non perceptible watermarking, to, to things like that, which are being paired with, with C2PA these days. 
And, and the, the value there at the, is, is about protecting your assets. Can you ensure that as this asset goes through its lifecycle, whether it’s in your DAM, um, in which case you want your DAM to be C2PA aware or watermark aware. You want your DAM to read these files and report. The C2PA manifest is here for this asset, it’s valid, and here’s the history. You know that that’s another way of securing your assets internally, but then as they go out of the company, whether into advertisements or whether out, you know, being shared to patrons or however they’re being used. out of the company. You know, it’s just another mechanism to ensure your, your copyright’s there to ensure that you are protecting that asset and, and anything that happens to it’s being directed back to you. Um, that’s where on the creative pro, pro production side of the house, that’s these tool sets that are being developed, that are really focused on ensuring content authenticity, they’re, they’re really being built for, for that need. Right? They’re being built for the, for you to have some way to protect your assets as they’re out in the world. That’s why I come back to intent again. Gives you an, a, a, you who have an intent to, to, you know, to do this, the ability to do this. 
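
As an editorial aside, the Siegfried triage step Bert mentions might look roughly like this in a collection-development pipeline. This sketch assumes the sf binary is installed and supports JSON output; treat the exact output field names as approximate, since they can vary by version.

```python
# Hedged sketch of a collection-development triage step using Siegfried,
# the PRONOM-based format identifier mentioned above. Assumes `sf` is on
# PATH; output key names ("files", "matches", "id", "format") are
# approximate and may differ by version.
import json, subprocess, sys

def identify(path: str) -> list[dict]:
    result = subprocess.run(["sf", "-json", path],
                            capture_output=True, text=True, check=True)
    report = json.loads(result.stdout)
    return report.get("files", [])

if __name__ == "__main__":
    for entry in identify(sys.argv[1]):
        for match in entry.get("matches", []):
            # PRONOM ID (e.g. fmt/199) and a human-readable format name
            print(entry.get("filename"), match.get("id"), match.get("format"))
```

Note that, as Bert says, this identifies the format type only; it does not authenticate the content in any way.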

Chris Lacinak: 42:06

What is the risk? Let’s say that these organizations, all of which are using digital asset management systems today, choose not to pay attention to content authenticity.

Bertram Lyons: 42:19

It depends on what your organization collects and manages. But with the generative AI tools that are out there, once content makes it out of your company’s hands, if it’s yours, you created it, and it has meaning to you, and you don’t have any protections inside those files, it’s very easy for someone to take it, move it into another scenario, change the interpretation of it, and put it back out into the world. This happens all the time. So one “why” is about protecting your company’s reputation. That’s a big one. The other “why” isn’t about the public at all; the internal “why” is increased efficiency and reduced mistakes. I don’t know how many times we’ve seen organizations that have misattributed which file is the original of an object and which is the access copy, and in some cases lost the original and are left with only the access copy. The only way to tell the difference would be some kind of database record, if it exists. If it doesn’t, you’d need someone with experience to do some kind of one-to-one comparison. With content credentials, there would be no question at all about what was the original and what was a derivative of that original. From a file management perspective, I think there are a lot of efficiencies to be gained there. And then, potentially, reduced labor. Think about National Geographic: they have photographers all over the world doing all kinds of documentary work. If that documentary work has content-credential-aware tools from the beginning (there are cameras out there), or maybe the content credentials don’t start at the camera but at post-processing, in a product that is C2PA-aware like Adobe’s (I don’t work for Adobe and I’m not trying to sell Adobe, I’m just using it as an example), then that photographer can create all of that useful provenance data at that moment. When it makes it to National Geographic, if their DAM is C2PA-aware, imagine the reduction in typing and data entry at that point. We trust this data inherently because it was created in this cryptographic way. The DAM just ingests it, creates the records, and updates and supplements them. There’s a lot of opportunity there, both for DAM users and for DAM providers.
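A minimal sketch of the fixity problem described here: without content credentials, telling an original from an access copy falls back on a hash recorded in a database. The file paths and the stored hash below are hypothetical placeholders.

```python
# Minimal sketch of the original-vs-derivative check described above:
# absent content credentials, the only reliable tie to "the original"
# is a hash recorded at ingest. Paths and hash are hypothetical.
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file through SHA-256 so large masters fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# The hash recorded at ingest, e.g. from the DAM's database record.
recorded_original_hash = "..."  # hypothetical placeholder

for candidate in ["masters/interview.mov", "access/interview.mp4"]:
    match = sha256_of(candidate) == recorded_original_hash
    print(f"{candidate}: {'matches original' if match else 'derivative or altered'}")
```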

Chris Lacinak: 45:07

Yeah, so to pull it up to maybe the plainest-language statements or questions this answers. The first, again, is who created this thing. A bad actor edits something and posts it, maybe under an identity that looks like Walt Disney, and tries to say the thing came from Walt Disney. This suite of tools around content authenticity would let us know who actually created it and allow us to identify that it was not, in fact, Walt Disney in that hypothetical. The second, it sounds like, is the ability to identify whether something stated as real and authentic is in fact real and authentic: I’ve got this video, I’ve got an image of an artifact; is this digital object a real thing or not? And the third is the reverse. Someone claims, and I think we’ll see more and more of this, that something real is AI generated: that’s not real, that’s AI generated, that doesn’t exist. This is the ability to actually prove the veracity of something that’s claimed to be non-authentic. Those are three things that what we’ve talked about today points at, and you can imagine, across the list of organizations we named, a variety of scenarios in which answering those questions would be really critical.

Bertram Lyons: 46:51

You give yourself the ability to protect yourself and to protect your assets.

Chris Lacinak: 46:56

Right. So you have really driven home today the importance of this ecosystem that exists, a bunch of people working together and agreeing on building tool sets around it. Are you seeing DAM technology providers opt into that ecosystem yet? I know you don’t know all of them, and I’m not asking for a definitive yes or no across the board. But are you aware of any digital asset management systems that are adopting C2PA, or implementing Medex Forensics or similar types of tools, in their platforms?

Bertram Lyons: 47:43

Not yet, Chris. I haven’t seen a DAM company buy into this yet. To be honest, this is very much emerging technology, and I think a lot of people are waiting to see where it goes and what the adoption is. I will say that two years ago, when I started collaborating within the C2PA schema team, I felt there was very little chance of quick uptake. I thought, this is a mountain to climb, a huge mountain, to get technology companies on board to create C2PA-aware technology: hardware makers, camera and phone companies, post-processing companies like Adobe, browsers and services like Chrome and Google, search engines, social media. I thought, this is just a mountain. In two years’ time, however, and I don’t know if it was accelerated by all that’s happened with AI so quickly and the fact that interest has risen to the government level (we have a presidential executive order on AI that mentions watermarking and, essentially, C2PA), there has been so much change that all of a sudden that mountain feels a lot smaller to climb. It can be done. Just in the past few months, massive organizations have jumped into the Content Authenticity Initiative, from Intel to NVIDIA, important players in that ecosystem. So I think there’s a chance here, and I think we will see DAM system providers taking a much stronger look. I will say that in the digital evidence management community, what we call DEMS, there is definite interest in authentication. It’s already happening in the DEMS world, and I think it will bleed over into the DAM world as well. For content coming into these systems, it’s another signal the systems can automatically work with to populate and supplement what’s happening behind the scenes. And we know that DAMs work closely with anything they can to automate their pipelines and make things more efficient for the end user.

Chris Lacinak: 50:18

So I know you’ve done a lot of work beyond what we’ve talked about today, the law enforcement and legal components, and digital asset management within collecting institutions and corporations. You’ve also done some really fascinating work within journalism and human rights. Could you talk a bit about that and maybe share some of the use cases where Medex has been used in those contexts?

Bertram Lyons: 50:52

The context of journalism and human rights organizations is really one of collecting and documenting evidence. On the human rights side, a lot of it is collecting evidence of something that’s happened, and that evidence is typically going to be video or images. We have people documenting atrocities, or any kind of rights issues that are happening, and wanting to get that documentation out and to have it trusted so it can be believed, so it can actually serve as evidence, whether for popular opinion or for a criminal court, from the UN on down, both and all. So there are often challenging questions with that kind of evidence around documenting its authenticity. In some ways, things like C2PA have come out of that world. There’s an effort that WITNESS, out of New York, worked on, and I know they had other partners whose names I don’t remember, so I’ll just say I know it wasn’t only WITNESS. They have collaborated for many years on these camera-focused systems that allow an authentication signal to be stored and processed within the camera upon creation, and then securely shared out from that camera to another organization or location with all of that authentication data present. What I mean by authentication data there is things like hashes and dates and times. And the more challenging thing is to do it without putting the name of the person who created the content in the authentication, because it’s a dangerous thing for some people to have their names associated with evidence of a human rights atrocity. So that’s a really challenging scenario to design for, and human rights organizations have been really focused and put a lot of effort into trying to figure it out. You don’t want to reduce people’s ability to document what’s happened by making it too technologically challenging or costly, and you don’t want to add harm by making the person who created it identifiable. But at the other end of the spectrum, you need someone else to trust it, and you can’t say who made it. So there’s been a lot of excellent work, and I know we’ve been involved on the side of helping to provide input into the authentication of video from these kinds of scenarios, to add weight to trust. Ultimately, it’s all about trust. Can we trust it? What signals allow us to trust it, and do they outcompete any signals that would lead us to distrust it? That work is continually going on, and while a lot of organizations are involved, we’ve partnered closely with WITNESS over the years, and they do excellent work. On the journalism side, it’s a little different. There you have journalists who are writing investigative reports.
Their job, in a slightly different way, is to receive or acquire documentation of world events or local events and to quickly assess the veracity of that content so they can make the correct interpretation of it, and also to decide the risk of actually using it as evidence in a piece, in an article. We work closely with a variety of organizations; The New York Times is a good example. It’s not always easy. Even if you’re receiving evidence from a particular human being in some location, you want to evaluate it with as many tools as you can. You want to watch it, look at its metadata, look at its authentication signals, and ultimately make a decision: are we going to put this forward as the key piece of evidence in an article? It’s never first person from the journalist’s perspective; they’re usually not the first person. They’re taking it from someone who delivered it to them, and they can’t prove that person is first person either. They have to decide how first-person the content in this video or image or audio really is. So I don’t know if that answers your question, but you see a lot of need for, and a lot of focus on, the question of content authenticity in both of those worlds.
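As a sketch of the design constraint described in this exchange, recording enough to trust a piece of evidence without identifying the person who captured it, here is one minimal shape such a record could take. The field names are illustrative, not any organization’s actual schema.

```python
# Hedged sketch of the constraint discussed above: record enough to
# trust a piece of evidence (a hash, a receipt time) without binding
# it to the identity of the person who captured it.
import hashlib
import json
from datetime import datetime, timezone

def anonymous_evidence_record(path: str) -> dict:
    """Build a provenance record that deliberately omits identity."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "sha256": digest,  # proves the file hasn't changed since receipt
        "received_utc": datetime.now(timezone.utc).isoformat(),
        # Deliberately no name, device ID, or GPS: anything that could
        # identify the person who documented the event is omitted.
    }

print(json.dumps(anonymous_evidence_record("clip.mp4"), indent=2))
```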

Chris Lacinak: 55:53

Yeah. Well, maybe to pull it up to a hypothetical, or even hint at a real-world example. Let’s say a journalist gets a piece of video out of Ukraine or Russia while reporting on that war, and they’ve received it through Telegram or something like that. Their ability to make calls about its veracity is really critically important. They could use Medex and other tools to say, for instance: yes, if it looks like cell phone footage, it was in fact recorded on a cell phone; yes, it came through Telegram; no, it was not edited; no, it was not created with an AI generation tool or deepfake software, things like that. That wouldn’t tell them definitively whether they can or can’t trust it, but it would give them several data points that, together with other information, would be useful in making a judgment call about whether they can trust it and use it in their journalism.

Bertram Lyons: 57:04

That’s right. Yeah, it’s always the human at the end; I’ve stressed this. As much as I like automated tools, in scenarios like that we really need a human to say, this is my interpretation of all of these data points I’m seeing. And that’s a great example, and a real one. We actually dealt with it. Remember when that war first broke out, there was fighting around a nuclear facility there. It was still under the control of Ukraine, and there were Ukrainian scientists in the facility sending out Telegram videos saying: we’re here, there’s bombing happening around this nuclear facility, this is extremely dangerous, please stop. The video was coming out through Telegram, but the only way to evaluate it was from a secondary encoded version of a file that originated somewhere, was passed through a Telegram channel, and was then extracted by news agencies, who wanted to say as quickly as possible: is this real? We want to report on this and amplify this information coming out of Ukraine. It’s challenging. In the case of the files we were asked to evaluate, we could say, yes, it was encoded by Telegram, and it had some signals left over from which we could ascertain that it would only look this way if the thing had originated on a cell phone device, on a Samsung, for example. So, in essence, maybe that’s all the signal you have, and you have to make a judgment call at that point. Now, in the future, what if Telegram embedded C2PA data and it was still there when we looked? Maybe that’s a stronger signal at that point.
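For a sense of what this kind of triage looks like in practice, here is a hedged sketch using ffprobe (part of FFmpeg) to pull container and stream metadata. Encoder tags are weak signals, absent or spoofable, but a platform re-encode such as Telegram’s often leaves traces here; the file name is hypothetical.

```python
# Hedged sketch of metadata triage on a received video, using ffprobe
# (ships with FFmpeg). Container tags like the encoder string are weak,
# spoofable signals, but re-encoding by a platform often leaves traces.
import json
import subprocess

def probe(path: str) -> dict:
    """Return ffprobe's JSON description of the file's format and streams."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

info = probe("telegram_clip.mp4")  # hypothetical file name
tags = info.get("format", {}).get("tags", {})
print("container encoder tag:", tags.get("encoder", "<none>"))
for s in info.get("streams", []):
    print(s.get("codec_type"), s.get("codec_name"),
          s.get("tags", {}).get("handler_name", ""))
```

One data point among several, exactly as the conversation frames it: the output informs a human judgment call rather than replacing it.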

Chris Lacinak: 59:00

Yeah, or combined. It’s another data point, right?

Bertram Lyons: 59:08

Yeah, it’s just another data point, right?

Chris Lacinak: 59:09

Great. Well, Bert, I want to thank you so much for your time today. In closing, I’m going to ask you a totally different question, one I’m going to ask all of our guests on the DAM Right Podcast. It helps shed a little light on the folks we’re talking to and gets us out of the weeds of the technology and the details. And that question is: what’s the last song you liked or added to your favorites playlist?

Bertram Lyons: 59:33

The last song that I added to my liked songs was “Best of My Love” by The Emotions.

Chris Lacinak: 59:43

That’s great. Love it. 

Bertram Lyons: 59:46

Ha ha! You know, I’ve actually probably added that three or four times over the years; it’s probably on there in different versions. It’s a great, great track. I used to have the 45 of it. You know that track.

Chris Lacinak: 59:59

Yep. It’s a good one.

Bertram Lyons: 60:00

I recommend you play it as the outro from today’s DAM 

Chris Lacinak: 60:03

If I had the licensing fees to pay, I would. Alright, well, Bert, thank you so much for all of the great insight and contributions you made today. I really appreciate it, and it’s been a pleasure having you on the podcast.

Bertram Lyons: 60:17

Thanks for having me, Chris. 

Chris Lacinak: 60:18

Thanks for listening to the DAM Right podcast. If you have ideas on topics you want to hear about, people you’d like to hear interviewed, or events you’d like to see covered, drop us a line at [email protected] and let us know; we would love your feedback. Speaking of feedback, please give us a rating on your platform of choice, and while you’re at it, make sure to follow or subscribe so you don’t miss an episode. If you’re listening to the audio version of this, you can find the video version on YouTube at @DAMRightPodcast and on Aviary at damright.aviaryplatform.com. You can also stay up to date with me and the DAM Right podcast by following me on LinkedIn at linkedin.com/in/clacinak. And finally, go find some really amazing and free DAM resources from the best DAM consultants in the business at weareavp.com/free-resources. You’ll find things like our DAM Strategy Canvas, DAM Health Scorecard, and the “Get Your DAM Budget” slide deck template. Each resource has a free accompanying guide to help you put it to use. So go and get them now.