Why Trust and Transparency Are Imperative in the Age of Generative AI

It’s impossible to go a day without hearing about generative AI. But if we look beyond the buzz, what does it really mean for your business?

We recently sat down with Ryan Sanders, Sr. Director of Product and Customer Marketing at Keyfactor, Ellen Boehm, SVP of IoT Strategies and Operations at Keyfactor, and Jason Slack, Director of Product Engineering at Truepic, to answer this question in depth. The discussion centered around the opportunities, threats, and business implications of generative AI, the importance of identity and authenticity in a zero-trust world, and how to establish trust with PKI.

Introducing the age of generative AI

AI is nothing new. However, recent developments, most notably GPT-3 and ChatGPT, have made AI very practical and accessible for the average person. 

According to Gartner, since the release of ChatGPT in November 2022, 45% of organizations have increased their investment in AI. Gartner also predicts that most technology products and services will incorporate some sort of generative AI capabilities within the next year.

So, what exactly are we talking about? Generative AI can produce text, images, and videos. It can also review and refactor code, or even serve as a co-pilot for IT and security teams. And those examples just scratch the surface of what’s possible. 

Ultimately, generative AI makes process automation much more accessible, which creates both opportunities and risks – because you can bet that while organizations are increasing their investment in AI, attack groups and hackers are adopting the exact same strategies. 

The impact of generative AI: opportunities vs. risks

As with any new technology, generative AI comes with opportunities and risks, and it’s important to understand both sides of the equation. On the opportunity front, we see four key use cases:

  • Code generation and testing: Generative AI allows developers to create code much more quickly, which teams can then iterate on. That allows for more efficient performance and QA testing, and a faster path to delivering a minimum viable product.
  • Security automation: Generative AI can also automate the process of testing against attacks to improve security for new technology.
  • Content generation: Whether it’s new marketing messages, images and videos, scripts or anything else, generative AI has made a big splash in terms of content generation, helping people deliver a message to a broad audience quickly and effectively.
  • AI assistants and innovation: Finally, there’s potential for generative AI to serve an assistant-type role, helping teams innovate by coming up with different business models or new concepts to support how work gets done.

Meanwhile, some of the biggest risks of generative AI that we see include:

  • AI-generated malware: The speed at which generative AI can create malware leads not only to smarter attacks on software and devices, but also to more frequent ones.
  • Attack automation: AI can automate attacks, like DDoS, to the point where humans don’t even need to be involved. For example, rather than using a network of compromised IoT devices to target a server, attackers can create a network of AI-powered bots to flood traffic and take down servers at a low cost.
  • Deepfakes: Deepfakes, videos and images that have been digitally altered to misrepresent their subjects, can spread misinformation. We’ve already seen examples on the news, with nation-states producing AI-generated political ads that attempt to influence how people around the world make decisions.
  • Data privacy and copyright concerns: Generative AI makes it easy to create fake content and information, and when that content is passed off as genuine, it can raise serious copyright concerns or even infringe on data privacy in cases where it affects ongoing communications.

How to use generative AI safely and responsibly

Being responsible about generative AI comes down to authenticity and transparency. These two principles sit at the heart of Truepic, a Keyfactor customer and a pioneer in image authenticity whose industry-leading Controlled Capture technology ensures the integrity of digital photos and videos from the instant they are captured.

Jason explains: “One of the most important principles in the founding of Truepic was authenticity – using a camera you trust and ensuring what you see has not been edited. Now we view transparency as an important piece of that, recognizing that it’s okay to have edits as long as you’re transparent about where something came from.”

These concepts extend to generative AI, as companies using AI to generate language or images should be transparent about the tools and prompts used to create the result. Ultimately, Jason sees this as becoming something like an “ingredients label” for images and videos.

That type of thinking has led organizations like Truepic to help establish the Coalition for Content Provenance and Authenticity (C2PA), which provides an open technical standard that lets publishers, creators, and consumers trace the origin of different types of media. At its core, this approach to security centers on PKI-based digital signatures to prove authenticity.
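To make that concrete, here is a minimal sketch of a PKI-signed provenance manifest in Python. It is illustrative only: the manifest shape, the freshly generated ECDSA key, and the media bytes are all assumptions, and real C2PA manifests use a standardized format signed with a certificate chained to a trusted CA.

```python
# pip install cryptography
import hashlib
import json

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Hypothetical setup: a freshly generated key stands in for a real
# signing certificate chained to a trusted CA.
private_key = ec.generate_private_key(ec.SECP256R1())

image_bytes = b"raw media bytes from the camera"  # placeholder content

# A simplified provenance manifest (NOT the real C2PA wire format).
manifest = {
    "claim_generator": "example-app/1.0",
    "asset_sha256": hashlib.sha256(image_bytes).hexdigest(),
    "assertions": [{"action": "c2pa.created"}],
}
payload = json.dumps(manifest, sort_keys=True).encode()

# Sign the manifest; in C2PA the signature is embedded in the file itself.
signature = private_key.sign(payload, ec.ECDSA(hashes.SHA256()))

# Anyone holding the public key (from the cert chain) can verify it.
private_key.public_key().verify(signature, payload, ec.ECDSA(hashes.SHA256()))
print("provenance manifest verified")
```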

The importance of trust and transparency in an AI world

In a world of AI-generated content, how do we know what’s real or synthetic, original or edited? Consider the following video from Truepic and Revel AI, which demonstrates just how convincing a deepfake can be.

With the content we see online, the rule used to be “trust but verify,” but advances like generative AI have changed that. Now we must take a “never trust, always verify” approach, and in this world, it’s important to understand where content comes from.

So how do we ensure trust and transparency?

Adopting the C2PA standard across the media ecosystem can help by making it easy to sign images and videos and cryptographically bind key information to the media itself: where it came from, how it has been edited, and who created it. That transparency lets people trust that what they see is real, or at least know where an altered file came from.
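On the consumption side, verification is the mirror image. The sketch below reuses the hypothetical manifest shape from the earlier snippet (again, an assumption, not the real C2PA format): it re-hashes the file and checks the manifest signature, so any alteration made after signing is caught.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec


def verify_media(media_bytes: bytes, manifest: dict, signature: bytes, public_key) -> bool:
    """Return True only if both checks pass: untampered content, valid signature."""
    # 1. The file on disk must match the hash sealed into the manifest.
    if hashlib.sha256(media_bytes).hexdigest() != manifest["asset_sha256"]:
        return False
    # 2. The manifest itself must carry a valid signature from the creator.
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False


# Example, reusing the names from the earlier snippet:
# verify_media(image_bytes, manifest, signature, private_key.public_key())
```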

These efforts need to extend elsewhere, too. Consider the case of software code. Generative AI can be used to create and test code quickly, allowing development teams to become much more agile. But with that increased efficiency comes the need to prove that whoever is writing the code really is the intended developer, and that not even a small piece of counterfeit code or malware gets written into a bigger package. As a result, the ability to apply proper digital signatures to new code becomes even more important.

This brings us to the role of PKI and code signing, which provide the foundation for verifying the authenticity and origin of content. Fortunately, PKI is very well established, having supported digital trust in the enterprise for decades by proving authenticity, validating the origin of software and devices, and allowing us to send data over encrypted channels that are safe from interception by malicious parties.
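As a simple illustration of why signed code is verifiable, the sketch below signs a build artifact and verifies it before installation. The Ed25519 key pair here is a stand-in: in real code signing, the public key arrives inside an X.509 certificate issued by a trusted CA, so consumers don’t need a prior key exchange.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical key pair; real code signing uses a certificate issued by
# a CA so consumers can trust the public key without prior exchange.
signing_key = Ed25519PrivateKey.generate()
public_key = signing_key.public_key()

artifact = b"contents of the released build"

# Publisher signs at build time; the signature ships with the artifact.
signature = signing_key.sign(artifact)

# Consumer verifies before install; any modification breaks the check.
try:
    public_key.verify(signature, artifact)
    print("signature valid: artifact is authentic and untampered")
except InvalidSignature:
    print("signature invalid: reject the artifact")
```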

Case study: Truepic and Keyfactor

When the C2PA standard first came about, Truepic decided to incorporate it into their products so that users could cryptographically seal C2PA provenance information into their visual files. In the search for a partner to support that cryptography, Keyfactor emerged as best in class for PKI, with EJBCA providing the certificate infrastructure and SignServer handling signing operations.

Truepic’s Controlled Capture technology and Lens product use EJBCA and SignServer to acquire provenance data and cryptographically sign content to verify its authenticity. Using PKI makes the process more scalable and flexible for users than relying on a blockchain ledger.

Here’s how it works at a high level: at capture time, the content and its provenance data are hashed and signed through SignServer, using certificates issued from EJBCA, so the sealed provenance travels with the file and can be verified by anyone.
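To make the pattern concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption, not Truepic’s actual implementation or SignServer’s API: the capture client hashes the content locally, while a server-side service (the role SignServer plays, with certificates from EJBCA) holds the private key and produces the signature.

```python
import hashlib
import json

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec


class LocalSigningService:
    """Stand-in for a SignServer-style endpoint that holds the private key."""

    def __init__(self):
        # In the real architecture this key never leaves the server side,
        # and its certificate is issued by a CA such as EJBCA.
        self._key = ec.generate_private_key(ec.SECP256R1())

    def sign(self, payload: bytes) -> bytes:
        return self._key.sign(payload, ec.ECDSA(hashes.SHA256()))


def seal_at_capture(media_bytes: bytes, provenance: dict, service) -> dict:
    """Hash content client-side, sign server-side, return a sealed manifest."""
    claim = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "provenance": provenance,  # e.g. capture time, device details
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    # The sealed manifest travels with the file for later verification.
    return {"claim": claim, "signature": service.sign(payload).hex()}


sealed = seal_at_capture(
    b"raw video frames",
    {"captured_at": "2023-09-01T12:00:00Z"},
    LocalSigningService(),
)
print(sealed["claim"]["content_sha256"])
```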

AI is here, and trust and transparency must come with it

Whether we’re ready or not, the age of generative AI is here, and the only way that we can move forward successfully is to focus on trust and transparency.

Fortunately, PKI principles like digital signatures provide the necessary building blocks to foster trust and transparency. The next steps are to embed these principles into new areas – like images and videos – to ensure they get used properly, and educate the public on what to look for to verify authenticity. And when we do that, we can begin to take advantage of all the benefits (and there are many!) that generative AI has to offer.

Watch the full webinar for a deeper look at what it takes and how Truepic is working with Keyfactor to lead the charge.