
5 Tips For Drafting An Ethical Generative AI Policy

Betty Louie, General Counsel, The Brandtech Group

Generative AI has become an invaluable content creation tool for marketers. But while it’s fun to experiment with these tools and watch them generate seemingly intelligent output in seconds, it’s essential to deploy the technology responsibly.

But how can we establish guardrails while supporting creative exploration?

As banal as it sounds, the answer is to create a comprehensive generative AI policy.  

Here are five practical steps to consider when creating such a policy and how to effectively implement it across the enterprise.

1. Define key terms

Start by agreeing on terminology. 

Don’t assume that everyone is familiar with generative AI terms. When employees share a common vocabulary, it eliminates misunderstandings and competing interpretations of key definitions.

Key terms include input, output, generative AI, generative AI tool, generative AI-assisted content … and there are many more. Given the evolving landscape, these definitions will need to be refined and supplemented over time.

2. Create a clear list of do’s and don’ts

The best way to distill complex ideas about ethics is to create a practical list of acceptable and unacceptable behaviors.

Such a list is often framed as an appendix to the generative AI policy, but it’s actually the cornerstone. 


Following a list of do’s and don’ts can help employees respect intellectual property rights in text, imagery, sound and video sourced from third parties.

For example:

Don’t include your company’s or client’s proprietary, confidential or sensitive information in the input or training data. 

Don’t use prompts or input with the intent to recreate existing work. 

Don’t use personally identifiable information in the input. 

Do keep a human in the loop when using any generative AI tool. 

Do keep comprehensive records of data sources, licenses, permissions, inputs and outputs. 

Do be transparent that a generative AI tool created a piece of content. 

3. Communicate approval on the use of tools

It is essential to clearly articulate which tools are approved for companywide use and communicate this to all employees.

Some companies opt to provide no blanket approvals on the use of generative AI tools. Any time a team member wants to use a tool, an approval or steering committee is involved. This is a slow and unwieldy process. 

You don’t want to have a chilling effect on experimentation and innovation. It’s better to encourage play, but within clearly defined ethical and legal parameters. 

At my company, we performed initial due diligence on generative AI solutions and created a green list of approved tools that is continuously reviewed and updated. We have readily accessible forms for teams to complete so that new and potentially useful generative AI tools can be reviewed and green-listed (or not) for use.

If you choose this green-listing approach, share succinct reasons for why one tool is approved and another is not. For example: “This tool is on the red list because we would not own any of the inputs or the outputs.” Or: “This tool is on the red list because it doesn’t have terms and conditions.”

4. Articulate clear ethical guidelines

Generative AI tools are still relatively new, and there are justified concerns about how these solutions are trained and used. Your policy should clearly articulate the moral compass of the company. 

For example:

Don’t use output to create misleading, fraudulent, biased or harmful content. 

Select and use inputs that avoid discriminatory, offensive and harmful content.  

Respect privacy laws and protect the rights of individuals. 

5. Specify governance and oversight

Specify who within the organization is responsible for overseeing your generative AI green list and the deployment of approved tools, who reviews potential new tools and features, and who will be tasked with answering legal questions and handling incident response. Communicate this widely and clearly.

These are just five of the basics to get you started. But remember: Treat your policy as a living document. Generative AI is constantly evolving, and the policy will need to change, too, with frequent reviews and updates.

Once the policy is ready, everyone on the team should be trained on it and have access to it on a company portal. Training should particularly focus on the do’s and don’ts and any green lists and red lists. 

Last but not least, stay open to feedback from your teams. Much like how a generative AI training set needs to include a breadth of sources to be effective, your generative AI policy will benefit from diverse and varied viewpoints.

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Follow The Brandtech Group and AdExchanger on LinkedIn.
