Denver — On June 29, Colorado U.S. Senator Michael Bennet called on the leaders of major technology and generative artificial intelligence (AI) companies – including Meta, Alphabet, Microsoft, Twitter, TikTok, and OpenAI – to identify and label AI-generated content, and to take steps to limit the spread of AI-generated content designed to mislead users.
“Online misinformation and disinformation are not new. But the sophistication and scale of these tools have rapidly evolved and outpaced our existing safeguards,” wrote Bennet in the letter. “In the past, creating plausible deepfakes required significant technical skill; today, generative AI systems have democratized that ability, opening the floodgates to anyone who wants to use or abuse the technology.”
In the letter, Bennet points to several examples of AI-generated content causing alarm and market turmoil, as well as the appearance of AI-generated content in political social media posts. He also cites testimony from OpenAI CEO Sam Altman before the Senate Judiciary Committee identifying AI’s ability to spread disinformation as an area of serious concern, and notes the stakes of allowing misinformation to go unchecked.
“Americans should know when images or videos are the product of generative AI models, and platforms and developers have a responsibility to label such content properly,” wrote Bennet. “Fabricated images can derail stock markets, suppress voter turnout, and shake Americans' confidence in the authenticity of campaign material.”
Bennet acknowledges the initial steps taken by technology companies to identify and label AI-generated content, but highlights that these existing measures are voluntary and can be easily bypassed. He lays out a framework for labeling AI-generated content, and concludes by requesting that the companies provide their identification and watermarking policies and standards, along with a commitment to removing AI-generated content designed to mislead users.
“Continued inaction endangers our democracy. Generative AI can support new creative endeavors and produce astonishing content, but these benefits cannot come at the cost of corrupting our shared reality,” concluded Bennet.
Bennet has repeatedly advocated for digital regulation, youth online safety measures, and stronger safeguards around emerging technologies. Last week, Bennet spoke on the Senate floor to make the case for a new federal body able to regulate digital platforms and AI. Last month, Bennet reintroduced the Digital Platform Commission Act, the first legislation in Congress to create a dedicated federal agency charged with overseeing large technology companies, protecting consumers, promoting competition, and defending the public interest.
In June, Bennet introduced the Global Technology Leadership Act to establish an Office of Global Competition Analysis able to assess how the U.S. fares in key emerging technologies – such as AI – relative to other countries, informing U.S. policy and strengthening American competitiveness. He recently introduced the Oversee Emerging Technology Act and the ASSESS AI Act to ensure government use of AI respects fundamental rights and civil liberties. In May, Bennet joined his colleagues to introduce the REAL Political Ads Act to require a disclaimer on political ads for federal campaigns that use content generated by AI.
The full text of the letter is available below.
Dear Mr. Zuckerberg, Mr. Musk, Mr. Altman, Mr. Chew, Mr. Pichai, Mr. Nadella, Mr. Mostaque, Mr. Holz, and Dr. Amodei:
I write with concerns about your current identification and disclosure policies for content generated by artificial intelligence (AI). Americans should know when images or videos are the product of generative AI models, and platforms and developers have a responsibility to label such content properly. This is especially true for political communication. Fabricated images can derail stock markets, suppress voter turnout, and shake Americans' confidence in the authenticity of campaign material. Continuing to produce and disseminate AI-generated content without clear, easily comprehensible identifiers poses an unacceptable risk to public discourse and electoral integrity.
Online misinformation and disinformation are not new. But the sophistication and scale of these tools have rapidly evolved and outpaced our existing safeguards. In the past, creating plausible deepfakes required significant technical skill; today, generative AI systems have democratized that ability, opening the floodgates to anyone who wants to use or abuse the technology.
We have already seen evidence of generative AI being used to create and share false images. In some instances, these have been relatively benign, such as Pope Francis depicted wearing a large white down jacket. Others are more disturbing. In May, an AI-generated image of a purported explosion at the Pentagon went viral, causing a dip in major stock indices. Fake news accounts recirculated the image alongside real outlets, including RT, a Russian state-backed media organization.
The proliferation of AI-generated content poses a particular problem for political communication. In his recent testimony before the Senate Judiciary Committee, OpenAI CEO Sam Altman identified the ability of AI models to provide “one-on-one interactive disinformation” as one of his areas of greatest concern. We are at the beginning of this era.
In June, the official rapid response Twitter account of Florida Governor Ron DeSantis, a candidate for the 2024 Republican nomination for president, shared images that experts say appear to be AI-generated. Both official and unaffiliated accounts supporting former President Trump have posted AI-generated content targeting his political rivals.
In May, I joined colleagues to introduce the REAL Political Ads Act, which would require a disclaimer on political ads for federal campaigns that use content generated by AI. However, as political media increasingly shifts from regulated television, print, and radio advertising to the free-for-all of social media, broader disclosure requirements must follow.
AI system developers and platforms will have to collaborate to combat the spread of unlabeled AI content. Developers should work to watermark video and images at the time of creation, and platforms should commit to attaching labels and disclosures at the time of distribution. A combined approach is required to deal with this singular threat.
Companies have started taking steps to better identify AI-generated content for users, and non-profit organizations like the Partnership on AI have released suggested guidelines. Microsoft has committed to watermarking AI-generated content, and Google will begin attaching a written disclosure to AI-generated images in Google Images. OpenAI’s DALL-E 2 adds a watermark to the images it generates, and Stable Diffusion embeds watermarks into its content by default. Midjourney, Shutterstock, and Google have committed to embedding metadata indicators in AI-generated content.
However, these policies remain easily bypassed or alarmingly reliant on voluntary compliance. Google’s process for labeling AI-generated images from third-party systems depends on self-disclosure. Stable Diffusion’s open source structure allows users to circumvent the watermarking code. DALL-E 2’s watermarks are inconspicuous and easily removed. And while some platforms, including Meta, Twitter, and TikTok, have existing policies for AI-generated images and video, such content continues to appear on users’ feeds.
Platforms must update their policies for a world where everyone has access to generative AI tools. They should require clear, conspicuous labels for AI-generated video and images, and where users fail to comply, should label AI-generated content themselves. Platforms should consider particular rules for official political accounts, and should release regular reports detailing their efforts to identify, label, or remove AI-generated content.
Similarly, generative AI system developers must scrutinize whether their models can be used to manipulate and misinform, and should conduct public risk assessments and create action plans to identify and mitigate these vulnerabilities. We cannot expect users to dive into the metadata of every image in their feeds, nor should platforms force them to guess the authenticity of content shared by political candidates, parties, and their supporters.
Continued inaction endangers our democracy. Generative AI can support new creative endeavors and produce astonishing content, but these benefits cannot come at the cost of corrupting our shared reality.
To that end, I request answers to the following questions by July 31, 2023:
For generative AI developers:
- What technical standards, features, or requirements do you currently employ to watermark or otherwise identify content created using your systems?
- When were these standards, features, or requirements developed?
- When were these standards, features, or requirements last updated?
- What auditing processes, if any, does your organization have in place to evaluate the effectiveness of these standards, features, or requirements?
- What policies do you currently have in place for users who repeatedly violate a watermarking or identifying requirement, either by removing the identifier or avoiding it in some other way?
- How many accounts, if any, have you suspended or removed for violating a watermarking or identifying requirement?
- What tracking system do you currently have in place, if any, to monitor the distribution of content created using your systems?
- Before deploying a model, what tests or evaluations do you use to estimate potential capabilities relating to misinformation, disinformation, persuasion, and manipulation?
- What processes do you use to estimate risks associated with misinformation, disinformation, persuasion, and manipulation? Under what circumstances would you delay or restrict access to a generative AI system due to concerns about these risks?
- What interoperable standards currently offer the highest degree of provenance assurance?
For social media platforms:
- Will you commit to removing AI-generated content designed to mislead users?
- What technical processes are currently in place to identify AI-generated content?
- How many pieces of AI-generated content did you identify in 2022 and the first quarter of 2023?
  - Of those identified, how many were removed for violating a policy?
  - If removed, what policy did they violate?
  - If not removed, was a label or other clear identifier affixed?
  - If not labeled, please provide a rationale for declining to do so.
- Do you have specific policies in place for AI-generated content posted by an official political campaign account?
  - If so, what are they?
  - If not, describe why not.
- Do you have specific policies in place for AI-generated content related to campaigns and elections?
  - If so, what are they?
  - If not, describe why not.
Sincerely,