(Some) platform providers agree to support US AI initiative, may ‘watermark’ AI-generated content


By Steven Hawley

In an effort “to seize the tremendous promise and manage the risks posed by Artificial Intelligence (AI) and to protect Americans’ rights and safety,” the Biden Administration announced a new initiative “toward (the) safe, secure, and transparent development of AI technology.” The intent is to reduce the likelihood that generative AI platforms could create disinformation or deepfakes that mislead the public, compromise trust, and undermine public safety.


Voluntary participation by seven companies was announced at a White House ceremony on July 21: Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. The announcement came in advance of a forthcoming executive order and an Administration promise to work in support of bipartisan legislation.

Those joining the initiative are committing to principles in three areas:

Product safety: testing their offerings before release and sharing their experiences – both risks and rewards – transparently across governments, civil society and academia

Security: investing in cybersecurity and in safeguards against insider threats that could lead to premature release, and committing to principles of accountability, including third-party evaluation

Trust: providing tools and methods that reduce the risks of fraud and deception, “such as a watermarking system”; reporting in public forums about their systems’ capabilities, limitations and appropriate use; and reducing societal risks that result from bias, discrimination and lax protection of privacy

New meaning for ‘Watermarking’

Anti-piracy advocates react like Pavlov’s dog when the “W-word” (watermarking) is invoked. The Administration’s Fact Sheet offers few specifics, however, other than that the marks should identify content that has been generated artificially and that they will be “robust.”

It’s difficult to say how “robustness” can be achieved, short of modifying the AI-generated content itself. That’s fine if it’s a ‘produced’ piece of content such as video, audio or an e-book, but what about text? Blockchain could be one answer, but can it be mandated by government policy?
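
As an illustration only, here is a minimal sketch in Python of how a blockchain-style answer might look for text (the names and the “example-model” label are hypothetical; no such system has been announced): instead of altering the words, a platform fingerprints each output and records it in an append-only, hash-chained log that anyone can check later. The sketch also exposes the robustness problem the Fact Sheet leaves open: change even one character and the fingerprint no longer matches.

import hashlib
import json
import time

def fingerprint(content: str) -> str:
    # Hash the content after collapsing whitespace, so trivial
    # reformatting does not change the fingerprint.
    normalized = " ".join(content.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

class ProvenanceLog:
    # Append-only log in which each entry chains to the hash of the
    # previous entry (blockchain-style), so past records cannot be
    # silently rewritten.
    def __init__(self):
        self.entries = []

    def record(self, content: str, generator: str) -> dict:
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "content_hash": fingerprint(content),
            "generator": generator,  # hypothetical: which AI platform produced it
            "timestamp": time.time(),
            "prev_hash": prev,
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode("utf-8")
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def was_generated(self, content: str) -> bool:
        return any(e["content_hash"] == fingerprint(content) for e in self.entries)

log = ProvenanceLog()
log.record("A paragraph produced by a generative model.", generator="example-model")
print(log.was_generated("A paragraph produced by a generative model."))  # True
print(log.was_generated("A paragraph produced by a generative model!"))  # False: one edit defeats it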

In the definition of watermarking traditionally used in anti-piracy, media content is modified to embed information that identifies and verifies instances of content and programming suspected of violating copyright or contractual terms, whether in real time or forensically after the fact.

“Robust” means that the watermark will be difficult to compromise and will survive attacks. For video, this means the ability to survive transformations such as conversions between analog and digital, attacks that obscure the watermark (e.g., collusion), geometric transformations, cropping, chroma and luma changes, and other damage.
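
To make that concrete, here is a toy spread-spectrum sketch in Python with NumPy, a classic robustness technique rather than any vendor’s actual method (the key, strength and threshold values are illustrative): a key-derived pseudorandom pattern is added faintly across every pixel, and detection correlates the image against that same pattern. Because the mark is spread over the whole frame and detection subtracts the mean, it survives additive noise and uniform luma shifts; production systems add far more machinery to survive cropping, geometric attack and re-encoding.

import numpy as np

def make_pattern(shape, key):
    # Key-derived pseudorandom +/-1 pattern; only holders of the key
    # can regenerate it for detection.
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(image, key, strength=3.0):
    # Spread the mark faintly across the whole frame: no single region
    # carries the watermark, which is what makes it hard to remove.
    return image.astype(float) + strength * make_pattern(image.shape, key)

def detect(image, key, threshold=1.0):
    # Correlate the zero-mean image with the key pattern. Subtracting
    # the mean makes detection insensitive to uniform brightness shifts.
    pattern = make_pattern(image.shape, key)
    centered = image.astype(float) - image.mean()
    score = float((centered * pattern).mean())
    return score, score > threshold

# Demo: the mark survives additive noise plus a +20 luma shift.
rng = np.random.default_rng(1)
original = rng.integers(0, 256, size=(256, 256)).astype(float)
marked = embed(original, key=1234)
attacked = marked + rng.normal(0.0, 5.0, marked.shape) + 20.0

print(detect(original, key=1234))  # low score: no watermark present
print(detect(attacked, key=1234))  # high score (near 3.0): watermark detected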

Who was missing?

The biggest open question is: “What happens if some players don’t play?” Apple did not join the announcement. Surprisingly, neither did Adobe, which has a vested interest in protecting the vast image and content libraries that it owns and monetizes through licensing.

Apple has not yet announced a generative AI platform for the masses along the lines of Bing, Bard or ChatGPT, although speculation is rife about work that Apple is said to be doing. At the device level, Apple incorporates AI/ML functionality into its consumer chipsets through its Neural Engine, and it unveiled some assistive technologies at its recent Worldwide Developers Conference, but these are focused on user experience, not on content creation.

If a technology company – or a creative enterprise – decides not to participate, what happens? It would be ironic if a rights-owner that used AI as a creative tool were unable to challenge a pirate because it hadn’t signed on to some future AI guidelines.

What does this have to do with piracy?

The most obvious connection with piracy is fraud. If AI-generated material deceives a consumer into using content that the consumer is not licensed to use, or content that is illegal or malicious, damage can result. Another connection is that content generated by an AI engine could incorporate unlicensed material that finds its way to an end user. Is the user liable? Is the AI platform? What happens when AI passes the Turing Test?

Part of a broader strategy

In May 2023, the Administration released an updated National AI R&D Strategic Plan to advance responsible AI; the broader effort also includes the Blueprint for an AI Bill of Rights, NIST’s AI Risk Management Framework, and a roadmap for instituting a National AI Research Resource. The Administration also issued a Request for Information (RFI) seeking input on national priorities for “mitigating AI risks, protecting individuals’ rights and safety, and harnessing AI to improve lives.” A report on the risks and opportunities related to AI in education was released by the US Department of Education. The R&D Strategic Plan also cites active work-in-progress to address national security concerns stemming from AI.

Further reading

FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI. Statements and Releases. July 21, 2023. The White House public affairs office.

FACT SHEET: Biden-Harris Administration Takes New Steps to Advance Responsible Artificial Intelligence Research, Development and Deployment. Statements and Releases. May 23, 2023. The White House public affairs office.

Why it matters

While criticism is inevitable, the Biden Administration deserves credit for its efforts to build some consensus about generative AI. It’s a weighty matter when you consider the implications for policy across multiple Federal agencies at the national level, let alone the challenge of building consensus for an international framework to govern the development and use of AI.

The Administration identified consultations in progress with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.

Any resulting AI policies will have an impact on the productivity of creative professionals, on every level of industry, and on our information society at large.

“These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI – safety, security, and trust – and mark a critical step toward developing responsible AI,” the Biden Administration’s statement said.

The pace of development will only accelerate.
