The Select Committee on Adopting Artificial Intelligence (AI), established by the Australian Parliament by resolution in March 2024 to inquire into the trends, opportunities and impacts arising from the uptake of AI technologies in Australia, released its Final Report on November 26, 2024.
The Committee was directed to pay particular attention to generative AI; the risks and harms stemming from the adoption of AI technologies and international approaches to mitigating them; and ways to foster a responsible AI industry in Australia.
Risks to transparency and copyright
The concept of transparency of AI systems was understood by the Australian committee as the ability to see into an AI system to "understand the nature of the data, connections, algorithms and computations that generate a system's behaviour, including its techniques and logic."
The report referenced Stanford University's Center for Research on Foundation Models' 'Foundation Model Transparency Index,' which assesses models on 100 transparency indicators, split across three categories: upstream (the resources involved in developing a model); the model itself and its properties; and the downstream use of the model. In the most recent (May 2024) edition of the Stanford Index, OpenAI's GPT-4, Google's Gemini and Amazon's Titan received among the lowest scores: 49, 47 and 41 out of 100, respectively. Across all foundation models, the key area of opacity is around data, specifically the presence of copyrighted, licensed or personal information in training datasets.
Note: The Danish Rights Alliance has also researched transparency extensively, specifically with respect to copyrighted materials. Piracy Monitor has run two articles about their efforts (linked below), which reached similarly alarming findings: all but one of the platforms they evaluated had significant transparency concerns.
Risks to privacy
The Attorney-General’s Department (AGD) submission explained that incorporating AI technologies into products and services can amplify privacy risks "through increases in scale, scope, frequency or intensity of personal information handling." The risks cited also include the "inappropriate collection and use of personal information, as well as leakage and unauthorised disclosure or de-anonymisation of personal information."
The committee raised the issue of privacy with the large multinational technology companies developing general purpose AI models. In response to questions asking how they scrape and curate data for their training sets, Meta, Amazon and Google each said they use publicly available information to train their products, and pointed to the robots.txt exclusion protocol as a way for web domain holders to block access to the data scraping process on an opt-out basis.
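For reference, robots.txt is a plain text file placed at the root of a web domain that names crawlers by user agent and lists the paths they are asked not to access. A minimal sketch of a site opting out of AI training crawlers might look like the following; the user-agent names shown are the publicly documented crawler names for OpenAI, Google and Amazon, and are illustrative rather than exhaustive:

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: Amazonbot
Disallow: /

Compliance with robots.txt is voluntary on the part of the crawler, and the directives only affect future crawling; they do not remove material that has already been collected.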
Meta chose not to respond to numerous other important questions about its use of user data to train its AI products. Meta was also asked whether a user of its social platforms in 2007 could have knowingly consented to their content being used to train AI technology that would not exist for more than a decade, to which Meta's Director of Global Privacy Policy responded: "I can't speak to what people did or did not know."
Google’s Product Director for Responsible AI said that "in the context of Google Cloud and Workspace…we promised that by default Google does not use customer data for model-training purposes unless a customer has provided written permission to do so or has opted in." That policy may be the case in Australia, but in the US, users are "opted in" by default and must consciously opt out.
Additional risks
The Law Council of Australia submission noted that other risks may only come to light as the technologies mature and as new technologies enter the market. Another commenter acknowledged the potential for generative AI systems to produce errors in generated results, also referred to as 'hallucinations'.
The Committee noted that there were significant instances of disinformation employed in the US election, and considered it critical that Australia continue to monitor the use and impact of AI-generated deepfakes and content on elections to identify policy and legislative responses that can maintain and bolster trust in democratic processes and institutions, while protecting free speech.
The report also found that the use of AI systems often inadvertently exacerbates issues of bias against population groups and communities that are already marginalised by virtue of sex, gender, class, race or other attribute, including disability.
Methodology
The Australian committee solicited public submissions and received 245 of them. It also conducted six public hearings in Sydney and Canberra between May and September 2024.
The report recognized different forms of AI, which it generally defined as "an engineered system that generates predictive outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives or parameters without explicit programming."
It also recognized emerging and established AI models, including generative AI, large language models, multi-modal foundation models (a type of generative AI that can process and output multiple data types, for example, text, images and audio) and automated decision-making, which is used to make, support, recommend or automate the process of making decisions.
AI-based applications cited by the report included Google and social media advertising algorithms, Netflix recommendations, computer voice recognition using machine learning, computer vision (where computers are able to identify and understand objects and people in images or videos), and non-media-related applications in medicine, aviation and weather forecasting.
Further reading
Select Committee on Adopting Artificial Intelligence (AI). Final Report. Published November 26, 2024. Parliament of Australia
Australia needs an AI law to stop copyright pirates. Article. by Jennifer Dudley-Nicholson. November 26, 2024. The Canberra Times (Australia)
Report: Data transparency and enforcement of copyright by AI model providers found lacking. Article. by Steven Hawley. September 13, 2024. Piracy Monitor
Platform transparency is crucial to rights protection once ingested by AI, says Rights Alliance. Article. by Steven Hawley. November 15, 2024. Piracy Monitor
Why it matters
Australian regulators envision three possible approaches to mandating the guardrails for high-risk AI: adapting existing regulatory frameworks to include the proposed mandatory guardrails; introducing framework legislation, with associated amendments to existing legislation; or pursuing a whole-of-economy approach via the introduction of new, cross-economy and AI-specific legislation.
With the rapid advances and increasing use of AI technology in recent years, governments in Australia and around the world have been developing a range of policy responses seeking to address its very significant potential risks and harms.
Australia already has some safeguards in place for AI, and while regulatory responses to AI are still at an early stage globally, it is not alone in weighing whether further regulatory and governance mechanisms are required to mitigate emerging risks.