AI Act: Wary of limitations, bias, and errors, EC adds researcher guidelines


The European Union’s AI Act, the world’s first legal framework on AI, is designed to ensure that Europeans can trust what AI produces, especially since it is often difficult to discover how or why an AI system produced a decision or prediction, or took a particular action.

To complement the Act, the European Commission, together with European Research Area countries and stakeholders, issued a set of guidelines in March aimed specifically at research.


Key principles of the research guidelines (quoted)

The set of principles framing these guidelines are based on pre-existing relevant frameworks:

Building on the commonalities of the currently emerging guidelines from various stakeholders, the key principles behind these guidelines for the responsible use of generative AI in research are:

  • Reliability in ensuring the quality of research, reflected in the design, methodology, analysis and use of resources. This includes aspects related to verifying and reproducing the information produced by the AI for research. It also involves being aware of possible equality and non-discrimination issues in relation to bias and inaccuracies.
  • Honesty in developing, carrying out, reviewing, reporting and communicating on research transparently, fairly, thoroughly and impartially. This principle includes disclosing that generative AI has been used.
  • Respect for colleagues, research participants, research subjects, society, ecosystems, cultural heritage and the environment. Responsible use of generative AI should take into account the limitations of the technology, its environmental impact and its societal effects (bias, diversity, non-discrimination, fairness and prevention of harm). This includes the proper management of information, respect for privacy, confidentiality and intellectual property rights, and proper citation.
  • Accountability for the research from idea to publication, for its management and organisation, for training, supervision and mentoring, and for its wider societal impacts. This includes responsibility for all output a researcher produces, underpinned by the notion of human agency and oversight.

Further reading

Living guidelines on the responsible use of generative AI in research. PDF document. March 20, 2024. Directorate-General for Research and Innovation, European Commission.

AI Act. Overview of the legal framework and links to supporting documentation. Accessed March 20, 2024. European Commission.

Why it matters

In its introduction, the Guidelines document positions generative AI as a tool that opens great opportunity. “However, it also harbours risks, such as the large-scale generation of disinformation and other unethical uses with significant societal consequences. Research is one of the sectors that could be most significantly disrupted by generative AI. AI has great potential for accelerating scientific discovery and improving the effectiveness and pace of research and verification processes.”

The authors are concerned that a combination of compromised research practices, proprietary tools and processes, a lack of open access to data, and concentration of ownership can produce flawed results. Furthermore, AI technology is immature and rapidly changing, which can also introduce unintended – or intentional – errors.
