IMDA identifies six key risks arising from the use of generative AI and proposes action points to mitigate their impact and to develop trusted AI use for the public good.

IMDA Highlights 6 Key Risks of Generative AI in Discussion Paper

Date
June 9, 2023
Author
OrionW

The Info-communications Media Development Authority (IMDA) identifies six key risks of developing and using generative AI in the discussion paper Generative AI: Implications for Trust and Governance published on 7 June 2023 (Discussion Paper).  The Discussion Paper proposes practical approaches that public and private policymakers can adopt to address those risks and foster trusted and responsible generative AI use.

Key Risks of Generative AI

While acknowledging the benefits of generative AI, the Discussion Paper warns of the following attendant risks of generative AI:

  • Mistakes and "hallucinations" – Generative AI can produce erroneous, and sometimes fictitious and misleading, content that convincingly appears to be true.
  • Privacy and confidentiality issues – In some cases, generative AI can memorise its training datasets and other information fed to it.  Therefore, it is possible for personal data or proprietary information to be "leaked".
  • Mounting disinformation, toxicity and cyber-threats – Currently, it is very hard to detect generative AI-created content and distinguish it from content prepared by a human.  Therefore, generative AI could be used to propagate fake news, deepfakes and other inflammatory discourse and to cause harm.
  • Copyright challenges – Generative AI can replicate an original artwork or literary piece, or can assist in art forgery.  Also, generative AI could be trained using copyrighted material.  Current copyright laws do not clearly address whether these acts may fall within appropriate fair use exceptions or who owns copyright over the AI-generated content.
  • Applications reinforcing embedded biases – Generative AI can acquire the inherent biases found in the training datasets on which it was developed.  Therefore, if not dealt with appropriately, these biases can be reinforced when they are reflected in downstream applications.
  • Difficulty of good instructions – Generative AI operates based on objectives defined by its designers and may blindly follow those objectives.  If objectives are not well articulated, adverse outcomes may result.  For example, a generative AI model that is trained to provide helpful responses may prioritise that objective even when doing so produces harmful results.

Addressing Key Risks

The Discussion Paper calls on various stakeholders, including the government and the private sector, to collaborate to address generative AI's risks.  In particular, the Discussion Paper proposes a practical, risk-based and accretive approach to address generative AI-related risks and to promote a trusted AI ecosystem.

The approach covers six dimensions:

  • Accountability – Accountability for AI model development should be shared by its various stakeholders, albeit with clearly delineated responsibilities.
  • Data use – There should be guardrails to address transparency, privacy, copyright and bias issues when using data for generative AI purposes.
  • Model development and deployment – Designers should be transparent regarding AI model development and deployment.  Policymakers should develop standards for evaluating AI models, such as safety, performance and efficiency metrics, without stifling innovation.
  • Enhancing evaluation and assurance – Independent assessments, coupled with evaluation standards and “crowding in” expertise, will enable the development of comprehensive processes for improved safety.
  • Safety and alignment research – Research on safety and alignment should be accelerated to ensure that AI models can be controlled and are aligned with human values.
  • Generative AI for public good – The public should be educated on responsible use, skills training should be enhanced to reduce the impact of AI on jobs and guidelines on safe AI use should be developed to promote AI adoption by organisations while mitigating its risks.

Key Takeaway

While there are many benefits to using generative AI, users should be mindful of its attendant risks.  Therefore, users should carefully define instances where generative AI can be used and have a policy in place for generative AI use, where applicable.

For More Information

OrionW regularly advises clients on artificial intelligence matters.  For more information about responsible development and deployment of artificial intelligence, or if you have questions about this article, please contact us at info@orionw.com.

Disclaimer: This article is for general information only and does not constitute legal advice.
