Various Singapore regulators have issued guidelines for organisations intending to develop and/or use AI-powered software. While not legally binding, these guidelines complement existing legislation in promoting responsible AI development.

Overview of Singapore's AI Regulatory Landscape

Date
March 12, 2024
Author
OrionW

While there is no general legislation governing the development and use of AI in Singapore, the Infocomm Media Development Authority (IMDA), the Personal Data Protection Commission (PDPC), the Monetary Authority of Singapore (MAS) and the Intellectual Property Office of Singapore (IPOS) have issued various guidelines on the topic to promote responsible, safe and values-based AI.  Although not binding, these guidelines give insight into Singapore regulators’ possible approach to regulating AI development and use, and organisations are urged to follow them where applicable.

Currently No Overarching AI Regulation

Given its desire to foster AI innovation, and the current lack of effective technical tools for regulatory implementation, the Singapore government is not presently seeking to enact general AI legislation.  Rather, Singapore intends to rely on existing laws, such as data protection, copyright and other sectoral legislation, to regulate AI at this time.  That said, the guidelines discussed below are intended to complement existing laws and to lay the groundwork for the possible enactment of general AI regulations in the future.

Model Artificial Intelligence Governance Framework

First unveiled on 23 January 2019 by the IMDA and the PDPC, the Model Artificial Intelligence Governance Framework (Governance Framework) sets out best practices for organisations adopting AI at scale, based on the guiding principles that (a) the AI decision-making process should be explainable, transparent, and fair, and (b) AI solutions should protect the interests of humans.

The Governance Framework focuses on 4 main areas:

  • establishing internal governance structures and measures, such as providing for well-defined and delineated responsibilities in AI deployment strategies;
  • determining the extent of human involvement and oversight in AI-augmented decision-making – i.e., whether humans can supervise or take over the decision-making process;
  • developing operations management processes, such as minimising bias in datasets; and
  • interacting transparently with stakeholders, such as through disclosure to customers and other end-users.

The Governance Framework is intended to apply broadly – it is agnostic as to algorithm, technology, sector, scale and business model – and can be adapted to suit an organisation’s needs.

(See also our article on the 6 key risks of generative AI examined in IMDA’s discussion paper Generative AI: Implications for Trust and Governance.)

Proposed Model AI Governance Framework for Generative AI

On 16 January 2024, IMDA and the AI Verify Foundation issued the Proposed Model AI Governance Framework for Generative AI (Proposed Generative AI Framework).  The framework examines the following areas, which together are intended to support a trusted AI ecosystem:

  • Accountability – Incentivising different players in the AI system development lifecycle to be responsible to end-users.
  • Data – Ensuring data quality and addressing potentially contentious training data in a pragmatic way, as data is core to model development.
  • Trusted Development and Deployment – Adopting industry best practices in AI model development and evaluation and enhancing transparency on safety measures.
  • Incident Reporting – Implementing an incident management system for timely notification, remediation and continuous improvements, as no AI system is foolproof.
  • Testing and Assurance – Providing external validation through third-party testing and developing common AI testing standards for consistency.
  • Security – Addressing new threat vectors that arise through generative AI models.
  • Content Provenance – Using technical solutions to enhance content origin transparency for more informed end-users.
  • Safety and Alignment R&D – Accelerating R&D through global cooperation among AI Safety Institutes to improve model alignment with human intention and values.
  • AI for Public Good – Leveraging AI for the public good by broadening access, facilitating public sector adoption, upskilling workers and promoting sustainable AI system development.

Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector

In 2018, MAS set out the FEAT principles to guide financial institutions (FIs) using AI and data analytics (AIDA) in decision-making when providing financial products and services.  Together with FIs’ own internal governance frameworks, the principles are intended to address the risk of systematic misuse and to strengthen trust and confidence in AIDA use.

Briefly, the FEAT principles are as follows:

  • Fairness: systemic disadvantages, unjustified use of personal information, and unintentional biases should be avoided;
  • Ethics: AIDA use should meet the same ethical standards as decisions by humans;
  • Accountability: clear internal and external responsibility for AIDA use should be established; and
  • Transparency: AIDA use should be disclosed, and the data used for, and the consequences of, AIDA-based decisions should be clearly explained.

In June 2022, MAS issued an information paper on the implementation of the fairness principle, following a thematic review of selected FIs’ roll-out of the FEAT principles.  In discussing its findings, MAS noted that FIs may calibrate their FEAT compliance based on their assessment of their AI models, and emphasised that fairness assessments should be supported by justifications that can be validated.

Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems

On 1 March 2024, the PDPC issued the Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems (AI Advisory Guidelines).  While not legally binding, the AI Advisory Guidelines clarify how the Personal Data Protection Act 2012 applies to the development, testing, monitoring and deployment of AI systems, as well as to their procurement.

Notably, the AI Advisory Guidelines recognise that consent may not always be required in processing personal data in relation to AI systems (see also our article on the Proposed AI Advisory Guidelines).

IP and Artificial Intelligence Information Note

IPOS has released an information note on how Singapore’s intellectual property (IP) regime can protect AI inventions.  For example, software utilising AI tools can be patented or kept as a trade secret, and AI algorithms can be protected by copyright.  However, the information note does not address the IP protection available to AI-generated inventions or creations.

Conclusion

Organisations should develop and use AI in accordance with Singapore regulators’ guidelines: although not legally binding, the guidelines reflect Singapore’s likely approach to future AI regulation and help mitigate AI-related risks.  Organisations should also keep abreast of changes to Singapore’s AI regulatory landscape, given the rapid pace of technological and legal developments in this area.

For More Information

OrionW regularly advises clients on artificial intelligence matters.  For more information about responsible development and deployment of artificial intelligence, or if you have questions about this article, please contact us at info@orionw.com.

Disclaimer: This article is for general information only and does not constitute legal advice.
