Various Singapore regulators have issued guidelines for organisations intending to develop and/or use AI-powered software.  While not legally binding, these guidelines complement existing legislation in promoting responsible AI development.


Overview of Singapore's AI Regulatory Landscape

Date: March 12, 2024 (Updated: July 1, 2024)
Author: OrionW

While there is no general legislation governing the development and use of AI in Singapore, the Infocomm Media Development Authority (IMDA), the Personal Data Protection Commission (PDPC), the Monetary Authority of Singapore (MAS), the Ministry of Health (MOH) and the Intellectual Property Office of Singapore (IPOS) have issued various guidelines on the topic to promote responsible, safe and values-based AI.  Although not binding, these guidelines give insight into Singapore regulators’ possible approach to regulating AI development and use, and organisations are urged to follow them where applicable.

Currently No Overarching AI Regulation

Given its desire to foster AI innovation, and the lack of effective technical tools for regulatory implementation, the Singapore government is not currently pushing to develop general AI regulations.  Rather, Singapore intends to rely on existing laws, such as data protection, copyright and other sectoral legislation, to regulate AI at this time.  That said, the guidelines discussed below are intended to complement existing laws and to lay the groundwork for the possible enactment of general AI regulations in the future.

Model Artificial Intelligence Governance Framework

First unveiled on 23 January 2019 by the IMDA and the PDPC, the Model Artificial Intelligence Governance Framework (Governance Framework) sets out best practices for organisations adopting AI at scale, based on the guiding principles that (a) the AI decision-making process should be explainable, transparent, and fair, and (b) AI solutions should protect the interests of humans.

The Governance Framework focuses on 4 main areas:

  • establishing internal governance structures and measures, such as providing for well-defined and delineated responsibilities in AI deployment strategies;
  • determining the extent of human involvement and oversight in AI-augmented decision-making – i.e., whether humans can supervise or take over the decision-making process;
  • developing operations management processes, such as minimising bias in datasets; and
  • ensuring transparent stakeholder interaction, such as through disclosures to customers and other end-users.

The Governance Framework is intended to apply broadly: it does not discriminate across algorithms, technologies, sectors, scales or business models, and can be adapted to suit an organisation’s needs.

(See also our article on the 6 key risks of generative AI examined in IMDA’s discussion paper Generative AI: Implications for Trust and Governance.)

Model Governance Framework for Generative AI

On 30 May 2024, IMDA and the AI Verify Foundation issued the Model AI Governance Framework for Generative AI (Generative AI Framework).  The framework addresses the following nine dimensions to foster a trusted AI ecosystem:

  • Accountability – Incentivising different players in the AI system development life cycle to be responsible to end-users.
  • Data – Ensuring data quality and addressing potentially contentious training data in a pragmatic way, as data is core to model development.
  • Trusted Development and Deployment – Enhancing transparency around baseline safety and hygiene measures, based on industry best practices in development, evaluation and disclosure.
  • Incident Reporting – Implementing an incident management system for timely notification, remediation and continuous improvements, as no AI system is foolproof.
  • Testing and Assurance – Providing external validation and added trust through third-party testing and developing common AI testing standards for consistency.
  • Security – Addressing new threat vectors that arise through generative AI models.
  • Content Provenance – Providing transparency about where content comes from, as a useful signal for end-users.
  • Safety and Alignment R&D – Accelerating R&D through global cooperation among AI Safety Institutes to improve model alignment with human intention and values.
  • AI for Public Good – Harnessing AI to benefit the public by democratising access, improving public sector adoption, upskilling workers and developing AI systems sustainably.

Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector

In 2018, MAS set out the FEAT principles to guide financial institutions (FIs) using AI and data analytics (AIDA) in decision-making when providing financial products and services.  Together with FIs’ own internal governance frameworks, the principles are intended to address the risk of systematic misuse and to boost trust and confidence in AIDA use.

Briefly, the FEAT principles are as follows:

  • Fairness: systemic disadvantages, unjustified use of personal information, and unintentional biases should be avoided;
  • Ethics: AIDA use should meet the same ethical standards as decisions by humans;
  • Accountability: clear internal and external responsibility for AIDA use should be established; and
  • Transparency: AIDA use should be disclosed, and the data used for, and the consequences of, AIDA-based decisions should be clearly explained.

Artificial Intelligence in Healthcare Guidelines

In October 2021, the Ministry of Health, the Health Sciences Authority and Integrated Health Information Systems published the Artificial Intelligence in Healthcare Guidelines (AIHGle) to complement the existing regulatory framework, including the regulations applicable to AI medical devices (AI-MD), and to provide a set of good practices for developers and implementers of AI in the healthcare setting.

Below are some of the AIHGle’s key recommendations for developers and implementers of AI-MDs:

Developers

Design:
  • Seek clinical and end-user input when designing and developing AI-MDs
  • Reduce and rectify unintended biases
  • Design to prevent, detect, respond to and recover from cybersecurity risks
  • Demonstrate the effectiveness of the AI-MD

Build:
  • Adopt existing regulatory guidelines and industry best practices in developing AI-MDs
  • Ensure all changes to the AI-MD are properly documented
  • Have self-validation mechanisms to detect anomalous performance

Test:
  • Periodically evaluate and validate the AI-MD’s performance to ensure it meets the clinical practice baseline
  • Validate results through peer reviews
  • Clearly document the intended use of the AI-MD in clinical workflows

Implementers

Use:
  • Exercise clinical governance and oversight over the adoption and implementation of AI-MDs to ensure responsible and safe implementation
  • Seek approvals from organisational leadership and document the decision to implement the AI-MD
  • Introduce appropriate oversight based on the intended use, workflows and clinical context
  • Ensure end-users are clearly informed they are interacting with an AI-MD so they can make informed decisions

Monitor and Respond:
  • Monitor AI-MD performance post-deployment
  • Establish appropriate escalation pathways if the AI-MD’s performance falls below the deployment baseline
  • Have processes to receive, respond to and investigate any reports of adverse events resulting from AI-MD use

Review:
  • Undertake ad-hoc reviews of any errors resulting from AI-MD use
  • Conduct regular reviews to ensure the AI-MD continues to have clinical relevance
  • Perform maintenance on the AI-MD at least once a year to ensure continued functionality

Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems

On 1 March 2024, the PDPC issued the Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems (AI Advisory Guidelines).  While not legally binding, the AI Advisory Guidelines clarify how the Personal Data Protection Act 2012 applies to the development, testing, monitoring, deployment and procurement of AI systems.

Notably, the AI Advisory Guidelines recognise that consent may not always be required in processing personal data in relation to AI systems (see also our article on the Proposed AI Advisory Guidelines).

IP and Artificial Intelligence Information Note

IPOS has released an information note on how Singapore’s intellectual property (IP) regime can protect AI inventions.  For example, software utilising AI tools can be protected by patents or treated as trade secrets, and AI algorithms can be protected by copyright.  However, the information note does not discuss IP protections for AI-generated inventions or creations.

Conclusion

Organisations should develop and use AI in accordance with Singapore regulators’ guidelines: although the guidelines are not legally binding, they reflect Singapore’s likely approach to future AI regulation and are helpful in mitigating AI-related risks.  Organisations should also keep abreast of changes in Singapore’s AI regulatory landscape, given the rapid pace of both technological and legal developments in this area.

For More Information

OrionW regularly advises clients on artificial intelligence matters.  For more information about responsible development and deployment of artificial intelligence, or if you have questions about this article, please contact us at info@orionw.com.

Disclaimer: This article is for general information only and does not constitute legal advice.
