Introduction

The development of artificial intelligence (AI) has increasingly become a race among countries. With the growing strategic rivalry between China and the US, AI has become the new frontier of China-US competition, with the winner expected to wield significant power in defining and dominating the world economically and geopolitically (Cheng and Zeng, 2022; Khanal et al., 2024).

Owing to their large domestic and external markets, unparalleled access to financial resources, and large pools of skilled labor, both countries have led AI research and education and have invested billions of dollars in AI commercialization. However, for countries with limited public resources, small domestic markets and limited access to global knowledge networks, successfully mobilizing AI has become an extremely challenging undertaking. In this article, we outline the crucial challenges that plague most countries in their quest to remain at the forefront of AI development and commercialization through an in-depth study of Singapore.

Despite being a small nation, Singapore is arguably one of the most exciting successes of AI development and governance. A study by the Oliver Wyman Forum placed Singapore as the most competitive large city in its AI Readiness Index (OWF, 2022). Similarly, a study by Oxford Insights placed the Singapore government second amongst 160 countries in its Government AI Readiness Index (Nettel et al., 2022). The rankings consistently commend Singapore for its overall planning, vision, and supporting infrastructure. By exploring the case of Singapore, a small island state with very limited resources and a very small domestic market, caught in the middle of the AI competition between the two superpowers, we show how countries other than the US and China can overcome these challenges to emerge as a global destination for research and development (R&D) and a hotbed of experimental commercialization of AI.

This paper explores the role of policymaking in Singapore’s AI success. In doing so, it answers three primary questions: A) What are the various risks associated with AI in a technology-aspirant country? B) What role has the government played in mitigating those risks? C) What lessons can governments worldwide draw for the governance of emerging technologies?

Conceptual framework

All emerging technologies, including AI, carry various risks. We utilize the classification from Li et al. (2021), which divides risks into six dimensions (Table 1). This highly cited framework was developed through a comprehensive review of the literature on the varied risks associated with technology. As per the framework, market risks arise in poorly formed markets, where emerging technologies face inadequate demand and/or poor supply conditions. Technological risks are specific to and inherent in the technology itself; in the case of AI, safety, privacy, cybersecurity, and the ascertainment of liability have been highlighted as important technological risks (Taeihagh, 2021; Taeihagh and Lim, 2019; Tan and Taeihagh, 2021). While environmental risks pertain to the potential damage an emerging technology might inflict on the environment, organizational risks relate to the government’s ability to understand, govern, and potentially implement such technologies within its own domain. Social risks are the potential social harms such technologies might cause; for AI and AI-based systems, unequal access to skilling and benefit sharing, large-scale unemployment, and potential discrimination have been acknowledged as such risks. Finally, AI poses at least three security risks: a) malicious use of AI; b) use of AI in defense; and c) geopolitical risks of AI (see the supplemental file for greater detail on each of these risks).

Table 1 Risks associated with the governance of AI.

We argue that risks associated with AI follow a temporal logic, i.e., different risks manifest at different points of technological development. The technology life-cycle hypothesis argues that all new technologies are cyclical and pass through four major stages: A) emergence of new technology; B) era of ferment; C) early maturity; and D) era of incremental change, after which we notice another technological discontinuity (Anderson and Tushman, 1990). Given the stage of development, technologies embody different risks and, therefore, require different governance strategies.

Figure 1 shows the temporal dimension of risks associated with each stage of technological development. At the earliest stages (technological discontinuity and the era of ferment), emerging technologies are associated with a high degree of uncertainty and a lack of exploratory capacity. Purposeful integration of emerging technologies requires enabling academia and industry to explore and experiment with them. As Fig. 1 shows, during these stages, a lack of adequate interest from these actors implies the existence of supply-side market risks (Smith and Raven, 2012). Policies during these phases aim to articulate long-term applications and strategic direction and to ensure knowledge accumulation (Jacobsson and Bergek, 2004; Suurs and Hekkert, 2009). As a technology matures towards a dominant design, supply-side capacities improve; simultaneously, uncertainty associated with the design and use of the technology declines, and the risks it carries become more apparent. However, improvements in supply-side capacity do not automatically correspond with a rise in demand for the technology (Smith and Raven, 2012; Frishammar et al., 2015), which requires generating confidence in the applicability and use of such technologies. The emergence of a “dominant” technological form also exposes the lack of capacity within public organizations to understand and manage it (organizational risks). At the same time, governments need to ensure the continuity of exploration and experimentation (Taeihagh, 2021; Taeihagh and Lim, 2019). Finally, as the technology becomes “mainstream” and reaches the period of incremental change, governments should be mindful of the emerging social and environmental risks it poses. Moreover, for highly emergent technologies such as AI, security risks become increasingly prominent. Policies at this stage are designed to mitigate such risks.

Fig. 1
figure 1

Risks associated with various stages of technological development.

Method

We constructed an original dataset of the AI-related policy mix of the Singapore Government, where a policy mix refers to cases in which policymakers use bundles of policy instruments expected to attain their policy objectives (Li and Taeihagh, 2020). Based on an extensive review of news reports, historical archives, government websites and government reports, we identified policies implemented by the Singapore government to promote AI. The news reports were obtained through a search of the Factiva database, provided by Dow Jones & Company. We restricted ourselves to English-language articles and identified all media articles on the topic of artificial intelligence in Singapore. We used the keywords “artificial intelligence” AND “government” to identify all English-language newspaper articles and other business intelligence reports published in Singapore between 1980 and 2021. The search yielded 6271 articles, of which 306 contained relevant information on policies undertaken by the Singapore government. Additionally, individual government websites, such as the webpages of various ministries and government agencies, were visited to identify any existing or past policies that involved promoting or using artificial intelligence. The first author was responsible for the initial screening process, while the first and second authors were responsible for initial data extraction.

All three authors agreed upon the framework and the coding procedure prior to coding. The articles extracted from the ministry websites and Factiva were hand-coded. The first author initially coded 25 per cent of the articles, after which the results were discussed among the three authors. After the first author’s initial round of coding, the second author examined and verified all the coded articles. Discrepancies between the two authors were discussed and settled by all three authors. During coding, the publication dates were extracted along with the year of introduction of each policy, based on the information provided in the news articles. Where the date of introduction of a policy was not clearly specified, the Google search engine was used to identify its year of commencement. A similar approach was used to identify the government and non-government agencies involved in implementing the policies. The OECD AI policy instruments framework was utilized to divide S&T policies into one or more of the four categories of policies (AI enablers, financial support, governance, and guidance) and the 26 policy instruments identified by the OECD. The policy instruments were also classified based on the risks they addressed as per the risk framework. Two authors coded the policy instruments, which were further examined and verified by the third author. The final database contains 261 different policy instruments, categorized by instrument type and the risks they address.
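The structure of the resulting database can be illustrated with a minimal sketch. The field names and example codings below are our own illustrations, not the actual coding schema; each record carries an instrument's OECD category and the risks it addresses, from which summary counts per category and per risk can be tallied:

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical record structure for one coded policy instrument.
# Field names and codings are illustrative, not the study's schema.
@dataclass
class PolicyInstrument:
    name: str
    year: int
    oecd_category: str                      # one of the four OECD categories
    risks_addressed: list = field(default_factory=list)

# Two policies from the article, with illustrative (assumed) codings.
instruments = [
    PolicyInstrument("Model Framework for AI Governance", 2019,
                     "Governance", ["technological", "social"]),
    PolicyInstrument("Productivity Solutions Grant", 2018,
                     "Financial support", ["market-demand"]),
]

# Tally instruments per OECD category and per risk addressed.
by_category = Counter(p.oecd_category for p in instruments)
by_risk = Counter(r for p in instruments for r in p.risks_addressed)

print(by_category)
print(by_risk)
```

Because an instrument may address several risks, the risk counts can sum to more than the number of instruments, which is why the article reports categorizations rather than a simple partition.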

AI development in Singapore and government policies

Singapore’s strategic approach to AI development

Over the years, Singapore’s government has introduced several national strategies and plans to promote the AI ecosystem (Fig. 2). The foundation was laid by earlier efforts to bring about whole-of-nation transformation in ICT and digitization. Since the 1980s, long-term plans such as the National Computerization Plan and the Civil Service Computerisation Programme have been implemented and updated to digitize the economy as well as the public sector. These early initiatives were crucial in integrating digital services within the government and the private sector. Early plans, including the National Computerization Plan (1980), the National IT Plan (1986), the IT2000 Masterplan (1992), the Infocomm 21 Masterplan (2000), Connected Singapore (2003) and Intelligent Nation 2015 (2006), all aimed at improving access to ICT infrastructure and services and laying the foundation for the development of ICT-based industries (including AI) in Singapore. Although computerization of the government formed an essential component of these strategic plans, the government also introduced several parallel initiatives, such as the Civil Service Computerisation Programme (1981), the e-Government Action Plan (2000), iGov (2010) and eGov (2015), to ensure that the government’s ICT literacy, use and service delivery remained on par with, if not ahead of, the private sector (refer to the supplementary material for details about the evolutionary dimension of Singapore’s policy).

Fig. 2
figure 2

Policies implemented by the Singapore government to tackle risks associated with AI development over time.

In the early 2010s, significant breakthroughs were made in AI (Hinton et al., 2012; Le, 2013). The Singapore government made the necessary adjustments to prepare for the emergence of AI. The Smart Nation Initiative (2014) formed the cornerstone of policy interventions aimed at promoting AI (among other technology initiatives) in Singapore. The Smart Nation Initiative (SNI) aims to utilize emerging technologies to address the challenges of Singapore’s urban environment. While several national projects have been launched under the SNI, the Smart Nation Sensor Platform (SNSP), Smart Urban Mobility (SUM), and Punggol Smart Town involve the use of AI. The SNSP uses IoT devices in public spaces to design solutions that rely on accurate real-time data. Similarly, the SUM project includes the potential use of self-driving vehicles for urban mobility, and the Smart Town is envisioned as a high-tech industrial district with state-of-the-art infrastructure that aims to attract enterprises specializing in AI.

The goal of the SNI to digitize government, economy, and society has meant that the government has introduced a wide variety of instruments to promote the sector across multiple domains. The strategic focus on AI in the SNI was also reflected in other important long-term strategic plans. The five-year Research, Innovation and Enterprise 2020 Plan (RIE 2020) prioritized the use of AI in urban mobility, healthcare, and service productivity. Likewise, the Infocomm Media 2025 Masterplan prioritized the use of data analytics to develop AI and AI-based autonomous systems.

To promote R&D, programs such as the National Robotics Program were launched, and various research centers were established. Significant attention was paid to improving the capacity of the private sector. A major thrust for intervention in the private market came from the Industry Transformation Maps (ITMs). The ITMs were introduced under another strategic plan: the $4.5 billion program is designed to provide an integrated and systematic framework for developing 23 industries under six clusters. Each industry map contains a specific vision and strategies for that industry. While AI is not one of the 23 industries under the ITMs, it features prominently in several industry-specific ITMs as an emerging technology with promising future solutions.

The growing use and maturity of the technology led the government to take a more hands-on approach to governing AI. As such, Singapore introduced the Model Framework for AI Governance (MAGF) in 2019. The framework, which was subsequently revised in 2020, is non-binding guidance for the private sector. The MAGF highlights ethical issues that might arise from the adoption of AI and provides guidance for addressing some of its ethical, technological, and social risks. Other soft and ancillary regulations were introduced to support the development of the sector, including the Trusted Data Sharing Framework (TDSF), the Cybersecurity Act (CSA), the Safer Cyberspace Masterplan 2020, the Fairness, Ethics, Accountability and Transparency (FEAT) principles, and the revised Personal Data Protection Act.

Besides the regulatory framework, a sequence of sectoral promotion strategies was announced. The National AI Strategy 2030 was launched in 2019 and aimed to establish Singapore as the global hub for developing trustworthy AI solutions. The Strategy targeted nine domains as priority sectors for AI deployment and kickstarted five national AI projects and five areas of intervention. The nine domains included transport and logistics, manufacturing, finance, safety and security, cybersecurity, smart city and estates, healthcare, education and government. The five national AI projects included intelligent freight planning, municipal services, chronic disease prediction and management, personalized education, and border clearance operations. Similarly, the five areas of intervention included creating a triple-helix partnership between universities, industry and government, developing AI talent, developing secure and high-quality data structures, developing trust in AI systems, and promoting international collaboration. Two other policies, the National AI Programme in Finance and the National AI Programme in Government, were also introduced to support the National AI Strategy. The two programs aim to increase the adoption of AI through various smaller sub-initiatives that improve AI capabilities and the regulatory regimes needed to integrate such systems.

More recently, in 2023, the Singapore government updated its National AI Strategy (NAIS 2.0) with the ambition to scale up AI. The Strategy envisions Singapore emerging as a potential global leader in select sectors and targets three systems for intervention. The agency-centric approach includes interventions to support the triple-helix actors, and the workforce-centric approach includes supporting talent generation and attraction in the field of AI. Finally, the infrastructure-centric approach involves providing adequate compute, data and regulatory support. From a functional perspective, the Strategy envisions AI use at multiple levels: at the application level (across various economic domains), at the scientific level (through enhancing research productivity), and through pioneering research in specific AI systems, namely reasoning AI, resource-efficient AI, and responsible AI.

These efforts have propelled Singapore to quickly emerge as one of the global leaders in AI development. Over the past few years, the adoption of AI was further boosted by the Covid-19 pandemic. It was reported that almost half of IT companies in Singapore had quickened the roll-out of AI tools following the Covid-19 pandemic (Tham, 2021).

Singapore’s AI risk governance

This section investigates Singapore’s approach to addressing the seven types of risks associated with AI.

Market risks—demand

Singapore faced the dual problems of an underdeveloped and small domestic market for AI. Edler (2006) identified three direct interventions through which governments intervene on the demand side: a) public procurement of technology, b) demand-based incentives, and c) informational campaigns. Singapore has intervened in the market through all three measures. Public procurement of tech-based solutions has been a historical feature of Singapore’s industrial policy (Kit, 2021). For instance, the SNSP involves the installation of IoT-based sensors across the island; both the sensor hardware and the software that establishes communication between the sensors have been procured from local firms (the Lamppost-as-a-Platform project). This is not an isolated case. Software for facial recognition in parliament, a personalized learning platform for students, and a technical platform that can predict financial risks for financial institutions (NOVA!) were all developed through the procurement process.

Generating demand for AI in the private sector has also been a government priority. Historically, the country has provided several incentives for local companies to implement some form of automation, including tax incentives, direct grants, and in-house R&D support. In recent years, several new policies have been launched to further bolster this support. These include government programs such as the AI Business Partnership Program, Enterprise Development Grant, Productivity Solutions Grant, SMEs Go Digital Program, Start-up SG Equity, the Artificial Intelligence and Data Analytics (Aida) program, and the Automation Support Package scheme. Singapore has also emphasized a long-term sectoral transformation approach. In 2016, the government launched the Industry Transformation Maps (ITMs), which provided roadmaps for transforming 23 industrial sectors that together contributed 80% of the country’s GDP. Several ITMs, such as those for education, professional services, electronics, the commercial media industry, and land and sea transport, have identified AI as a key technology for sectoral transformation. Beyond the ITMs, soft support such as the Operation & Technology Roadmap, the Smart Industry Readiness Index, and the Implementation and Self-Assessment Guide (ISAGO) has been developed for businesses to assess their readiness and to examine the alignment of their practices with the MAGF. Various government agencies have provided advisory services, including the Go Business Platform, the A*STAR Collaborative Commercialization Marketplace, the Chief Technology Officer-as-a-Service in the Infocomm Media Development Authority (IMDA), and the SME Digital Tech Hub. Demonstration zones (Model Factory) showing the application of autonomous systems in a factory setting have also been initiated.
More recent initiatives, such as the AI Trailblazers (in partnership with Google), aim to stimulate the use of AI within organizations by providing AI toolsets for free to interested companies and accelerating their use through an innovation sandbox.

Singapore has also deployed a variety of informational campaigns to raise awareness of AI amongst researchers and end users. These include organizing expert-led conferences, hosting competitions and awards, teaming up with news media to produce informational content, and organizing exhibitions and symposiums.

Beyond the three interventions, Singapore has also made efforts to expand the demand for its services in external markets. Singapore has partnered with several Chinese cities, such as Chengdu, Nanjing, and Chongqing, to set up smart cities and business parks that welcome Singapore’s technological investments and products. The country has also signed digital agreements with several countries, including Chile, New Zealand, South Korea, and the UK, that facilitate digital trade. These agreements also form part of Singapore’s broader economic diplomacy, which includes being at the forefront of institutionalizing regulations on AI that ensure free trade and enable its firms to exploit international markets.

Market risks—supply

Supply-side interventions have also been key components of Singapore’s AI policy mix. These interventions can be classified into two categories: a) human resource development and b) enhancement of research capacity.

The earliest initiatives on the governance of AI involved interventions in education, research and development. During the 1980s, the National University of Singapore (NUS) and Nanyang Technological University (NTU) established the Department of Information Systems and Computer Science and the School of Applied Sciences, respectively, which engaged in research on natural language processing and neural network computing.

This focus on training students and professionals was complemented by organizing conferences, seminars, competitions, and awards at various levels to reward learning, research and exploration. Today, initiatives to train qualified human resources start at the school level (changes in the curriculum as well as the introduction of programs like AI for Kids, AI for Students, Code for Fun and the Code in the Community Programme) and extend to the university level through the provision of scholarships (the A*Star Scholarship or the Smart Nation Scholarship). The workforce is provided with financial support for online education. To meet the demand for human resources, the government has also introduced programs that allow companies and universities to recruit talent from abroad.

The effort to develop human resources has been aided by research support. Since the 1980s, the Government has worked with companies to set up research centers such as the Xerox Singapore Software Centre and the Groupe Bull Computer Laboratory for Artificial Intelligence and Engineering Applications. Several new research centers have since been set up, and private-sector research centers have received government support. The Centre of Excellence for Testing & Research of Autonomous Vehicles (CETRAN) at NTU explores the safe deployment of autonomous vehicles. A variety of other research centers now either exist or are under construction at different educational institutions. A recent example is the American Express Decision Science Centre of Excellence, which focuses on using machine learning to detect credit and fraud risk.

Technological risks

Technological risks arise from the performance or non-performance of technological artefacts. Singapore has taken various measures to address these risks. The first step has been to introduce regulatory changes. To address safety-related issues of specific AI-based systems such as autonomous vehicles (AVs) and unmanned aerial vehicles (UAVs), regulations such as Technical Reference 68 (TR 68) and the Unmanned Aircraft (Public Safety and Security) Act 2015 have been introduced, and the Road Traffic Act has been amended to ensure the safety of these systems (Tan and Taeihagh, 2021). Another crucial regulatory change has been the amendment of the Personal Data Protection Act (PDPA) to address privacy-related concerns.

The second step has been to introduce soft regulations, including the MAGF, the Trusted Data Sharing Framework (TDSF), and the FEAT principles. These frameworks provide best practices for private companies to address some of the risks identified earlier. Regulatory and advisory bodies have also been created. The Security Operation Centre (SOC) under the Cyber Security Group has replaced the Cyber-Watch Centre as the body responsible for protecting government ICT infrastructure from cybersecurity threats, and the Data Protection Advisory Committee has been established under the Personal Data Protection Commission to review and implement the provisions of the PDPA. For physical systems like AVs and healthcare-based AI systems (such as Selena+ or elderly monitoring systems), pilots have been introduced under controlled environments to monitor potential safety threats.

Organizational risks

Emerging technologies like AI pose special challenges to governments regarding their capacity to utilize and regulate the technology. However, innovativeness has been a historical feature of Singapore’s bureaucracy, and the development of several new technologies in the country has taken place under the public administration’s stewardship (Quah, 2010; Shamsul Haque, 2004). As early as 1981, under the Civil Service Computerization Programme, the Singapore Government initiated public sector reform to introduce a degree of government digitization. Such reforms have continued under subsequent long-term technology policies, and public sector policies (Public Sector 21) have emphasized consistent technological improvement to provide quality service to the population. This trend has continued with AI as well. Historically, government agencies like the Port Authority of Singapore and the Building Control Division were among the earliest implementers of AI in Singapore. Over the years, the integration of AI into the governance structure has been a consistent practice within Singapore’s public administration. From designing systems to safeguard public security (the Risk Assessment and Horizon Scanning system) to the integration of chatbots into government websites (OneService) to the use of large language models as integrative office tools (Pair), public authorities have tried to keep abreast of developments in AI. More recently, the government has initiated the National AI Programme in Government as part of the National AI Strategy. The Programme aims to improve public service delivery through the use of AI and has identified key government bodies and projects as exemplary pathfinders. Priority has also been placed on equipping government officials with the necessary data science and AI skills (Government of Singapore, 2020). The government has launched a Digital Academy, and courses on machine learning and artificial intelligence have been introduced at the Civil Service College of Singapore, providing government officials with easy access to learning.

Institutions have been set up to equip the government with the necessary tools to govern AI. In 2016, a new governance unit called Government Technology Agency (GovTech) was formed to guide public services using the latest technologies, including AI. The Data Science and Artificial Intelligence Division under the GovTech has been set up to understand and utilize AI to create citizen-centric solutions as well as to formulate policies on the technology. Another body – Singapore Home Team Science and Technology Agency (HTX) – has been set up under the Ministry of Home Affairs to manage security threats associated with emerging technologies, including AI.

Social risks

Several policy instruments have been introduced to tackle potential frictional unemployment. The Professional Conversion Programme and the Career Conversion Programme were designed to help Professionals, Managers, Executives and Technicians (PMETs) move into AI-related roles by providing them with training and placement. Other similar policies include the SGUnited Mid-Career Traineeships and the SGUnited Mid-Career Pathways Programme (i.am-vitalize), which are aimed at mid-career professionals. Support for job redesign has been another theme of intervention. At Workforce Singapore’s initiative, funds have been made available, and sector-specific workforce transformation guidelines and instructions have been designed to prepare employers and employees to redesign their workplaces.

Finally, attention has been paid towards the ethics and fairness of AI algorithms. The guiding principles of the MAGF are explainability, transparency, fairness, human-centricity, and the safety of AI. Such goals are echoed in the Monetary Authority of Singapore’s approach as well, which guides the financial sector to design AI-based solutions that are fair, ethical, accountable and transparent (FEAT). These guidelines have been complemented by the AI Ethics & Governance Body of Knowledge. The Body of Knowledge is a document comprising case studies and guidelines on designing and managing ethical AI created by sectoral experts. Similarly, an Advisory Council on the Ethical Use of AI and Data has been formed, consisting of experts from various domains to advise the government on the social and ethical challenges of AI. Besides the changes in soft regulations, the Singapore government has been active in creating discussions and deliberations around the ethics of AI to better inform policymaking. Conferences such as the Future of Skills Forum, the National Health Summit and the Singapore Conference on the Future of Work: Embracing Technology; Inclusive Growth have been organized where participants discuss the social implications of AI.

Security risks

The Singapore government has taken the following approaches to cope with the security risks associated with AI.

First, a new Cybersecurity Act was enacted in 2018 to address cybersecurity risks. The Act identifies 11 sectors as critical information infrastructure (CII) sectors and clarifies responsibilities for protecting CIIs. It also gives the Commissioner of Cybersecurity the authority to investigate potential threats or incidents and to undertake necessary measures to mitigate their potential impact. The amendment of the PDPA was another step the government took to address emerging cybersecurity concerns. The new amendments require businesses to report data breaches, criminalize the deliberate mishandling of personal data, and introduce provisions for deemed consent.

Second, the government uses AI to strengthen its counter-terrorism and cyber defense capabilities. For example, authorities utilize data from various IoT sources, such as cameras and “ground sensors”, to detect threats and improve counter-terrorism responses. The government has used AI to identify internal security threats since as early as 2014 (the RAHS system). In 2021, Singapore adopted a new artificial intelligence data processing system under the Singapore Maritime Crisis Centre (SMCC) that can identify threatening ships along Singapore’s shores in real time (Yong, 2021).

Third, recognizing the military applications of AI, the Singapore Armed Forces has invested heavily in the technology. In 2017, Singapore set up a lab under the Defence Science and Technology Agency to develop artificial intelligence and analytics for defense use. In 2021, Singapore unveiled a new command and control system that uses AI and data analytics to recommend weapons and help make faster and more effective decisions.

Fourth, regarding geopolitical security, Singapore has adopted a highly pragmatic approach to the great AI rivalry between the United States and China (Zhang and Khanal, 2024). Singapore has been a close partner of the United States in the development of AI. Take the Global Partnership on AI (GPAI) as an example. The US views the GPAI as a useful geopolitical weapon against China (National Artificial Intelligence Initiative of the United States, 2019), and Singapore joined other nations as a founding member. The AI Partnership for Defense is another example: initiated by the US Department of Defense in 2020 to work with American allies on military applications of AI, it counts Singapore among its members. While developing close ties with the United States in AI governance, Singapore has also tried to forge links with China in AI development and governance. Singapore and China have signed several cooperation agreements on AI at the national and subnational levels. In 2018, at the 6th Singapore-Nanjing panel meeting, Singapore and Nanjing agreed to deepen collaboration on artificial intelligence and sustainable urban solutions. In 2021, China and Singapore pledged to deepen pragmatic cooperation in multiple fields, including AI. Many Chinese companies, such as Huawei and Alibaba, have set up AI labs in Singapore. In the meantime, Singapore also welcomes China’s participation in global AI governance.

Discussion

Our analysis shows that there is a temporal dimension to the risks of emerging technologies such as AI, and that, given the dynamic nature of these risks, the government’s policy approach has varied accordingly. As our framework suggests, supply-related market risks are the primary source of risk in the initial stages of technology deployment. For governments attempting to promote the commercialization of emerging technologies, demand-related risks then emerge, entailing the threat of non-acceptance of the technology by potential users. Gradually, technological risks also surface as ensuring safety and security becomes a concern. Subsequently, as the technology becomes more widespread in the public and commercial domains, the government must ensure that its own organization is well prepared to deal with the potential risks of the emerging technology. Early signs of social risks also appear during the commercialization phase.

The evolution of the policy mix in Singapore shows a gradual change in policy instruments that accompanied changes in AI and in the nature of its risks (Fig. 3a, b). An immediate observation from Fig. 3a is that most policy instruments have gradually intensified over the years. Compared to 2014–2016, when the focus was on a few specific instruments, there has been a gradual increase in the use of all policy instruments. Patterns also emerge when examining the scope of the policies (Fig. 3b). As the framework predicted, the early policy mixes were designed to address the knowledge gap (supply-side risks). Policies were formulated to encourage education, R&D, and technology transfer (financial support—Fig. 3b). To ensure that technology suppliers had an adequate market for their services, the government itself emerged as a buyer of the technology, often procuring services for application to the private sector (governance). The government’s own usage of the technology was also accompanied by the drafting of long-term plans and strategies to encourage sectoral development (governance). While policies addressing supply-side risks continued, the goals of the policy mix broadened as the technology advanced and new risks (especially demand-related ones) emerged. The government actively participated in the market to improve the capacity of businesses and to ensure greater demand for, and supply of, the technology (AI enablers). Capacity-improvement measures were introduced to promote the entry of human resources and businesses into the field of AI. At the same time, information campaigns were launched, and businesses were provided financial and procedural support to incorporate AI into their own domains (AI enablers). Subsequent steps were taken to address technological and social risks. Existing regulations were modified, and new laws were introduced, to address threats related to privacy, security and, to a certain extent, safety (guidance). Furthermore, soft rules such as the MAGF and the FEAT Principles were introduced to mitigate potential ethical and social challenges arising from AI (guidance). Over time, we notice a gradual intensification and diversification of the policy instruments involved in the policy design process (Fig. 3a).

Fig. 3: Timelines of AI Policy Instruments in Singapore.
figure 3

a Timeline of policy instruments used to address AI risks in Singapore. b Timeline of policy instruments used in Singapore as per the OECD AI policy instruments framework.

Singapore’s effort to improve the AI-related capacity of its public organizations also deserves mention. Starting with the establishment of the National Computer Board and the initiation of several long-term plans to improve the adoption of ICT in the civil service, such as the Civil Service Computerisation Programme (1981) and the e-Government Action Plan (2000), public organizations have always been at the forefront of Singapore’s pursuit of technological excellence (Quah, 2010, 2013). In the case of AI as well, starting with the introduction of the Ship Planning System (SPS) by the Port Authority of Singapore in 1991, the government has consistently been one of the earliest consumers and promoters of the technology. As indicated earlier, the government’s proactive role addresses several risks of AI systems. First, the government as a procuring agency creates much-needed demand for such emerging technologies (Jones, 2002; Wong and Singh, 2008). The government’s use of AI systems also reduces the public’s perceived risks and improves tolerance of such new technologies. Second, it improves public sector readiness in utilizing new technologies such as AI to improve efficiency and productivity, and it generally improves technology literacy among public sector employees, helping them design better policies.

Another important feature of Singapore’s governance system has been the use of soft laws, indirect regulations and regulatory sandboxes in place of “hard” regulations to govern AI systems. Singapore does not have any specific regulations that directly govern AI systems. Governance frameworks such as the MAGF, the TDSF, and the FEAT Principles have provided best-practice guidelines for the private sector that are suggestive but not excessively prohibitive. Similarly, the provisions on personal data and database protection and sharing in the PDPA give companies adequate flexibility to maximize the benefits of productive data use while maintaining a level of privacy protection for consumers. In addition, various agencies, such as the IMDA and the MAS, have made provisions for data regulatory sandboxes and financial sandboxes that allow flexibility in rules and regulations for developing innovative AI-based solutions in various private sector domains. While such soft regulatory mechanisms can present challenges (at least in the case of technological risks), they also encourage exploration and innovation at the early stages of a technology’s development, when the contours of technological risks have not yet solidified (Lee and Petts, 2013).

Amid the intensifying AI race between China and the United States, the Singapore government’s active role in the “virtual” enlargement of its market should also be highlighted. This includes Singapore’s ability to attract technology transfer and to find markets for its own products beyond its borders. Singapore has worked to create business-friendly institutions in terms of safety, security, zero corruption, and the protection of intellectual property rights; such broader measures have been crucial for technology transfer. Foreign investments in the form of research and development centers, for instance, have been a historical norm in Singapore, and a similar tendency can be seen in AI, where pioneering AI companies have worked with the Singapore government to establish local knowledge centers. Additionally, rather than taking sides in the current tech/AI rivalry between the US and China, Singapore has made efforts at the government-to-government (G2G) and business-to-business (B2B) levels with several national and local governments, including those of the US and China, to attract foreign investment and to open those markets to Singaporean firms. These measures have been designed in tandem with Singapore’s active involvement in various multilateral and plurilateral governance mechanisms that design the norms and rules governing AI markets, so as to ensure free and fair access to international markets.

Establishing such a presence on the international stage also closely aligns with the high degree of importance the country has attached to geopolitical security. Given the intensification of the AI race between the US and China, Singapore’s strategy has been to secure its interests while keeping close ties with both countries.

The findings provide important insights into Singapore’s approach to the governance of artificial intelligence. First, given the absence of natural resources, a small domestic market, and a population of just five million, the Singapore government has played a highly active role in planning and guiding the development of the AI ecosystem. This has involved designing AI-related plans and strategies and creating an enabling ecosystem that facilitates R&D, adoption and experimentation. A whole-of-government approach involving multiple agencies across multiple periods has been the hallmark of Singapore’s efforts (Fig. 4). More importantly, the government itself has been a steward and has led from the front in integrating AI into its areas of operation: some of the earliest adopters of the technology were government entities and government-linked companies (GLCs). Attention should also be given to the design of many of these initiatives, which have involved the direct participation of private companies or GLCs in design or implementation (Fig. 4). The spectrum of involvement of non-government bodies spans R&D (various research centers have been opened through private initiatives), pilot programs, government procurement, and integration of AI into their own systems. The inclusion of non-state actors, especially businesses, in the policy implementation process has improved their capacity to adopt these technologies.

Fig. 4
figure 4

Agencies involved in the policy process in the governance of AI in Singapore.

The evidence from Singapore therefore shows that success in the governance of new technologies requires an evolutionary approach to policy design, in which the policy instruments reflect the risks attached to each stage of the technology’s development (Fig. 5). Li et al. (2021) suggested six types of government strategies for coping with technological risks (no response, prevention-oriented, precaution-oriented, control-oriented, toleration-oriented, and adaptation-oriented). Our findings suggest that these strategies are not necessarily mutually exclusive but are subject to the temporally changing nature of AI risks.

Fig. 5
figure 5

Risk governance framework associated with technology life cycle.

Despite the remarkable attention the government has placed on the development and adoption of the technology, concerns remain over the capacity of existing governance mechanisms to deal with the potential technological and social risks posed by AI. Caution must be taken, however, when resorting to soft forms of regulation. The government must take clear and necessary steps to assess the technological risks associated with AI and ensure that some form of precautionary regulatory framework exists to restrict potential future harms from the technology. Privacy, safety and liability-related measures for AI systems in Singapore are based on existing overarching frameworks for personal data protection and health and safety, which might not sufficiently address the unique demands that AI systems are likely to make. For instance, concerns remain over the adequacy of the PDPA and existing intellectual property regulations to address issues of personal and non-personal data protection and sharing, database protection, and the protection of AI-generated intellectual property (Tan and Taeihagh, 2021; Ramesh et al., 2020). Similarly, while the current Computer Misuse Act or the Penal Code can sufficiently address intentional negligence, misuse and harm caused by AI systems, regulations are still silent on the potential unintentional negligence or harm such systems might cause (Ramesh et al., 2021). Furthermore, the legal challenges of establishing criminal or civil liability for the intentional or unintentional malfunction of AI systems, including AVs, have also not been sufficiently addressed (Tan and Taeihagh, 2021; Ramesh et al., 2021; Law Reform Committee, Singapore Academy of Law, 2020).

A significant gap remains in understanding the environmental risks associated with AI. In the present study, we did not find any policy instrument aimed at understanding the potential environmental impact of the technology. As mentioned earlier, conceptual studies and some empirical findings have pointed to the potential threat AI poses to the environment. Given the Singapore government’s various initiatives to reduce emissions and protect the environment, including the Singapore Green Plan 2030, more needs to be done to understand the potential implications of AI and AI-based systems for the environment.

Conclusion

In this study, we examined Singapore’s approach to the governance of artificial intelligence. Despite being a small and resource-deprived nation, Singapore has successfully mobilized its policy space to create an environment conducive to research, experimentation, and adoption of AI. While the island nation has its own unique features, its example can provide important pointers to other ambitious nations that aim to create an AI ecosystem that can foster such emerging technologies.

The analysis above is based on secondary, publicly available materials. As a result, we have not conducted any policy evaluation, nor have we referred to documents that conduct such ex-post analysis of policies. Furthermore, the framework has been tested using only Singapore’s case. Singapore is a unique case, characterized by a historically dominant single-party majority in parliament, the absence of a multi-level governance structure, and a geopolitical strategy that focuses on economic competitiveness (Tan and Taeihagh, 2021).