The Peak Power List 2023: AI Singapore’s Simon Chesterman on balancing AI innovation with ethical governance and the critical role of human oversight

This story is one of six in The Peak Singapore’s Power List, an annual recognition that celebrates individuals who have demonstrated exceptional leadership, influence, and impact within their fields and the broader community.

Our theme this year is Quiet Power: a force that brings about transformative shifts in the lives of ordinary people through strategic collaboration and concerted effort with like-minded individuals. Quiet leaders are dedicated to creating positive, lasting change within the community, leading to fundamental shifts in how it functions day to day.

“I don’t lie awake at night worrying about robots taking over the world,” Professor Simon Chesterman shares casually. “But the ever-speedier rush to market of ever-more sophisticated AI systems does worry me — particularly when combined with the downsizing of safety and security teams in many of the larger tech firms.”

Chesterman is a senior director at AI Singapore, the government-funded national AI programme, where he heads AI Governance, charged with funding research to ensure that AI technologies are developed and used responsibly, ethically, and safely.

Two programmes attest to this. The first is AI Singapore’s $1.1 million Research Grant, which funds multidisciplinary research into novel, underexplored ideas in the field of AI. The other comprises two Postdoctoral Fellowships awarded to researchers studying, among other things, bias detection and prevention, and values and trust in AI.

A seat at the table

It all makes for exciting times in the nascent field of AI governance, especially in Singapore, which, in typical fashion, is seeking a precarious balance between cautious prevention and deferring to market forces. It’s a journey Simon Chesterman finds hope in. “We have a highly educated population, with real strength in AI research and industry. We may not be the world leader in either, but that capacity buys us credibility.”

Today, most of the discussion about AI governance is centred on Europe and the United States. Simon tells me that China has recently become a more credible player, but these three groups aren’t really speaking to each other. “So Singapore and AI Singapore have the potential to punch above our weight, not just as a player but also as a convenor — a platform for discussions about AI and how we can get the benefits of it while minimising or mitigating the harms.”

Within Singapore, Chesterman is looking at two broad questions that aim to move the needle on AI adoption. The first is whether we should trust AI. “How do we ensure that it is fair, accountable, appropriately transparent, and so on? That’s important to ensure that people aren’t harmed by ‘bad AI’.”

The second is to explore how humans can engage with AI more responsibly. “That’s important to ensure that people aren’t harmed by bad decisions in the use of AI — whether that means relying on AI when you shouldn’t or not taking advantage of ‘good AI’ when you should.”

At its core, the work of 50-year-old Simon Chesterman — who is also a vice provost at the National University of Singapore and the founding Dean of NUS College — probes questions of ethics in a largely unexplored field. When asked for his views on ethics in AI, Chesterman turns academic, referring to concepts of consequentialism and deontology. He admits it is difficult to articulate the everyday ethical principles we live by, even though we generally know, by intuition, what we should or should not do.

“That’s an important insight, because sometimes the clearest path to seeing what’s ethical is by ruling out what is clearly unethical,” Chesterman adds. “Law gives us some guidance, but ethics usually goes above and beyond that.”

Circling back to specific examples of ethical pitfalls in AI, Chesterman offers three. The first is bias. He explains that AI is not inherently biased, but the data it is trained on may be. “At least when you ask AI if it’s biased, it will try to tell you the truth. Good luck asking a human that.”

The second concerns human morality, specifically in decision-making: some decisions, he emphasises, should be made by humans. “Not necessarily because humans will make better decisions,” he elaborates, “but because these fundamentally moral questions should be grappled with by a human who can be held to account for them.”

The third manifests in situations involving moral dilemmas. Warfare or legal judgments, Chesterman tells me, necessitate human oversight to uphold accountability and legitimacy. Delegating such responsibilities to AI could undermine human ethics and the moral gravity carried by individual decisions.

“We’ve seen some experimentation with that in sentencing, for example, so I was relieved when Singapore’s chief justice came out last year to say we wouldn’t be going down that path in Singapore.”

Still, for all his concerns about AI and its impact on humanity, Chesterman acknowledges some use cases that show huge potential. He points to the recently concluded Asian Undergraduate Symposium at NUS College where, over the course of a week, teenagers from across ASEAN came up with innovative ideas about how AI and similar tools can connect communities, modernise farming, mitigate climate change, and support mental wellness.

“AI is ultimately a tool, like a hammer,” he adds in closing. “It’s up to us whether we use that to build things or to destroy them.”
