Roger Taylor on innovation ethics

There is an “urgent imperative” for democratic governments to establish clear ethics and governance frameworks around emerging technologies to capitalise on the enormous opportunities on offer, according to the chair of the UK government’s Centre for Data Ethics and Innovation.

Roger Taylor, who is also the chair of Ofqual (the UK’s Office of Qualifications and Examinations Regulation) and a member of the advisory panel to Her Majesty’s Inspectorate of Probation, delivered the annual Pearcey Oration at the University of Melbourne this week, focusing on the need for governments to address ethical and governance issues in relation to artificial intelligence.

Mr Taylor drew on his experience in Silicon Valley and leading the UK government’s ethical data body to detail the importance of these discussions, and the interplay between ethics and innovation.

There would always be ethical, privacy and surveillance issues in new technologies like AI, but through transparency and effective regulation, these could be mitigated to build public trust, he said.

Governments should ensure a clear separation of powers between the main bodies working with AI and public data, Mr Taylor said, and stop positioning ethics and innovation against each other.

“We are in a state of uncertainty about ethics, which is doing little to impede unethical behaviour in some areas, while at the same time holding back innovation that could be beneficial,” Mr Taylor said.

“Clarity about what is ethical is necessary for us to innovate. Ethics and innovation need to be seen as comfortable bedfellows. There is an imperative for democratic societies to get on top of this technology and use it.”

This is something that must be addressed by democratic governments around the world immediately, Mr Taylor said.

“There is a degree of urgency about this mission. This is not something we can walk away from and come back to later,” he said.

“These technologies and these powers are being developed now, these technologies are coming. We want this to happen since there is clearly a huge potential for them to do good,” he said.

“The worst case scenario is one in which democratic societies fail to adopt the technology because of an inability to agree on governance while the rest of the world presses ahead. This would place us at a considerable disadvantage. It would mean the rules around innovation and deployment are being set by others who may not share our same values.

“The potential for benefit is enormous … we want to take the insights that connected data systems and AI generate in order to understand our own societies and our own behaviour better, not so that we can be manipulated by others but so that we are able to shape our own lives and our own societies in a way we believe is fair.”

One major reform needed from government is a clear separation of the powers governing the use of technologies like artificial intelligence: the agencies that establish what can be known from public data should be distinct from those that use the data to actually make decisions.

“We need to establish trusted institutions which are able to understand what can be known from data, but which are prevented from ever acting on it,” Mr Taylor said.

“This depends on them getting access to the data which is often too closely guarded by different arms of government, making it difficult to maximise the value of it,” he said.

“We need institutions with the skills to determine when it is appropriate to adopt algorithmic systems, who have the capacity to independently assess their performance and the consequences of their real use in the world. Such bodies would need to be independent of those organisations that then use data to make decisions about us.”

The Centre for Data Ethics and Innovation, launched by the UK government earlier this year, is focused on identifying governance systems that are “trusted and trustworthy”, and on providing recommendations to government for policy and regulation in the space.

“We are in a situation where we have allowed the most powerful means of information distribution ever created to sit outside the traditional regulatory structures which have been placed over the previous generations of powerful media technologies.

“Rather late in the day we have recognised this is something of an enormous loophole.”

Regulators in the UK and around the world are increasingly focusing on data-driven systems, amid fears that algorithmic decision-making systems will produce unfair or biased results and that surveillance will go untrammelled.

Any algorithmic system will be biased in some way, Mr Taylor said, and it is the role of governments to make sure the process of assessing and mitigating that bias is as transparent as possible.

“We cannot expect that any algorithmic system is going to be free of bias. But we might expect that thorough investigations are conducted into the degree of bias exhibited by the algorithm and the degree to which it might be mitigated by adapting the model or training it on different data,” he said.

“People can then expect to know what steps those using the technology are taking to do this.”

Technologies like artificial intelligence are only effective if they can draw on citizens’ personal data, so governments also need to make sure there is public trust in that use.

“The challenge for democratic societies is that taking advantage of the potential of artificial intelligence requires the systematic use of data about citizens. But the creation of such systems is troubling – it feels like we are creating a power over which our existing democratic checks and balances will fail to exercise adequate control,” Mr Taylor said.

“Our challenge then is to come up with the right sort of checks and balances that allow these powers to be created and used safely.”

The Australian Government has recently made some moves in this space, allocating $29.9 million in the 2018 budget to artificial intelligence and machine learning research and development.

But CSIRO boss Larry Marshall has admitted that Australia is “behind the curve” on artificial intelligence, with Data61 recently consulting on the development of an AI ethics framework.