BEYOND LOCAL: AI innovation can create dangerous products – Guelph News
This article by David Weitzner, York University, Canada, originally appeared on The Conversation and is published here with permission.
This past June, the U.S. National Highway Traffic Safety Administration announced a probe into Tesla's Autopilot software. Data gathered from 16 crashes raised concerns that Tesla's AI may be programmed to quit when a crash is imminent. This way, the car's driver, not the manufacturer, would be legally liable at the moment of impact.
It echoes the revelation that Uber's self-driving car, which hit and killed a woman, detected her six seconds before impact. But the AI was not programmed to recognize pedestrians outside of designated crosswalks. Why? Because jaywalkers are not legally supposed to be there.
Some believe these stories are proof that our concept of liability needs to change. To them, unimpeded continuous innovation and widespread adoption of AI is what our society needs most, which means protecting innovative corporations from lawsuits. But what if, in fact, it’s our understanding of competition that needs to evolve instead?
If AI is central to our future, we need to pay careful attention to the assumptions around harms and benefits programmed into these products. As it stands, there is a perverse incentive to design AI that is artificially innocent.
A better approach would involve a more extensive harm-reduction strategy. Maybe we should be encouraging industry-wide collaboration on certain classes of life-saving algorithms, designing them for optimal performance rather than proprietary advantage.
Every fix creates a new problem
Some of the loudest and most powerful corporate voices want us to trust machines to solve complex societal problems. AI is hailed as a potential solution for the problems of cross-cultural communication, health care and even crime and social unrest.
Corporations want us to forget that AI innovations reflect the biases of the programmer. There is a false belief that as long as the product design pitch passes through internal legal and policy constraints, the resulting technology is unlikely to be harmful. But harms emerge in all sorts of unexpected ways, as Uber’s design team learned when their vehicle encountered a jaywalker for the first time.
What happens when the nefarious implications of an AI are not immediately recognized? Or when it is too difficult to take the AI offline when necessary? That is what happened when Boeing hesitated to ground its 737 Max jets after a software flaw was found to be causing crashes, and 346 people died as a result.
We must constantly reframe technological discussions in moral terms. The work of technology demands discrete, explicit instructions. Wherever there is no specific moral consensus, individuals simply doing their job will make a call, often without taking the time to consider the full consequences of their actions.
Moving beyond liability
At most tech companies, a proposal for a product would be reviewed by an in-house legal team, which would draw attention to the policies the designers need to consider in their programming. These policies might relate to what data is consumed, where the data comes from, what data is stored or how it is used (for example, anonymized, aggregated or filtered). The legal team's primary concern would be liability, not ethics or social perceptions.
Researchers have called for an approach that uses insurance and indemnity (responsibility for compensating losses) to shift liability and allow stakeholders to negotiate directly with each other. They also propose moving disputes over algorithms to specialized tribunals. But we need bolder thinking to address these challenges.
Instead of liability, a focus on harm reduction would be more helpful. Unfortunately, our current system does not make it easy for companies to co-operate or share knowledge, especially when anti-trust concerns might be raised. This has to change.
Re-thinking the limits of competition
These problems demand large-scale, industry-wide efforts. The misguided pressures of competition pushed Tesla, Uber and Boeing to release their AI too soon. They were overly concerned with the costs of legal liability and with lagging behind competitors.
My research proposes the somewhat counter-intuitive idea that the ethical positions a corporation takes should be a source of competitive parity in its industry, not a competitive advantage. In other words, a company should not stand out for finding ethical ways to run its business. Ethical commitments should be the minimum expectation required of all who compete.
Companies should compete on variables like comfort, customer service or product life, not on whose autopilot algorithm is less likely to kill. We need an issues-based exemption to competition, one that is centred around a particular technological challenge, like autonomous driving software, and guided by a shared desire to reduce harm.
What would this look like in practice? The truth is that more than 50 per cent of Fortune 500 companies already use open-source software for mission-critical work. And their ability to compete has not been stifled by giving up on proprietary algorithms.
Imagine if the motivation to reduce harm became a core target function of technology leaders. It would end the incentive individual firms currently have to design AI that is artificially innocent. It would shift their strategic priorities away from always preventing imitation and towards encouraging competitors to reduce harm in similar ways. And it would grow the pie for everyone, as customers and governments would be more trusting of technology-driven revolutions if innovators were seen as putting harm reduction first.
David Weitzner, Assistant professor, Administrative Studies, York University, Canada
This article is republished from The Conversation under a Creative Commons license. Read the original article.