Persuasive AI could corrupt human behaviour, study suggests
Researchers demonstrate the corrupting influence of Natural Language Processing
A lot of fearmongering surrounding the rise of AI relates to the (possibly exaggerated) concern that a super-intelligent AI might enslave the entire human race.
While that debate rumbles on, researchers have unearthed a more immediate challenge presented by the AI-powered text generator GPT-2, released by OpenAI last year.
If you recall, the AI research lab’s chatty tool wowed the developer community with its ability to generate convincingly coherent prose from arbitrary input.
Following GPT-2’s release, it didn’t take long for observers to warn that the impressively powerful NLP algorithm wasn’t all fun and games, highlighting an array of risks that the tool could pose in the wrong hands.
One such concern is that GPT-2-generated text could corrupt readers, persuading them to break ethical norms. In a new study, researchers from the University of Amsterdam, the Max Planck Institute, Otto Beisheim School of Management, and the University of Cologne tested this hypothesis.
Their sobering findings read: “Results reveal that AI-generated advice corrupts people, even when they know the source of the advice. In fact, AI’s corrupting force is as strong as humans’.”
Dyadic test
The team asked 395 participants to write down ethically sound or questionable advice, forming a dataset that was then used to fine-tune GPT-2 so it could generate new advice promoting either honesty or dishonesty.
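For readers curious what that step might look like in practice, the sketch below shows one plausible way to fine-tune GPT-2 on a small advice corpus using the Hugging Face transformers library. The file name, hyperparameters and prompt are illustrative assumptions; the study’s actual training setup is not detailed in this article.

```python
# Hypothetical sketch: fine-tune GPT-2 on a plain-text file of advice,
# one piece of participant-written advice per line ("advice.txt" is assumed).
from transformers import (GPT2LMHeadModel, GPT2Tokenizer, TextDataset,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Chunk the advice corpus into fixed-length blocks for language modelling.
train_dataset = TextDataset(tokenizer=tokenizer,
                            file_path="advice.txt",
                            block_size=64)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-advice",
                           num_train_epochs=3,          # assumed value
                           per_device_train_batch_size=8),
    data_collator=collator,
    train_dataset=train_dataset,
)
trainer.train()

# Sample a new piece of advice from the fine-tuned model.
prompt = tokenizer("Advice:", return_tensors="pt")
output = model.generate(**prompt, max_length=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```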
A separate group of 1,572 participants then read the instructions and were immediately presented with a classic psychological task designed to measure honesty and dishonesty.
Participants were paired in dyads, composed of a first and second mover. The first mover then rolled a die in private and reported the outcome. The second mover learned about the first mover’s report, rolled a die in private, and reported the outcome as well.
Only if the first and second movers reported the same outcome (a double) were they paid according to the double’s worth, with higher doubles corresponding to higher pay. If they reported different outcomes, they were not paid.
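As a rough illustration of that payoff rule, here is a minimal sketch in Python. The per-pip payout value is a placeholder assumption; the article does not state the actual stakes used in the study.

```python
import random

PAYOFF_PER_PIP = 1.0  # assumed unit payout: higher doubles pay proportionally more

def dyad_payoff(first_report: int, second_report: int) -> float:
    """Both movers are paid only if they report the same number (a double)."""
    if first_report == second_report:
        return first_report * PAYOFF_PER_PIP
    return 0.0

# Example round: each mover rolls privately, then reports a number.
first_roll = random.randint(1, 6)
second_roll = random.randint(1, 6)
# An honest second mover reports their own roll; a dishonest one could simply
# copy the first mover's report to guarantee a paying double.
print(dyad_payoff(first_roll, second_roll))
```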
The die-rollers were randomly assigned to read honesty-promoting or dishonesty-promoting advice that was either human-written or AI-generated. They were either told the source of the advice or told only that there was a 50-50 chance it came from a human or from AI.
Those who didn’t know the source of the advice could earn additional pay if they guessed the source correctly. A control group of participants did not receive any advice from either source.
Compared with the no-advice control, honesty-promoting AI advice had no influence on behaviour, whereas dishonesty-promoting AI advice significantly increased financially motivated dishonesty, even when die-rollers knew the advice came from an AI.
If participants did not know the source of the advice, the effect of AI-generated advice was indistinguishable from that of human-written advice.
“Even when knowing that an algorithm, not a human, crafted the advice, people followed it,” the researchers reflected. “The power of self-serving justifications to lie for profits seems to trump aversion towards algorithms.”
As a way of limiting AI’s corruptive force, the researchers called on developers to rigorously test its potential negative influence before deployment, ‘as a key step towards managing AI responsibly’.