Artificial Intelligence and human behaviour
Many advertisers, tech giants, and border forces are using various types of software in an attempt to monitor our emotions or detect whether we are being dishonest. The justification for this invasion of our privacy is that these systems will make our lives easier, safer, and even more enjoyable. The issue, however, is that all of these ‘behaviour analysis’ systems, currently, suck!
Before exploring this area of Artificial Intelligence (AI) in more detail, I’d like to stress that I’m talking specifically about the field of AI development focused on understanding human behaviour… I love my Google Home and the fantastic little things it can do – like bathing the living room in a purple glow while simultaneously lining up the next episode of ‘Better Call Saul’ on the TV. With that said, Google Home and Alexa are (at present) only task-focused computers. We ask them a question, or to do a simple task, and they do it well, for the most part. They are not, however, able to ‘think’ or ‘hypothesise’ the way humans can. This limitation often leads to personal assistants making mistakes. If you own one of these devices, I suspect you too have had moments of infuriating frustration.
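To make the ‘task-focused’ point concrete, here is a minimal sketch in Python (with entirely hypothetical device names and handler functions, not any vendor’s real API) of roughly how such an assistant behaves: a recognised intent is looked up in a table and mapped to a fixed action, and anything outside that table simply fails, because there is no reasoning to fall back on.

```python
# A minimal sketch of what "task-focused" means in practice: the assistant maps a
# recognised intent to a hard-wired action. The device names and handlers below are
# hypothetical illustrations, not any vendor's real smart-home API.

def set_light(room: str, colour: str) -> str:
    return f"{room} light set to {colour}"

def queue_episode(show: str) -> str:
    return f"Queued the next episode of '{show}'"

# Intent -> action table: the "intelligence" is just a lookup plus slot filling.
INTENT_HANDLERS = {
    "set_light": set_light,
    "play_show": queue_episode,
}

def handle_request(intent: str, **slots) -> str:
    handler = INTENT_HANDLERS.get(intent)
    if handler is None:
        # Anything outside the pre-defined task list fails; there is no hypothesising.
        return "Sorry, I don't know how to do that."
    return handler(**slots)

if __name__ == "__main__":
    print(handle_request("set_light", room="living room", colour="purple"))
    print(handle_request("play_show", show="Better Call Saul"))
    print(handle_request("detect_emotion", person="me"))  # not a task it knows
```

The point of the sketch is not the code itself but the shape of it: everything the assistant can do has been enumerated in advance, and nothing outside that list can be handled.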
The comforting thing about Google Home or Alexa misbehaving is that the stakes are usually low. At worst, you may have to make an agonising walk over to the light switch to turn it on manually. But when the stakes are high, such as decision-making based on the emotional signals of humans, mistakes can quickly go from frustrating to downright dangerous.
Already we have seen examples of wrongful arrests through facial recognition, with a significant bias towards labelling innocent Black people as criminals. We’ve also seen recruiting software downgrading CVs that simply contained the word ‘women’, marketing software choosing which ads to show you based on whether your expressions are ‘positive’ or ‘negative’, and further bulls**t, like claims that IQ can be detected by analysing the face.
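To illustrate how that kind of CV bias creeps in without anyone writing an explicitly sexist rule, here is a toy, hypothetical sketch (not the actual recruiting system in question): a crude frequency-based scorer trained on skewed historical hiring decisions ends up penalising any CV that contains a word appearing mostly in rejected applications.

```python
# Toy illustration of learned bias (hypothetical data, not any company's real system):
# if the historical "hired" examples under-represent a word, a naive frequency-based
# scorer learns to penalise CVs containing it, with no malicious rule written anywhere.

from collections import Counter

# Hypothetical historical data: snippets from past CVs and whether they were hired.
history = [
    ("captain of the chess club", True),
    ("led the robotics team", True),
    ("captain of the women's chess club", False),
    ("president of the women's coding society", False),
]

hired_words = Counter()
rejected_words = Counter()
for text, hired in history:
    (hired_words if hired else rejected_words).update(text.split())

def score(cv_text: str) -> float:
    """Crude score: words seen mostly in rejected CVs drag the score down."""
    s = 0.0
    for word in cv_text.split():
        s += hired_words[word] - rejected_words[word]
    return s

print(score("captain of the chess club"))          # higher score
print(score("captain of the women's chess club"))  # lower score, solely due to the token "women's"
```

The scorer never sees gender as an input; it simply reproduces whatever pattern sat in the historical decisions it was trained on.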
One of the problems with AI is a widespread misunderstanding of what AI actually is. Movies like Her, Terminator and Ex Machina present a level of AI that doesn’t yet exist in real life: Artificial General Intelligence (AGI), which is human-level and, ultimately, superhuman intelligence. The AI we have today is far from this level of sophistication and does not have the intellect to compete with humans, never mind outperform them in the way required to analyse human behaviour. It is therefore very concerning when companies and researchers come forward with their ‘solutions’ for complex industries that rely on reading and understanding human behaviour – HR, marketing, security, education, and healthcare, to name just a few.
The Future:
Once AI reaches superhuman levels of intelligence (AGI), this blog post will need revisiting (or rewriting by a computer smarter than me). When this does happen (maybe in a few decades’ time), the challenge will not be IF the systems can accurately detect emotions and deception in humans, but in what contexts, if any, it is okay for them to do so.
In the words of Alan Turing, one of the first great AI thinkers – “It is customary to offer a grain of comfort, in the form of a statement that some peculiarly human characteristic could never be imitated by a machine. I cannot offer any such comfort, for I believe that no such bounds can be set.”