Human behaviour and the tools of the scammers – ThePaypers
From RATs to psychological tricks, bad actors are using every trick in the book to defraud consumers – with a high success rate. Callsign’s Andreas Eliasson says that needs to change
Despite the perception that fraud is a victimless crime, the reality could not be further from the truth. The impact on customers who have been defrauded is very real and very damaging – aside from the financial aspects, those who have fallen prey to fraudsters and scammers suffer huge amounts of emotional and mental stress as a result.
Equally, it’s a major headache for the organisations whose customers suddenly become victims. As well as the monetary impact, every instance of fraud can quickly escalate to huge reputational damage and lost business.
It doesn’t help that detecting and preventing fraud is anything but child’s play. The bad actors perpetrating these crimes are constantly evolving their tools and techniques to strengthen their hand, and all of these share a common thread – exploiting the behaviours of their very human victims.
RATs and other malware
Remote Access Tools/Trojans (RATs) and their mobile counterparts (mRATs) are highly prevalent, and among the tools most frequently used by bad actors. Once installed, they give a fraudster deep access to a user’s device and every activity that takes place on it, paving the way for the bad actor to clean out the victim’s accounts.
Fraudsters might rely on website popups, email attachments, or links to load the RAT onto a user’s device, but more often than not, they’ll use social engineering techniques to persuade their victims to install it themselves.
Frequently, this will be achieved by the bad actor masquerading as a bank representative or other trusted person, persuading their victim to download what they claim to be anti-malware software – which is in fact quite the opposite.
The APP approach
Social engineering is key for another attack vector – authorised push payment (APP) fraud. Here, the scammer will again pose as a trusted person, telling their victim that there has been unauthorised activity on their account.
Coaching the legitimate user past any security warnings, they’ll convince them that they need to transfer their funds to a new account.
Because the transfer is apparently initiated by the genuine customer, it is very difficult to detect – to the extent that it might even seem the customer is attempting first-party fraud, rather than being the victim of third-party fraud.
The psychology of scams
But why are scammers so successful – so often?
As with the social engineering approaches that bad actors employ, executing a scam has a strong psychological dimension – which is why even people who believe they could never be scammed are frequently taken in.
When people normally see warning messages or advice about fraud, they’re in a relaxed ‘cold’ state. That changes when they find themselves – or believe themselves – to be genuinely at risk. In this ‘hot’ state, they’re stressed, angry, or panicking, and as a result more liable to make an error of judgement.
That’s why fraudsters will use every trick in the book to tip their victim over from one state to the other. Often, they’ll have invested considerable time building a relationship with their victim – days, or even weeks – to reach those crucial few moments where the victim is likely to be coerced into making a critical and expensive mistake.
It’s one more reason that these forms of fraud are hard to prevent. In the few seconds that a bank has to react to the threat, it’s usually all over.
Detect, intervene, protect
The outlook might at first appear bleak. But if organisations are to combat RATs, APP fraud, and social engineering, they need to take the fight to the fraudsters in the arena in which they operate. That means going beyond passwords and out-of-band SMS OTPs. A sophisticated attack demands a sophisticated response.
The tools to do this exist. Solutions such as Callsign’s layer threat and device intelligence with behavioural biometrics, considering not only whether a transaction is taking place from a recognised device in a familiar location, but also whether the patterns of keystrokes, taps, and swipes correspond to the user’s normal behaviour.
And by looking for red flags – such as a customer suddenly making a high-value transaction to a new beneficiary while the phone line is busy, a strong indicator of a fraudster on the call coaching the user through the journey – the customer can be shown a contextual, real-time warning that they are almost certainly being defrauded. This triple-punch detect, intervene, protect (DIP) approach means a would-be scammer can’t anticipate the dynamic intervention or guess at its content, exposing their fraudulent activity to the user.
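The contextual red-flag logic described above can be sketched as a simple rule check. This is purely illustrative – the signal names (`new_beneficiary`, `call_active`, `behaviour_score`) and the thresholds are assumptions for the sake of the sketch, not Callsign’s actual model, which layers far richer intelligence:

```python
from dataclasses import dataclass

@dataclass
class TransactionContext:
    """Signals gathered at the moment of a payment attempt (illustrative)."""
    amount: float            # value of the attempted transfer
    new_beneficiary: bool    # payee never seen on this account before
    call_active: bool        # a phone call is in progress on the device
    device_recognised: bool  # device fingerprint matches prior sessions
    behaviour_score: float   # 0.0-1.0 similarity to the user's usual typing/swiping

HIGH_VALUE = 1_000.0  # assumed threshold; a real system would tune this per customer

def assess_scam_risk(ctx: TransactionContext) -> tuple[bool, list[str]]:
    """Return (should_warn, reasons) for showing a contextual warning."""
    reasons = []
    if ctx.new_beneficiary and ctx.amount >= HIGH_VALUE:
        reasons.append("high-value transfer to a new beneficiary")
    if ctx.call_active:
        reasons.append("phone line busy during the payment journey")
    if not ctx.device_recognised:
        reasons.append("unrecognised device")
    if ctx.behaviour_score < 0.5:
        reasons.append("keystrokes and swipes deviate from the user's norm")
    # Two or more red flags together suggest a coached APP payment,
    # so the customer is shown a real-time contextual warning.
    return len(reasons) >= 2, reasons
```

For example, a GBP 5,000 transfer to a brand-new payee while a call is in progress would trip two flags and trigger the warning, while an everyday low-value payment from a known device would not.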
The human element
This highlights an important fact. Fraud is perpetrated by human beings. They’re not infallible, and neither are the techniques and technologies that they use. That means that businesses can fight back against them.
It’s a challenge for sure, but Callsign’s whitepaper Online fraud, the psychology of scams and how technology can prevent them provides valuable insights into the technologies and approaches that will keep the fraudsters at bay.
About Andreas Eliasson
Andreas Eliasson has been working in fraud prevention and security for 10 years and is passionate about helping organisations protect their users. He is a Certified Information Systems Security Professional (CISSP) and holds two patent applications on improving the usability and effectiveness of authentication and online fraud prevention systems.
About Callsign
Callsign has a simple vision: we want to make digital identification seamless and secure. Our unique positive identification approach balances high security and user experience, allowing customers to interact online safely, with minimal friction, while ensuring that bad actors are blocked to protect customers’ identities and business interests.