By Doros Hadjizenonos, Regional Director Southern Africa, Fortinet

Looking for love in the digital world can be a treacherous journey, fraught with scammers and catfishers. And now, with the rise of deepfakes, the dangers have only escalated. As cybercriminals harness the power of AI to carry out their malicious schemes, the risk of being misled by seemingly genuine individuals has never been higher.

In its 2023 Threat Landscape Predictions, FortiGuard Labs highlights how these advanced technologies are increasingly being used to impersonate human behaviour with astonishing precision.

Confident tricksters have been defrauding victims for generations, but the emergence of ever more sophisticated technology is enabling them to do so faster, in greater numbers, and at lower risk to themselves. Attackers are even more likely to strike at romance-focused times like Valentine’s Day.

From Romance Scams to Social Engineering Attacks

Romance scams usually involve a cybercriminal building a relationship with the victim to gain their affection and trust, then exploiting that closeness to manipulate and steal from them. Some scammers also request intimate photos and videos and later use these to extort money.

According to the FBI’s latest report on Internet Crime, between 2019 and 2021 there was a staggering 25% increase in the number of complaints the agency received in the USA about romance scams. Victims lost a record high of $547 million in 2021 alone after being swindled by their cyber sweetheart.

The ease with which people can foster online relationships via dating apps, from the comfort of their own homes, gives hackers the perfect opportunity to create enticing and well-targeted lures. By also relying on social engineering tactics such as phishing, smishing or even vishing, cybercriminals try to fool people online and seize their sensitive data.

Cybercriminals Use AI to Master Deepfakes

Artificial Intelligence (AI) is already used defensively in many ways, such as detecting unusual behaviour that may indicate an attack, usually by botnets. However, this technology could also be used to create “deepfakes” – convincing hoax images, sounds, and videos.

Deepfake technology can be used within social engineering scams, with audio fooling people into believing trusted individuals have said something they did not. It can also be used to spread automated disinformation attacks, or even to create new identities and steal the identities of real people. This could eventually lead to impersonations over voice and video applications that could pass biometric analysis, posing challenges for secure forms of authentication such as voiceprints or facial recognition.

Even as deepfake technology continues to evolve, deepfakes can often be spotted by watching for unnatural activity or movement: a lack of blinking or normal eye movement; unusual or unnatural facial expressions or body shape; unrealistic-looking hair; abnormal skin colours; bad lip-syncing; and jerky movements or distorted images when a person moves or turns their head.
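One of those tells, an unnaturally low blink rate, can be checked programmatically. The sketch below is an illustrative heuristic, not a Fortinet product feature: it computes the widely used eye aspect ratio (EAR) from six eye-landmark coordinates (which a face-landmark detector such as dlib or MediaPipe would supply per video frame) and flags clips whose blink rate is far below human norms. The landmark ordering and all thresholds here are assumptions chosen for the example.

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks, ordered p1..p6 around the eye:
    (sum of the two vertical distances) / (2 * horizontal distance).
    Roughly ~0.3 for an open eye, dropping toward ~0.1 when it closes."""
    p1, p2, p3, p4, p5, p6 = eye
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_series, closed_thresh=0.21, min_frames=2):
    """Count blinks as runs of at least min_frames consecutive frames
    where the per-frame EAR falls below closed_thresh."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

def suspicious_blink_rate(ear_series, fps=30, min_blinks_per_min=5.0):
    """Humans blink roughly 15-20 times a minute; far fewer over a long
    clip is one possible (not conclusive) deepfake tell."""
    minutes = len(ear_series) / (fps * 60.0)
    if minutes == 0:
        return False
    return count_blinks(ear_series) / minutes < min_blinks_per_min
```

In practice the EAR series would come from a landmark model run on each frame; a signal like this is only one weak indicator and would be combined with the other cues above rather than used alone.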

As this technology becomes mainstream, we will need to change how we detect and mitigate threats, including using AI to detect voice and video anomalies.