Deepfakes - an evolving threat for financial institutions

Imagine a friend of yours asking for a video call. You accept – after all, you know this person very well.


During the call, the friend tells you he is involved in a profitable business deal. He desperately asks for a large sum of money he needs as a bidding deposit, promising to pay it back soon.


You trust this person, so you agree to send him the money.


This is what happened to a man in northern China. When he later spoke to his real friend, he realized he had been the victim of a deepfake scam.


Yes, the person he was talking to – and to whom he transferred 4.3 million yuan ($622,000) – was not his friend.


The scammer had used AI-powered face-swapping technology to impersonate the victim's friend during the video call and convince him to transfer the money.


What are Deepfakes?


Deepfakes are realistic videos or audio recordings, generated with artificial intelligence, that show people saying or doing things they never did.


They are created by training AI algorithms on large amounts of data, such as photos, videos, and audio recordings, of the target person.


This allows the AI to learn the person's mannerisms, facial expressions, and voice, and then generate new content that is almost indistinguishable from the real thing.


Why Financial Institutions Should Care


Deepfakes pose a significant threat to financial institutions due to their ability to manipulate both customers and employees. Scammers can employ deepfakes to deceive customers into revealing sensitive personal information or authorizing fraudulent transactions.


On the other hand, deepfakes can also be used to trick employees into believing they are interacting with legitimate customers, unknowingly enabling fraud.


How Financial Institutions Can Protect Themselves


There are a number of things that financial institutions can do to prevent deepfake fraud, including:


  • Educating customers and employees: Raising awareness about deepfakes and how they work can help both customers and employees identify and avoid these scams.

  • Implementing multi-factor authentication (MFA): MFA adds an extra layer of security by requiring multiple verification factors, such as passwords, one-time codes, or biometric authentication, to access accounts and perform sensitive transactions (a minimal sketch of one such factor follows this list).

  • Considering deepfake risk in policies and procedures: When designing identification and verification procedures, take deepfake risk into account. Use reliable technology when onboarding customers remotely, in accordance with national law.

  • Enhancing cybersecurity measures: Robust cybersecurity measures, including advanced firewalls, intrusion detection systems, and data encryption, can help protect against deepfake attacks and other cyber threats.

  • Continuously monitoring and adapting: Financial institutions should continuously monitor emerging deepfake techniques and adapt their security measures accordingly to stay ahead of evolving threats.

  • Collaborating with law enforcement: Collaboration with law enforcement agencies can help identify and apprehend the individuals behind deepfake scams and bring them to justice.

  • Using deepfake detection technology: A number of companies develop deepfake detection technology, which can help flag manipulated media – though no detector is foolproof, so it should complement, not replace, other controls (a rough integration sketch follows this list).

  • Monitoring customer activity for anomalies: Financial institutions should monitor customer activity for suspicious patterns that could indicate fraud. For example, if a customer suddenly makes an unusually large wire transfer or a purchase from an unfamiliar merchant, this could be a sign of fraud (a simple z-score sketch follows this list).
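To make the MFA point concrete, here is a minimal sketch of one common second factor – a time-based one-time password (TOTP, per RFC 6238) – using only Python's standard library. The Base32 secret in the usage line is an illustrative placeholder, and a production system would rely on a vetted authentication library rather than hand-rolled code.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        """Generate the current time-based one-time password (RFC 6238, HMAC-SHA1)."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval          # 30-second time step
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    def verify(secret_b32: str, submitted: str) -> bool:
        """Compare the submitted code against the current one in constant time."""
        return hmac.compare_digest(totp(secret_b32), submitted)

    # Usage (placeholder secret): verify("JBSWY3DPEHPK3PXP", code_typed_by_customer)

The point is that even a perfect face-swap cannot reproduce a secret held on the customer's own device, which is why a second factor blunts this class of scam.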
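As a rough illustration of how detection technology might be wired into a review workflow, the sketch below samples frames from a recorded video and averages a model's fake-probability scores. The model file name (detector.onnx), its 224x224 RGB input, and its single-probability output are assumptions standing in for whatever a vendor actually supplies.

    import cv2                 # pip install opencv-python
    import numpy as np
    import onnxruntime as ort  # pip install onnxruntime

    # "detector.onnx" is a placeholder for a vendor-supplied model; the
    # 224x224 RGB input and single fake-probability output are assumptions.
    session = ort.InferenceSession("detector.onnx")
    input_name = session.get_inputs()[0].name

    def frame_score(frame: np.ndarray) -> float:
        """Return the model's fake probability for a single BGR video frame."""
        img = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
        x = img.astype(np.float32)[None].transpose(0, 3, 1, 2) / 255.0  # NCHW, [0, 1]
        return float(session.run(None, {input_name: x})[0].squeeze())

    def video_fake_probability(path: str, sample_every: int = 15) -> float:
        """Average the fake probability over sampled frames of a video file."""
        cap = cv2.VideoCapture(path)
        scores, i = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if i % sample_every == 0:
                scores.append(frame_score(frame))
            i += 1
        cap.release()
        return sum(scores) / len(scores) if scores else 0.0

    # e.g. route the session to manual review if video_fake_probability("call.mp4") > 0.8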
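Finally, a minimal sketch of anomaly monitoring: the function below flags a transfer whose amount deviates sharply from a customer's recent history using a simple z-score rule. Real transaction-monitoring systems use far richer features; the threshold and minimum-history values here are illustrative assumptions.

    from statistics import mean, stdev

    def is_anomalous(history: list[float], amount: float, z_threshold: float = 3.0) -> bool:
        """Flag a transfer whose amount deviates strongly from the customer's history."""
        if len(history) < 10:       # too little history: route to manual review instead
            return True
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return amount != mu
        return abs(amount - mu) / sigma > z_threshold

    # A customer who usually sends around 1,000 suddenly wires 620,000:
    history = [950, 1100, 1020, 980, 1200, 870, 1050, 990, 1130, 1010]
    print(is_anomalous(history, 620_000))   # True -> hold the transfer for review

Even a rule this simple could have flagged the 4.3-million-yuan transfer from the opening story for review before the money left the account.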

Deepfakes are a serious threat to the financial industry, but there are concrete steps financial institutions can take to protect themselves – and their customers.
