Hackers Used AI to Duplicate LastPass CEO’s Voice

Written by Matt Milano
LastPass is warning of the danger AI poses to security after hackers tried to attack the company by duplicating CEO Karim Toubba's voice.

Deepfakes, both audio and visual, have been a growing concern among AI and security experts, and there have already been incidents in which hackers used them to compromise companies. LastPass is warning of just such an attempt, although the company successfully thwarted it thanks to a vigilant employee.

"While reports of these sorts of deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert," the company wrote in a blog post. "In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee's suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally."

The company assures users the attack was unsuccessful.

"To be clear, there was no impact to our company," LastPass continued. "However, we did want to share this incident to raise awareness that deepfakes are increasingly not only the purview of sophisticated nation-state threat actors and are increasingly being leveraged for executive impersonation fraud campaigns. Impressing the importance of verifying potentially suspicious contacts by individuals claiming to be with your company through established and approved internal communications channels is an important lesson to take away from this attempt. In addition to this blog post, we are already working closely with our intelligence sharing partners and other cybersecurity companies to make them aware of this tactic to help organizations stay one step ahead of the fraudsters."

This incident underscores the importance of training employees to recognize potential threats, including those from sources that appear legitimate. As AI usage continues to spread, these kinds of attacks will become all too common.
