McAfee unveils Project Mockingbird to combat AI voice scams

McAfee announced Project Mockingbird, an AI-powered deepfake audio detection technology, at CES 2024. This industry-leading advanced AI model from McAfee Labs is trained to detect AI-generated audio.

Cybercriminals are increasingly employing advanced AI models to run convincing scams, including voice cloning to impersonate distressed family members asking for money.

These "cheapfake" scams may involve manipulating authentic videos, such as newscasts or celebrity interviews, by splicing in fake audio to change the words coming out of someone's mouth.

To combat this growing issue, McAfee’s Project Mockingbird technology uses a combination of AI-powered contextual, behavioral, and categorical detection models to identify whether the audio in a video is likely AI-generated.
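McAfee has not published implementation details, but conceptually, an ensemble like the one described might combine per-detector scores into a single likelihood that a clip is AI-generated. A minimal illustrative sketch, where the function names, weights, and threshold are all hypothetical and do not reflect McAfee's actual design:

```python
# Hypothetical sketch: fuse scores from contextual, behavioral, and
# categorical detectors into one "likely AI-generated" verdict.
# Weights and threshold are illustrative only, not McAfee's implementation.

def combine_scores(contextual: float, behavioral: float, categorical: float,
                   weights: tuple = (0.4, 0.3, 0.3)) -> float:
    """Weighted average of per-detector scores, each in [0, 1]."""
    scores = (contextual, behavioral, categorical)
    return sum(w * s for w, s in zip(weights, scores))


def is_likely_ai_generated(contextual: float, behavioral: float,
                           categorical: float, threshold: float = 0.5) -> bool:
    """Flag the audio when the combined score crosses the threshold."""
    return combine_scores(contextual, behavioral, categorical) >= threshold


# Example: two detectors flag the clip strongly, one is unsure.
print(is_likely_ai_generated(0.9, 0.8, 0.4))  # combined score 0.72 -> True
```

Reporting a combined probability rather than a hard yes/no also matches the "weather forecast" framing McAfee uses in its announcement.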

It is said to have an accuracy rate of more than 90%. The first public demos of Project Mockingbird took place at CES 2024, with wider availability expected later this year.

Commenting on the announcement, Steve Grobman, Chief Technology Officer at McAfee, said:

With McAfee’s latest AI detection capabilities, we will provide customers a tool that operates at more than 90% accuracy to help people understand their digital world and assess the likelihood of content being different than it seems. So, much like a weather forecast indicating a 70% chance of rain helps you plan your day, our technology equips you with insights to make educated decisions about whether content is what it appears to be.

The use cases for this AI detection technology are far-ranging and will prove invaluable to consumers amidst a rise in AI-generated scams and disinformation. With McAfee’s deepfake audio detection capabilities, we’ll be putting the power of knowing what is real or fake directly into the hands of consumers.

We’ll help consumers avoid ‘cheapfake’ scams where a cloned celebrity is claiming a new limited-time giveaway, and also make sure consumers know instantaneously when watching a video about a presidential candidate, whether it’s real or AI-generated for malicious purposes. This takes protection in the age of AI to a whole new level. We aim to give users the clarity and confidence to navigate the nuances in our new AI-driven world, to protect their online privacy and identity, and well-being.