Deepfakes to cost $40 billion by 2027 as adversarial AI gains momentum
Now one of the fastest-growing forms of adversarial AI, deepfake-related losses are expected to grow from $12.3 billion in 2023 to $40 billion by 2027, a compound annual growth rate of 32%, according to Deloitte. Deloitte expects deepfakes to proliferate in the coming years, with banking and financial services a prime target.
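The arithmetic behind that projection is easy to verify. A minimal Python sketch, using only the figures cited above (the implied rate comes out slightly above the rounded 32% Deloitte quotes):

```python
# Sanity check on the Deloitte projection: $12.3B (2023) -> $40B (2027).
initial, final, years = 12.3, 40.0, 4

# Implied compound annual growth rate over the four-year span.
cagr = (final / initial) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~34.3%, close to the cited ~32%

# Forward projection at the cited 32% rate lands just under $40B.
print(f"2027 projection at 32%: ${initial * 1.32 ** years:.1f}B")  # ~$37.3B
```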
Deepfakes typify the leading edge of adversarial AI attacks, up 3,000% in the past year alone. Deepfake incidents are predicted to rise by 50% to 60% in 2024, to between 140,000 and 150,000 cases globally this year.
The latest generation of generative AI applications, tools and platforms gives attackers what they need to create deepfake videos, cloned voices and fraudulent documents quickly and at very low cost. Pindrop’s 2024 Voice Intelligence and Security Report estimates that deepfake fraud targeting contact centers costs around $5 billion annually. The report underscores how serious a threat deepfake technology is to banking and financial services.
Bloomberg reported last year that “there is already an entire cottage industry on the dark web selling scam software from $20 to thousands of dollars.” A recent infographic based on Sumsub’s 2023 Identity Fraud Report provides a global view of the rapid growth of AI-powered fraud.
Source: Statista, How dangerous are Deepfakes and other AI-powered scams? March 13, 2024
Enterprises are not prepared for deepfakes and adversarial AI
Adversarial AI creates attack vectors no one sees coming, producing a more complex, nuanced threat landscape that prioritizes identity-driven attacks.
Surprisingly, one in three enterprises has no strategy for addressing the risks of an adversarial AI attack, which would most likely begin with deepfakes of their key executives. Ivanti’s latest research finds that 30% of enterprises have no plans to identify and defend against adversarial AI attacks.
The Ivanti 2024 State of Cybersecurity Report found that 74% of enterprises surveyed are already seeing evidence of AI-powered threats. The vast majority, 89%, believe AI-powered threats are just getting started. Sixty percent of the CISOs, CIOs and IT leaders Ivanti surveyed fear their enterprises are not prepared to defend against AI-powered threats and attacks. Using a deepfake as part of an orchestrated strategy that combines phishing, software vulnerabilities, ransomware and API-related vulnerabilities is becoming increasingly common. This aligns with the threats security professionals expect to become more dangerous because of generative AI.
Source: Ivanti 2024 State of Cybersecurity Report
Attackers focus deepfake efforts on CEOs
VentureBeat regularly hears from enterprise software cybersecurity CEOs, who prefer to remain anonymous, about how deepfakes have progressed from easily identified fakes to recent videos that look legitimate. Audio and video deepfakes appear to be a preferred attack strategy aimed at industry executives, with the goal of defrauding their companies of millions of dollars. Compounding the threat, nation-states and large-scale cybercriminal organizations are aggressively doubling down on developing, deploying and honing their expertise with generative adversarial network (GAN) technologies. Of the thousands of CEO deepfake attempts that have occurred this year alone, the one targeting the CEO of the world’s largest advertising firm shows just how sophisticated attackers are becoming.
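For context, a GAN pairs two networks trained against each other: a generator that fabricates samples and a discriminator that tries to tell them apart from real data. The toy PyTorch sketch below illustrates that adversarial loop in the abstract; the network sizes, random stand-in "real" data and hyperparameters are illustrative assumptions, not any attacker’s actual tooling.

```python
# Toy sketch of the generator/discriminator dynamic behind GANs.
# All dimensions and data here are illustrative assumptions.
import torch
import torch.nn as nn

LATENT, DATA = 16, 64  # assumed latent and sample dimensions

generator = nn.Sequential(
    nn.Linear(LATENT, 128), nn.ReLU(),
    nn.Linear(128, DATA), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(DATA, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),  # raw logit: real vs. generated
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, DATA)   # stand-in for real samples
    noise = torch.randn(32, LATENT)
    fake = generator(noise)

    # Discriminator step: push real toward label 1, generated toward 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

As the loop runs, each network improves by exploiting the other’s weaknesses, which is why GAN output quality, including deepfakes, climbs as quickly as it does.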
On a recent episode of the Wall Street Journal’s Tech News Briefing, CrowdStrike CEO George Kurtz explained how improvements in AI are helping cybersecurity practitioners defend systems, and how attackers are using the same technology. Kurtz spoke with WSJ reporter Dustin Volz about AI, the 2024 U.S. election and the threats posed by China and Russia.
“Deepfake technology today is very good. I think that’s one of the areas that you really worry about. I mean, in 2016, we were tracking this, and you would see that people were actually just having conversations with bots, and this was in 2016. And they’re literally arguing or promoting their cause and they’re having an interactive conversation, and it’s like nobody’s behind the thing. So I think it’s very easy for people to get wrapped up in what’s true, or there’s a narrative that we want to leave behind, but a lot of it can be directed and has been driven by other nation-states,” Kurtz said.
CrowdStrike’s intelligence team has invested significant time in understanding what makes a deepfake convincing and where the technology is heading to achieve maximum impact on viewers.
Kurtz continued, “And what we’ve seen in the past, we’ve spent a lot of time researching this with our CrowdStrike intelligence team, is a little bit like a pebble in a pond. Like you’re going to get a topic or you’re going to hear a topic, anything related to the geopolitical environment, and the pebbles fall into the pond and then all these waves ripple out. And it’s this reinforcement that happens.”
CrowdStrike is known for its deep expertise in AI and machine learning (ML) and for its unique single-agent model, which has proven effective in driving its platform strategy. With such deep expertise in the company, it’s understandable that its teams would experiment with deepfake technologies.
“And now, in 2024, with the ability to create deepfakes, some of our inside guys have made some funny video pranks on me, just to show me how scary it is, and you couldn’t tell it wasn’t me in the video. So I think that’s one of the areas that really concerns me,” Kurtz said. “There’s always concern about infrastructure and things like that. In those areas, a lot of it is still paper voting and the like. Some of it isn’t, but how you create the false narrative to get people to do things that a nation-state wants them to do, that’s the area that really concerns me.”
Enterprises must rise to the challenge
Enterprises risk losing the AI war if they don’t keep pace with attackers’ rapid weaponization of AI for deepfake attacks and all other forms of adversarial AI. Deepfakes have become so common that the Department of Homeland Security has issued a guide, Increasing Threats of Deepfake Identities.