What’s the answer to the cybersecurity and AI dilemma?
Generative AI first rocked the world with excitement, but concerns soon began to mount...
When generative AI (GenAI) tools like ChatGPT, Bard, Stable Diffusion and others hit the market last year, heralding new business use cases at scale, the world shook with excitement. But soon after, fear and uncertainty kicked in, as is often the case with new technologies.
While GenAI tools offer undeniably incredible benefits to the world, they can also be put to nefarious use. On one hand, there are concerns about the existential risk of AI, as I wrote in a previous article on BrainBox. On the other, bad actors can use these tools (and already are) to launch malicious attacks with devastating speed and accuracy.
And so, there’s a dilemma: How do we stop good from becoming bad?
A cat and mouse game
Last month I was at Upgrade 2024, NTT Research’s annual conference in San Francisco, where I listened to several sessions curated by industry experts across cybersecurity, AI and photonics, and watched with keen interest demos of some of the most exciting innovations I’ve seen in the past year.
From the concept of an ‘AI constellation’, where multiple AI models work together, to the IOWN (Innovative Optical and Wireless Network) initiative, which aims to power the future of communications infrastructure with photonics and computing technologies, NTT remains resolute in its quest to build the foundational technologies that will power the future of compute. This is perhaps unsurprising when you consider that NTT invests nearly $4 billion in R&D every year.
But while every session at the conference was impactful, the intricate dance between AI and cybersecurity has always been of great interest to me. So, I spoke with Mihoko Matsubara, chief cybersecurity strategist at NTT, to drill deeper into this dilemma.
“It’s a cat-and-mouse game,” says Matsubara, who adds that “generative AI is a double-edged sword in cybersecurity.”
The double-edged sword
Matsubara, who spoke about ‘generative AI and the cybersecurity response’ at Upgrade 2024, notes that while GenAI can enhance our defenses, it also empowers bad actors to launch more sophisticated attacks, a view echoed by many other experts.
Last year, cybersecurity company and Google subsidiary Mandiant reported “evidence of financially motivated actors using manipulated voice and video content in business email compromise (BEC) scams.” And just in February of this year, hackers used a deepfake video over a conference call to scam a Hong Kong-based multinational firm out of $25 million.
Given the complexity of today’s cybersecurity landscape, Matsubara emphasizes the need to empower cyber defenders by automating routine tasks and using AI for research. More importantly, she stresses the importance of protecting critical datasets from data poisoning, a technique in which bad actors corrupt the data used to train AI models in order to influence or manipulate the models’ behavior. One classic example is Nightshade, “an optimized prompt-specific poisoning tool” designed to confuse text-to-image generative models. While Nightshade was created to help artists fight back in the AI copyright war, it illustrates how data poisoning works.
“If we don’t have good data protection, it will have very negative consequences like reputational damages or harassment or losses,” Matsubara warns.
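To make the data-poisoning idea concrete, here is a minimal, hypothetical sketch (not NTT’s approach or Nightshade’s actual method): a toy nearest-centroid classifier is trained on clean data, then retrained after an attacker injects mislabeled points into the training set, dragging one class’s centroid out of place.

```python
import random

random.seed(0)

# Toy dataset: one feature, class 0 clusters near 0.0, class 1 near 1.0.
clean_train = [(random.gauss(0.0, 0.1), 0) for _ in range(100)] + \
              [(random.gauss(1.0, 0.1), 1) for _ in range(100)]
test_set = [(random.gauss(0.0, 0.1), 0) for _ in range(50)] + \
           [(random.gauss(1.0, 0.1), 1) for _ in range(50)]

def train_centroids(data):
    """Nearest-centroid model: learn the mean feature value per class."""
    centroids = {}
    for label in (0, 1):
        xs = [x for x, y in data if y == label]
        centroids[label] = sum(xs) / len(xs)
    return centroids

def accuracy(centroids, data):
    predict = lambda x: min(centroids, key=lambda c: abs(x - centroids[c]))
    return sum(predict(x) == y for x, y in data) / len(data)

# Poisoning: the attacker injects far-away points mislabeled as class 0,
# dragging the class-0 centroid past the class-1 centroid.
poison = [(random.gauss(3.0, 0.1), 0) for _ in range(100)]
poisoned_train = clean_train + poison

clean_acc = accuracy(train_centroids(clean_train), test_set)
poisoned_acc = accuracy(train_centroids(poisoned_train), test_set)
print(f"clean model accuracy:    {clean_acc:.2f}")     # near 1.00
print(f"poisoned model accuracy: {poisoned_acc:.2f}")  # collapses toward 0.50
```

Real attacks are far subtler (Nightshade, for instance, optimizes image perturbations that look benign to humans), but the failure mode is the same: the model faithfully learns from corrupted training data.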
Future projections
Cybersecurity Ventures estimates global cybercrime damages will reach $10.5 trillion by 2025. For context, that’s roughly 10% of global GDP. Business leaders and cyber defenders must keep this figure in mind to protect global and national economies, as well as business operations, says Matsubara, who acknowledges that GenAI has already accelerated efforts by cybercriminals and state-sponsored actors to launch malicious cyberattacks.
She predicts an even greater rise in BEC scams using deepfake videos and state-sponsored actors leveraging AI to influence democracies. But Matsubara also says businesses can address this challenge strategically using GenAI. For example, businesses can empower cyber defenders with GenAI tools like ChatGPT to detect phishing sites “at machine speed”.
Operating at machine speed is essential for cyber defenders, especially when you consider Gartner’s report that “nearly half of cybersecurity leaders will change jobs, 25% for different roles entirely due to multiple work-related stressors.” The work is simply overwhelming, and cyber defenders can’t keep up at their current pace.
“By empowering cyber defenders with generative AI, protecting against data poisoning, and addressing the cybersecurity talent gap, we can stay ahead of the curve and safeguard our critical data assets,” Matsubara says.