

AI and Cybersecurity: Expert Warns of Growing Threats from Automated Attacks


While many discussions at the HumanX conference focused on AI’s promise, one speaker urged caution during the event’s final sessions.

“We’re barely scratching the surface when it comes to AI and cybersecurity,” said Alex Stamos, Chief Information Security Officer at SentinelOne. “The biggest changes are still ahead of us.”

Conversations around AI security often highlight the risks posed by AI model failures. Stamos categorized these failures into three types: traditional security vulnerabilities, where attackers manipulate systems beyond their intended purpose; safety risks, where AI causes harm to individuals; and alignment issues, where AI deviates from expectations—such as a chatbot becoming uncooperative with customers.

Failing to differentiate between these issues creates challenges, he warned. “If you mix them up, you won’t be able to test them properly,” Stamos said.

However, the broader concern lies in how AI is reshaping the cybersecurity landscape itself. Stamos, who previously served as CISO at Yahoo and Facebook and now lectures at Stanford University, described a future where cybersecurity is defined by AI-driven conflicts.

“The future of cyber defense is human oversight of machine-to-machine combat,” he explained.

On the defensive side, AI already automates many tasks that security analysts once handled manually. Instead of an analyst spending 30 minutes investigating a security threat, AI now compiles the relevant data, allowing security professionals to make informed decisions in seconds.

But the next phase removes humans from the attack side entirely. “That’s because attackers won’t be directly involved for much longer,” Stamos said. “We’re going to see an explosion of AI-driven cybercrime, particularly from financially motivated actors.”

He highlighted one of the most aggressive players: North Korea’s hacking units, which have pulled off major cyber heists, including a $1.4 billion cryptocurrency theft. Unlike intelligence agencies in countries like China, which must operate with extreme caution, groups like North Korea’s Lazarus Group act with far fewer restrictions.

“If you’re part of China’s Ministry of State Security and your mission is to hack Lockheed Martin, failure isn’t an option—you have to get it right,” Stamos said. “But groups like Lazarus don’t have to be that careful. Their goal is simply to compromise as many systems as possible and see who they can extort.”

He explained that AI is already helping cybercriminals streamline their attacks. While security experts once assumed hackers relied on pre-built malware from underground markets, AI is making it easier for even less experienced attackers to generate their own malicious code.

To illustrate, Stamos demonstrated how readily available AI tools—such as Microsoft Copilot—could assist in developing malware. “You can’t just ask it to ‘write a Windows worm,’” he said. “But you can request each individual component you need to build one.”

This shift could dramatically change the cybersecurity landscape, making sophisticated attacks more accessible than ever before.
