
Microsoft deputy CISO says gen AI can give organizations the upper hand

Presented by Microsoft


Cybercriminals remain more agile than corporations – they're not constrained by the need for C-suite approval, testing, and regulatory checks and balances. That's left a lot of organizations essentially showing up to a gunfight with a butter knife. But generative AI has the potential to radically shift that balance, Kelly Bissell, deputy CISO and corporate VP at Microsoft, told VentureBeat CEO Matt Marshall during the Atlanta stop on the AI Impact Tour.

“What I hope we can do as a community is adopt AI as quickly and safely as possible so we can have the upper hand against the attackers, and not wait to be left behind,” he said.

Supercharging security with gen AI

Security is one of the most powerful applications of practical AI. Early adopters of Microsoft Copilot for Security have found that's especially true for incident response (particularly in reducing the impact of attacks) and for improving the security operations center (SOC) while dramatically lowering costs.

Microsoft Copilot for Security is informed by large-scale data and threat intelligence, including more than 78 trillion security signals processed daily, coupled with large language models to deliver tailored insights and guide next steps. That makes it far easier to detect subtle attacks and significantly reduce response times, to the point where security teams can stay ahead of cybercriminals. Its LLM, tailored for security applications, makes it user-friendly, accepting natural language queries and returning actionable responses.
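To make that workflow concrete, here is a minimal, hypothetical sketch of the pattern described above: an analyst's natural-language question is combined with threat-intelligence context and sent to a security-tuned LLM, which returns a suggested next step. The names here (Signal, build_prompt, ask_security_copilot, llm.complete) are illustrative placeholders, not Microsoft Copilot for Security's actual API.

```python
# Hypothetical sketch of the natural-language SOC workflow described above.
# The LLM client and threat-intel types are placeholders, not Microsoft's API.
from dataclasses import dataclass

@dataclass
class Signal:
    source: str   # e.g. "endpoint", "identity", "email"
    summary: str  # short human-readable description of the telemetry

def build_prompt(question: str, signals: list[Signal]) -> str:
    """Combine the analyst's question with recent telemetry context."""
    context = "\n".join(f"- [{s.source}] {s.summary}" for s in signals)
    return (
        "You are a security assistant. Using the signals below, "
        "answer the analyst's question and suggest a next step.\n"
        f"Signals:\n{context}\n\nQuestion: {question}"
    )

def ask_security_copilot(llm, question: str, signals: list[Signal]) -> str:
    """Send the assembled prompt to a security-tuned LLM (placeholder call)."""
    return llm.complete(build_prompt(question, signals))

# Example usage, given whatever LLM client an organization has wired in:
# answer = ask_security_copilot(llm, "Was host FIN-42 part of the phishing wave?",
#                               recent_signals)
```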

These features help make brand-new junior SOC analysts far more capable right out of the gate, reducing the time and cost of bringing them up to speed. At the same time, the tool significantly cuts the time it takes to write scripts or analyze incidents, from days or hours down to minutes.

“I have a pharmaceutical company that thinks they can save, just in the security team, $50 million a year, and they can be more secure at the same time,” Bissell says. “That’s what we’re after. How do we get better security at a lower cost? That’s what we’re seeing for more than half of the companies.”

The early adopters have also revealed the need for both training and transparency. AI is not a tool that lends itself to winging it, Bissell says. Microsoft has added logging to the tool, giving customers the ability to monitor the AI as it runs, as well as a red-team tool that lets them verify the results the platform delivers, in a trust-but-verify kind of way. That has improved both the effectiveness of the platform and customer confidence in it.
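A trust-but-verify setup of this kind can be approximated with a thin audit wrapper. The sketch below is a generic illustration of the logging idea, assuming a placeholder llm.complete call; it is not Copilot for Security's actual logging interface.

```python
# Generic trust-but-verify wrapper: record every AI query and response so a
# red team (or an automated checker) can later audit what the model did.
# Illustrative only; not Copilot for Security's logging implementation.
import json
import logging
import time

logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

def audited_query(llm, prompt: str) -> str:
    """Run an LLM query and append a structured audit record to the log."""
    started = time.time()
    response = llm.complete(prompt)  # placeholder LLM call
    logging.info(json.dumps({
        "ts": started,
        "latency_s": round(time.time() - started, 3),
        "prompt": prompt,
        "response": response,
    }))
    return response
```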

Gen AI security in action

Bissell pointed to the pharmaceutical customer as a good example of the ways tools like Copilot for Security can support a broad array of use cases. The company started off with a healthy dose of skepticism, he said, but the platform has only unlocked more possibilities.

One of the major promises is a reduction in the fraud that often occurs around clinical trials. Bad actors have a vested interest – a literal, financial one – in whether the drug passes FDA inspections: falsified results and insider trading are big risks. AI-powered security applied to the operational technology in the manufacturing plant or lab can monitor the equipment, not only detecting signs of failure but alerting the company to potential tampering. At the same time, the company is also looking at ways to improve drug and polymer research.
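As a toy illustration of that kind of equipment monitoring (not the customer's actual system), the sketch below flags sensor readings that drift far from their recent baseline, which could indicate failure or tampering; the function name and threshold are assumptions for the example.

```python
# Toy anomaly check for operational-technology telemetry: flag readings more
# than `threshold` standard deviations from the series mean. Illustrative only.
from statistics import mean, stdev

def flag_anomalies(readings: list[float], threshold: float = 2.0) -> list[int]:
    """Return indices of readings more than `threshold` std devs from the mean."""
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [i for i, r in enumerate(readings) if abs(r - mu) / sigma > threshold]

# Example: a reactor temperature series with one suspicious spike.
temps = [70.1, 70.3, 69.8, 70.0, 70.2, 88.5, 70.1]
print(flag_anomalies(temps))  # -> [5]
```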

“They build better products and shorten the cycle of go-to-market for drugs. That’s worth billions of dollars,” he said. “They have a patent that only lasts 10 years. If they can get to market faster, they can hold on to that market share more before it goes to the public.”

But the transformation of the SOC is potentially the most impactful of these use cases, especially as cybercriminals adopt generative AI and go to work without the guardrails that encumber organizations.

“We’ve seen a dramatic adoption of what I would call open-source AI from attackers to be able to use and build models,” Bissell said. “You don’t have to be a good hacker to write malware anymore. Just like we can write code faster, like GitHub, so can the attackers. The volume of attackers is going to dramatically increase. The race is just beginning.”


VB Lab Insights content is created in collaboration with a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.