Will LLM AI Close The Bad Guys’ Skills Gap? with Adrian Sanabria
This episode is a bit scary. Adrian Sanabria, who on an earlier show busted many cybersecurity myths, is back again, this time analyzing the impact of Large Language Model Artificial Intelligence on a hypothesized skills gap on the bad guy side.
Premise One: Given how many vulnerable organizations have NOT been breached, the bad guys must be suffering the same skills gap we are.
Premise Two: Exploit attacks (think ransomware, data hostage situations, threats to publish breached data, etc.) can benefit from LLM AI.
Connecting those dots really is that simple. Adrian and Allan deconstruct the steps of an exploit attack, analyze the capabilities of LLM AI, and cross-reference the two.
If they are right, then the burden is on us to learn and leverage LLM AI ourselves, as quickly as possible...
Sponsored by our good friends at Dazz:
Dazz takes the pain out of the cloud remediation process using automation and intelligence to discover, reduce, and fix security issues—lightning fast. Visit Dazz.io/demo and see for yourself.