While most people believe that artificial intelligence is real, many still say humans pose a greater cybersecurity danger than machines. That’s according to a flash poll of webinar attendees presented by ITSP Magazine.
Ninety-one percent said AI is real, while 83% said they believed humans still pose a greater threat. The polling suggests that while AI has come a long way, advances are still largely dependent on human intervention.
That theme was central throughout the webinar. In fact, one of the speakers was flat-out skeptical of AI, at least as it’s conventionally described. While machine learning is in use now, he said, no machine can learn entirely on its own. Machines are still reliant on humans to feed them data and train the algorithms.
Yet that was just one view reflected in the webinar – AI & Machine Learning in Cybersecurity. What Is the Difference? – which brought together an industry panel moderated by ITSP Magazine Editor in Chief Sean Martin. The polling questions and initial reactions helped set up the discussion.
Below are some of our takeaways from the webinar.
1) Machine learning is a subset of AI.
According to the panel, machine learning is a core part of AI, but most practical applications don’t go beyond machine learning. AI suggests the ability to reason and organize – a sort of artificial intuition – that is able to make educated guesses based on fragments of data.
Another speaker generally agreed but pointed out that self-driving cars are closer to “true” AI. This is because self-driving cars must perceive and react to an environment – the road – rather than merely sorting data. Movies and television are primarily responsible for the notion of machines having “general intelligence” or “strong AI,” which, the panelists said, is for now still science fiction.
A third panelist compared the differences between AI and machine learning to natural human intelligence and human learning. Sure, we can learn facts, but we also learn to interact with our environment. We learn to process geospatial information from our eyes, we learn to pick up objects with a sense of feel, and while our ears can hear words, it’s our brains that understand the context.
2) The risks of machine learning in the wrong hands.
Views diverged as to how prominent machine learning is in current cyber-attacks. On one hand, some said most attacks are still engineered by humans. On the other, machine learning is already used to manage botnets and to sort data and identify softer targets.
The panelists offered some specific use cases, which like the evolution of the cybersecurity industry itself, ranged from a mere nuisance to the highly sophisticated:
- Captchas. These are easy targets for machine learning. Bad actors might pit two systems against each other so that they learn from each other.
- Robocalls. While not precisely a cyber threat, robocalls, with the help of machine learning, are becoming highly sophisticated. There’s potential for advancement given it’s far less expensive to host the algorithms on a server than it is to employ a warehouse of employees on telephones.
- Phishing emails. Security professionals use machine learning to detect phishing emails by feeding the machine examples of what is – and what is not – a suspect message. Hackers can easily employ the same technology to understand which emails produce clicks. The potential here is to develop customized phishing emails at scale that “approach spear-phishing accuracy.”
“Attackers are increasingly relying on highly targeted, non-payload attacks that exploit trust and leverage pressure tactics to trick users into taking action that will put their organizations at risk. Of the more than 537,000 phishing threats GreatHorn detected in its research, 91 percent (490,557) contained characteristics of display name spoofs.”
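The example-driven detection the panel described can be sketched with a toy text classifier. This is a minimal illustration, not any vendor’s actual method: it uses a simple naive Bayes word model in pure Python, and the labeled training messages below are invented for demonstration.

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs -> per-label word counts and message totals."""
    counts = {"phish": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in examples:
        for word in tokenize(text):
            counts[label][word] += 1
        totals[label] += 1
    return counts, totals

def score(text, counts, totals):
    """Log-likelihood ratio: positive means 'phish' is the more likely label."""
    result = math.log(totals["phish"] / totals["ham"])
    vocab = set(counts["phish"]) | set(counts["ham"])
    for word in tokenize(text):
        # Laplace smoothing so unseen words don't zero out the score.
        p = (counts["phish"][word] + 1) / (sum(counts["phish"].values()) + len(vocab))
        h = (counts["ham"][word] + 1) / (sum(counts["ham"].values()) + len(vocab))
        result += math.log(p / h)
    return result

# Invented examples of what is -- and what is not -- a suspect message.
examples = [
    ("verify your account password urgently", "phish"),
    ("click here to claim your prize now", "phish"),
    ("meeting notes attached for review", "ham"),
    ("lunch on thursday works for me", "ham"),
]
counts, totals = train(examples)
print(score("urgently verify your password", counts, totals) > 0)  # True: flagged as phish
```

The same mechanism, aimed the other way, is the attacker’s opportunity the panel warned about: relabel the training pairs with “clicked” versus “ignored,” and the model scores which phrasing produces clicks.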
3) Opportunities for strengthening cybersecurity.
The webinar also explored use cases for machine learning in strengthening enterprise cybersecurity, including the following:
- Analysis of massive data sets. Security professionals are drowning in data. The volume of data is more than humans can consume. Machine learning can be taught to sort and prioritize data.
- Faster identification of anomalies. The panel noted one of the biggest opportunities for machine learning is to help filter out the excessive noise of security alerts – and direct human attention on those alerts that really matter. The speed of detection is important because the speed of attacks is accelerating.
- Understanding behavior. Machine learning holds great promise in understanding behavior and modeling it across the enterprise. Modeling could include not only routine traffic for identifying anomalies, but also the characteristics of malware and other threats.
The Bricata “solution enables the security operations professional to hunt for suspicious behavior and anomalous or untrusted traffic. The addition of the Cylance engine enables protection against the latest threats, such as ransomware and zero-day malware. Bro’s file carving, analysis, and scoring provides one more layer of defense and context in filtering alerts, optionally passed to Security Information and Event Management (SIEM) and Log solutions for further review via out of the box APIs.”
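The baseline-and-flag pattern behind anomaly detection can be sketched in a few lines. This is a simplified illustration assuming a statistical model of routine traffic (per-port mean and standard deviation of hourly connection counts); the traffic figures and the 3-sigma threshold are invented for the example.

```python
import statistics

def build_baseline(history):
    """history: {port: [hourly connection counts]} -> {port: (mean, stdev)}"""
    return {port: (statistics.mean(xs), statistics.stdev(xs))
            for port, xs in history.items()}

def flag_anomalies(observed, baseline, threshold=3.0):
    """Return ports whose observed count deviates more than `threshold`
    standard deviations from the learned baseline."""
    flagged = []
    for port, count in observed.items():
        mean, stdev = baseline[port]
        z = (count - mean) / stdev if stdev else float("inf")
        if abs(z) > threshold:
            flagged.append(port)
    return flagged

# Invented routine traffic: steady HTTPS, occasional RDP sessions.
history = {
    443: [120, 130, 125, 118, 127, 122],
    3389: [2, 1, 3, 2, 1, 2],
}
baseline = build_baseline(history)
# A sudden burst on RDP stands out; normal HTTPS variation does not.
print(flag_anomalies({443: 128, 3389: 40}, baseline))  # [3389]
```

Filtering alerts this way is how machine learning directs human attention to the alerts that really matter: routine fluctuation stays quiet, while the unusual port activity a skilled hunter would notice gets surfaced automatically.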
Data, Context, and Replicating your Best Cyber Talent
Debriefs with top cyber talent following an incident usually find common ground, according to the panel. The smart hunters often point to intuition or inference and say a sequence of events seemed odd, or a port was experiencing unusual activity.
These anecdotes capture a human intuition that “can be turned into math.” The goal with machine learning is to feed this into a machine so that it has the context to reach the same conclusions an experienced cybersecurity professional might.
The caveat, according to one panelist, is that “math is magical, but not magic.”
The full webinar is available on the ITSP Magazine website under the “TV” section.
If you enjoyed this post, you might also like:
How to Buy Cybersecurity