If you’ve ever watched a cartoon like Tom and Jerry, you’ll recognize a common theme: an elusive target evading a formidable foe. This game of “cat and mouse” means pursuing something that slips away every time you get close, whether literally or figuratively.
Similarly, keeping persistent hackers at bay is a constant challenge for cybersecurity teams. Rather than chase an unreachable target, MIT researchers are developing an AI approach called “artificial adversarial intelligence” that mimics attackers on devices and networks so that defenses can be tested before a real attack occurs. Other AI-based defenses help engineers further harden their systems against ransomware, data theft, and other attacks.
Here, Una-May O’Reilly, Principal Investigator at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and head of the Anyscale Learning For All (ALFA) team, explains how artificial adversarial intelligence can protect us from cyber threats.
Q: What role does adversarial intelligence play for cyber attackers and what role can it play for cyber defenders?
A: Cyber attackers span a range of capabilities. At the lowest level are so-called script kiddies: threat actors who spray known exploits and malware in hopes of finding networks and devices that don’t practice good cyber hygiene. In the middle are cyber mercenaries, better resourced and more organized, who prey on companies with ransomware and extortion. And, worst of all, there are state-sponsored groups capable of launching “advanced persistent threats” (APTs), which are the most difficult to detect.
Consider the specialized, malicious intelligence these attackers mobilize: adversarial intelligence. Attackers build highly technical tools that let them break into systems; they choose the right tool for each target; and their attacks unfold in multiple steps. At each step they learn something, integrate it into their situational awareness, and then decide what to do next. Sophisticated APTs can strategically choose their targets and devise slow, stealthy plans whose execution is subtle enough to evade our defenses. They may even plant false evidence that points to another hacker.
My research goal is to replicate this particular type of offensive or adversarial intelligence – the intelligence that human threat actors rely on. I use AI and machine learning to design cyber agents that model the adversarial behavior of human attackers, as well as the learning and adaptation processes that characterize the cyber arms race.
It is also important to keep in mind that cyber defense is very complex, and it keeps growing in complexity to match expanding attack capabilities. Defense systems involve designing detectors, processing system logs, triggering appropriate alerts, and triaging those alerts into incident-response workflows. Constant vigilance is required to protect a very large, hard-to-track, and highly dynamic attack surface. On the other side of the attacker-defender conflict, my team and I are also inventing AI to help on these various defense fronts.
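To make that pipeline concrete, here is a minimal Python sketch of the log-to-alert step described above. Everything in it, the event fields, the thresholds, and the alert format, is a hypothetical illustration rather than anything drawn from CSAIL’s actual systems.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical log event; the fields and values are illustrative only.
@dataclass
class LogEvent:
    host: str
    action: str    # e.g., "login_failed", "config_change"
    severity: int  # 0 (informational) through 10 (critical)

FAILED_LOGIN_THRESHOLD = 5  # an assumed tuning knob, not a real standard

def triage(events: list[LogEvent]) -> list[str]:
    """Scan a batch of log events and emit alerts for incident response."""
    alerts = []
    # Detector 1: repeated failed logins on one host suggest brute forcing.
    failures = Counter(e.host for e in events if e.action == "login_failed")
    for host, count in failures.items():
        if count >= FAILED_LOGIN_THRESHOLD:
            alerts.append(f"ALERT: suspected brute force on {host} ({count} failures)")
    # Detector 2: any single high-severity event is escalated immediately.
    for e in events:
        if e.severity >= 8:
            alerts.append(f"ALERT: high-severity {e.action} on {e.host}")
    return alerts

if __name__ == "__main__":
    batch = [LogEvent("db01", "login_failed", 3) for _ in range(6)]
    batch.append(LogEvent("web02", "config_change", 9))
    for alert in triage(batch):
        print(alert)
```

In a real deployment, alerts like these would flow into an incident-response system where analysts, or further automation, decide what to investigate.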
Another thing that stands out about adversarial intelligence is that Tom and Jerry learn by competing with each other. Their skills sharpen and they enter an arms race: as one gets better, the other must get better to survive. This tit-for-tat improvement goes on and on! We are working to recreate a cyber version of this arms race.
Q: What are some examples of how artificial intelligence has helped keep us safer in our daily lives? How can we use adversarial intelligence agents to get ahead of threat actors?
A: Machine learning is used in many ways to support cybersecurity. There are detectors that filter threats, tuned, for example, to anomalous behavior or to recognizable families of malware. There are AI-powered classification systems, and there are AI-powered anti-spam tools for mobile phones.
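To give a flavor of how an anomaly-tuned detector might work, here is a minimal sketch using scikit-learn’s IsolationForest; the connection features and traffic statistics are invented purely for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-connection features: [bytes_sent, bytes_received, duration_sec].
# In a real system these would come from parsed network flow logs.
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[500, 2000, 3], scale=[100, 400, 1], size=(1000, 3))

# Fit on traffic assumed to be mostly benign; contamination is a tuning assumption.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A suspicious connection: huge outbound transfer over a long duration
# (the kind of pattern that might indicate data exfiltration).
suspect = np.array([[50_000, 200, 600]])
print(detector.predict(suspect))  # -1 means flagged as anomalous, 1 means normal
```

The design intuition is that the forest learns the shape of mostly benign traffic, so connections that are easy to isolate from that mass, like a huge outbound transfer, come back flagged.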
My team designs AI-powered cyber attackers that can do what human threat actors do. We invented AI that gives our cyber agents specialized computer skills and programming knowledge, allowing them to absorb all kinds of cyber knowledge, plan attack steps, and make informed decisions within a campaign.
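One way to picture such an agent is as an observe, update, decide loop over a simulated environment. The sketch below is a toy skeleton of that loop under my own assumptions; it is not the ALFA team’s actual agent, and the action names and policy are hypothetical.

```python
import random

# Hypothetical attack actions an agent might choose among in a simulation.
ACTIONS = ["scan_network", "exploit_service", "escalate_privileges", "exfiltrate"]

class AttackerAgent:
    """Toy agent: observes, updates situational awareness, picks the next step."""

    def __init__(self):
        self.knowledge = {}  # situational awareness accumulated across steps

    def observe(self, environment):
        # e.g., which hosts responded, which services are exposed
        return environment.get("visible_hosts", [])

    def decide(self):
        # Placeholder policy: scan until hosts are known, then go deeper.
        if not self.knowledge.get("hosts"):
            return "scan_network"
        return random.choice(ACTIONS[1:])

    def step(self, environment):
        hosts = self.observe(environment)
        self.knowledge["hosts"] = hosts  # integrate what was just learned
        action = self.decide()
        print(f"knows {len(hosts)} hosts -> next action: {action}")
        return action

agent = AttackerAgent()
agent.step({"visible_hosts": []})            # nothing known yet: scan first
agent.step({"visible_hosts": ["10.0.0.5"]})  # a host discovered: move deeper
```

A real agent would replace the placeholder policy with learned decision-making, but the loop structure, observe, integrate, decide, is the point.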
Adversarial intelligence agents (such as our AI cyber attacker) can serve as sparring partners when testing your cyber defenses. Testing a network’s robustness against attack takes a lot of effort, and AI can help with that. Furthermore, adding machine learning to both the agents and the defenses creates an arms race that can be used to test, analyze, and anticipate the countermeasures a defense might deploy.
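That arms race can be framed as competitive co-evolution, with attacker and defender populations repeatedly adapting against each other. The following sketch is a deliberately simplified illustration of the dynamic; the skill scores, contest rule, and mutation scheme are invented for the example.

```python
import random

random.seed(1)

def attack_succeeds(attack_skill: float, defense_skill: float) -> bool:
    """Toy contest: an attack wins when its skill exceeds the defense's."""
    return attack_skill > defense_skill

def evolve(population: list[float], won: list[bool]) -> list[float]:
    """Keep the winners and mutate them slightly to form the next generation."""
    winners = [p for p, w in zip(population, won) if w] or population
    return [random.choice(winners) + random.gauss(0, 0.1) for _ in population]

attackers = [random.random() for _ in range(20)]
defenders = [random.random() for _ in range(20)]

for gen in range(10):
    # Each attacker spars with a random defender, and vice versa.
    attacker_wins = [attack_succeeds(a, random.choice(defenders)) for a in attackers]
    defender_wins = [not attack_succeeds(random.choice(attackers), d) for d in defenders]
    attackers = evolve(attackers, attacker_wins)
    defenders = evolve(defenders, defender_wins)
    print(f"gen {gen}: mean attack skill {sum(attackers)/20:.2f}, "
          f"mean defense skill {sum(defenders)/20:.2f}")
```

Run it and both mean skills climb generation after generation, each side improving because the other did, which is exactly the tit-for-tat dynamic described earlier.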
Q: What new risks are these agents adapting to, and how are they adapting?
A: There seems to be no end to new software releases and new system configurations. With each release come vulnerabilities that attackers can target. Some are instances of documented code weaknesses; others are entirely new.
New configurations introduce new vulnerabilities and new attack vectors. When we were dealing with denial-of-service attacks, we didn’t envision ransomware. Now we’re contending with ransomware alongside cyber espionage and IP theft. All of our critical infrastructure is being targeted: our communications networks, our financial systems, our health care, our cities, our energy, our water.
The good news is that a great deal of effort is going into protecting critical infrastructure. We need to translate that effort into AI-based products and services that automate some of it, and, of course, continue designing ever-smarter adversarial agents that keep us vigilant and put the protection of our cyber assets into practice.