Cybersecurity today is handled either by humans or by machines, and each approach has its own shortcomings: attacks slip through because they don't match the rules set by human experts, or automated systems mistakenly zero in on non-threats. So why not combine humans and AI to get the best of both worlds?
This is what researchers from the Massachusetts Institute of Technology have accomplished with their new artificial intelligence system, called AI2.
The team from the university's Computer Science and Artificial Intelligence Laboratory (CSAIL) and machine-learning startup PatternEx developed a platform that can detect 85 percent of cyberattacks while reducing the number of false positives by a factor of five.
But how does this new AI get the job done?
Tested on 3.6 billion "log lines," pieces of data generated by millions of users over three months, AI2 combs through the data and flags suspicious activity using unsupervised machine learning. From there, human reviewers check the flagged events for signs of a security breach, a one-two punch that can predict attacks with precision and eliminates the need to chase bogus leads.
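That one-two punch is easiest to see in code. Below is a minimal sketch, not the team's implementation: raw log lines are aggregated into per-user feature vectors, an unsupervised detector scores them, and the most anomalous users are queued for human review. The log format and feature set are invented for illustration, and scikit-learn's IsolationForest is a generic stand-in for AI2's own detectors.

```python
from collections import defaultdict

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical log format "timestamp,user,action,bytes" -- invented for illustration.
LOG_LINES = [
    "2016-04-18T09:01:00,alice,login,512",
    "2016-04-18T09:02:10,alice,download,2048",
    "2016-04-18T09:03:00,bob,login,256",
    "2016-04-18T09:04:00,mallory,login,128",
    "2016-04-18T09:04:01,mallory,download,999999",
    "2016-04-18T09:04:02,mallory,download,999999",
]

def featurize(log_lines):
    """Aggregate raw log lines into one feature vector per user."""
    stats = defaultdict(lambda: [0, 0])  # [event count, total bytes]
    for line in log_lines:
        _, user, _, nbytes = line.split(",")
        stats[user][0] += 1
        stats[user][1] += int(nbytes)
    users = sorted(stats)
    return users, np.array([stats[u] for u in users], dtype=float)

users, X = featurize(LOG_LINES)

# Unsupervised pass: no labels, just "how unusual does this behavior look?"
detector = IsolationForest(random_state=0).fit(X)
scores = -detector.score_samples(X)  # higher = more anomalous

# Queue the most suspicious users for the human analyst to review.
for user, score in sorted(zip(users, scores), key=lambda p: -p[1]):
    print(f"{user}: anomaly score {score:.3f} -> send to analyst")
```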
"You can think about the system as a virtual analyst," explains CSAIL research scientist Kalyan Veeramachaneni, who co-developed the system with PatternEx chief data scientist Ignacio Arnaldo. "[I]t can improve its detection rates significantly and rapidly."
AI2 uses three machine-learning algorithms to detect suspicious events, but like other AI systems it needs human feedback to verify its findings. And that review demands real security expertise, such as the ability to tell a DDoS attack from a legitimate traffic surge.
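The article does not name the three algorithms, so the sketch below stands in with three generic outlier detectors from scikit-learn, rank-normalizing each one's scores so they are comparable and averaging them into a single consensus score, which is the general shape of a multi-detector ensemble like this.

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # mostly benign behavior
X[:5] += 6.0                   # a few injected outliers

# Three independent views of "suspicious" -- illustrative stand-ins,
# not AI2's actual algorithms. Higher score = more anomalous.
scores = [
    -IsolationForest(random_state=0).fit(X).score_samples(X),
    -LocalOutlierFactor().fit(X).negative_outlier_factor_,
    -OneClassSVM(nu=0.05).fit(X).score_samples(X),
]

# Rank-normalize each detector's output, then average into one
# consensus anomaly score per event.
normalized = [rankdata(s) / len(s) for s in scores]
consensus = np.mean(normalized, axis=0)

top = np.argsort(consensus)[::-1][:5]
print("Most suspicious event indices:", top)
```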
But since experts do not have all day to wade through piles of flagged data, the system ranks its findings and presents only the top events for analysts to label; those labels in turn refine its models, a loop the team calls "continuous active learning."
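That loop can be sketched as well. In the toy version below, the system ranks events by anomaly score, asks the analyst to label only the top few, folds those labels into a supervised classifier, and blends the classifier's confidence with the unsupervised score on the next pass. The analyst_labels function stands in for the human expert, and every name and parameter here is hypothetical.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
X[:20] += 4.0                    # planted attacks
truth = np.zeros(1000, dtype=int)
truth[:20] = 1

def analyst_labels(indices):
    """Stand-in for the human expert: returns attack / not-attack labels."""
    return truth[indices]

labeled_idx, labels = [], []
unsup = -IsolationForest(random_state=0).fit(X).score_samples(X)
scores = unsup.copy()

K = 10  # events shown to the analyst per "day"
for day in range(5):
    # 1. Show the analyst only the top-K as-yet-unlabeled events.
    candidates = [i for i in np.argsort(scores)[::-1] if i not in labeled_idx][:K]
    labeled_idx.extend(candidates)
    labels.extend(analyst_labels(np.array(candidates)))

    # 2. Fold the feedback into a supervised model (needs both classes).
    if len(set(labels)) > 1:
        clf = RandomForestClassifier(random_state=0).fit(X[labeled_idx], labels)
        supervised = clf.predict_proba(X)[:, 1]
        # 3. Blend supervised confidence with the unsupervised score.
        scores = 0.5 * unsup / unsup.max() + 0.5 * supervised

    print(f"day {day + 1}: {len(labels)} labels, {sum(labels)} confirmed attacks")
```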
As the AI gets better at identifying actual attacks, an analyst may eventually need to review only 30 to 40 events per day. According to the team, AI2 can scale to billions of log lines per day: the more attacks it detects, the more analyst feedback it receives, and the more accurate its predictions become.
And while the technology shows great promise, it is not meant to replace human analysts, especially as threats keep evolving.
“The attacks are constantly evolving,” says Veeramachaneni in a Wired report. “This system doesn’t get rid of analysts. It just augments them.”
For computer science professor Nitesh Chawla of the University of Notre Dame, the research is a potential "line of defense" against fraud, account takeover, service abuse, and other attacks faced by consumer-oriented systems today.
The findings were presented in a research paper at the IEEE International Conference on Big Data Security held in New York City last week.