
MIT boffins build AI bot that spots '85 per cent' of hacker invasions

So ... it still lets in more than one in ten attacks

By Iain Thomson, 18 Apr 2016

Eggheads at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) claim they have trained a machine-learning system to detect 85 per cent of network attacks.

To reach that level, the software, dubbed AI2 [PDF], parsed billions of lines of log files, looking for behaviors that indicate either a malware infection or a human hacker trying to get into a network. If it spotted any suspicious connections or activity, it alerted a human analyst, who identified whether the software got it right or wrong.

After scanning 3.6 billion log lines over three months of training, the AI2 system was able to hit 85 per cent accuracy in detecting malicious activity, we're told.

"This brings together the strengths of analyst intuition and machine learning," said Nitesh Chawla, the Frank M. Freimann Professor of Computer Science at the University of Notre Dame.

"This research has the potential to become a line of defense against attacks such as fraud, service abuse and account takeover, which are major challenges faced by consumer-facing systems."

"You can think about the system as a virtual analyst," added CSAIL research scientist Kalyan Veeramachaneni.

"It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly. The more attacks the system detects, the more analyst feedback it receives, which, in turn, improves the accuracy of future predictions – that human-machine interaction creates a beautiful, cascading effect."
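The feedback loop Veeramachaneni describes – the model surfaces its most suspicious events, an analyst labels them, and those labels retrain the model – is a form of human-in-the-loop active learning. Here is a minimal toy sketch of that idea, not the actual AI2 system: the event features, the perceptron-style scorer, and names like `triage_loop` and `analyst_label` are our own illustrative assumptions, and a real deployment would use far richer features and models.

```python
# Toy sketch of an AI2-style analyst-feedback loop (hypothetical, not MIT's code).
# Events are dicts of numeric features; analyst_label is an oracle standing in
# for the human reviewer who confirms or rejects each alert.

def score_event(weights, event):
    """Suspicion score: weighted sum of the event's features."""
    return sum(weights.get(f, 0.0) * v for f, v in event.items())

def update_weights(weights, event, label, lr=0.5):
    """Crude perceptron-style update from a single analyst label
    (1 = confirmed attack, 0 = benign)."""
    pred = 1 if score_event(weights, event) > 0 else 0
    if pred != label:
        sign = 1 if label == 1 else -1
        for f, v in event.items():
            weights[f] = weights.get(f, 0.0) + sign * lr * v
    return weights

def triage_loop(events, analyst_label, budget_per_round=3, rounds=5):
    """Each round: score the unlabeled events, show only the top-scoring
    ones to the analyst (their time is the scarce resource), and fold the
    labels back into the model before the next round."""
    weights = {}
    unlabeled = list(events)
    for _ in range(rounds):
        if not unlabeled:
            break
        unlabeled.sort(key=lambda e: score_event(weights, e), reverse=True)
        batch, unlabeled = unlabeled[:budget_per_round], unlabeled[budget_per_round:]
        for event in batch:
            weights = update_weights(weights, event, analyst_label(event))
    return weights
```

The analyst budget (`budget_per_round`) is the key design point: the model's job is not to decide on its own, but to spend scarce human attention on the events most likely to be real intrusions, which is why each round of feedback sharpens the next round's ranking.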

This is the kind of software security companies have spent over a decade trying to get right. So-called heuristic systems are plagued with false alerts and can miss key attacks. MIT's AI2 system is certainly an improvement, but it's not there yet – after all, it misses 15 per cent of attacks, and an attacker only needs one to succeed. And as attack techniques change, the AI's knowledge will go stale unless it is continuously retrained, just as a human analyst has to keep learning.

At this year's RSA security conference, AI security systems were very much in vogue, with plenty of companies touting so-called smart systems that use machine learning for detection. RSA president Amit Yoran warned attendees to remain skeptical, and many in the industry agree. ®

The Register - Independent news and views for the tech community. Part of Situation Publishing