
White hat hacker AI bots prepare for DARPA's DEF CON cyber brawl

Software must automatically find bugs in rival code. Are you not entertained?

The research wing of the US military has picked the seven teams who will compete to build machine-learning software that can find and patch bugs automatically to fend off hackers.

The DARPA Cyber Grand Challenge will be held at the DEF CON hacking conference next month. The agency has put up $2m in prize money in the unlikely event of a team building a system that can not only find flaws but write its own patches and deploy them without crashing.

The competition was inspired by DARPA's 2004 Grand Challenge to build a self-driving car. While that competition was initially a failure – with no car lasting more than eight miles before crashing out – the research inspired Google and others to build automated vehicles that have since clocked up millions of miles of travel.

Now DARPA wants to do the same for computer security. We're told software flaws go undetected in the wild for an average of 312 days; the agency has invested $55m in the Cyber Grand Challenge to build a system that can sniff out and fix programming errors automatically in seconds.

Mike Walker, the DARPA program manager organizing this year's contest, said the bar has been set deliberately high, and the agency isn't expecting any team to produce a perfect system that can find and fix all flaws this year.

Early trials have been promising, however. In qualifying heats last year, the competing systems examined 131 pieces of software to hunt down 590 flaws that DARPA knew were lurking in the code. No team came close to finding and fixing them all, but by combining the best results from each team, the test code was 100 per cent patched by the end of the competition.

For the final, the selected seven teams have each been given a DARPA-constructed high-performance computer powered by about a thousand Intel Xeon processor cores and 16TB of RAM. They have to program their machine with what DARPA calls a "cyber reasoning system" that will compete without human intervention to find and address exploitable flaws hidden in DARPA-supplied code.

"Is it possible the systems will fail at the start line? Every Grand Challenge we're had indicates that the answer is yes," Walker said. "Autonomy is incredibly hard and autonomy for the first time is breathtakingly hard. But it's not a viable proof of autonomy if we don't cut the cord for the final."

The cyber reasoning systems will also be networked so they can examine their competitors' software for flaws and get extra points if they can automatically generate proof-of-concept exploits for bugs found in their opponents.

The contest will be held over ten hours beginning at 5pm on August 4 in the Paris hotel ballroom in Las Vegas. At the end of the competition, the first-place team will win $2m, with $1m and $750,000 awards for second and third place.

"What I'm going to be interested in is not the result, but the first five minutes," Walker said. "For people who've played Capture the Flag, like myself, the first five minutes is finger-stretching and coffee time, but the machine could potentially get something done."

Once the competition is over, all the teams' code – and DARPA's test code – will be put online in perpetuity under an open-source license. Walker said DARPA is encouraging hackers to put the source to their own uses. ®
