The million-dollar hole in the FBI 'paying CMU to crack Tor' story
Researchers and writers blur lines, cause problems
Analysis It's something every journalist learns: if you hit on an important story, make sure every part of it is accurate. One small error is all that is needed to undermine the entire piece.
Roger Dingledine is not a journalist, but as interim chief executive of the Tor Project, he should have known to be more careful when he wrote in a blog post that the FBI paid Carnegie Mellon University $1m to help identify users of the anonymizing network.
It was a single line, but one that is now being used to put a question mark over the entire story.
We have been told that the payment to CMU was at least $1 million.
The fact that the FBI was using information gleaned from a "university-based research institute" – according to court documents – to identify and prosecute individual users was a significant story worthy of further investigation.
But a financial connection, a quid pro quo, is something else entirely. And that was made plain from the sudden explosion of stories – ours included – focused on the payment.
Suddenly Carnegie Mellon goes from a research institute that may have assisted in taking down some unsavory characters (a drug pusher and a viewer of child abuse images) to one paid to do the federal government's dirty work.
Where did Dingledine get his $1m figure? We've asked him and are waiting to hear back. But a few days ago, he told WiReD it was from "friends in the security community." Which is exactly the sort of vague response that would get a news story spiked.
The figure has been leapt on by Carnegie Mellon and the FBI. "I'm not aware of any payment," the university's press person told WiReD. "I'd like to see the substantiation for their claim." We subsequently followed up with Carnegie Mellon, which told us that it was not commenting on the "accusations."
Likewise, the FBI. A spokesman told Ars Technica that the story was "inaccurate" while not going into any detail about what exactly was inaccurate – the reports, the use of Carnegie Mellon information, the payment, or the exact payment amount.
Again, we followed up. Again, the FBI would not speak on the record. But it did make plain that it is the payment – any payment – that it is questioning.
And here's where things get messy.
Both researchers believed to be at the center of the saga – Alexander Volynkin and Michael McCord – work for Carnegie Mellon University and the "Computer Emergency Response Team" – CERT – which is a division of the university's Software Engineering Institute (SEI).
Volynkin and McCord use the two organizations' names interchangeably. Volynkin describes himself on LinkedIn as a "Research Scientist at CERT" but at conferences emphasizes the university by stating he comes from "Carnegie Mellon University / CERT." McCord puts his title as "Software Vulnerability Analyst at CERT" but, again, when it comes to research papers, the university name features front and center.
This blurring, or combining, of names may seem trivial but it is at the very heart of the problem of the federal government using researchers and their work to carry out criminal investigations.
CERT's SEI parent is an FFRDC – a Federally Funded Research and Development Center – which is a very specific entity funded by the US government to carry out clearly defined long-term research.
There's nothing wrong with this. In fact, US government funding of research has proven to be enormously beneficial over the years, not least in the fact it gave us the internet.
CERT has always been a highly respected group handling computer security. It is synonymous with malware and vulnerability identification and verification.
Carnegie Mellon University, on the other hand, is a globally recognized research institute that, like all universities, prides itself on its independent and autonomous nature.
The reason CERT sits within a separate FFRDC, rather than inside the university proper, is precisely that conflating an independent university with a federally funded research center is dangerous.
What happened with Tor?
Volynkin and McCord discovered a security flaw in the Tor network in the course of their work at CERT. They then used it to carry out research into the network itself.
Over a six-month period they added a group of relays to the anonymizing network which, combined with their knowledge of the security flaw, enabled them to identify specific users through their IP addresses, to track them, and to see specific websites they visited.
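The researchers' actual method was never published, but the general shape of this kind of relay-based confirmation attack can be sketched. The toy simulation below is entirely hypothetical – every name and value is invented, and it models only the concept of correlating observations made by attacker-controlled relays at both ends of a circuit, not the CMU/CERT technique itself:

```python
# Toy illustration of a traffic-confirmation attack on an anonymizing
# network. Hypothetical throughout: an attacker running relays at both
# ends of a circuit matches a tag observed at the entry side (which
# sees the client's IP address) against the same tag observed at the
# far side (which sees the destination), linking user to website.

def correlate(entry_log, exit_log):
    """Link client IPs to destinations by matching circuit tags seen
    at two colluding relays. Unmatched circuits stay anonymous."""
    links = {}
    for tag, client_ip in entry_log.items():
        if tag in exit_log:
            links[client_ip] = exit_log[tag]
    return links

# Hypothetical observations from two attacker-run relays.
entry_log = {"tag-7f3a": "203.0.113.5", "tag-91bc": "198.51.100.9"}
exit_log = {"tag-7f3a": "hidden-service-A", "tag-c0de": "hidden-service-B"}

print(correlate(entry_log, exit_log))
# Only circuits observed at BOTH relays can be deanonymized, which is
# why the attackers needed many relays running for months.
```

The point the sketch makes is that neither relay alone learns anything damaging; it is the correlation across both ends – made possible here by the unnamed protocol flaw – that breaks anonymity.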
The researchers did not inform the Tor Project of the flaw or of their research, however – meaning the organization had no idea who was behind the tracking activity when it shut the relays down in July. It published a blog post going into some detail, and updated its software to close the hole that was being exploited.
The information gleaned from that piece of "research" found its way into the hands of the FBI, which then used it to effect real-world arrests of two people – one in connection with the Silk Road drug-trading marketplace, and the other on suspected child sexual abuse image offenses. We don't know when that happened. Nor do we know whether there were other arrests, or whether more people were investigated by the FBI as a result of the material – but it seems probable.
When the two researchers wrote a paper covering their work – which they billed as "breaking Tor on a budget" – and planned to present it at the annual Black Hat conference last year, it led to a lot of interest. So much interest, in fact, that the talk was pulled – by lawyers from Carnegie Mellon.
"There is nothing that prevents you from using your resources to de-anonymize the network's users instead by exploiting fundamental flaws in Tor design and implementation," read the now-deleted synopses. "And you don't need the NSA budget to do so. Looking for the IP address of a Tor user? Not a problem. Trying to uncover the location of a Hidden Service? Done. We know because we tested it, in the wild."
The synopsis was then quite explicit about the fact they had used a security flaw to identify people. "In this talk, we demonstrate how the distributed nature, combined with newly discovered shortcomings in design and implementation of the Tor network, can be abused to break Tor anonymity ... we will dive into dozens of successful real-world de-anonymization case studies, ranging from attribution of botnet command and control servers, to drug-trading sites, to users of kiddie porn places."
At the time, some speculated that the talk was pulled because it violated federal wiretapping laws. Because the talk never happened, the research never became public.
All of this fits into place: researchers, intrigued at discovering a flaw in an anonymous network, carry out live tests to see if they can track people and discover their real identities. It is the sort of research that makes your name.
Just as understandable is the response of the FBI on discovering, from the details of the talk, that the information could be used to find people breaking the law. Shut the talk down to preserve the evidence for a trial, then compel the researchers to hand over the information so agents can find and arrest suspects. That's good police work.
The dangerous flaw in the tale
But the $1m turns that story on its head. If true, suddenly we have the federal authorities paying a university to carry out investigative work on their behalf.
That is a very much more serious situation but it is one predicated on a single line in a single blog post: "We have been told that the payment to CMU was at least $1 million."
The evidence for this claim is weak at best. It's no wonder the FBI and Carnegie Mellon are unhappy about it, and they have every right to be – far be it from us to run to the defense of a moneybags university and powerful federal agency. The problem is that making an unsubstantiated claim that is much more serious than the original issue is liable to detract from the real issues.
There are still serious questions to be asked – but probably not of the FBI. For one, why did CERT not inform the Tor Project about a critical security flaw in its software? Is it because the Tor network is a high-priority target for law enforcement? Or does CERT itself have a flawed disclosure policy that needs reviewing?
Just how cozy is the relationship between CERT and the US government? We know, thanks to Edward Snowden, that the NSA uses security flaws in software as a way to carry out covert surveillance. But where does it find those flaws? Is CERT feeding the US establishment with at least some of the valuable software holes it uses?
At least one respected security researcher has since pondered aloud whether CERT's credibility is now under question.
Another question: how does Carnegie Mellon separate research carried out under its name from research carried out by associated entities paid for by the US government? And how does it resolve conflicts and crossovers between the two? Who are researchers representing – CERT or Carnegie Mellon?
And finally: what about the other 45 FFRDCs?
Do similar policies – or lack of adequate policies – exist at the Princeton Plasma Physics Laboratory? What about at the Lincoln Laboratory at MIT? Or the Johns Hopkins University Applied Physics Laboratory?
We count eight FFRDCs at universities across the United States. And they come on top of the 15 "university-affiliated research centers" (UARCs) across the country. All are funded by the federal government and closely associated with researchers at independent universities.
There are very good reasons why academic study is kept separate from the often very different goals of the federal government – particularly law enforcement. In this case, the suggestion that the FBI paid a university to acquire specific actionable information appears far-fetched.
But the case of Carnegie Mellon, CERT, and Tor should serve as a warning to everyone that the line is too easily crossed. ®