For fighting cybercrime and boosting internet security, UCSD’s Stefan Savage wins a MacArthur award – Los Angeles Times
Stefan Savage and his students have hacked into cars and disabled brakes, used telescopes to make illicit copies of keys from 200 feet away, and joined criminal groups selling counterfeit drugs over the internet.
Lucky for you, this professor of computer science and engineering at UC San Diego is on your side.
Savage, 48, works on a wide range of projects designed to protect computer systems from attackers, whether it’s a crook trying to steal credit card information off a laptop or a foreign country gathering intelligence by hacking into a database maintained by Yahoo or Anthem Blue Cross.
What sets him apart from others who face off against cybercriminals is the holistic approach he uses to keep our inboxes free from spam and our private information from being stolen. That’s why he just won a five-year, $625,000 “genius” grant from the MacArthur Foundation.
“Instead of just saying those are emails to block, or attacks to defend against, we spend a lot of time looking at a problem from the attacker’s standpoint,” he said.
That includes asking questions such as: How is an adversary making money? What does their supply chain look like? What can be done to make an economically motivated attack unprofitable?
“If you don’t actually understand the back end of the criminal process, then you don’t really know if whatever intervention you are using is actually the most cost-effective place to get in there and do something,” he said.
The MacArthur Foundation praised Savage for his “deep insights into internet security” and his “commitment to tackling problems of immediate, real-world importance.”
Savage spoke with The Times about how he became interested in cybersecurity, computer threats most of us haven’t thought about and why an internet security guy works in a university instead of a Silicon Valley start-up.
It was really a series of accidents. In the late 1990s I was working on network measurement projects, and we started getting some weird anomalous data.
Eventually we realized that someone was attacking someone else and then lying about what’s called the source address in the packet — that’s like the return address on an envelope. Instead of identifying themselves, they were putting a random number in there. Since we had a lot of addresses, some of those numbers were ours.
All of a sudden, we could see who they were attacking and for how long and how many different people were being attacked. I just found that fascinating.
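(The spoofing Savage describes is easy to see in the packet format itself. The following is a minimal, illustrative Python sketch — not from the interview — that builds a raw IPv4 header with stdlib tools, using reserved documentation addresses; it shows that the source address is just a field the sender fills in, which the protocol never verifies.)

```python
import socket
import struct

def build_ipv4_header(src_ip, dst_ip, payload_len=0):
    """Build a minimal 20-byte IPv4 header. The source address is
    just bytes the sender writes in -- nothing in the protocol
    checks it, which is what makes spoofing possible."""
    version_ihl = (4 << 4) | 5        # IPv4, 5 x 32-bit header words
    total_length = 20 + payload_len
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_length,
        0, 0,                          # identification, flags/fragment
        64, socket.IPPROTO_UDP,        # TTL, protocol
        0,                             # checksum (left zero here)
        socket.inet_aton(src_ip),      # the attacker-chosen "return address"
        socket.inet_aton(dst_ip),
    )

# The receiver parsing this packet sees only the claimed source:
hdr = build_ipv4_header("198.51.100.7", "203.0.113.9")
claimed_src = socket.inet_ntoa(hdr[12:16])
print(claimed_src)  # → 198.51.100.7
```

When an attacker picks those source bytes at random, the replies scatter to uninvolved networks — which is exactly the "backscatter" Savage's group observed arriving at its own unused addresses.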
I tend to follow an approach driven by serendipity. It has to do with whatever project emerges that seems like it has the most opportunity to make a difference, combined with some reason why our group is uniquely suited to do it. That could be because we have data other people don’t have, or that we are prepared to do it quickly enough.
A lot of it is also driven by students. When I started as a professor, I had this mental model that you go and tell your students what to do. Maybe that works for some people. I know that it does not work for me.
We add computing to things because it lets us do them more efficiently, or more safely, or with greater accuracy. And then we’ll add networking so we can get remote management. And this is all great.
The combination, though, creates the systemic risk that if someone sends a bad command, what happens?
Well, modern cars are basically distributed computers that happen to have wheels on them. Increasingly, planes are the same way. So are boats. Stoplights are that way. The management of the power system is the same way, and the management of the pumps that drive our water system is the same.
All these systems work better and more efficiently now than they did before, but they also have a lot of fragility against someone who educates themselves about the details of how they work and is interested in disrupting them.
It’s true that academics are not well-represented on the front lines of computer security. But I like being in academia in part because of the range of things that we work on. And for the parts of our work that affect public policy, it is much easier to do that from an academic platform than as an engineer somewhere.
The thing I’m most excited about now is what we are calling evidence-based security.
Right now, the vast majority of security decisions that are made by companies and individuals are ad hoc — like, stick your finger in the air and make a guess.
There is a bunch of received wisdom and a lot of what is called “best practices literature” that will help you defend yourself in a lawsuit, but it’s not based on any well-founded empirical measurement.
We have become interested in trying to bring to the security world the same kind of approach that the healthcare industry uses. Let’s look at outcomes and treatments. Let’s measure this stuff.
The hope is that we get to a point where we can talk about risk and make decisions that are based on data and not on people’s gut feeling based on experience. I’m pretty excited about that.
This interview has been edited for length and clarity.