🎂 John Scott-Railton & Joe Sullivan Fireside Chat
Presented by: John Scott-Railton, Joe Sullivan
Originally aired on September 27, 2021 @ 4:00 PM - 4:30 PM EDT
2021 marks Cloudflare’s 11th birthday, and each day this week we will announce new products and host fascinating discussions with guests including product experts, customers, and industry peers.
In this Cloudflare TV segment, we will have a fireside chat between Joe Sullivan (SVP, Chief Security Officer of Cloudflare) and John Scott-Railton (Senior Researcher at The Citizen Lab).
Find all of our Birthday Week announcements and CFTV segments at the Birthday Week hub
Transcript (Beta)
All right, we are on air, live on Cloudflare TV. Hi, everyone. My name is Joe Sullivan.
I'm the chief security officer here at Cloudflare. I have a guest here with me today.
You want to introduce yourself to the audience? Hi, folks. My name is John Scott-Railton.
I'm a senior researcher at the University of Toronto Citizen Lab.
Thank you for joining me today. As we've discussed, it's Cloudflare's birthday week.
It's a week where we like to, as a company, try and give back a little bit.
And also talk to other people and share experiences with people who are, I think, having a positive impact on the Internet and technology in general.
And so I'm really excited to have this conversation with you. We've gotten to know each other a little bit over the years through our work.
And I think, you know, for me, as someone who works in the world of security and technology and Internet security in particular, one of the best things about working in this space is the people you get to meet because we all have a shared mission, right?
And we all want the Internet to be better and safer. And the people we get to meet and work with are the people who are dedicated to doing that.
And I put you on the shortlist of people that I'm most proud to have gotten to know a little bit and excited to share the work.
So you currently are employed at Citizen Lab.
Will you tell our audience a little bit about Citizen Lab?
So Citizen Lab kind of addresses this interesting problem space, which is sophisticated nation state attackers are targeting governments.
They're targeting industry and they're targeting civil society.
But civil society is the odd group out because although they face pressure from the same really sophisticated threat actors, they can't pay for security.
And often their level of security is really low.
The logical result of this is an epidemic of breaches and harm done to journalists and reporters, truth tellers and democratic processes around the world.
It's part of the resurgence of authoritarianism.
And so what we do at the Citizen Lab, and the work that I help direct, is we try to track and understand what those threats look like, and then use whatever tools are available to us, like publications, collaboration with the platforms to get patches out, and naming and shaming, to try to right the balance a little bit between these very deserving but very vulnerable groups and everybody else.
Now, Citizen Lab, is it sort of a nonprofit?
It's associated with the university. How does that work?
So Citizen Lab is a research laboratory at a university, which means that we sit under the university for all things like ethics approval.
We have offices in the university; I'm actually looking at them in my Zoom background here.
But at the same time, we have a degree of autonomy.
So a lot of the work that we do is funded by major philanthropy, not through the university.
That's actually pretty common for university research groups.
One of the things that I think I'm most proud of about the lab is that we have had a policy for a long time, and this is something that comes from our director, of not accepting direct operational funding from governments or corporations.
This means that we can honestly say that we have a degree of independence, especially when it comes to working with victims of hacking, other kinds of digital surveillance, and harassment, who may be justifiably paranoid and suspicious about everybody's motives.
Nice.
How did you come to find your place at Citizen Lab? So I think one of the things that I've discovered is that cybersecurity as a discipline is still in some ways young enough that everyone, for the most part, has their own winding, interesting story.
And it's a fascinating thing often to ask people what their journey was.
Mine is that I was doing a Ph.D. in something unrelated. I was tracking the impact of rapid changes in climate on political violence in West Africa.
And to do this, I was flying kites with robotic stabilized cameras to map flooding and to see how people in villages in West Africa modified their environments to respond to the flooding.
And what I was trying to do was understand how it could be that people working together, on their own, to respond to a climate disaster were in some cases making the situation worse.
Then the Arab Spring happened and I had lived in Egypt for a while and knew people who were in Cairo.
And when the Egyptian government shut down the Internet, I wondered whether there's something I could do and ultimately built a big collaborative project getting information out of Egypt during the Internet shutdown in the middle of the Arab Spring.
So we're talking about 2011.
The project worked to get information out. We then sort of franchised the project and did it again in Libya.
And it was during that work that I began to see something that made me really uncomfortable and caught my attention. Part of the project was using satellite phones and other kinds of sideways techniques to get around nation-scale Internet blackouts.
But I began to notice that some of the people who were agreeing to talk to me and get information out, often anonymously, were telling me that weird things were happening on their computers.
And it started dawning on me that maybe the Libyan government was hacking these opposition groups.
And that led to a journey that involved me ultimately finding Citizen Lab as researchers who helped me understand what I was looking at.
And I just found this problem set so interesting of people holding dictators accountable, using technology to try to change the world and then getting hacked through it so compelling that I changed my career and I've been working on this ever since.
That's awesome. I remember working on security issues during the Arab Spring as well.
And I was the CSO at Facebook back then, and we had to deal with our own challenges of nation states in the region attempting to modify our services and capture logins and things like that.
But the other thing I remember about the Arab Spring was there was an optimism about the use of technology that has kind of waned.
Yeah. But there was this excitement that technology could be a means for good, helping give voices to people.
And so was that part of what attracted you to it?
Or was it more the "oh, shoot, the governments are actually using technology to get even more aggressive" side of it?
Well, I had a personal experience that I feel encapsulated those two realities, because my collaborators and I had used Twitter and sat phones and this whole mixture of old-school tech and social media to get information out.
We had restored a voice for a lot of Egyptians who were prevented from telling the world what was going on.
This felt heady and really optimistic.
But at the same time, I was already feeling the flip side of this, which is although technology reduced a historic asymmetry between people and their governments in the ability to get information out, it didn't get rid of the other historic asymmetries in power and risk between people and governments.
And what's happened over the last decade is that governments have not only tried to right the balance, but in some cases, they've really pushed to turn the technologies that we all use into tools that allow them to exercise power over us.
I think people who saw the Arab Spring from different perspectives remember the evolution of the narrative.
There were already skeptics at the time.
Evgeny Morozov is one of them, who was saying, look, this tech isn't going to fix everything.
But I think for a lot of us, it actually took experiencing both the promise and the threat to see the flaw in that thinking.
And I think it has led a lot of people, including myself, to think, OK, look, technology is great, but it is rarely of itself a solution to problems.
And at the end of the day, everything's political.
And if we don't think about the meaning of the technology that we're using and about the interplay of interests and power when we're doing things online, we're burying our heads in the sand.
And ultimately, we're going to get exploited. The things that we're trying to do are going to get eroded in ways that we can predict.
Now, one of the things that's impressed me about Citizen Lab is how you get involved, not just in the public policy debate and narrative and discussion, a little bit like what we're talking about now, about what should be allowed.
But it seems like you really roll up your sleeves and dig into the technology.
And I'm so curious: as an organization, what do you think are the priorities of Citizen Lab, and where does this kind of technical research fit into it?
So the lab is kind of an interesting animal because our director is a political scientist, and on staff, we have people who are political scientists, computer science PhDs and everything in between.
And what I think that does is it means that when we look at problems that have a technological component, we don't kind of stop our analysis with the indicators of compromise or the attribution.
That's often the jumping-off point for the second half of what we do, which is trying to understand the realities of why this technology is being used in certain ways and how people are getting harmed by it.
At the end of the day, for me, the most rewarding part about the work that we do is the work with victims.
Most of our work would not happen if people didn't share with us the fact that they were targeted or that they had suspicions.
And so a lot of our work is built off the bravery of people like journalists and others who will come to us and say, look, something weird is going on with my phone.
And it happened more or less at the same time that I was photographing the super yachts that, you know, the prime minister's buddies were hanging out on in the Mediterranean.
Case in point: that's a case we actually published a couple of weeks ago, with a Hungarian journalist who was being targeted with Pegasus while documenting the prime minister's buddies and their yachts in the Mediterranean.
So what's interesting about our work is that, while there's this huge technological component and all this context, a huge part of what we do is actually direct engagement with brave people.
And it's really trying to take their agency and expand it, to take their bravery and their cause and help move them forward.
Right. Yeah, we should dig in a little bit on this.
I guess with this project in particular, a lot of the world started to pay attention to it when Amnesty did the disclosure of the really long list.
But for the benefit of people tuning in who don't know much about Pegasus, could you kind of give a high level explanation and then we can dig in?
So Pegasus is the name for a piece of spyware, or some would say a service, provided by a company called NSO Group.
And it's basically kind of like a tool in a box: hacking in a box.
It allows its government clients to remotely, and often without even a click, infect phones and turn them into spies in the pockets of the people who get targeted.
The technology is not new in the sense that a number of states have had this ability for a while to turn a smartphone into a surveillance device.
What differentiates Pegasus is that the company that makes it, NSO Group, has aggressively sought investment and growth, and as a result has sold their technology to a huge number of countries around the world.
And predictably, many of those countries wind up doing bad things with it:
targeting journalists, targeting human rights defenders, targeting human rights organizations, political opponents.
The president of Panama was monitoring his own mistress. It's states abusing surveillance.
And so when we talk about Pegasus, we're talking about that problem set.
You mentioned the Pegasus Project. For years, researchers, including myself and some of my colleagues, notably Bill Marczak, an absolutely brilliant guy at the Citizen Lab, have been tracking Pegasus.
At the same time, Amnesty International and their Amnesty Tech branch have been tracking Pegasus.
But the things that we would find would really be hard-won ones.
So we might find like 30 cases in a given country.
It'd be a big deal. But then this year, something really interesting happened.
Amnesty, working as a partner with a news group called Forbidden Stories, which coordinated a coalition of a lot of different major media organizations, published just a raft of stories highlighting the global scale of Pegasus hacking and targeting.
And that was really a wake up call to a lot of people because the targets were not just, you know, sort of human rights defenders or journalists.
They were potentially the French prime minister and a bunch of members of the French cabinet, and a number of other heads of state, officials, celebrities, and so on.
And that, I think, helped make the threat visible. This kind of software is deliberately designed to fly below the radar, and the reporting really highlighted not only how problematic and spooky this stuff is, but also how widely it gets used and abused.
Yeah, so I mean, if NSO Group went away, this problem wouldn't go away, would it?
No, NSO Group is in so many ways just the most visible current version of a much larger ecosystem of companies that sell mercenary spyware and other offensive capabilities to states.
We know a lot about Pegasus and NSO Group partly for path-dependent reasons:
they're what both we and Amnesty have been looking at for a long time, and also because they have a big market.
But ultimately, behind any player like NSO Group are just a host of other companies in a lot of different countries that are all contributing to that offensive ecosystem.
Part of the problem is that because that ecosystem is really under-regulated and under-scrutinized right now, a lot of harm is getting done from the proliferation of these capabilities, which are largely unchecked.
What do you think we should be doing to deal with this offensive spyware market?
It's a really tricky problem. I look at it like a stool with three legs: civil society, government, private sector.
For years, civil society, and by this I mean, you know, human rights defenders, journalists, and so on,
and also research groups, have been raising the alarm about this.
In recent years, interestingly enough, it became clear that big tech was pretty pissed off with NSO.
In 2019, WhatsApp and Facebook sued NSO Group.
And then last year, a whole bunch of other big companies, Microsoft and Google, signed on to that lawsuit as amici, making it clear that the tech sector has had enough.
At the same time, they're also clearly leaning in, trying to detect the spyware and patch the exploits that it uses.
The third leg, though, and that is government action, has been conspicuously absent.
So it's a pretty tippy stool right now.
And I think where we need to go is for governments to recognize that, while having some offensive marketplace may be useful to their interests,
we're getting to a place where that marketplace is causing so much harm, and so much potential blowback even to them, that it needs to get regulated and the harm needs to get dialed down.
Otherwise, we're going to get to a really troubling situation.
Why do you think governments are so silent on this topic?
The analogy that I like to think about is the arms market. During the Cold War, both of the great powers, Russia and the US, to my understanding, kind of benefited from the existence of arms traffickers because there were lots of proxy conflicts in the world.
And it was a way to get weapons into those proxy conflicts that you and your side were potentially supporting.
After the Cold War, a couple of things happened, including a massive influx of weapons from former Soviet states, which made a lot of the conflicts around the world more bloody.
A lot of that was mediated by arms traffickers.
And states really realized, like, we have to do something about arms trafficking.
It's causing harm everywhere and making conflicts a lot more bloody.
I think we need to get to the same place with spyware.
And for that to happen, officials need to fear it. They need to feel that they and their political interests and their party and their prime ministers and presidents are just as likely to get targeted with this stuff as the groups they'd prefer to look away from, like civil society.
I think governments also have to recognize that big tech is crying out for assistance and help with this problem.
Having finally recognized that tech and civil society alone can't solve it.
Right. It does seem like the mainstream media is starting to cover this issue more.
You know, there was the zero click Apple vulnerability, I guess, that was associated with this in the last couple of weeks that got a lot of attention.
Citizen Lab played a big part in that one as well, right? That's right. So that zero-click was a particularly feisty one.
Both we and Amnesty International had some awareness that there was something going on with iMessage.
But then a couple of weeks ago, my colleague Bill was going back through an old backup of a device that we had looked at months before and found some really interesting GIF files, which turned out to be the holiest of holies: a zero-click iMessage zero-day that NSO had been using to drop its implant onto people's phones.
So we worked with Apple, and within a week Apple had patches out for iOS, macOS, and even watchOS.
That was a pretty quick turnaround. Obviously, this is a very severe, exploited-in-the-wild vulnerability.
But what's interesting about this case is that you can burn a really expensive, fancy exploit that's being used by bad actors in the wild, and they may just pivot and start using the next exploit that they have up the chain, one they've been holding in reserve for just such an eventuality.
And it really shows us that while we can all be concerned, like, oh, man, got to patch your device, there's really a problem here:
patching doesn't necessarily solve the problem of the threat actor,
if the threat actor is well resourced enough to have other exploits in waiting.
Right. And is it just a reality that it's going to be a never ending stream of exploits?
I think it is. And this gets to a really important problem set, which is there's an offensive industry, and there are people who would prefer to keep their heads in the sand while finding interesting exploits and selling them.
That marketplace is absolutely fueling harm.
And I think that we need to get to a place as cybersecurity professionals and people in this industry of really having our norms and values catch up with the harm that we know that industry and that marketplace is causing.
We've got a long way to go because there's a lot of glorification of a lot of that stuff.
And I think we have to get to a place where we recognize the role that it is playing in causing a lot of harm.
Yeah, it's fascinating because most of us who work in the world of security, we're not just in it for the neat technical things that we can discover through breaking or that we can invent through building.
We're in it because of the hope that we have a positive impact on people. And it seems like the people who are in this particular sub-industry must be looking the other way.
They've got to be looking in a certain direction. I was just recently on a panel with Etienne Maynier of Amnesty International, formerly of Citizen Lab.
And he was making an interesting point about Hacking Team, which is sort of, in some ways, a predecessor, part of the old DNA of this industry. Hacking Team was an Italian company that sold hacking capabilities, which, sure enough, was exposed targeting activists.
They also got massively breached.
And one of the interesting things that came out of the breach, as Etienne pointed out, is you could see that the management was spinning a nice story for the technical staff.
Like, listen, everything's OK.
You know, you're saving lives and so on. Meanwhile, the management knew what was really going on, right?
There was a different reality there.
And the motive was financial. I'd like to believe that something similar is probably true with NSO and companies like it.
There's a line that people are being fed like, look, you're preventing serious crime and stopping terror.
But the reality is this kind of work causes the proliferation of espionage tools.
And a lot of states are going to use those for really bad things that are going to make us net less secure.
And I think we need to get to a place where not only do we stop glorifying that kind of offensive work, but we also are honest with ourselves about what role it plays in geopolitics.
And the answer is, in so many cases, it is fueling authoritarian regimes who would love to be technologically empowered, not only to hold their populations in fear and to pry into their personal lives, but to jump across their borders and target people in other countries who have said things that are critical of them.
Right. This is a topic I could enjoy talking with you about all day long, but we don't have all day.
We only have 30 minutes for this conversation.
So I want to jump over to a different topic that I've been thinking about a little bit.
I saw your name in the news a little bit after January 6th.
January 6th is a date that we talk about here in the United States right now, because that was the day that an organized group stormed our Capitol Building in Washington, D.C.
And in reaction to that, the technology community in particular seemed to be fascinated with a crowdsourced approach to identification of who was there that day and trying to hold them accountable.
How did you get pulled into that situation? So I'd been using some of the techniques that we use at the lab, things like infrastructure analysis, to try to do some attribution around the targeting of potentially susceptible people with Stop the Steal messages.
So in 2020, I was looking at some of these weird front groups that were pushing some of Trump's messaging, and especially groups that seemed to be encouraging violence.
And I'd been tracking these groups, looking at domain registrations and other kinds of fun stuff, as they moved towards January 6th.
And I was getting really alarmed that something was going to happen on January 6th.
I had no idea what. And so that day I was watching with concern, but then experienced what I think was the same gut punch that a lot of us felt when we saw images of people on the floor and in the observation gallery of the Senate holding zip ties, masked people wearing paramilitary-style clothing.
And that really caught my attention. My instinct, and this is often the case for me when there's something that really bothers me, is that I need some agency, the feeling that there's something I can do.
And so I started using OSINT techniques to try to figure out who that person was, the guy holding zip ties, the first zip-tie guy, using the absolute torrent of imagery that had come out of that day.
And I decided consciously that I was going to use Twitter to show my process in the hopes that others would join me.
In the back of my mind were two fears.
One, there might have been some kind of an organized conspiracy to potentially kidnap people and cause harm to our elected representatives.
I am an American, by the way. Secondarily, I felt, well, you know, we have to make sure that there's some public accountability for these people, because who knows what's going to happen between now and the election.
And we have to do everything we can to scare these people away from ever doing this again.
After January 6th, a whole massive ecosystem, I like to call it an ecosystem of accountability, grew up to try to identify a lot of the people who had gone there, especially the masked people, the ones who were hard to figure out.
And it was remarkable to watch, because it was first probably tens of thousands and then thousands of people getting together and pitching in with what knowledge they had to try to make these identifications. And very quickly they started getting really careful about not identifying people publicly, developing a set of methods and processes, and forming groups so that they weren't exposing all the work they were doing publicly, which was an absolutely fascinating thing to watch.
And I think it has partly resulted in making anybody who thinks, oh, maybe I should go, you know, visit violence on a state capitol,
think twice, because there's now the possibility that a group of amateur sleuths, and maybe some not so amateur, are going to do their best to figure out who they are and make sure that they get held accountable.
Right. Were you worried that people would be overzealous in their approach?
Absolutely. And one of the things that I had in my mind and I think everybody else did was, man, we all remember what Reddit did with the Boston bomber.
And so one of the things I tried to do is what a friend of mine, a guy named Aric Toler at Bellingcat, I think says best.
He said, look, anytime there's a big event like this, there is going to be a crowdsourced effort to try to figure out what went on.
The important thing to do is to model careful, ethical behavior and the right norms to try to guide that energy towards something productive.
Otherwise, there's always the possibility that it can certainly go south.
Right. What were some of the ways that you tried to do that in this situation?
So one of the things that was most interesting to me was the fact that a fair number of the people there were wearing some kind of military paraphernalia.
And so I constructed a sort of volunteer, informal committee of former military folks who recognized insignia and who could basically tell me whether these were wish.com soldiers or the real deal.
And we actually went through hundreds of photos, mostly of men wearing different kinds of body armor and military kit, and tried to triage who might be interesting, who was wearing gear that was actually legit versus who was wearing airsoft stuff.
And this was, you know, pictures of patches and gear, and in some cases trying to identify weapons that people were carrying and other things like that.
One of the other approaches was to try to identify people based on who they were around, and then work those connections back towards social media profiles.
At the end of the day, what was interesting is how many tools are now available for people who want to do that kind of open source digging.
One of the big things that I decided on quite early was that it would be important to move my process, which was ultimately a big volunteer collaboration, towards working with journalists.
So when there was a candidate guess for somebody's identity, the next step was for the journalist to pick up the phone and call that person and see if they would admit to being there or otherwise give themselves away.
This felt like a way to try to make sure that the identification was pretty solid,
while at the same time ensuring that there would be a public conversation about who this person was and what they had done.
Right. It's interesting.
I think one of the other things that came out of this is we all have a heightened awareness
that we can be identified out in the world. Right?
And so if a collective group of people cares enough about a situation, there's going to be an attribution effort.
And that collective group can be volunteers, like in this situation, but it can also be a government organization itself, to go back to the authoritarian conversation from earlier.
Absolutely. Like technology in the Arab Spring. But, you know, you as a researcher, you're quite a bit out there.
And, you know, like you said, you went and engaged on Twitter.
How do you think about that from a personal safety and security standpoint, putting yourself out there?
Just kind of a last question. Yeah, it's a really good question.
I think one thing that I keep in mind is to try not to glorify the methods, but rather to focus on the meaning, because people can use them for bad things.
And I've seen people sort of say, like, oh, well, it's just OSINT. And it's like, well, but to what end?
Right. For what purpose, and by whom? And I think if there's one common thread here, it's that the technology itself needs to be understood in terms of who's using it and what they're using it for.
Same for Pegasus, same for OSINT techniques.
I've certainly been concerned about the possibility of different kinds of digital targeting of me, both for some of the work I did on January 6th, but also for the work that me and my colleagues at Citizen Lab have been doing investigating players like Pegasus.
We actually had some experience.
So both myself and a colleague, Bahr Abdul Razzak at the Citizen Lab, were targeted by operatives from what looks like Black Cube, if you believe Ronan Farrow's book, a group of private spies who seem to have been sent to try to discredit us and figure out some of the secrets behind our work.
And that, I think, gave me a kind of an interesting experience of being in a situation that had some parallels, although with much less peril to myself, to the experience of the people who we work with, the high-risk journalists and activists who feel every day the tickle of surveillance and of state repression.
And I can tell you, ultimately, we ran a sting back against Black Cube.
You can find out more about it if you Google my name, John Scott-Railton, and "bumbling spy"; the story ran in the New York Times.
But it showed me how creepy it is to know that you're under surveillance and also how much you feel like you're potentially contagious.
And that the surveillance targeting you may extend to people you care about and that you may bring it with you when you come to an interaction.
And that really helped highlight for me how part of the tool of government overreach and authoritarian behavior is the creation of fear and paranoia and the ability to just deploy these resources against an individual.
Yeah, it's intense. Well, we've come up against our time, so I just want to say, to wrap up, thank you for joining us and talking through these issues.
Hopefully, this has been enlightening for anyone who's dialed in and who sees this session.
And last, I'll just say thank you for doing what you do.
Like I said at the beginning, I think the work you do is of critical importance.
It sheds light on important topics for all of us to think about, and hopefully the Internet and the world will be better as a result of your work.
So please keep it up.
Thanks, Joe. And thank you all for listening. Thank you for having me.
And I'll just say I think Cloudflare is a really good example of a company that is able to continue to do good and important things towards keeping all of us safer, including with free products and services.
And I just hope that other big companies follow some of your examples.
Thank you.