Hacker Time
Presented by: Evan Johnson
Originally aired on April 3 @ 2:30 PM - 3:00 PM EDT
Join Evan Johnson as he speaks with security professionals about recent security news!
Transcript (Beta)
And we're live. Welcome to Hacker Time, the number one security show anywhere. I'm your host, Evan Johnson from the Cloudflare product security team.
And today we've got an exciting episode.
It's the first one in a little while. I've been in the process of moving and leaving California and relocating to Cloudflare's Austin office.
So I'm here in Texas now, loving it so far. And all settled in and back working and back doing Hacker Time.
So I appreciate you joining me now that I'm back.
And I'm going to go over some security news today and then start a new live programming project. Today that will mostly be some security design, some design of what that software project will look like.
And we'll get into that and start doing that for the next couple episodes.
But the big news this week is the very sad passing of Dan Kaminsky, who was a really large figure in the security industry and community.
He was somebody that I didn't really know that well.
I'd seen him around a lot of events and had talked to him just in passing a couple times.
But he was always a really great figure.
And the one personal story I have of him is he gave a really nice speech at a memorial for another hacker who had passed away.
And I thought it was really well done and really nice of him to do that.
And so that's the one story I have of Dan Kaminsky.
But he was somebody that I could tell was deeply liked in the community and a good person overall.
So really sad news about him. The other news from this week and the last couple of weeks has been the Codecov security incident.
Codecov released a security update on April 15th. They're a code coverage company: they hook into people's continuous integration systems and help you understand your code coverage, have better quality code, and a bunch of other things.
But on April 15th, what they had found out is that somebody had compromised one of their products. They have different integration paths.
So you can add them as a GitHub third-party integration, or you can use this bash uploader program that uploads your coverage reports to their servers.
And what they found was this bash uploader had been compromised.
And that's really serious. So that news came out April 15th. And then one week later, April 22nd, HashiCorp disclosed that they had been impacted by this, with the exposure of their GPG signing key.
Thankfully, no binaries were compromised or anything like that.
And their exposure was just the key, which they rotated, and I thought they handled this very well.
And I thought this was a pretty large and impactful security incident.
I have been surprised not to see more companies affected by this and releasing more information about it.
Because the bash uploader from Codecov seems like it would be a really easy on-ramp for people who are running their own CI, or hacking together their own CI, and using Codecov.
Then again, maybe a lot of companies just use a direct GitHub integration between their code repos and Codecov.
So maybe I'm overthinking how prevalent use of the bash uploader would be.
But it's a really serious compromise of the bash uploader, based on how CI systems work.
Because of what runs alongside a shell script in your CI systems: a lot of companies and a lot of people push artifacts from their CI system up to a Docker repository, an app repo, an NPM repo, any number of artifact repositories.
And so there are a lot of secrets in CI systems.
With HashiCorp, it was their signing key. And a lot of people auto-build and auto-release their packages and their releases from CI.
So it's really surprising that there haven't been more compromises publicized.
Because I would expect that some companies might not have the logging or the capabilities to see if their artifact repositories or app repos had received a malicious commit tied to the compromised shell script.
Maybe not, though. So the way this compromise worked: you'll see that the bash uploader was modified to pipe all environment variables up to these malicious IP addresses.
And so environment variables are where people generally keep their CI secrets.
And so that could be AWS credentials, GCP credentials, Cloudflare credentials.
It could be any type of credential that people put in their CI/CD pipeline.
And it could be artifact secrets,
so authentication with Docker repositories, all of that.
Really serious. So if you have not looked into it, give it a read. Give their whole blog post a read.
And it's kind of a tough problem to solve, because if you can compromise CI/CD, it's a big deal.
And so it might be a good idea to not be pulling down bash scripts on the fly in your CI/CD pipeline, since I know that's the way the bash uploader worked.
You kind of curled the bash script and piped it to shell.
And that can be bad if that bash script is ever compromised, obviously.
So you could potentially version those scripts in a repository somewhere.
And that would allow you to still use the product and address the ongoing risk of the third party.
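One way to do that, and this is just a sketch of the general idea rather than anything Codecov specifically prescribes, is to pin the exact script you vetted and verify its checksum before it's ever allowed to run. The URL and the pinned SHA-256 below are placeholders you would fill in yourself:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

// Placeholder values: point scriptURL at the uploader you use and set
// pinnedSHA256 to the checksum of the version you reviewed.
const (
	scriptURL    = "https://example.com/uploader.sh"
	pinnedSHA256 = "0000000000000000000000000000000000000000000000000000000000000000"
)

func main() {
	resp, err := http.Get(scriptURL)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}

	// Refuse to write the script anywhere if it isn't byte-for-byte the
	// version we pinned.
	sum := sha256.Sum256(body)
	if hex.EncodeToString(sum[:]) != pinnedSHA256 {
		log.Fatal("uploader script does not match pinned checksum; refusing to use it")
	}

	if err := os.WriteFile("uploader.sh", body, 0o755); err != nil {
		log.Fatal(err)
	}
	fmt.Println("checksum verified, wrote uploader.sh")
}
```

That way a silently modified script never executes in your pipeline; the download just fails closed.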
So definitely look into this if you're interested.
Really bad, really impactful security incident if you're affected. But so far, it's only been HashiCorp and only their code signing key.
So I'm not sure if we'll see more news about it or not.
So the next thing that I had on the agenda is FLoC, the Federated Learning of Cohorts paper, which is making its rounds.
Google released FLoC maybe a month or two ago.
It really started to make its rounds this week, and the big news is that GitHub, by default, disabled it on all GitHub Pages sites.
And this caused me to actually go read the paper, because I hadn't had the opportunity to read the paper and read other people's takes on it.
I read the EFF's take on it, and I read the take that the Brave browser team released.
And I thought a quick overview might be helpful for a lot of people who maybe haven't had the chance to read it themselves.
And let me find the white paper here.
So here's the idea of the FLoC white paper. The way Google Ads works today, Google receives a bunch of data from all of the websites that add Google Analytics and Google ad tracking to their pages, all this data about people's browsing patterns and the things people are looking at.
They identify you with a cookie, and then they can target ads based on all of these things that you've seen.
And basically, their goal is to move this client side.
Instead of having a really fancy back end, where they index all of the things that you look at, figure out things about you and what types of ads you might want to see, or let advertisers target you based on some of the properties that Google has learned about you,
the idea is to move a lot of that stuff client side. And so Google won't necessarily need to see all of this data about you from all of these websites.
Instead, the browser will just support some type of learning about you, and ads can then use that learning to target you.
And so the crux of the idea is that on the client side, you'll be assigned a cohort ID, which is supposed to be somewhat anonymous.
That cohort ID is basically a fancy bit string; they have a nice image of it further on in the paper.
It's a bit string that reflects the websites you've been to.
So if I've been to a bunch of websites, and you've been to all of the exact same websites, we would probably have the same cohort ID.
And if we've been to very similar websites, the IDs that FLoC would assign to us would likely be very similar or the same.
And so based on that ID, on the backend, Google can figure out how to give you ads.
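That "similar browsing gives a similar bit string" property is basically a locality-sensitive hash. Just as a toy illustration of the idea, and definitely not the actual algorithm Chrome ships or the exact construction in the paper, here's a tiny SimHash-style cohort ID over a set of visited domains in Go:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// cohortID builds a 64-bit SimHash-style fingerprint: hash each domain,
// tally each bit position across all domains, and keep the majority bit.
// Histories that mostly overlap end up with fingerprints that mostly overlap.
func cohortID(domains []string) uint64 {
	var counts [64]int
	for _, d := range domains {
		h := fnv.New64a()
		h.Write([]byte(d))
		v := h.Sum64()
		for bit := 0; bit < 64; bit++ {
			if v&(1<<uint(bit)) != 0 {
				counts[bit]++
			} else {
				counts[bit]--
			}
		}
	}
	var id uint64
	for bit := 0; bit < 64; bit++ {
		if counts[bit] > 0 {
			id |= 1 << uint(bit)
		}
	}
	return id
}

func main() {
	a := []string{"news.example", "shoes.example", "recipes.example"}
	b := []string{"news.example", "shoes.example", "travel.example"}
	fmt.Printf("cohort A: %064b\n", cohortID(a))
	fmt.Printf("cohort B: %064b\n", cohortID(b)) // mostly the same history, mostly the same bits
}
```

Flip one site in the history and most of the bits stay put, which is what lets people with similar browsing land in the same or nearby cohorts.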
And I've kind of polled some different people in different communities in the security world about this.
The paper is kind of interesting, and it's a quick read.
So I highly recommend giving it a read.
I had been a little intimidated to jump in because I thought, this is going to take all afternoon, but it really took me about 20 minutes to kind of understand it, get the idea of it, and give it a full read.
And so you have these takes that FLoC is a terrible idea.
And basically what some of these takes boil down to is that this cohort ID isn't really anonymous.
You can still kind of use it for tracking. That's one of the criticisms.
And then there's a new privacy problem, in that this cohort ID can be, and is, exposed to all the websites that are using this.
And so if you go to my personal website, I can see your cohort ID, from my understanding of it.
And people are somewhat uncomfortable with that. So that's the main criticism that I've heard from the EFF article and from some other reading of people's opinions on FLoC.
And then I've also heard some other opinions about it.
Some say that this is a net improvement over the current system.
So it's good that we've started working on something where tracking everything isn't going to be necessary.
And the FLoC paper calls out that it's very much a balancing act: they're balancing the utility of third-party, targeted advertising for their business model against privacy.
So it is a privacy improvement.
There are still some problems, and the ad targeting is still pretty good, though not as good as what they're currently doing.
So my overall take on this is that I really think the FLoC paper is going to change a lot.
It's still just a white paper.
And based on what I read, I can imagine it's going to go through many, many more iterations of improvement.
It's really not finished.
I definitely recommend giving not only the white paper a read, but different people's opinions a read.
So the EFF has a strong opinion that it's a terrible idea, but I'm sure there are other opinions out there as well about it potentially being a net win.
But I definitely don't think it's in a fully baked enough place for me to have a strong opinion about whether it's a positive or a negative.
It was very interesting, though, and it's definitely something that I'm going to watch develop over the next couple of years, probably.
I could see improvements coming later this year potentially, and maybe it'll go really fast based on some of the legal scrutiny around what's going on with Apple and Google.
But it is very, very interesting from a technical perspective. The JavaScript APIs are sketched out, but they're not well documented and they don't actually exist yet.
So in some ways, GitHub blocking this is a little early. I didn't try too hard, but I downloaded Chrome Canary and these APIs didn't exist yet; the interestCohort API didn't exist yet.
But this is kind of what the JavaScript code will look like for interacting with the cohort.
And their privacy and security concerns section lays it out quite clearly that you are potentially revealing people's interests.
People are not anonymous with this, and you can still kind of use it for tracking, but it's not super easy to track people either.
And so it's quite interesting.
Well, that's kind of the news. And so I wanted to start a new project.
You'll remember we did a bunch of programming with Workers and we built a kind of simple website.
I've got it right here, ejcx.net. And we had a whole login flow.
It worked really, really well. It took a couple of weeks to get going because we kind of built everything from scratch, from the password hashing to the storage, to everything, all with Cloudflare Workers and Workers KV.
And that was really exciting.
For this next project, I'm thinking about not doing something Workers-related, despite really enjoying programming with Workers.
And what I was kind of thinking about is building some security canaries.
And so let me pull up something on canaries here.
The Wikipedia page should be, no, this is not it.
I'll pull up Thinkst Canaries since they're very notable in this space.
So canaries are little services or systems or things that look like computing assets, but aren't.
And all they are is fodder that defenders put in their networks or put somewhere.
And they wait for these things to get used or accessed. They're hooked up to some monitoring and alerting systems.
And when an attacker touches it, the defenders will receive an alert.
When an alert fires, it's kind of bad news, because it means you've been compromised and there's somebody in your network. But canaries are also very, very simple to put out there and very effective in figuring out whether or not you've been compromised.
Because as an attacker, obviously "the quieter you are, the more you're able to hear" is a mantra that most attackers have, and being stealthy is what you want to be.
But also if you're too stealthy, you can never move laterally as an attacker.
You can never find more assets or find the treasure trove of data or move beyond where your foothold is into a network.
And so you end up needing to do reconnaissance.
You end up needing to try things, try credentials, look for your next step in.
And so the canaries are really effective.
And Thinkst here, this canary.tools website, they're a vendor who sells these.
I've never used the Thinkst ones, but their canary tokens are pretty ubiquitous: they're things like fake AWS credentials that you can just drop all over the place.
And you know they've been exposed when somebody starts using them. Or it could be somebody internally trying to use them; you never know if you're at a big company.
But it usually means one of two things: either you're going to have a conversation with an employee at the company you work at, because they found it and wondered, what does this do?
Or it means you've been compromised,
which is way worse than an employee just trying it out of curiosity.
And so I wanted to start a little design of what we're going to start doing.
And so here's our network that we're protecting. We're going to have some assets in here.
And so here's a square. This is actually something that I do: as a non-artistic person, I use slides to make my depictions, my graphics.
So we have, this is a service. I wonder if they have cylinders in here for databases.
Okay.
Maybe we don't need more. We just need a service and we will call this our canary.
And what do we want our canary to do? It's running within our network here, and we want it to listen on some ports.
So most easily we're going to want it to listen on, let me make a text area.
This works. So requirements. Oh, this is great. Actually down here.
Requirements. We want our canary to listen on, let's start with 80 or 8080.
Support HTTP on an arbitrary port. HTTP is a good starting place. And a lot of people deploy canaries that look like storage, a NAS, a database, all sorts of things.
But we'll start with HTTP and we can add support for others later.
When it receives a request, or a specific type of request like /admin or something, then we are going to fire an alert.
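Just to make that concrete, here's a rough sketch in Go of what that first requirement could look like. The sendAlert function here is a placeholder I'm making up; wiring in the real alerting comes next:

```go
package main

import (
	"log"
	"net/http"
)

// sendAlert is a stand-in for the real alerting integration we'll build.
func sendAlert(reason, sourceIP string) {
	log.Printf("ALERT: %s from %s", reason, sourceIP)
}

func main() {
	// Any request to a canary is suspicious, but /admin is exactly the kind
	// of path someone poking around a network would try.
	http.HandleFunc("/admin", func(w http.ResponseWriter, r *http.Request) {
		sendAlert("HTTP request to /admin", r.RemoteAddr)
		http.NotFound(w, r)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```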
And then for delivering the alert, there are multiple strategies we could employ here.
We could try to use a vendor like pager duty.
We could try to use Twilio. That'd be great.
We could try to send an email. I don't know what's best.
I think I'll opt for Twilio, to receive an SMS when this happens.
And so before that episode, I'll have the secret already stashed away safely, so you all can't see it, and my email as well, so you all don't spam me.
But we'll probably integrate with Twilio here since it's nice and easy.
Twilio alerting.
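A minimal sketch of the Twilio piece, assuming the account SID, auth token, and phone numbers all come from environment variables so no secrets end up on screen or in the repo. It posts to Twilio's Messages REST endpoint with basic auth; treat it as a starting point rather than the final code we'll write on stream:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"os"
	"strings"
)

// sendSMS fires one text message through Twilio. TWILIO_ACCOUNT_SID,
// TWILIO_AUTH_TOKEN, TWILIO_FROM, and ALERT_TO are assumed to be set in the
// environment.
func sendSMS(body string) error {
	sid := os.Getenv("TWILIO_ACCOUNT_SID")
	token := os.Getenv("TWILIO_AUTH_TOKEN")

	form := url.Values{}
	form.Set("To", os.Getenv("ALERT_TO"))
	form.Set("From", os.Getenv("TWILIO_FROM"))
	form.Set("Body", body)

	endpoint := fmt.Sprintf("https://api.twilio.com/2010-04-01/Accounts/%s/Messages.json", sid)
	req, err := http.NewRequest("POST", endpoint, strings.NewReader(form.Encode()))
	if err != nil {
		return err
	}
	req.SetBasicAuth(sid, token)
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("twilio returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := sendSMS("canary test alert"); err != nil {
		fmt.Println("failed to send alert:", err)
	}
}
```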
Yeah. So that's about it, but we should probably support more than HTTP.
To make this really interesting, we should probably support other protocols.
Some coming to mind: FTP, and SSH might be a good one.
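An SSH canary can be much dumber than a real SSH server. As a sketch of the idea, a TCP listener that presents an SSH-looking banner and alerts on any connection would be enough, something like:

```go
package main

import (
	"log"
	"net"
)

func main() {
	// 2222 here so it runs without root; a real deployment would probably
	// sit on 22 to look convincing.
	ln, err := net.Listen("tcp", ":2222")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		// Present a plausible banner, record who knocked, and hang up.
		conn.Write([]byte("SSH-2.0-OpenSSH_8.4\r\n"))
		log.Printf("ALERT: SSH connection attempt from %s", conn.RemoteAddr())
		conn.Close()
	}
}
```

No authentication and no real protocol implementation, just the banner and the alert.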
And then, I'm not sure; we can add to it later. And so in our diagram, we'll have an arrow here, and this is going to connect to Twilio.
This is kind of how it's all going to look. Twilio for alerting, which will then alert me via SMS.
Oh boy, here we go.
Oh, I can't, here we go. This is how you rotate it. So this is kind of the architecture of what we're going to build over the next couple of episodes.
Twilio for alerting, and then the alert reaches my phone.
And what should be in the alert? Requirements for the alert: we should probably spec that out.
We'll want to know the source IP address.
We will want to know the protocol and why it was an alert.
So a connection or HTTP request to /admin, or an attempted SSH authentication.
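Spelled out as a data structure, and these field names are just my guess at what we'll land on, the alert might look something like:

```go
package main

import (
	"fmt"
	"time"
)

// Alert carries what we just listed: where the connection came from, which
// protocol the canary was speaking, and why it fired.
type Alert struct {
	SourceIP string    // e.g. the RemoteAddr of the HTTP request
	Protocol string    // "http" for now, later maybe "ssh" or "ftp"
	Reason   string    // e.g. "HTTP request to /admin" or "attempted SSH authentication"
	Time     time.Time // when the canary was touched
}

func (a Alert) String() string {
	return fmt.Sprintf("[%s] %s canary triggered by %s: %s",
		a.Time.Format(time.RFC3339), a.Protocol, a.SourceIP, a.Reason)
}

func main() {
	fmt.Println(Alert{
		SourceIP: "203.0.113.7",
		Protocol: "http",
		Reason:   "HTTP request to /admin",
		Time:     time.Now(),
	})
}
```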
We want this to be easy to deploy.
So easy to deploy: this should be in a container, and then there should be some magic command to easily run it, whether it's a Makefile or a binary that runs your container, like a CLI that invokes the container with the right arguments for port listening and all of that.
We will want some way to easily deploy this. And then anything else we should add?
I think that's pretty good. This will likely keep us busy for a couple of weeks if we're doing 30 minutes of live programming at a time.
But if I had to plan out the milestones as well, that looks pretty good.
Milestone one, end-to-end alerting for HTTP. So that seems like honestly about half an hour of work.
One episode of work would be a nice goal for that: get a Go service running that's listening on HTTP, have it fire some type of alert, and then ship that alert to Twilio.
And it's all packaged with a basic Dockerfile, not really easy to deploy yet.
Then the next milestone might be support for other protocols.
Then the third milestone will be spruce it up a bit, make it really easy to deploy.
And then it'd be nice if we shaped up our code a bit so that it was more like a module system,
so it was really easy to contribute additional protocols: a module system for protocols and one for alerts.
Those are probably two separate things. And I think this could be a really compelling open source project.
And I think it'd be helpful for a lot of people. So this looks pretty cool. We'll probably start this and we have just one minute, 25 seconds left.
So I'll try to do one puzzle at the end of this, because I said I'd do chess puzzles for weeks with y'all and I never got around to it.
So let's do it. Let's do one or two for the end.
So we have one minute on the dot. What is the move here? To note, this seems like it could be a problem.
This looks compelling, but I don't.
Oh, I see. Yes, this is an easy one. So this is going to be a big problem as it moves.
We'll misplace my queen over here. So the correct sequence here is going to be rook h8, king takes on h8, queen h5 here, check, king moves back here, and then delivering mate on h7.
Let's do it. And if you didn't see that, or if you did, that's why you practice puzzles, because they're helpful in getting better at chess.
All right. Well, that was your puzzle of the week.
Thank you so much for joining Hacker Time this week. We're going to start programming next week.
Adios. I will see you all then. Thank you.