Uncovering the HTTP/2 Zero-Day vulnerability, and Israel-Gaza Internet patterns
In this week's program, João Tomé is joined by two Cloudflare guests based in Texas (US) and London (UK): John Engates (Field CTO) and Lucas Pardue (systems engineer and HTTP expert). The main focus is explaining what the recently discovered HTTP/2 Zero-Day vulnerability is, its significant impact on the Internet at large (resulting in never-before-seen DDoS attacks), and how Cloudflare’s customers are already protected.
Next, two other vulnerabilities are highlighted: the Atlassian Confluence CVE-2023-22515 and the hidden WebP vulnerability, which has more significant implications than originally anticipated.
We also delve into some Internet traffic patterns from Israel and the Gaza Strip after a conflict in the region was ignited by the October 7 Hamas attack.
Cloudflare is also celebrating its inclusion as a Top 100 Most Loved Workplace, according to Newsweek. We also explain why the general availability of the Magic WAN Connector makes life easier for large customers.
If you enjoy technical deep dives, this “Virtual networking 101” blog post explores the intricacies of TAP devices (a virtual network interface that looks like an Ethernet network card) and how they are now used for virtual machines, reversing their original purpose.
Last but not least, we premiere our short segment “Ask the CTO”, with John Graham-Cumming answering audience questions.
You can check out the blog posts mentioned in the episode:
- HTTP/2 Zero-Day Vulnerability Results in Record-Breaking DDoS Attacks
- HTTP/2 Rapid Reset: deconstructing the record-breaking attack
- Uncovering the Hidden WebP vulnerability: a tale of a CVE with much bigger implications than it originally seemed
- All Cloudflare Customers Protected from Atlassian Confluence CVE-2023-22515
- Cloudflare's a Top 100 Most Loved Workplace for the second consecutive year in 2023
- Internet traffic patterns in Israel and Palestine following the October 2023 attacks
- Announcing General Availability for the Magic WAN Connector: the easiest way to jumpstart SASE transformation for your network
Transcript (Beta)
Hello everyone and welcome to This Week in Net. It's the October 13th, 2023 edition, which means that it's Friday the 13th.
And this week we're going to talk about relevant vulnerabilities.
A new one was just disclosed. I'm João Tomé, based in Lisbon, Portugal, and with me I have not one, but two guests.
In the next few weeks, our CEO, John Graham-Cumming, who usually does this with me, will still appear at the end of the show with different short segments.
One is Ask the CTO, the other is a bit of Cloudflare history.
So this week I have with me John Engates, Cloudflare's Field CTO, and Lucas Pardue, Systems Engineer and a true protocol expert.
Hello, John. Hello, Lucas. How are you? I'm good. Great, thanks.
Very good. Let's start with the introductions. First, possibly, John, what does a field CTO do?
Oh, well, I do a lot. I spend a lot of time with our customers, primarily.
I travel and I talk to customers on a regular basis. I try to help them understand Cloudflare and what goes on behind the scenes, and understand our vision and where we're headed as a company.
I joke that the T in CTO is for traveling and for talking.
I do a lot of that, and that's my mission at Cloudflare. I see your job as a globetrotter job, because globetrotters are going everywhere.
And Lucas? So I am an engineer on the protocols team at Cloudflare. Our team is responsible for the server that is the front door to all of the brain of Cloudflare.
So Cloudflare's construct is a chain of proxies or services that speak to each other, but the front door does the plumbing and the direction.
So we care about TLS and the security aspects of those things, making sure that we're serving up the correct certificates and boring technical details of things, but also doing protocol translation.
So we speak the newest protocols, such as QUIC, HTTP/3 and HTTP/2, and we're able to translate them into a format that the rest of the world's origin servers can all communicate with.
And part of that job is maybe half of it is doing the engineering work, keeping the lights on, and the other half is engaging with the standards world, helping to collaborate across industry, because there's no good having a server that does really clever things if there's no clients.
So we work closely with browser vendors or other people who are interested, like the HTTP ecosystem, so to speak, just making sure things are okay.
So where we see maybe new trends or new opportunities to help standardize new solutions, we're engaging actively in those things.
Exactly. And this week, HTTP protocols were on the news because of a vulnerability.
We're going to discuss a little bit on that.
But Lucas, correct me if I'm wrong, you're based in London? Correct.
Yes. And John, you're in San Antonio, Texas? I'm in Texas. That's right.
Okay. Just to situate. So let's dig right into the main focus of this week, which was that Cloudflare disclosed, along with Google and Amazon AWS, the existence of a novel zero-day vulnerability dubbed the HTTP/2 Rapid Reset attack.
So it exploits a weakness in the HTTP/2 protocol. Let's start, possibly, with how these types of vulnerabilities work.
Zero-day vulnerabilities are vulnerabilities that were not previously known, right?
But how do these vulnerabilities work, and how does this process of announcing a zero-day vulnerability work?
You want me to start with that one? I think I'll take it. I've been talking with a lot of customers about this over the course of the week.
But this is basically a disclosure that Cloudflare made to the broader cybersecurity community, to the vendor community, to our partners, and even our competitors in many cases.
We wanted to disclose this vulnerability because it has the potential to be so impactful to the Internet at large that we felt it was necessary to make sure that everyone was protected, right?
We have a mission at Cloudflare to help build a better Internet.
That's the goal of the company when we go through our daily jobs is to build a better Internet.
And ideally, that means protecting and securing and helping make sure that traffic continues to flow reliably.
But in the face of a big vulnerability like this one, the rapid reset, it has the potential to disrupt traffic.
There were some very large DDoS attacks targeted at Cloudflare off the back of this vulnerability, and we saw the potential for impact that these could bring.
And so we needed to really disclose that to the broader ecosystem. And those were record attacks.
We've never seen, in terms of requests per second, we've never seen this scale of attacks.
So it was much larger than the previous record that we've seen in terms of industry and also Cloudflare.
But I think there's a message to share before we go into the details that Lucas can help us with: just as a heads-up, if you're using Cloudflare, you're protected.
So our team has been working for a few weeks now to make sure that when this zero-day vulnerability was disclosed this week, those who use Cloudflare are already protected.
So that's a good message to have at the get-go in a sense, right? Yeah, absolutely.
Lucas, following up a little bit on what the HTTP/2 protocol is and how much of the web uses it: if I go to Cloudflare Radar, I will see that it's around 62% these days.
So the use of the HTTP/2 protocol is massive.
So how does that protocol work and how did we discover this?
I think I'd have to say, can I have three hours maybe instead of three minutes?
Our audience needs a sum up. Yeah, yeah. I could talk about this stuff all the time, day in, day out.
That's what I get to do. That's why I like working in the company.
But I'll try to be brief. Effectively, HTTP has been around forever in Internet terms.
And if you go and look in the Cloudflare blog, I think back in 2018, I wrote kind of a long history of the Internet.
And that is already stale because things keep evolving.
They move fast, but they also move glacially slowly; it's this kind of weird dichotomy.
But effectively, what we have is this, what we like to refer to as HTTP semantics, the things that are common across all versions of HTTP.
So you've gone to blog.cloudflare.com. The first interaction there would be an HTTP GET request for blog.cloudflare.com/index.html or whatever it is.
And that would elicit a response back from the server.
That idea is a common semantic. It would be encoded differently onto the wire; those are the bytes that need to be transmitted from a client to a server and read in order to understand what the request was for, and a response sent back.
So the differences in the wire format, the kind of syntax of HTTP, that's what the versions are.
So effectively, there's no difference, but there's a humongous difference, because that encoding is what allows for additional features or additional performance gains.
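To make the "same semantics, different wire format" idea concrete, here is a minimal sketch that builds the same GET request as HTTP/1.1 plaintext and as HTTP/2 binary frames. It assumes Python and the third-party h2 package purely as one example implementation; the hostname and path are illustrative, not anything the speakers reference.

```python
# Illustrative sketch: the same GET request in two wire formats.
# Assumes the third-party Python package "h2" (hyper-h2) is installed;
# it is just one of many HTTP/2 implementations.
import h2.connection

# HTTP/1.1: the request is plain text on the wire.
http1_request = (
    b"GET /index.html HTTP/1.1\r\n"
    b"Host: blog.cloudflare.com\r\n"
    b"\r\n"
)

# HTTP/2: the same semantics (method, path, authority) are encoded
# as binary frames with HPACK-compressed headers.
conn = h2.connection.H2Connection()
conn.initiate_connection()          # connection preface + SETTINGS frame
conn.send_headers(
    stream_id=1,
    headers=[
        (":method", "GET"),
        (":path", "/index.html"),
        (":scheme", "https"),
        (":authority", "blog.cloudflare.com"),
    ],
    end_stream=True,                # a GET with no request body
)
http2_bytes = conn.data_to_send()

print(len(http1_request), "bytes of text vs", len(http2_bytes), "bytes of frames")
```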
You mentioned the statistics of usage.
Effectively, HTTP/2 is just an upgrade to HTTP/1.1.
It works better and faster across nearly every kind of scenario you can imagine.
And so browsers prefer HTTP/2 by default. When they connect to a server, they offer up a list:
I speak HTTP/1.1, I speak HTTP/2. The server picks one of those and then they progress and talk that.
So yeah, that explains why it's more common.
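That offer-and-pick step happens during the TLS handshake via ALPN. A minimal client-side sketch, assuming Python's standard ssl module and an example hostname:

```python
# Sketch: a client offering HTTP/2 and HTTP/1.1 via ALPN during the TLS handshake.
# Standard library only; the hostname is just an example.
import socket
import ssl

ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])   # "I speak HTTP/2 and HTTP/1.1"

with socket.create_connection(("blog.cloudflare.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="blog.cloudflare.com") as tls:
        # The server picks one protocol from the offered list.
        print("server selected:", tls.selected_alpn_protocol())   # typically "h2"
```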
Exactly, it's more common now because it's a better version, in a sense, right?
Safer, quicker, better on those aspects. But going back to this specific vulnerability, can you help us understand a bit about when and how we discovered it?
It started around August 25th, right? We started noticing something different.
Yeah, I'll caveat this. I do not work in the DDoS team. So my comments are from my perspective in the protocols team.
We work really closely with our colleagues.
We get on great with them. And this was very much a collaborative effort across many engineering teams and other teams, for sure.
But yeah, we're constantly being DDoSed. And that's part of what makes Cloudflare great, that we have automatic detections and mitigations.
And the threat landscape is constantly evolving.
It's not static. And so we're used to being able to see new trends or new things happening and adapt and deploy.
But the sheer scale of these was humongous.
I'd say it's been fairly rare up to this point that attacks have tried to really leverage a feature of the HTTP/2 protocol itself, rather than just mounting a volumetric attack by sending lots of requests.
This was that, but it used a feature of H2 called cancellation or reset to be able to very cheaply send a whole bunch of requests all at once that normally shouldn't be able to happen because we have guardrails in place.
This isn't that dissimilar to other vulnerabilities that were discovered in 2019 by Netflix engineers.
And again, that was a responsible disclosure process. We were able to work on and implement a load of mitigations for that by providing a lot of monitoring of the low-level plumbing details that no one generally needs to care about.
But in this specific case, the variant, it's a protocol feature.
Let's be clear, it's not a bug in the protocol. This is the classic joke between engineers: is it a bug or a feature, it's not a bug, it's a feature, et cetera, et cetera.
But in this case, canceling a request is really useful.
If you load up on your mobile phone, you might be on not a very great connection and you browse to a page and you decide you want to click a link and go to the next page or go back.
You don't want to be wasting your bandwidth and your time waiting for things that you're not interested in anymore.
Cancellation is great. It's even better in H2 than H1 because in H1, you have to just kill the entire TCP connection, the underlying connection in order to do that, which has its own compromises or bad outcomes.
In HTTP/2, you just send a frame called a reset stream (RST_STREAM). That very quickly and cheaply cancels the thing, and the server would see this frame and tear everything down.
That's the model. This is a cheap thing to do and it works and it's been implemented.
It's not a new feature. It's not something that was rolled out recently and was found to be done badly.
It's just endemic. Every HTTP/2 server in the world supports this frame and does what the spec says you should do.
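For a sense of what a legitimate cancellation looks like at the API level, here is a small sketch, again assuming the Python h2 package as an example implementation: the client opens a stream and then cancels it with an RST_STREAM frame, exactly the benign use Lucas describes.

```python
# Legitimate cancellation: open a stream, then reset it.
# Sketch only; uses the Python "h2" package as an example implementation.
import h2.connection

conn = h2.connection.H2Connection()
conn.initiate_connection()
conn.send_headers(
    stream_id=1,
    headers=[
        (":method", "GET"),
        (":path", "/large-image.jpg"),
        (":scheme", "https"),
        (":authority", "example.com"),
    ],
    end_stream=True,
)

# The user navigated away: cancel the in-flight request cheaply, without
# tearing down the whole TCP connection (the HTTP/1.1 option Lucas mentions).
conn.reset_stream(stream_id=1, error_code=0x8)  # 0x8 = CANCEL

wire_bytes = conn.data_to_send()  # preface + SETTINGS + HEADERS + RST_STREAM
```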
Where the attack comes in is that we can do concurrency, multiplexing, that's great, but an infinite amount of concurrency isn't going to be good for the client.
There's no point asking for everything all at once.
You want to target things that are most important, like an image that is at the top of the page, etc.
It also consumes resources.
Every request you need to process has some cost on the server side, whether it's CPU or memory, etc.
To provide some guardrails for this, HTTP/2 has a setting for stream concurrency called max concurrent streams (SETTINGS_MAX_CONCURRENT_STREAMS).
I think if you scroll down a little bit, there's some paragraph that goes into this a bit more.
The stream frame here, right? Yeah. What a lot of servers do, or what the spec says, is default to round about 100; that's a good number.
That's based on some gut feel of how web pages are constructed, etc. Different servers can pick different values.
They can go larger if they think they've got more capacity or scale, etc.
They could go smaller if that's what they want to do.
The thinking is that no matter what you do, you open up these requests. There's this complicated state diagram underneath.
We don't need to go into that. When you send a HEADERS frame, you open a stream.
As you get the response back, the stream will naturally get to an end state and close.
The active streams are all running the state machine, but they always need to stay under 100.
You get 100 to start with, you use that bucket.
Maybe if you reset a stream, you get one back and you can make a new request.
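As a rough illustration of that guardrail, a server advertises its limit in its SETTINGS frame. A sketch, assuming the Python h2 package and the commonly used value of 100:

```python
# Sketch: a server advertising SETTINGS_MAX_CONCURRENT_STREAMS = 100.
# Uses the Python "h2" package as one example implementation.
import h2.config
import h2.connection
import h2.settings

config = h2.config.H2Configuration(client_side=False)
conn = h2.connection.H2Connection(config=config)
conn.initiate_connection()
conn.update_settings({h2.settings.SettingCodes.MAX_CONCURRENT_STREAMS: 100})
settings_frames = conn.data_to_send()   # includes the advertised concurrency limit
```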
That's the difference in this case, right? The difference with the rapid reset is that effectively, you can use it to fool this accounting.
The stream concurrency accounting is all valid, but it's effectively an accounting loophole: the client is able, without any server action, to create 100 things, reset 100 things, create 100 things, reset 100 things in rapid succession, such that they never break the limit and they never get detected as doing so.
But what will happen is the server is churning and constantly creating and closing things.
If it can respond quickly and do that, there's actually no issue.
That's what we see a lot of the time, that the server is also rapid and everything's great.
But if there's any kind of latency or delay that is incurred by the system tidying itself up, you start to grow this backlog of work.
Eventually, if that grows too big, the system resources start to get gummed up and used.
I know that people have explained it differently and that's my view.
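The accounting loophole is easier to see with a toy model. The following simulation is purely illustrative, with made-up numbers; it only shows that the active-stream count never exceeds the limit while cleanup work can still pile up if teardown is slower than the attacker's open-and-reset loop:

```python
# Toy model of the rapid reset "accounting loophole" (illustrative only; the
# numbers are made up and this is not how any real server is implemented).
MAX_CONCURRENT_STREAMS = 100   # the advertised guardrail
CLEANUP_PER_ROUND = 60         # cancelled streams the server can tear down per round
ROUNDS = 1000

active_streams = 0             # what the concurrency limit actually counts
cleanup_backlog = 0            # cancelled streams still holding server-side resources

for _ in range(ROUNDS):
    # Attacker: open a full batch of streams, then reset them all immediately.
    active_streams += MAX_CONCURRENT_STREAMS
    active_streams -= MAX_CONCURRENT_STREAMS   # resets free the "slots" right away...
    cleanup_backlog += MAX_CONCURRENT_STREAMS  # ...but the teardown work still exists

    # Server: teardown (cancelling upstream requests, freeing memory, logging)
    # is not free; it can only finish a limited amount per round.
    cleanup_backlog = max(0, cleanup_backlog - CLEANUP_PER_ROUND)

    # The concurrency accounting never flags a violation.
    assert active_streams <= MAX_CONCURRENT_STREAMS

print("streams still awaiting teardown:", cleanup_backlog)
# 100 new cancellations per round against capacity for 60 teardowns leaves a
# backlog growing by 40 per round: 40,000 after 1,000 rounds. If the server
# could keep up (capacity >= 100), the backlog would stay at zero.
```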
Especially in the architecture you showed, with the TLS decryption proxy and the backend server.
When you have a more distributed architecture, such as cloud edges tend to have, it's more likely that you're offloading work elsewhere and the tidying up actions are just a little bit more delayed or a little bit harder to tear everything down.
Going to you, John: there's the question of what to take from this, what every CSO needs to know.
In this blog post, Grant, our CSO, explains the high-level perspective of this zero-day vulnerability.
There's this section that goes along with what Lucas was saying: in this case, by automating this trivial request, cancel, request, cancel pattern at scale, threat actors are able to create a denial of service and take down any server or application running a standard implementation of HTTP/2.
So what every CSO needs to know. Yeah, for sure.
What everybody needs to know is that every web server that runs HTTP/2 was vulnerable to this.
This is not unique to any particular host or provider or service.
Every web server out there that speaks the HTTP/2 protocol was vulnerable, because they all implement the protocol in a standard way.
We were talking there about all the ways that the protocol aims to improve efficiency.
Well, they're taking advantage of that.
They're taking advantage of it by sending a lot of requests and then canceling them very quickly, using up lots of resources that basically go toward handling those requests but not really serving anything valuable.
So this is a very large amplification where they get to use very few resources to attack and they use up or manipulate the server to take a lot of resources away from the actual work that should be done.
And so again, everybody was vulnerable, including big service providers, because we all implemented the protocol in the standard way.
And today, what we have to do really is figure out workarounds, which Lucas and team and a whole bunch of other engineering folks have worked on to make sure that we can detect when somebody's trying to abuse the protocol, and then we can mitigate it in an effective way across our fleet.
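The speakers don't describe Cloudflare's actual countermeasures here, but one commonly discussed mitigation pattern is to track how often a client cancels streams on a connection and close connections that abuse it. A hedged sketch of that idea, with hypothetical names and thresholds:

```python
# Illustrative mitigation sketch: per-connection reset-rate tracking.
# The class name, threshold, and window are hypothetical choices for the
# example, not Cloudflare's actual implementation.
import time
from collections import deque


class ResetRateGuard:
    """Flags a connection whose client sends too many RST_STREAM frames too fast."""

    def __init__(self, max_resets=200, window_seconds=10.0):
        self.max_resets = max_resets
        self.window_seconds = window_seconds
        self.reset_times = deque()

    def on_client_reset(self, now=None):
        """Record one client-sent RST_STREAM; return True if the connection
        looks abusive and should be closed (for example with a GOAWAY frame)."""
        now = time.monotonic() if now is None else now
        self.reset_times.append(now)
        # Forget resets that have fallen out of the sliding window.
        while self.reset_times and now - self.reset_times[0] > self.window_seconds:
            self.reset_times.popleft()
        return len(self.reset_times) > self.max_resets


# Usage sketch: 300 cancellations arriving over ~3 seconds trips the guard.
guard = ResetRateGuard()
for i in range(300):
    if guard.on_client_reset(now=i * 0.01):
        print("abusive connection detected at reset number", i + 1)
        break
```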
Exactly.
It's also an example of collaboration. Our teams collaborated with other teams on the engineering side.
Google, AWS: we're seeing this collaboration inside and outside the company.
So now customers and everyone have to be aware of this; not necessarily our customers, since we've put these protections in place, but the industry should be aware of this and apply the needed patches, right?
Right, for sure. And you mentioned earlier to talk about the size and the scale.
I don't know if we've covered that yet, but I'll just touch on it.
I mean, this vulnerability led to some DDoS attacks that were three times larger than Cloudflare had ever seen before.
Back in February of 2023, we saw attacks that were on the order of 71 million requests per second, which is large.
That was a record at the time.
But what we saw in August was attacks that topped 200 million requests per second, and there were lots of them.
This was not an isolated incident of one attack.
This was numerous attacks. And it wasn't just Cloudflare that was targeted.
There were other people in the industry. Namely, we talked about Amazon and Google.
Those were folks that we know were targeted as well. And so the collaboration across our own engineering teams and then across companies was to basically share knowledge and understand what was going on in order to obviously mitigate and make sure that we could protect the Internet at large.
Exactly. Lucas, on that front, what is the main takeaway that, as an industry, we should take from this, would you say?
You work with standards, we work with others in the industry?
There's collaboration across the vendors; we're friendly competitors in some areas.
And in other places, we're working with people like Curl, for instance, and the maintainers there.
This wasn't a client issue in this case, but in other interop or interesting interactions that we've seen, sometimes there is something we can do better, like I mentioned before with browsers.
But in this case, I already mentioned the Netflix vulnerabilities from before.
What we did there was, although the way it manifests is the same, a lot of this depends really on the implementation and what it's doing and how it's architected.
And because we're a diverse ecosystem of people who want a web server to run on a Raspberry Pi or somebody in a big data center, there's lots of implementations and they're written in different languages, programming languages, or they're targeted to run in different environments.
And so the design decisions they made for their implementation might mean they're more subject to some kind of traffic patterns than others, or that their mitigations for some of those attacks need to be quite different.
Or they might just pick a different way, a different solution to solve the same problem.
That diversity is actually pretty good, because when such attacks become visible, they're always latent.
In this case, it just required somebody to figure out, oh, I can use this thing in a certain way.
When it doesn't affect everybody in exactly the same way, we can have some kind of immunity in a vague way.
But yeah, the actual outcome is that people patched their stuff in the ways that made sense to them, and they documented effectively, I'm going to implement it in this way.
And this is all part of the public open source implementation of something.
But after the Netflix work we had, effectively, there wasn't anything to change in the protocol itself.
That's just the way it worked. We did look hard at that.
Is there anything maybe we didn't do right at the time H2 was being defined?
We didn't find anything there. But what we could do is create security considerations that, while we'd already said, look, there's always a potential for any abuse in any Internet protocol.
Be mindful of that, and maybe look out for these bullet points.
We went back to the H2 standard, and expanded that a lot, and actually pointed to some of those CVEs to say, these are the specific examples of how the protocol could be abused to do these kinds of things.
And there are different ways you could detect and mitigate this.
No matter what way you decide to choose this, you should definitely be observing those traffic patterns and making some decisions based on your own local implementation knowledge.
And effectively, we're in that similar situation now with rapid reset. Is there anything we can change in the protocol?
Not directly, but indirectly. Some of the stuff I mentioned about stream concurrency, we might be able to tweak things there and come up with something that can eliminate this class of attack altogether.
That is going to be a longer discussion. We have an IETF meeting planned in Prague; those happen regularly, periodically.
We're going to find some time during that session to bring in the community and see if there's anything we can do.
So stay tuned on that one. But also maybe just another bullet point in that list of, here's another kind of attack example.
These are not exhaustive, but the more defensive you can make your web servers, the less chance there is of these things happening.
One conclusion I take from this: here is a protocol that was created a few years ago, and we're only finding a new vulnerability in it now; something that people didn't explore before, they are exploring now.
So the sophistication, even when using old methods, is always increasing.
So it's good for companies like ours, like you were saying, to be aware of these changes and to create quick solutions for these vulnerabilities.
The advantage I think Cloudflare has is that we have layers of defenses, countermeasures.
We have people constantly watching and monitoring the traffic flows.
We understand these probably at a deeper level, obviously based on Lucas's discussion, than most people would understand the protocol.
We can deploy protection very quickly and we can do it at scale, which is very different than trying to do this on a single web server or a single piece of infrastructure.
We've got lots and lots of capabilities that the average web server wouldn't have.
And that's really why my conclusion to all this is make sure you have someone like Cloudflare in your corner when you're up against attacks of this caliber.
Absolutely. And we have a dedicated page that people can use if they're currently under attack, to ask for advice or protection, which is also good to mention.
We still have time just to go over a few of the other blog posts we published this week.
So there are these two HTTP/2 ones, but there's also a deep dive, “Virtual networking 101: Bridging the gap to understanding TAP”, from Marek.
This is a clear deep dive, technical deep dive that people can go over.
TAP is defined as a virtual network interface.
For a Portuguese person, TAP is also the name of our national airline, but in this case it's a different TAP.
We also had Cloudflare named a Top 100 Most Loved Workplace for the second consecutive year in 2023.
Always good to have these little things. This is from Newsweek.
Actually, I want to hear your thoughts very briefly, if you can, about this other vulnerability regarding the Atlassian Confluence vulnerability.
Yeah, this one was Atlassian, basically a vulnerability that was identified.
It was a CVE that they announced in Atlassian's Confluence product.
They announced, or they made us aware of it before it was publicly disclosed.
So again, this is that responsible disclosure.
Basically, this could have led to a privilege escalation where attackers could take administrative access to a public Confluence instance.
That was obviously not a good thing, and so that was assessed as critical by Atlassian.
Upon learning about that vulnerability, we collaborated with Atlassian to deploy managed WAF rules across our entire customer base.
Every customer, including the free customers, received the protection.
This has become sort of typical at Cloudflare.
We had this with the Log4j incident where we had to create WAF rules very quickly to address an issue.
We basically rolled that out for our customers, and so they're protected from that potential threat.
Exactly. There's also this one, uncovering the hidden WebP vulnerability.
It's related to Google Chrome, if I'm not mistaken.
Well, it's beyond Google Chrome. It was identified initially as something that might have been a Chrome bug, but it's actually a vulnerability that is in the libWebP library itself.
It is affecting many applications, anything that basically handles WebP images.
The vulnerability, in this case, allowed someone to create a malformed WebP image file that would overflow a memory-allocated buffer, basically allowing them to write past the bounds of the buffer and potentially modify sensitive data in memory.
That could lead, again, to some very serious problems for whoever's running the library that houses WebP in that case.
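The real libwebp bug lives in native Huffman-table code, but the general class of problem, trusting a size field taken from the file instead of checking it against the data you actually have, can be sketched in a few lines. This is an illustrative model only, not libwebp's code:

```python
# Illustrative sketch of the bug class only; this is NOT libwebp's code.
# A parser must never trust a length field taken from the file itself.
import struct


def read_riff_chunk(data, offset):
    """Read one RIFF-style chunk: 4-byte tag, 4-byte little-endian size, payload."""
    tag = data[offset:offset + 4]
    (declared_size,) = struct.unpack_from("<I", data, offset + 4)
    payload_start = offset + 8

    # The essential check: the size the file *claims* must fit in the bytes we
    # actually have. In memory-unsafe native code, skipping a check like this is
    # what turns a malformed file into an out-of-bounds read or write.
    if declared_size > len(data) - payload_start:
        raise ValueError("chunk claims more bytes than the file contains")

    payload = data[payload_start:payload_start + declared_size]
    next_offset = payload_start + declared_size
    return tag, payload, next_offset
```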
Again, we learned about this. We learned that it was something beyond just the scope of Chrome, and we promptly opened a ticket for our security teams.
We started to roll out some capabilities that would help mitigate against this kind of an attack as well.
It reminds us that updates are important, be they operating system updates, Google Chrome updates, or others; it's always important to get the latest in terms of security.
We also had an incident write-up, 1.1.1.1 lookup failures on October 4th, 2023, for those who want to read about it.
There are also two product announcements we should mention: Waiting Room adds multi-host and path coverage, unlocking broader protection and multilingual setups, and the general availability of the Magic WAN Connector.
Anyone want to highlight something on this one?
I think the Magic WAN connector is kind of cool.
It's basically extending our Zero Trust SASE architecture out to a remote site, basically giving a customer the ability to deploy a very simple appliance on-premises to extend their network into Cloudflare.
And it's basically zero-touch in terms of the configuration and install, it's managed from the Cloudflare dashboard.
It makes it very simple to extend. It's basically an on-ramp, it says it right there, a secure on-ramp to the Internet, but extending the Cloudflare network from a Zero Trust or SASE perspective into maybe a remote office, branch office scenario.
And for larger customers, this is really important because they don't want to do a lot of configuration on-site anymore.
People want to get out of the hardware business and Cloudflare is really trying to address the needs there to extend Cloudflare out to customer sites.
And this is probably the simplest way to do that.
That's really important, the simplicity in terms of plug and play, don't worry about it, we've got you covered.
We can say a lot about different products, different names of products, but having that ease of mind that, okay, I'm protected.
What do I need to know? Am I protected or not? It really goes a long way.
Last but not least, I wrote this blog post about Internet traffic patterns in Israel and Palestine following the October 2023 attacks.
I would take just a few highlights here.
First, there are now six networks in the Gaza Strip that are completely offline.
So we show a little bit of that. There was an uptick in Internet traffic in Israel right after the Hamas attacks started, which makes sense.
People were going online to see what was happening. So outages and also cyber attacks.
We saw an uptick in cyber attacks on both sides: in Israel, targeting newspapers and media companies, there was a clear uptick in cyber attacks, but also in Palestine.
So there was an uptick in both after the attacks started.
And we try to keep Cloudflare Radar up to date in terms of those Internet patterns.
So that's it for this week, I would say. Ending notes: Lucas, for those who don't know a lot about vulnerabilities, who are not technical, what do you think they should take away from vulnerabilities like the HTTP/2 one?
One is that we've got you covered, but the other is that maybe the Internet is very complex.
Yeah, I would say as much as I get to spend all my time on this stuff, I absolutely love it.
Like most people don't need to worry at all. Yes, patch, whenever you see an update, just always do it.
That's just good. Good health, good hygiene on the Internet.
But yeah, the stuff is so low level. It really should be the plumbing.
You don't think every day of the week about the pipes feeding your radiators or your taps.
They just work. When they go wrong, you need the experts to come and help you.
Or maybe take proactive measures and have health checks and your annual service or whatever.
Just be aware that the folks who work on this stuff deeply care about end users, that the whole point of us doing things is not just because it's cool and it's interesting and we're technicians or engineers.
Everything's driven by use cases. Protection and DDoS and security is fundamentally part of every specification and every implementation that's being worked on.
Sometimes things fall through the cracks. Attackers are smart people. They will try and find ways to manipulate situations or protocols however they can.
It's not just one and done, write something and think this is perfect.
Everything's evolving all of the time.
By having folks who are interested in these topics, that's what helps.
Especially on this matter, and in the IETF and standards in general, we welcome researchers to look at this in a responsible way and reach out whenever they think there's anything.
We have our own Cloudflare bug bounty program on HackerOne.
There's ways to report vulnerabilities or suspected issues. If you've done an academic paper, for instance, reach out to the people that you're looking at, because quite often they can respond and engage and they want to fix this stuff, or they might want to be able to give some feedback.
One of my colleagues, Jonathan Hoyland, is working on a research group in the IETF called Usable Formal Methods.
Formal methods can be used to analyze protocol specifications and build up models for those things and try and analyze them in a different way.
One note: IETF stands for the Internet Engineering Task Force.
That's the group that tries to put standards in place, in a sense.
You have the web too.
We have the W3C, which Cloudflare participates in. Then you have the whole level above that, layer eight, as we like to call it: policy, making sure that how we deploy these things in different regions, et cetera, meets the needs and expectations.
I don't want to go into it too much, but we've worked hard at Cloudflare to understand things like broadband nutrition labels, to make sure that users in America understand the Internet services that they're getting and what they can provide, and lots of interesting stuff.
That's a good one. That type of information is really useful.
It's like for food: you can see what the ingredients are in food, and having the same for the Internet is also good.
John, what is your takeaway, your overview?
My takeaway this week was really the idea that these DDoS attacks are going to continue to grow in terms of their size and scale and effect.
Basically, we had never seen anything like the attacks this week.
They far eclipsed what we'd seen before.
Some of the other vendors in the ecosystem saw even larger attacks than we did.
They're out of the bounds of what you would have plotted on a trend line.
That concerns me a little bit. Obviously, we are in new territory, but on the other hand, it makes me feel good that Cloudflare was able to mitigate these effects on customers.
We had very, very few customers that were affected. We quickly understood what was going on, and we were able to help stop that attack in its tracks.
On the one hand, yes, the bad guys always do figure out new ways to attack.
They're going to continue to be aggressive. There's no stopping the bad folks from launching these attacks.
Again, I think I'm always excited to see the response behind the scenes that Cloudflare mounts to really defend customers against those kinds of attacks.
It was really cool to see how we all came together inside the organization, everyone from engineering and the DDoS folks and marketing people and everybody in between talking about it and getting the word out and making sure people were aware that we had their back.
We were covering the customers that were behind Cloudflare.
That's super appealing in terms of a mission to help protect the Internet.
Absolutely. To be honest, I spoke with a few of the people from the team.
Oliver, I spoke with him on Monday. He was explaining to me the 12-week process of doing this, of trying to focus on the challenge at hand, to try to solve situations that are complex.
You can see there's effort and a willingness to solve problems, which is amazing on the technical side, but I think also for the customers, who can feel protected.
A lot of pride coming out of this with the teams inside of Cloudflare.
I see it every day with people talking about how we protected customers.
We made tons of news this week, by the way.
If you check Cloudflare in the Google News, there's lots of articles. It's just because, again, we were sharing so much information and insight.
There's a lot of pride that goes along with that because we're trying to do good in the world and make sure people are protected.
Absolutely. That's a wrap. It was a good segment to learn a bit about these vulnerabilities.
Thank you, Lucas. Thank you, John.
Thanks for having me. And that's a wrap. Before we go, now it's time for our new Ask the CTO short segment.
Our CTO, John Graham-Cumming, answers questions that people submitted on our social media channels or via email.
Someone asked the question, as a programmer at heart, do you still find time to code?
And if so, what projects are you currently working on? Yes, I love programming.
And in fact, the only reason I don't program at work is I don't have enough time.
And the last time I worked on something at Cloudflare, I had to stop in the middle of it and somebody else had to take it over.
And I don't ever want to put someone in the position of having to take the CTO's code and work on it because that's just unfair to somebody.
So I don't write much code at work at all. I do have loads of little projects at home.
If you look at my personal blog, blog.jgc.org, you can see lots of things I'm working on there.
Recently been messing around with E Ink displays a little bit.
I've been messing around with the Minitel, which I'm a big fan of.
And you can read about stuff I did with firmware there. So lots of little projects tend to be around the home, tend to be devices.
I'm very interested in what some people call quiet computing or ambient computing, where you have devices that aren't interrupting you all the time.
And so, yeah, I do that kind of stuff in a variety of languages.
My GitHub, which is jgrahamc: I put most of that stuff out there.
Go take a look. So someone wrote in and said, when are you opening a South African office or allowing South African developers to work remotely for Cloudflare?
So the answer is: Cloudflare, ever since COVID, has been a very remote-friendly company, and we have people all over the place.
But for legal reasons, we can't do it absolutely everywhere in the world, because there are complexities to paying people, taxes, the right to work and things like that.
So we have made it a rule that people can work from where we have a legal entity around the world.
And right now we are not set up in South Africa, so we can't do it. I don't know the answer to that question, but I would encourage people to apply to Cloudflare, because obviously we have offices in other locations, some people are willing to relocate, and things can change.
The company is growing very rapidly, and it wouldn't surprise me, given the importance of South Africa, if at some point we have an office there.