Cloudflare TV

Latest From Product and Engineering

Presented by Usman Muzaffar, Michael Tremante, Daniele Molteni
Originally aired on 

Join Cloudflare's Head of Engineering, Usman Muzaffar, for a quick recap of everything that shipped in the last week. Covers both new features and enhancements on Cloudflare products and the technology under the hood.


Transcript (Beta)

All right.

Hello. Welcome to another episode of the latest from product and engineering.

I'm very thrilled to have two of my colleagues who are based in our office, Michael and Daniele.

Michael, Daniele, can you go take a moment to introduce yourself?

Sure. Hello, everyone. My name is Michael Tremante. Super excited to be on Cloudflare TV again.

I am a product manager for the web application firewall and I'm calling from London today.

And hello, everyone. My name is Daniele Molteni.

I'm also calling from London today and I'm the product manager for the firewall rules.

So yeah, me and Michael are very complementary. That's right.

So all right. That's the perfect setup right there.

So I'm here in Sunnyvale in California, not far from our San Francisco headquarters.

But, you know, so it's good morning from the Bay Area here. But both of you guys are product managers in London with the word firewall in your job title.

So, Michael, let's start with: OK, what is going on here?

There's two product managers here.

What are the two parts here? What's complementary and how do they interact?

Yeah, for sure. Well, first of all, we're not lacking ideas and things to work on.

I'll tell you that. But the, you know, the firewall is an engine which we use across a lot of our security products.

Part of that engine, you know, is used by the managed rules team, which is the team I'm the product manager of, where we provide...

Why do we call it the managed rules team? What's managed about it?

We basically manage them on behalf of our customers. So, you know, if you're a Cloudflare user, you don't have to worry about creating those rules and, you know, making sure they're up to date and making sure they're working properly.

We have a team dedicated essentially to building that as the product on top of the firewall.

So I'm responsible as a product manager for making sure we're listening to customers and getting feedback from customers.

And then, you know, on top of that, we're also building a lot of other tools which are provided as managed services on top of the firewall.

And then Daniele, you know, I speak to Daniele pretty much several times a day, and he's helping on the other side, which is: if you as a customer want to build your own logic, then that's Daniele's field.

And so we are two sides of the same coin in that regard.

That's excellent. So, Daniele, talk a little bit about what it means to build your own rule.

Like, what does that entail?

What are the key building blocks of the product that you manage that a customer might use?

Yeah. So, well, essentially, with managed rules, as Michael was saying, you can turn them on and basically Cloudflare creates the rules and looks for specific patterns, for example for attacks, on your behalf.

So you don't have to do anything and you can just simply let us do the hard work.

But often this is not enough for our customers. Some of our customers want to build their own logic, either to protect their origin, or to create some custom logic around how they handle requests.

And also this usually tends to integrate with their back end service, back end application.

So oftentimes the line between security and the actual application is blurred, and the customer there just needs more control.

So the building blocks there are essentially the ability to write a custom rule.

So we have firewall rules in our dashboard and that gives the power to the customer to write any logic on the traffic.

So, for example, you say: for any request that comes from a specific country, you might want to apply some specific logic.

You could block it, you could challenge it, or you could even allow it straight through to your origin.

And then there are plenty of other fields and parameters, and we can give you more and more.

You can go on for as long as you like. Yeah, exactly. Just stop me, please. No, I love that.

It's a great explanation. I think one of the things that you said was really insightful is it becomes part of the customer's application.

That it's not just this layer in front, but it's very tightly integrated.

Because our firewall rules, they can be written to be very fine grained, right?

You can have rules that match on individual parts of the path.

They can take actions. They can actually be part of the application logic effectively that is running.
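To make that concrete, a custom rule of the kind being described might be written in Cloudflare's firewall expression language. This is a sketch only; the field names follow Cloudflare's public documentation, and the path and country values are made up:

```
# Hypothetical custom rule: challenge or block POST requests to a
# login path arriving from a particular country.
#
#   Expression: http.request.uri.path eq "/login"
#               and http.request.method eq "POST"
#               and ip.geoip.country eq "XX"
#   Action:     Block (could equally be Challenge or Allow)
```

The expression matches on individual parts of the request, and the chosen action is applied before the request ever reaches the origin.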

Yeah, I have an example for this, which I think is really telling.

So if you take rate limiting, for example, you could count.

So rate limiting essentially is a product that does what it says on the tin.

So you have a bunch of requests coming in and you want to limit the rate of requests.

So if the rate exceeds a certain threshold, then you want to take an action like blocking or challenging.

Which you might want to do to prevent abuse or protect your origin or something like that.


And of course, you could check on certain parameters of the request coming in.

But what you can also do is check the response code, or a part of the response, from the server of our customers.

So they can essentially trigger rate limiting from their backend logic, as part of the application.
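The counting behaviour described above can be sketched in a few lines. This is an illustrative sliding-window counter, not Cloudflare's actual implementation, which runs distributed across the edge:

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Toy sliding-window rate limiter: allow at most `limit`
    requests per `window` seconds, tracked per client key."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # key -> timestamps of recent requests

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        # Drop timestamps that have fallen outside the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) < self.limit:
            q.append(now)
            return True
        return False  # over the threshold: block or challenge
```

A real deployment would also let the trigger depend on the origin's response (for example, counting only 401 responses from a login endpoint).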

That's great.

One thing if I may add, because on top of us working on two sides of the same coin, the custom and the managed part of it.

We also work a lot together because some fundamentals of the product are shared.

For example, our analytics dashboard, right?

From our users' perspective, they don't necessarily care, when reviewing an incident, whether it's coming from managed rules or from custom rules.

They don't want to be going to two different dashboards.

So there's a lot of overlap in parts of the product, which we work on together on a pretty much daily basis, even across the two engineering teams.

Yeah. So it's quite literally a unified base and a unified presentation to the customer.

But the technology that we're enabling our customers with has two different paths, like one that we are managing ourselves.

And when we say manage ourselves, we literally mean we have engineers who are looking for CVEs, who are understanding them.

So we like to say if we have figured out how to protect one of our customers, we have now made that protection available to all our customers.

And when you have tens of millions of customers, that's a very big leverage there.

And speaking of big leverages, we had a very big week that we just wrapped up.

We were joking at the beginning of this call that you guys would become television stars because you're on CFTV every day.

So we did something called Security Week. Michael, just take a second.

What is Security Week? What does that mean? We've been a security company since day one.

So in some ways, it's security decade for us. But like what happened last week and why are we so proud of it?

Yeah, no, that's a good point. We've been focusing on security since the beginning.

But it also came to a moment where a lot of our feature releases were lining up nicely, both across the web application firewall and the custom rules, but even other security products at the company, such as in our Cloudflare for Teams suite.

And to make sure we made the best use of that and actually got our customers' attention to some extent, we decided to package a bunch of releases into a single week and position them in a way that shows we're solving real-life problems.

So actually, the theme of the week was common hacks that happen on the World Wide Web.

And we actually had a partnership with our internal security team.

And every day they actually publish a blog post talking about a specific vulnerability and how hackers out there expose and exploit that vulnerability.

And then along that, we had relevant product announcements each day of the week.

It was actually more than a week. It's more like a week and a half by the end of it.

And where we would explain how you can solve or protect your applications using some of the Cloudflare products that already exist.

And also some of the new products that we announced throughout the week.

That's great.

Daniele, can you give me an example from your side of one of the products we announced last week?

And if you can, how Cloudflare itself uses that kind of technology?

Yeah, definitely. So we launched a few products that fall into one umbrella, which we call API Shield.

So this was one of the major focuses of the firewall rules team in the last six, seven months.

So we launched the first release of API Shield in October during Birthday Week, another week.

And for Security Week, we did basically a follow up adding brand new features to the API Shield brand.

And we are talking about specifically API schema validation, which is now available to all of our enterprise customers.

We're giving more control for mutual TLS and for issuing and managing certificates in general.

We're also launching a new data loss prevention tool, which is for API, but not just for API.

It's more broadly applicable.

And finally, we have our first managed IP lists that we offer to our customers.

And essentially, we are basically feeding some threat intelligence directly into the firewall so customers can use it when writing rules.

So very excited by all of this. There's a lot going on.

That's a lot. My mental buffer only has three slots and you completely overflowed it.

So let's let's see if we can unpack a little bit of that, because I think there's some really good stuff in there.

So just again, for our audience, you know, which is almost certainly technical, but API is application programming interface.

And here we're not talking about the Cloudflare API. We're talking about customers' APIs.

So while Cloudflare, when it started, was largely used to protect websites.

You know, a huge part, especially as mobile took off in the 2010s, is not traditional websites, but rather applications that are communicating with an API and then rendering a Web page on the browser locally.

So even regular websites start to look more like API traffic than Web traffic.

And so that poses some interesting new challenges and some interesting new opportunities.

So a schema validation, I think, is an interesting one.

So when we talk about a schema, we usually mean the structure of information, and validation just means that it's going to conform to what I say, because the easiest way to know that a request should not make its way to the origin is: well, this isn't even syntactically legal.

This isn't even a valid input to our API. So.

So how does that work, Daniele? How does Cloudflare do schema validation? That sounds like a technically expensive thing.

And then how does a customer even specify the schema?

You know, we're not talking about an attack here. I mean, we're talking about, you know, it's got to actually be legal JSON or XML or whatever that's coming across.

So how did we do that? Yeah, that's very complex.

It was a very challenging piece of technology to build.

So a schema, as you mentioned, is a contract. You're essentially defining how your API should work and what you expect for every call.

So a schema could be like, oh, I define these are the calls I expect.

For example, I expect a get request on a particular URL.

And then I expect also certain parameters. Perhaps you want to retrieve one resource, specific resource that has a specific ID.

Yeah, you can specify what you need.

Right. And this can be a call by a Web page, could be called by a mobile app or an IoT device.

So the schema is essentially this definition of how, for every request, every function, let's say, or piece of business logic implemented in the API, the request should be crafted and sent over.

The way we implemented the validation, the protection of all the requests if you like, is that customers can take this file, this definition, upload it to the firewall, and the firewall will automatically set up rules that verify that every request complies with the schema.

This is what we also call a positive security model.

So by default, you block everything and only allow things that are the way you expect.

Things you know are good, or at least match the way you have predicted or designed the system to work.

So this is essentially the idea.

And it works. We took a standard, which is the OpenAPI standard, also known as Swagger, Swagger file, which is kind of an industry standard for the API schemas.

And that's what we are supporting. We started supporting that because that's the most commonly used.

And oftentimes our customers, they release or they generate the schema as part of their software development process.

So this is usually embedded with the way they build software. So they produce the API, they design the API, they write code.

And then as a side product, there is also the schema.

And this is what we are leveraging, essentially.
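For readers unfamiliar with the format, a minimal OpenAPI (Swagger) fragment of the kind a customer might upload could look like this. The endpoint and fields here are purely illustrative:

```yaml
# Hypothetical OpenAPI schema: one endpoint accepting GET with a
# required integer path parameter. Requests that don't conform
# (wrong method, non-integer id, unknown path) fail validation.
openapi: "3.0.0"
info:
  title: Example API
  version: "1.0"
paths:
  /items/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: OK
```
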

That's great. So that means this very common, standard way is available, and I remember using Swagger ages ago when I was an engineer, so now we're really going back.

And customers can use that to upload their definition.

And the product is actually creating firewall rules, leveraging that same engine that can inspect and act on traffic, to ensure that only traffic that matches comes in.

That's really cool. Before I go back to some of the other things that Daniele rattled off.

Michael, you also, it was Security Week for you too.

Why don't you spend a minute telling us some of the interesting things that came from the managed rules side of the house?

It was Security Week for the entire company.

That's right. We had a pretty big announcement as well.

I joined the product team about a year ago. And, you know, the WAF is one of the core products that has always been part of the Cloudflare suite from the beginning.

And since I've joined the WAF itself, the engine that runs all that logic hasn't really changed that much.

Of course, we've made improvements over the years, but we decided to use Security Week to pretty much announce a brand new WAF.

And when I say brand new WAF, I'm talking not only about the user interface, the user experience, but I'm even talking about the underlying engine that executes our managed rule sets.

And funnily enough, the engine itself is a generic engine.

It's getting used by firewall rules as well, by other teams, like the bot team had a big announcement.

Magic Firewall, we're all trying to leverage the same engine.

But the new WAF was our biggest announcement during the week.

And we had a second announcement as well regarding account takeover protections, which was one of the common hacks that the security team discussed, which we call exposed credential checks, which is an add-on to the WAF itself.

And it's actually implemented as a managed rule set and essentially allows customers to be notified whenever their end users are logging in with potentially breached credentials, which might have been stolen or leaked on the World Wide Web.

That's great.

So at a click of a button, there are now new rules that can tell: actually, we see a credential coming across the wire that we know or suspect was part of a dump or a breach or something like that.

Correct. Well, we know for sure. So there are many databases provided in the public domain.

And of course, we collect those databases.

We have implemented a lot of technology around the protocols we use and the cryptography around how we store those databases.

So they cannot be reverse engineered.

There's sort of a one way lookup only method which we can use from the edge.

Given we are a reverse proxy and the WAF is already inspecting all traffic coming through, customers can now either specify their login endpoints or can leverage our managed rule set, which is already looking for login endpoints.

And if a username and password credential has an exact match in the database, by default, we're going to be adding a header that gets sent back to the origin so that then our customers could, for example, issue a two factor authentication request for the end user or potentially even force the password reset.

In some cases, and we're actually doing this internally, you might simply log the event and just increase monitoring for that specific user session, just in case the authentication was done not by the legitimate user, but by some hacker who had found those credentials and is trying to reuse them across the web.
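As a sketch of what an origin might do with that signal, assuming the header Cloudflare adds is named `Exposed-Credential-Check` as described in the announcement (the helper names here are hypothetical):

```python
def handle_login(headers, authenticate):
    """Illustrative origin-side handling of the exposed-credential
    signal. `headers` is the incoming request's header map and
    `authenticate` is the application's own credential check."""
    if not authenticate():
        return "login-failed"
    # Cloudflare sets this header when the credential pair matched a
    # known breach database (header name assumed from the announcement).
    if headers.get("Exposed-Credential-Check") == "1":
        # Step up rather than silently admitting the session:
        # e.g. require 2FA or force a password reset.
        return "require-2fa"
    return "ok"
```

The point is that the origin keeps full control: Cloudflare only annotates the request, and the application decides whether to challenge, reset, or merely log.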

That's fantastic. When we talk about the new WAF, though, so what's the shiny thing about the WAF?

I didn't say what the new WAF is, no.

You didn't say what it was. You left me hanging. I got so excited about exposed credentials that I wanted to talk about that.

Let's go back to new WAF.

What's the new WAF? Yeah, new WAF is the big one. You know, we've had a lot of feedback from customers across the years.

Both our self-serve customers, our pro and biz plans, but also our enterprise customers.

And although the WAF is extremely easy to use, it lacked over the years, you know, some of the flexibility in deploying custom configurations.

You know, the more customers we get, the more diverse the applications behind Cloudflare become.

And sometimes you need to write exceptions into your WAF rules, right?

You may not want the WAF running on specific portions of your traffic.

Or at the same time, when you're configuring the WAF, you may want to do a bulk override setting across a whole set of rules.

Right now, the WAF's managed rule sets count several hundred rules.

And if you, for example, wanted to put them all in logging mode in the old WAF, unless you used automation, the UI didn't really make that a simple process.

We've solved this and a number of other common feature requests that are coming in from our customers.

As well as rewriting the engine itself, which I think is worth mentioning, we're now using Rust as the underlying technology of the matching engine, which provides us a whole set of additional performance benefits, as well as safety around memory management, thread safety, etc.

And Rust itself was a technology used by other products of Cloudflare already.

So we're really going towards that uniform, scalable infrastructure to take us to the next level, of course.

Not to say the old WAF hasn't served us extremely well. Yeah, the old WAF served us very well.

We should really give it a 21 gun salute. But it's fantastic to see this technology built on a foundation that really just came into existence a couple of years ago.

And as we were creating the beginnings of what is now Daniele's team, and is now this foundation for so many security products, because it's become this really flexible way for us to inspect and act on traffic.

And it's very cool also that the old WAF has now been rewritten in that.

And that was a seamless thing.

And that's a tricky thing to do. Yeah, I think I mentioned this to you before, Usman.

It's sort of like changing the engines on a rocket whilst it's going to space.

That's right. Exactly. Changing the engines on the rocket whilst it's going to space.

Daniele, back to you. One of the other things you've talked about was mutual TLS.

Let's talk about that. I always take a second to explain mutual TLS because we're so used to seeing the green lock.

In fact, the green lock is gone recently.

Chrome said everything is SSL and TLS by default. And when you think of security, you think of making sure that the website you're connected to is exactly the website you think you're connected to so that you, the client, are not being fooled.

And mutual TLS adds in a mirror version of that, hence the word mutual, which is the web server now asking the client to prove that they are who they say they are, which becomes very important when you talk about things like the Internet of Things.

But talk a little bit about that. So what did Cloudflare do? What did we announce last week, and how are we helping make this problem easier for customers to manage?

Yeah, so you described that very nicely. So with a website, all you care about is whether the website is authentic, because you are the one who is consuming the information.

But the problem when you have an application where you are also pushing data, and perhaps you're also responsible for the content of the data you're sending, is that it's also important for the backend, for the service, to authenticate who is sending this data, who is the source of the information.

And that's where MTLS comes into play, and that's where it becomes so important for, for example, mobile apps.

Think about financial services or a bank, right?

They have a mobile app, they send the mobile app out, and through the mobile app you can, I don't know, check your account, you can make a transfer, you can perform very sensitive operations.

And you want to make sure that the mobile app is authenticated, that it's also the one you shipped.

So what we are offering for this specific use case, like mobile apps, IoT devices, is a way to issue certificates and check certificates directly at the edge, at the Cloudflare edge.

So now, directly in the dashboard, you can go there, and there is a tab called Client Certificates, and it does exactly what it says.

So you can create certificates for your mobile app or IoT device.

You create it with a click of a button.

You can copy-paste it directly into your IoT device or the development package of your...

Where you're building the software that goes on there.

Where you're building the software. And then the app goes out to all the devices, and whenever a device sends a request to its origin through Cloudflare, it will present its certificate, its own secret.

And we will validate the secret at the edge. And then you can write logic in the firewall that says something like: if the request has a valid certificate, then perfect, let it through to the origin.

If it has a revoked certificate, then you can block it or handle it differently.

And if a request doesn't have a certificate at all, then just don't bother and drop the connection.

So that allows that kind of level of granularity and control on the type of request.
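The three cases walked through above map onto firewall expressions roughly like this. The field names follow Cloudflare's public documentation; the actions are illustrative:

```
# Hypothetical mTLS rules, one per case:
#
#   cf.tls_client_auth.cert_verified       -> Allow through to origin
#   cf.tls_client_auth.cert_revoked        -> Block (or handle differently)
#   not cf.tls_client_auth.cert_verified   -> Block / drop the connection
```
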

And the great thing is that it's available to everyone, on all the plans, even the free plans.

So it's a way for us to help developers and whoever is out there developing an app to focus on what's important, on developing their technology, their solutions, rather than investing resources on something like authentication that we can take care of.

That we can take care of for them.

So, and again, that's us protecting traffic in a way that the customer needs it.

It would be more work for them to have to do. They'd have to be managing that security, that certificate, and validating it on every connection.

And every connection that they encounter as invalid would be a waste of their origin processing time.

So all the more reason to do that at the edge rather than there.

Michael, we often talk about our products and how we've built all these things, but I wanted to ask you a little bit about the process: how do you coordinate something like Security Week in general?

Like how did this come together and what does this coordination between customer facing teams, engineering teams, marketing teams, product management teams?

Just comment a little bit about that.

What was interesting about that? We've done a bunch of these.

Cloudflare does what we call Innovation Weeks. And so what were some of the interesting things that you...

It's definitely a company-wide effort, especially when it comes to security.

The first step when we prepare any of these weeks, including Security Week, is identifying which product...

We always have a product roadmap, right?

We're never stopping on that front. We're always deciding and discussing with engineering, speaking to customers, getting the product roadmap and deciding what features to build.

The first step was, though, identifying exactly which of our products were going to be available for Security Week, which we were going to announce and talk about.

Specifically for Security Week, we had one announcement per day.

So the idea was, of course, to prepare the content and the story that would go with them, especially as we were positioning them in regard to the hack we're trying to prevent.

But that's somewhat the easy part because we were already building product.

Getting the coordination across all of the other teams is where it gets more complex, of course.

Some of the announcements have press releases go along with them.

So there's a whole PR and marketing side to be coordinated with to make sure, you know, the blog announcement goes along with the press release and all the media people are briefed accordingly.

But then even more importantly than that, we have a pretty substantial customer-facing team.

And as soon as we make an announcement on the blog, immediately we get a lot of inquiries coming in, both from, you know, the phone line, the email, our account teams start getting a lot of questions and making sure that those teams know what to say and especially know what's exactly available because we are very transparent in our development lifecycle.

Some of the product announcements from last week were available immediately from the dashboard, across all plans.

Some of them were beta launches, so only certain customers would be able to get access to them, or would have to fill in a specific form to request access to the beta product, right?

So getting that information out there is super, super important. Definitely not only a thing, you know, Daniele and I worked on.

There's delivery management involved, a lot of our team members helping us with that.

But I think it came together really, really well overall.

Lots of customer interest. I think, you know, some of the inbound forms for beta access got overflowed, but that's good news.

That's great. We'll connect you with the other parts of the Cloudflare product suite that can help rate limit traffic.

Exactly. We are a victim of our own success.

Last topic, I just want to touch on, Daniele, with you. You mentioned data loss prevention and how that's the beginning of a larger thing.

Let's just spend a second on what data loss prevention is and how it connects with what you shipped last week.

And, you know, what are some of the things that are the larger problem of data loss prevention that Cloudflare is helping solve?

Yeah, data loss prevention is a massive problem, and it covers a lot of use cases.

And yeah, it's difficult to define it, and often the boundaries, they move.

What we launched in terms of data loss prevention for API is essentially a way for us to provide visibility to customers to what kind of data leaves their origin servers.

So what we're doing is... It's inspecting traffic on the way out rather than on the way in.


Yeah. So think about API Shield: mTLS and schema validation check all the requests coming in, and we make sure that everything is compliant with the expectation.

But then there's also a problem of the responses. Are the responses leaking any sensitive data that shouldn't be there?

Perhaps the way you developed your backend service is by accident returning a credit card number or a social security number.

And you want to prevent that, right?

Yeah. So the DLP solution we are providing as part of the API shield is exactly that.

So it's a set of managed rules that the customers can turn on on their traffic, so they don't have to write anything complex.

The complex rules are all pre-packaged and managed by us.

And we are essentially running controls on very well-known and well-defined types of sensitive data.

As I mentioned, financial information, bank details, credit card numbers, and things like that.
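As a toy illustration of the kind of pattern matching involved, and not the managed rules themselves, which are far more sophisticated, two common sensitive-data shapes can be caught with simple expressions:

```python
import re

# Illustrative patterns only: a US social security number and a
# 16-digit card number with optional space/dash separators.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def find_sensitive(body):
    """Return which sensitive-data categories appear in a response body."""
    found = []
    if SSN_RE.search(body):
        found.append("ssn")
    if CARD_RE.search(body):
        found.append("card")
    return found
```

A real DLP rule set layers context, validation (e.g. checksum tests), and many more data types on top of raw pattern matching to keep false positives down.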

Of course, going forward, this could develop into something more customized, where users will be able to define their own sensitive data, what it looks like, and what they want to check or stop.

But as you mentioned, data loss prevention is a broader problem and is going... It was part of another big announcement, which is more in the Gateway software, in the Teams space.

The customer's own infrastructure, because that's really where all the secrets are.

Exactly. And so I'm very excited to see where this is going.

And for sure, there will be more and more information.

More to come. More to come there. Guys, thank you so much. Michael, Daniele, what a treat to talk to you.

There's so many interesting projects that we've shipped and congratulations on all the great progress and success of Security Week.

And we will definitely have you back on Latest from Product and Engineering soon to talk about all the stuff that you're going to be building this quarter.

So thanks again for joining us.

Thanks, everybody, for watching. We'll see you next time. Bye. Thank you, everyone.

Bye. Thank you. Thank you.