Ask a Solutions Engineer
Get ready for a live Q&A session with Cloudflare's Solutions Engineer team, who will be ready with answers, expertise — and unparalleled whiteboarding skills. Send technical questions about Cloudflare products (or the Internet in general!) to firstname.lastname@example.org
Hello everyone, and welcome to this episode of Cloudflare TV. Welcome to the second live session of Ask a Solutions Engineer.
My name is Michael Tremante, I'm a Solutions Engineer at Cloudflare, and I'm here today with Matt Bullock.
Matt, do you want to give yourself a quick introduction?
Yeah, so I'm a Solutions Engineer, but I am currently based in the London office for the EMEA region.
Awesome, thank you for that.
And unlike Matt, I'm not currently based out of the London office; I'm out of the San Francisco office.
So we're doing a bit of a cross time zone session today.
So first of all, it's awesome that we didn't mess up too badly the first time we did this, so we've been allowed to do this again.
We'll see if we'll be allowed to come online for a third time or even more sessions after that.
And before we jump in, I wanted to give a quick recap of what we're going to talk about today and what the scope of the session is.
So essentially, me and Matt are Solutions Engineers, and within the company, we sort of have a good general knowledge across all the different products and services Cloudflare has to offer.
And over time, the Solutions Engineer team has become a bit of a reference point for answering questions about how different things work, and how they work together within the context of the Cloudflare platform.
So we thought, why don't we do the same and just let our audience ask us questions, and maybe we can be helpful and clarify some misconceptions or any other technical concepts that might need clarifying.
So here we are, and we're accepting questions from anyone in the audience.
There are several ways you can send us a question. Number one, send an email at livestudio at Cloudflare.tv.
Bear in mind, it's .tv, not .com.
If you send questions there, we will receive them. If you send us questions during the session, we'll try to answer them today.
If not, we'll put them in the queue and we'll answer them next time we do another live session of Ask a Solutions Engineer.
But if you find us on Twitter, please feel free to ping us on Twitter, send us a question via Twitter.
I've also shared in the past a Google form where you can submit questions.
I'll make sure to share it again towards the end of the session.
So any way you can send us a question works. We'll make sure if we see it, we'll make sure we get to it.
A few gotchas. This is, again, some of these questions we're answering live, so we don't necessarily know all of the answers.
We'll try our best.
And if we do make any mistakes, I'm sure we will get corrected, and we will issue the corrections next time around as well.
So with that, one last little thing I wanted to mention before we jump in with the proper questions.
I've actually been asked by a few people, what are all the lava lamps behind us?
I think if you're a recurring viewer of Cloudflare TV, you may have noticed a lot of our colleagues also have lava lamps behind them.
That is our wall of entropy.
So essentially, it's one of the ways at Cloudflare where we're generating randomness.
We've got essentially a camera looking at the wall of lava lamps.
And because we cannot really predict what the lava lamps are going to do, we use that as a randomness source for some of our software systems.
If you just Google wall of entropy Cloudflare, we've written many blog posts about it.
It's pretty interesting.
So, yeah, so that's the background. It's found in our San Francisco office.
So if you ever come to visit us in person, once we're all allowed to go back into the office, you will see the lava lamp there.
It's right in the lobby in the main entrance.
The lava lamp wall. And great. So with all of that aside, Matt, are you ready to start answering questions?
Ready as I'll ever be, I suppose.
So let's go. Awesome. So the first question we received, the first couple of questions are leftovers from last time.
It's actually, I guess, nicely positioned because it explains a little bit more about what we do as solutions engineers.
We don't have the name of who submitted the question. But nonetheless, Matthew, the question that was sent to us was, do solutions engineers provide professional services to help with onboarding new applications to Cloudflare?
So it's a bit of a biggie.
So I guess it might be a bit of a conversation. So over to you.
Do we provide professional services? It's a good question, because customers ask it, but it's also asked internally when a client comes on board.
So we as solutions engineers work on the enterprise plan.
The other plans are self-serve, so technically we just work with enterprise customers.
So the larger client base.
Now, when clients come to us, we as solution engineers are assigned as the technical advocate of the product.
So we understand or we try to understand what the customer is currently doing, what they're using, what their infrastructure looks like, and coach them on how to integrate with Cloudflare.
Or give an overview of Cloudflare, where the solutions fit in, what's the best solution and what's the best way to configure it.
So we usually talk to the customer teams that will be ending up managing the platform.
Now, as you go further up-market, there's not just one team that's managing Cloudflare.
There's a person or a team managing DNS.
There's a person managing SSL. There's a person managing the firewall.
So sometimes these teams don't have the knowledge, but they just have an overview and they like to use professional services.
With Cloudflare, because as hopefully most of you know, the product is super simple to configure.
And that goes, again, from our free plan all the way up to our enterprise plan.
And it's the same UI.
There's just more features. There are more things entitled in the backend to make the product more scalable and fit their use cases.
Now, some of the big customers, even though they love the usability, they still don't have time to manage it.
So we, as the advocates, we can then use our partner network.
So we have partners that are specialized in the project management side, so they can write statements of work.
They can scope the work, do the professional services side, and then actually click the buttons in the control panel.
So as an SE, we do a lot of the pre-sales engagement as the advocate and showing what the platform can do and how to integrate.
If they do want people to click buttons in the UI or API integrations, things like that, then we would assign them out to a partner or bring a partner in to help us work on that.
So we don't do the professional services aspect. Got it. And as you said, Matt, if I got you correctly, we as an SE team only work with the enterprise customers.
Do we ever do anything with some of the self-service plans? It depends.
Obviously, there is use cases where people may be on our business plan. So that's the stage before they go into enterprise and they may be interested in some of the new products or they may be on a growth curve to be enterprise where they want our 24-7, 365 support, where they can pick up the phone and talk to our lovely support team throughout the world.
So sometimes we do talk to our self-service customers and look to see if enterprise is the right fit for them or if they want to stay on the self-service side.
That's probably where most of my interactions come in.
Obviously, it's more a consultative approach to say, hey, you're doing great.
There's no sort of hard push up to the enterprise plan; there's nothing like that in our standard terms and conditions.
Some big customers that you would see are actually using the business plan.
But yeah, on enterprise you obviously get more support, and you get access to ourselves to coach you throughout that.
Yeah. And I think I want to echo what Matt said around our dashboard is very easy to use.
I'm sure we might be demoing or showing some of the dashboards, depending on the questions we have later on.
We are slightly spoiled because we get access to all of the features from our end.
And sometimes we do get mixed up on which features are available in the different plans.
But really, the enterprise plan simply has more toggles and more features enabled in the dashboard.
But it's just as simple to use as some of the self-service plans.
And I mean, we've had this question even from customers before.
Most of the time, though, really, we find ourselves not needing to deliver proper professional services.
We're here. We work as consultants in that regard, as Matt was saying, and it works pretty well.
And just to reiterate, there's many other ways to get in touch with getting Cloudflare help if you need some.
We have an amazing community forum, community site. We do have the support team, which is pretty much global, 24-7.
And then if you do ever end up in an enterprise plan, you'll be speaking to one of us or someone else in the team.
But we don't actually sell by the hour, for sure, as pure professional services would.
Sometimes if you post on Twitter, then Matthew or John, our CTO, will respond as well.
Yeah, the support from the best, I guess.
Great. So, thank you. Yeah, as Matt said, and this is the reason we're doing this session: because we provide consultancy to some extent to our enterprise customers, we can field most questions.
I underline most, not all. There we go. So, great. Thank you, Matt.
Let's go on to the next one. So the next one was asked by Scott. This was also a leftover from last week's session.
And it's a pretty interesting one.
It's a bit of a product roadmap one. So the question is: any potential support for audio in Stream in the future?
Hosting support and requirements for streaming audio podcasts, for example.
And I'm assuming Scott here is referring to Cloudflare Stream.
So I guess before we answer the question, what is Cloudflare Stream?
So hopefully I'm sharing my screen and it's coming up. Cloudflare Stream was built to address one of the issues in the streaming market.
And that is where you have videos, and then you would have to go to a separate place for storage to store those videos on the Internet, hosting them somewhere, or putting them somewhere in the cloud with access to the Internet.
And then you need to encode it and then you need a way of delivering it.
And then you need the player to play it.
So you could be dealing with five different touch points, integrations.
And it was a bit clunky to set up yourself. It wasn't simple.
So this is where Cloudflare Stream comes in. I just made this larger and I'll click into here.
Cloudflare Stream was built to address all of those pain points.
So what you have is an easy way to upload videos, in MP4 or other file formats; you can use a link or you can even use our API.
Everything's API-driven, and our dashboard is a wrapper around the API.
So once you upload these videos, we then are storing them.
So that's the first bit, storage. We then encode them.
So we process them into different encodings, and then we also supply the player.
So if I click in this video here, which is the standard video.
Which is not playing in my browser right now.
But what you can do is here is you have different links.
So the first one, if I zoom in, you can see, is a video link, which you can share and play within the browser.
You have the embed links and you have all the features and functionality of the players.
If you want it to auto play, if you want it to mute, what you would expect.
This allows you to then put this player on your application, on your website, and you have the end to end product.
Now, there is some cool features. So you can do token authentication, which stops people sharing videos.
And there are a few other things to integrate with this, covered in our developer documentation.
There is information about how we do webhooks to let you know when video processing is complete.
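Since everything in Stream is API-driven, an upload can also be scripted. Below is a minimal sketch assuming the "copy from a URL" call of the Stream API; the account ID, API token, and video URL are placeholders, and the request is only built here, not sent.

```python
import json
import urllib.request

API_BASE = "https://api.cloudflare.com/client/v4"

def build_stream_copy_request(account_id: str, api_token: str,
                              video_url: str) -> urllib.request.Request:
    """Build (but do not send) a Stream 'copy from URL' API request."""
    endpoint = f"{API_BASE}/accounts/{account_id}/stream/copy"
    body = json.dumps({"url": video_url}).encode()
    return urllib.request.Request(
        endpoint,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
    )

req = build_stream_copy_request("<ACCOUNT_ID>", "<API_TOKEN>",
                                "https://example.com/video.mp4")
# Sending is left to the caller, e.g.:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["result"])
```

The response would include the new video's UID, which you then use for the player embed.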
For people doing a lot of video on demand, this is a great product, and it scales from self-serve, where you're doing a few videos, up to big web e-commerce customers that have a product video for everything. So we had this question last week, and we talked to the product team that owns Stream.
Currently, there is no roadmap item for an audio-only option. It is a good feature request.
And I know from enterprises that we have had this. We have a lot of people doing audio books who are looking for a way of hosting, encoding and delivering them, rather than doing it themselves at their origin and just using us as a CDN.
So not yet is the answer, but hopefully one day. But yeah, we thought we'd give you a demo of overview of what Stream is because it's a newer product.
It's actually been around a year to two years now. Yeah, there's a lot of new features and things being added to it.
Yeah. And just to recap, if people see on the screen, there's a little box with a Stream HTML tag.
So, you know, once you have a video and you've uploaded, that's all you need to copy paste in your website.
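The embed snippet shown on screen is roughly of this shape; the exact snippet, including the player script tag, should be copied from your own Stream dashboard, and the video UID here is a placeholder.

```html
<!-- Illustrative shape only: copy the exact embed code (including the
     player <script> tag the dashboard provides) from your Stream account -->
<stream src="YOUR_VIDEO_UID" controls></stream>
```

Player options such as autoplay and mute are toggled via attributes on the same tag.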
And essentially it just works in that sense. Basically, a lot of people nowadays have to rely on third-party video services because video is pretty hard, actually.
So a lot ends up on YouTube, et cetera. This gives everyone an alternative to have something under their own domain without requiring a third party.
And the other thing is we encode it into different formats and quality levels.
So if you put this player and this link in your page and you're delivering to someone with poor connectivity, say they're in a 3G area on their phone, then it will scale down the quality.
And if it then suddenly flips to a 4G connection, then obviously the better quality will come through.
So that's built in. Easy video on demand is how we look at this and how we'd sell it in a pitch.
I would say. Awesome. Cool. So, as Matt has answered two on the bounce, it's time for me to answer a question.
I think it is. I think it is my turn now.
I feel the pressure. So this one's from Anna and it's can someone please help me understand in which cases can a DDoS attack be mitigated on self-serve plans?
So a bit out of your comfort zone, Michael. Yes, for sure. So Matt, if you don't mind.
OK, I can try share my screen this time. So I'm hoping everyone can see that.
So first of all, we we provide unmetered mitigation for everyone.
Unmetered DDoS mitigation for everyone on our platform. So if you have a website and you sign up to Cloudflare and you sort of receive a DDoS attack, we will protect that.
We'll protect you from that DDoS attack, regardless of the size of a DDoS attack.
Now, the question is, OK, but that DDoS attack can have many, many forms.
Which DDoS attacks do you actually protect against?
Now, think of Cloudflare as a proxy, specifically a web proxy for HTTP and HTTPS traffic.
That is mostly what's available, you know, historically what we're known for.
And you have a website behind Cloudflare.
Say there's a DDoS attack that someone is sending your way. And I'm assuming here you've configured Cloudflare properly.
So, you know, there are no backdoor routes and the attacker doesn't know your origin IP address, etc.
So you're well hidden behind the Cloudflare platform.
If the attack vector is leveraging a protocol that is not HTTP or HTTPS, for example, that will be stopped by default at the edge.
Because there's no reason for us to send back to you anything that's not web traffic.
Right. And there's a vast, vast category of DDoS attacks that are volumetric, which are not leveraging the HTTP or HTTPS protocol.
One simple example, which has been around for a while, is DNS amplification attacks.
There's no reason for your origin to be receiving DNS traffic, regardless of the volume.
Out of the box, we are protecting web applications.
So as soon as we see any protocol or any attack vector that doesn't match what we should be proxying, it just gets dropped at the Cloudflare edge.
And the attack could be large or small; it doesn't really matter. We provide that unmetered DDoS mitigation, and there's no additional charge for anyone on our platform.
We strongly believe that even if an attacker does not manage to take down your origin, if they manage to inflict a financial burden on our customers, to some extent that attack is successful.
So that is what drives, you know, some of our decision making here.
You shouldn't be worrying about, you know, us serving a lot of bandwidth and then you getting a big bill at the end of the day, as that's the service we provide.
And from a security perspective, though, of course, not all DDoS attacks are necessarily volumetric or on different protocols.
So I'm going to navigate now to the firewall tab. And I'm going to go to manage rules.
I know the free plan, for example, doesn't have access to the firewall features per se.
But if you scroll down to the bottom, you will actually see a card which talks about the Cloudflare DDoS protection.
And here we do specify a number of attack categories, which we do protect for out of the box.
We actually protect a lot more than just these.
These are sort of the more generic ones that we see more often. And our network is mitigating several such attacks on a daily basis.
But what if the attack, for example, is leveraging the HTTP protocol? So something that your origin is indeed expecting and therefore we cannot just block outright.
Now, we had a similar question last week, if I remember correctly.
And we do have in our software stack at the edge mitigations to detect when, for example, we observe a high rate of HTTP requests coming towards your origin.
So in this case here, we actually signify it as an HTTP flood. And if things are very much out of the norm, then, of course, our mitigations will kick in.
And we make a number of changes to how we challenge or control that traffic so that your origin doesn't get hit hard by the HTTP flood.
Let's say, though, that the attack is more sophisticated.
Maybe it's not as powerful as a very large flood, but it's still causing problems for your origin.
Then would that be mitigated on the self-serve plan?
Well, really, that starts to be in the case where it depends.
It depends on what the attack is like and the sort of features you have access to.
So, for example, customers who do have access to our firewall, so pro plan and above, do have access to our managed rules.
These are rules that Cloudflare has built over the years and is constantly developing and improving.
And some of the rules here in our web application firewall are actually mitigating and checking for specific signatures from known botnets.
So if there is a botnet which has generated a little bit of traffic, maybe our mitigations don't kick in automatically.
But if you have access to the WAF and the rules are turned on, you will get protection from those and they will be blocked by our Layer 7 WAF.
So I can show you an example here. If I click on the WAF and search for denial of service, you can see we actually have a lot of DDoS, or rather denial of service, mitigations implemented as Layer 7 WAF rules.
Sometimes, though, the attack traffic looks like a really legitimate browser, and it's really difficult even for our Layer 7 WAF to block and mitigate it.
There are other features in the dashboard, depending on which plan and what you're using.
One very common one, for example, is a rate limiting feature.
So I'm heading over to the tools tab here. Rate limiting is something that allows us to essentially, you know, define what sort of rate of requests from specific IPs we should be expecting for our application.
And as soon as something goes over that threshold, Cloudflare will start slowing down those requests, issuing challenge pages and blocking whatever bots or other clients are trying to access your application, so that your origin can cope with the load.
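Conceptually, this threshold-based rate limiting can be illustrated with a tiny fixed-window counter per IP. This toy sketch is not Cloudflare's implementation, just the idea of a request cap within a time window.

```python
class FixedWindowRateLimiter:
    """Toy illustration of threshold-based rate limiting per client IP."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._windows = {}  # ip -> (window_start_time, request_count)

    def allow(self, ip: str, now: float) -> bool:
        """Record a request at time `now`; return False once over the cap."""
        start, count = self._windows.get(ip, (now, 0))
        if now - start >= self.window_seconds:
            start, count = now, 0  # the window has elapsed, start a new one
        count += 1
        self._windows[ip] = (start, count)
        return count <= self.max_requests

limiter = FixedWindowRateLimiter(max_requests=3, window_seconds=60)
# Three requests pass, the fourth exceeds the threshold, and the
# window resets after 60 seconds.
results = [limiter.allow("198.51.100.7", now=t) for t in (0, 1, 2, 3, 61)]
```

A real edge implementation would also have to share counters across servers and decide between blocking, challenging, and slowing responses.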
If it gets even more complicated than that, maybe, you know, there's requests coming from all over the world.
We do have additional features. Most of these, though, do become available on the enterprise plan.
The bigger example here is our bot mitigation product, which actually doesn't have an interface per se in the dashboard.
We do have analytics for it, but basically it allows us to build firewall rules that trigger on some of our intelligence, in the form of a threat score, a known-bots list, or similar.
So essentially users that have access to the bot mitigation can write rules that say if this request, if Cloudflare thinks that this request is a bot and there's a good chance it is based on the scoring, then you could challenge that request or block it, et cetera.
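As an illustration of the kind of rule described here, a Firewall Rules expression might combine a path match with Cloudflare's threat score; the path and the threshold of 30 below are assumptions you would tune yourself, paired with an action such as Challenge or Block.

```
(http.request.uri.path contains "/login" and cf.threat_score gt 30)
```

`cf.threat_score` is one of the fields exposed to firewall rule expressions; customers with bot management can use its score fields in the same way.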
And as solutions engineers, we use these features very often when we have customers coming under attack, and we leverage them to mitigate the attacks and save the origin from getting overloaded.
So the question is big.
The final answer depends on the attack. Most volumetric cases are easily blocked on the self-service plans.
As the attack sophistication goes up, you may need access to additional features and analytics to understand what's going on.
But the self-service plans definitely do provide pretty good protection against DDoS attacks.
And then the last thing I wanted to show is let's say worst case scenario.
You don't know what's going on and your origin is getting hammered and you're getting a lot of requests.
We do have an under attack mode, which is very often used by our self-service plans.
And so if I go to settings, we have this concept of security level.
This is applied to all requests hitting your origin. So if you really don't know what's going on, but your application is relatively simple.
For example, it's a blog or a simple e-commerce site or something that doesn't require too much user interaction.
You can come over here and basically set I'm under attack.
And what this is going to do is challenge every single user coming to your application with a CAPTCHA.
Now, sure, the user experience will be slightly impacted on the first request because the user will need to solve the CAPTCHA.
But once they solve the CAPTCHA once, they'll be allowed through for subsequent requests.
But, you know, all of that malicious traffic will have a hard time getting through and actually hitting your origin.
So from an enterprise perspective, when we work, when me and Matthew are speaking to customers, we rarely rely on this because we can solve attacks with other methods.
But that's the sort of catch-all if you're on a self-service plan, including the free plan.
If you want to mitigate an attack and you don't know exactly what's happening.
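For completeness, the same "I'm Under Attack" toggle can also be flipped through the zone settings API; the sketch below only constructs the call, with the zone ID as a placeholder and the authentication headers omitted.

```python
import json

def under_attack_call(zone_id: str):
    """Return (method, url, body) for the zone setting that enables
    Under Attack mode; auth headers are left to the caller."""
    url = (
        "https://api.cloudflare.com/client/v4/zones/"
        f"{zone_id}/settings/security_level"
    )
    body = json.dumps({"value": "under_attack"}).encode()
    return "PATCH", url, body

method, url, body = under_attack_call("<ZONE_ID>")
```

Setting the value back to, say, "medium" through the same endpoint turns the mode off again.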
The last gotcha, of course: I'm assuming Cloudflare is properly configured here.
If, when you onboarded onto Cloudflare, you didn't properly hide or change the IP address of your origin, you should check whether the attack traffic is hitting you directly rather than coming via Cloudflare.
Last very quick mention, I've spoken about Layer 7 HTTP proxying. Of course, Cloudflare now proxies a lot more than just Layer 7 HTTP.
And we do have now the products for proxying any protocol.
But I think that's slightly outside of the scope of Anna's question.
So hopefully that answers it, and that's useful.
So, yeah, there you go. Great. So Matt, I've done my question now, so I feel the pressure is off.
I'm glad it's an hour session. I'm speaking a lot.
OK, so the next one is quite easy. It should be quite short so we can catch up.
Question comes from Brandon. And the question Brandon has is, can you increase origin response timeout?
And to give more context, we're assuming or better, I'm assuming he means HTTP response timeout.
So Brandon, Matt, can that be done?
Depends. So on the enterprise plan, yes, there is a limit to the maximum connection.
How long we can stay open before the connection is closed.
Top of my head. Ten minutes. I'm looking forward to being corrected on that.
That means that means Cloudflare will connect to origin and leave that connection over ten minutes.
And it's a bit weird. Why do customers need this?
Usually it's because you're sending requests to a database, and it takes a long time to process and pull together the information before a response is sent back.
That can take a long time and go over the threshold.
So we do have customers that need this. But if you can't use enterprise and you're on a self-serve plan, there are things you can do.
And the reason why we close the connection is because we've not had a response from the origin.
The origin is just hanging what we perceive to be hanging.
So there is a status code, 102 Processing, I think, which the origin can send, almost like a, hey, I'm still here.
Just give me a few more minutes. I'm just processing this. And then once it's ready, it can send out the full response and go, here you go.
So if you can, that's sort of the recommended approach, because if you are opening up the origin timeout, there is obviously then a DoS attack vector: if attackers open requests that just stay open for 10 minutes, they slowly exhaust your connections. You can't handle an infinite number of connections coming in; you are restricted.
So you could almost block yourself from accepting anything legitimate, if people are creating these connections and each is just hanging for 10 minutes rather than being cut and closed.
So we do like customers to think about how they can use 102s.
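On the wire, the "I'm still here" pattern looks like an interim informational response ahead of the real one. Below is a sketch of the byte stream an origin might emit, assuming HTTP 102 Processing (RFC 2518); whether a given client or proxy forwards and honors interim responses is implementation-dependent.

```python
def origin_response_with_heartbeat(final_body: bytes) -> bytes:
    """Byte stream an origin might emit: an interim 102 before the final 200.

    102 Processing is an informational (1xx) response: no headers or body of
    its own here, just a signal that the real status line is still coming.
    """
    interim = b"HTTP/1.1 102 Processing\r\n\r\n"
    final = (
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: text/plain\r\n"
        b"Content-Length: " + str(len(final_body)).encode() + b"\r\n"
        b"\r\n" + final_body
    )
    return interim + final

stream = origin_response_with_heartbeat(b"report ready")
```

In practice a slow origin would send the interim response early and the final response only once the work completes, keeping the connection from looking hung in the meantime.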
We've seen people do this over WebSockets, and it's the same issue where they time out.
A WebSocket is a persistent connection.
I think a good example is a chat application: you go to your bank's site and it says, talk to someone, and it's on the chat.
That is actually a persistent WebSocket connection from the server to the end user.
And we're transferring data over that. And within WebSocket, you can do a ping pong.
So it's like, yep, still here. It's like a SYN-ACK: I'm here, okay, cool, waiting for the next response.
Okay, cool. Waiting for the next response. So that's the ideal, because then you don't have to increase the timeout.
And if you don't have to increase the timeout, then you are limiting your exposure to the vulnerability of having a load of connections open, doing nothing, and starving anyone that really needs a connection.
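The ping/pong bookkeeping described here reduces to a small timeout check. This is a toy sketch of that logic only; real WebSocket libraries send and answer the ping and pong control frames for you.

```python
class KeepaliveMonitor:
    """Track the last time a peer answered a ping, and decide whether a
    long-lived connection should still be considered alive."""

    def __init__(self, timeout_seconds: float):
        self.timeout_seconds = timeout_seconds
        self.last_pong = 0.0

    def on_pong(self, now: float) -> None:
        """Peer answered our ping: remember when."""
        self.last_pong = now

    def is_alive(self, now: float) -> bool:
        """Alive as long as a pong arrived within the timeout window."""
        return (now - self.last_pong) < self.timeout_seconds

mon = KeepaliveMonitor(timeout_seconds=30)
mon.on_pong(now=0)
# At t=10 the peer answered recently; by t=45 it has gone quiet.
```

A server loop would call `is_alive` periodically and close connections that have gone silent, which is what keeps idle connections from piling up.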
The other option on a self-serve plan, just to try and cover everything, is not the best option.
But where you need to do this, you can grey-cloud a subdomain. Now, that's not great because it's not protected; you still get the DNS from Cloudflare, but not the proxy features.
But if you can have a separate origin or hostname doing the long polling or the long queries, then you could do it that way and orange-cloud the rest of it.
That's probably the worst-case option.
But if you can't do the 102 ping-ponging, say you're doing long processing on your main page and it's more important to keep that orange-clouded, then raising the timeout on the enterprise plan is another option, though not a great one.
We can do it, but again, we try to steer customers toward improving their architecture and what they're doing, to make it more performant and less risky.
And again, that's sort of what we do as an advisor to the customer.
I agree. Every time I've had this question, the real solution is to make your application better.
There's no reason why an HTTP request should be hanging for minutes plus.
I know some applications might be crunching numbers and generating reports, but then really you should detach that from the initial request: have an initial request that just triggers the report generation and returns a response.
And then once the report is ready, you know, the user may refresh the page or receive an email notification or something along those lines.
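The decoupled pattern described above can be sketched as a trigger-then-poll flow. This is a minimal in-memory illustration; any real version would hand the work to a background worker and persist job state.

```python
import uuid

# job_id -> {"status": ..., "result": ...}; a stand-in for a real job store.
jobs = {}

def start_report() -> str:
    """Handle the initial request: enqueue and return straight away,
    e.g. as a 202 Accepted carrying the job id."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "pending", "result": None}
    return job_id

def finish_report(job_id: str, result: str) -> None:
    """Called by the background worker once the heavy lifting is done."""
    jobs[job_id] = {"status": "done", "result": result}

def poll_report(job_id: str) -> dict:
    """Handle the follow-up request: cheap, returns instantly."""
    return jobs[job_id]

job = start_report()           # fast response, no long-hanging request
finish_report(job, "42 rows")  # the worker completes some time later
```

Each HTTP request in this flow finishes quickly, so no connection ever approaches the proxy's read timeout.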
Having long-standing HTTP requests is, as Matt said, definitely not a good idea in the general case.
So my turn again.
Yeah. I think this one's quite straightforward. This one is from Jake.
Is rate limiting or blocked traffic included in monthly bandwidth usage?
Thank you for the question, Jake. So this I guess I sort of touched on this in my previous answer as well.
The short answer is no. And what I mean by that:
bandwidth usage, or rather data transfer, is not actually how we bill rate limiting on self-service.
We actually bill per request. But the key point is that we bill on requests the rate limiting engine checks, not on the requests it triggers on and blocks.
As I said earlier: if an attacker is not able to bring down your origin but is still able to inflict a financial burden on you as the customer, the attacker is still successful to some extent, right?
Because you're having to dish out a certain amount of money, which is not predictable in advance.
Therefore, that's not an ideal situation you want to be in.
So in summary, no, we only look at clean traffic when using the rate limiting product.
You do still need to design your rate limiting rules correctly.
So maybe you're just matching the endpoints you care about the most.
Normally it's things like login pages, and you know the traffic you'll be receiving on your login page is a lot less than the traffic you're expecting on the wider website. So a rate limiting rule makes a lot of sense there.
And you're also avoiding, for example, attackers from doing credential stuffing or just trying, looping over a database of credentials, trying to get in.
And if it triggers, that's fine. That's on us.
You're only going to get billed for the clean traffic. The same concept applies if we go all the way up to enterprise customers.
It works slightly differently in enterprise customer land because we often try to provide a flat fee depending on the product features the customer is asking to use.
And then we also ask for an estimate on their clean traffic across the board.
Right. So then once they have access to the features, that is sort of a fixed price.
It doesn't really matter what happens after that.
If an attacker really starts triggering the rate limiter, that's totally fine.
It's not going to provide unpredictable pricing for our customer.
So Jake, you should be good to go. Just plan and design your rate limiting rules well in advance.
And if it does trigger and it's useful for you, that's not going to have a financial impact on how much you need to pay Cloudflare.
Great. So over to you again, Matt. So the next question, we don't have a name for this one.
I guess this one might require a little dashboard demo.
So get ready. The question is, how do I upload my SSL certificate to Cloudflare?
OK, cool. So SSL lives in its own tab in the dashboard. I'm assuming here, by the way, Matt, that the user, we don't know who it was, might be a free plan user.
Well, hopefully not, because I think we can only upload a certificate on the business and the enterprise plan.
So to bring your own certificates, you would have to be on the Business or the Enterprise plan.
I'm not sure if we are looking to move that down to lower plans; the restriction is there for old legacy reasons. So one day it may be available on the Pro plan.
What used to happen is that older clients that would not send SNI would have to connect to dedicated IPv4 addresses, and IPv4 addresses are finite and limited.
And sort of every time you uploaded a cert, it would bind to a couple of IP addresses.
And unfortunately, we haven't got hundreds of thousands of addresses, or a /8, that's just ready to do that.
So we've now decoupled that away where we can do SNI only.
But I think on business and enterprise is the only actual place to do it.
So the first thing I'd say, if this was asked to me face to face in a meeting with a prospect, would be: yes, we can.
But why? Why do you want to?
The reason being that Cloudflare issues certificates for you and handles the certificate management, so renewals and things like that happen automatically.
So anyone that subscribes to Cloudflare will get what we call a universal SSL certificate.
And this was one of the first major things that Cloudflare did that was completely different to anyone else.
Giving free SSL; I think HTTPS traffic just shot up overnight because we issued certificates for everyone at the same time.
I think that was towards the end of 2014 from memory, the universal SSL came out.
So, yeah, we've been doing it for a long time. And you get a certificate. So in this case, I think I can choose my CA down here. So DigiCert is the certificate authority that is issuing my certificates.
And you get the information there and you can see that it's all managed by Cloudflare.
And when it comes up for renewal, we will go through automatically.
We go through the renewal, adding the validation records to DNS, et cetera, to validate and issue the next certificate.
So, the reason some customers want to upload their own certificate is sometimes they have something which is called an EV cert, an extended validation certificate.
So when you go to a bank, sometimes, and I don't know if it's changed because Chrome was making changes, it would show the company name, something like HSBC, within the browser's address bar.
So it added a level of security, or perceived security. But actually, there is no difference in the encryption or anything between a domain validated certificate, which is what is being issued by Cloudflare, and an extended validated one.
They're just very, very expensive; you have to go through an extended validation process. So that would be the first thing. And some customers love this because, like extended validation certificates, even normal domain validated certificates can be expensive to a business. And if they're only using them for their web applications, they can save cost by going to Cloudflare, getting a free certificate issued, and letting Cloudflare handle all the HTTPS traffic.
Brilliant. Now, if you do want to.
So you can see here and I've got the advanced certificate manager. So this is what Michael said.
We get the nice new toys first. So Brian RPM is probably wondering why I've got that or how.
It's just because we have access to things.
But yeah, if we go to the actual SSL certificate upload, you can see here that you would have your SSL certificate here, your private key here, and then you choose a bundle method.
So the support documentation talks about what is the best bundle method.
And we recommend compatible, which, and this is where I'm going to get corrected again, I think includes more of the certificate chain. The modern one, I think, is a shorter chain, but sometimes some browsers don't trust it.
But, yeah, usually we recommend the compatible. But once you add your certificate, upload the private key.
Something that we do for enterprise customers is we can do something called private key restriction.
So this controls which data centers we would actually push the private keys to.
I could dive into something called Keyless SSL and how we do this, but I'm conscious of time.
The format that we look for is PEM. So if you need to upload your certificate, make sure it's in PEM format and not password protected. And once you've added that and you hit upload, we will then take that and deploy it to our global data centers and use it.
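Before hitting upload, it's worth sanity-checking the files locally. A minimal sketch, assuming the certificate and key are plain PEM text: look for the standard PEM headers and make sure the key isn't encrypted (an encrypted PEM key carries an ENCRYPTED marker).

```python
# Minimal PEM sanity checks before uploading a custom certificate.
# This only inspects the text headers; it does not validate the
# cryptographic contents (use openssl for that).

def looks_like_pem_cert(text: str) -> bool:
    return ("-----BEGIN CERTIFICATE-----" in text
            and "-----END CERTIFICATE-----" in text)

def key_is_unencrypted(text: str) -> bool:
    # Encrypted PEM keys say "ENCRYPTED PRIVATE KEY" (PKCS#8) or carry
    # a "Proc-Type: 4,ENCRYPTED" header (legacy format).
    return "ENCRYPTED" not in text

cert = "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
key = "-----BEGIN PRIVATE KEY-----\nMIIE...\n-----END PRIVATE KEY-----\n"
print(looks_like_pem_cert(cert), key_is_unencrypted(key))  # prints: True True
```

If the key check fails, you would typically strip the passphrase with openssl before uploading.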
That will then appear back in here. And, yeah, you can then use that to terminate SSL.
Now, the thing is, we said you manage that. So you will get warnings when it's due to expire.
But because of the way some people use custom certificates, and I always advise against certificate pinning, but some people do it, they can't let a custom SSL cert fall out of use.
So even though we may have a universal one and a custom SSL expires, we will still try to serve that.
So it's imperative that if you upload it, you maintain it.
And if you don't want to use it in the future, delete it, because as soon as it expires, there could be issues where your application is unreachable, and that causes an outage, which isn't great.
So that's definitely a reason why you may want to keep letting Cloudflare issue certificates instead.
But, yeah, this is where, again, Cloudflare issues certificates. I've got DigiCert, but I think, yeah, you can use Let's Encrypt if you prefer that.
And with the new advanced certificate manager, there's going to be a lot of cool features.
And this is how you can do cipher suite whitelisting, to only allow certain ones, and set the validity period for how long you want certificates to be ordered for.
So, yeah, shorter ones if you want, rather than a year. So I guess to summarize that, Jake, it's before you try to upload a certificate to Cloudflare.
Most likely, if I get you right, Matt, you don't need to upload a certificate to Cloudflare.
It's a very limited set of cases where you actually need to worry about it.
Cloudflare just does it for you. The only reason why people still purchase their own certificates is because the ones we issue, we don't give you access to the private keys that we issue ourselves.
Security considerations: if the private key were to get leaked, then obviously that's a vulnerability, and you'd have to get your certificate replaced and things like that.
But, yes, that's probably the only reason why people still may need to order certificates.
But actually, yeah, most customers look to use Cloudflare now to issue and maintain.
So one for you, Michael, from Brandon.
I don't know if it's the same Brandon as earlier, but if it is, great.
The more questions, the merrier. Can Cloudflare proxying traffic affect the number of visits Google makes to a site?
Yeah, okay, so this is a big question.
I think we could spend hours and hours talking about SEO.
And with SEO, I mean search engine optimization. It's a bit of a black box to most of us, maybe even for people at Google sometimes.
But to answer the question briefly: can putting Cloudflare in front of your website affect the visits from Googlebot?
And I assume here the reason for the question is, consequently, the ranking of your website in organic search results.
The answer is yes, potentially.
But normally, Cloudflare actually helps you improve your Google rankings and other search engine rankings.
So although there might be a slight adjustment at the start, that is normal and expected.
But really, over the long term, if you're taking the right steps and doing things correctly, Cloudflare is a tool that can help you do much better. Google provides a number of tools to check your page insights, page speed, etc.
And Cloudflare can help you actually achieve better scores.
So if I look at from experience, normally when you move a website to a new IP address, and that is essentially what happens when you're on board to Cloudflare because your customers will be hitting a new IP address, not your server anymore.
Google actually sometimes purposely slows down the crawl rate of your site, because they don't know what your new infrastructure is capable of sustaining in terms of traffic.
So if you have a large website and Googlebot is crawling you pretty consistently, you might observe an initial slowdown.
And then, if everything goes well on your website under the new IP address, and in this case, if it's behind Cloudflare, it's not generating any errors, you should normally see the Google crawl rate pick up again very quickly.
Now, having said that, there's a lot of factors that influence that.
So for example, if you did migrate to Cloudflare and then for some reason, or maybe you didn't notice there were errors that were being generated by your application, then Google will spot that, of course, and they may not increase their rate again until you fix those errors.
So it's very important with any migration, especially if you care a lot about SEO, to plan it and keep a very close eye to what's happening on your application so that one, you can mitigate the possibilities of things going wrong and two, if something is going wrong, you spot it immediately and you fix it.
One example, which is very common, that I can talk about: the Cloudflare network is anycasted.
What I mean by that is that the Cloudflare IP addresses, which your users will be connecting to before we proxy back your origin, are advertised from all of our points of presence worldwide, which is likely different from what you're doing today, because you may have a server located in New York, for example, and that's one IP address and there's only one location worldwide where traffic goes to.
So when you put a website behind Cloudflare, the Googlebot that was originally connecting, from wherever it was coming from, to one IP in New York will now start connecting to your site via the closest point of presence, based on where the Googlebot is being executed from.
So that's going to change behaviors. And in some cases, especially with larger applications, we've seen that this change in the routing the Google crawler uses to connect to your site sometimes generates unintended effects, as the crawler will start, you know, seeing the website as being hosted very close to it, whilst in fact it's just the point of presence that's very close.
And that could cause some issues. Now, normally, though, as I said earlier, these can be resolved.
We have a technology actually, which helps a lot with sort of routing aspects, which we're not going to talk about now, but it's Argo Smart Routing, which allows us to optimize routes for dynamic traffic, including in this case, traffic that the Google bot may be trying to access, especially if it's dynamic listing pages.
But bear in mind, you get SSL for free when you sign up, as Matt said.
Websites that encrypt their traffic with SSL normally get higher rankings, because they're more secure.
You could leverage our caching.
That should improve your page loading speed.
So that's good news. You know, better usability. We provide all sorts of other image optimization and other features that just make your website better.
The one thing also in this topic that I have been asked many times is, you know, are Cloudflare IPs shared across customers or are they dedicated?
Sometimes, it's true, Google will potentially look at ranking your website depending on how many other websites are being advertised from the same IP address.
I've not actually seen that being a problem very often. But if you do have concerns, of course, and your website is a large, revenue-generating website, the higher up you go with the plans on Cloudflare, the more likely you are to end up on dedicated IP addresses, which are being used for your website only.
So that becomes a non-issue. From memory, on the free plan we're still sharing IP addresses across customer websites.
But yeah, normally, even if we are sharing IP addresses, I've never seen that being a major problem.
And bear in mind, if you're using shared hosting, you're already sharing an IP address with probably hundreds of other sites.
So really only if you're doing dedicated hosting or VPSs, maybe you have dedicated IPs to your application only.
So to recap, yes, putting Cloudflare in front may affect your Google bot crawling initially.
Normally from experience, we've always seen it go back to where it's supposed to, if not get even better.
And you should be getting better search results. If you do a Google search, there's a lot of articles, not only from us, but from SEO agencies that have blogged about similar topics on migrations to Cloudflare.
So there's plenty of content for you to think about and actually do the steps in the right order.
So, Brandon, hopefully that answers your question. And thank you for, yeah, we're assuming you also sent over the other questions.
So thank you for sending them over.
I just wanted to add, the one thing I have seen is with our firewall rules, where people will block a country, say they've blocked the USA, but they haven't allowed the known bots through.
So because Google's crawling from the US, we will block that.
You have to explicitly add "and not known bots", which is our list of trusted bots, such as Google, such as Bing, to allow them through.
So if you are creating firewall rules, make sure you're adding that to it.
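As a rough sketch, a rule of that shape might use a firewall expression along these lines; `ip.geoip.country` and `cf.client.bot` are fields from Cloudflare's firewall rules language, but verify the exact syntax against the current documentation:

```
(ip.geoip.country eq "US" and not cf.client.bot)
```

Paired with a Block action, this blocks US traffic while exempting the verified crawlers, so Googlebot and friends still get through.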
That's a very good point. Yeah.
So going back to making mistakes, you get powerful tools, but it's easy sometimes to be a little bit over restrictive when you're building firewall rules.
Right. OK, so Matthew, question for you from Michael, not from me, Michael, another Michael.
Looks like it's from you because it's quite difficult. Maybe I will add a few questions.
I haven't been like you at all. So hello.
He had a question around a specific type of attack: WS-Discovery amplification. Do we protect against that? And I guess, even before we answer whether we protect against it: what is WS-Discovery amplification? I have no idea.
It's a good question, because I'm not really familiar with it either. So, quickly looking it up, and this is me reading a note: it's Web Services Dynamic Discovery, or WS-Discovery.
So my quick TL;DR, and I'm again going to be corrected, there's going to be a lot of correcting here: it's a method of understanding a directory structure and how to navigate it. So, almost like a file system: how to get from one place to another, and what that looks like.
Imagine, if you're using Windows or Mac, you click on, I don't know, My Documents, and then you have a folder within that, and files within those. It's sort of a mapping that lets you traverse and get into that directory.
There seems to be a way of doing an amplification attack through this protocol.
So what's an amplification attack? An amplification attack is sending a request that's really small, where the response is large.
So, let's say you were to make an HTTP request for an image: you would be making a small request, but it downloads a five or ten kilobyte image, which is a lot larger.
The other part of an amplification attack is spoofing where you want the response to go.
So if I wanted to take down Michael's website, I would send a request somewhere.
But I would actually say, hey, I'm sending it from Michael's origin.
Please send a response there.
So you're sending like the request one way. But the responses are going to a different endpoint.
Now, the best, which means the worst, amplification attacks are where you can send something really small, a super small request, but the response turns out to be megabytes, or hundreds of kilobytes, of data.
And if you can send hundreds of thousands of those and reflect them to another target, that becomes a DDoS that just overwhelms that server.
So, yeah, I think there was a way of doing this that you can make a small request.
And because you're discovering a massive part of the directory and the structure, you could send a massive response and redirect it somewhere.
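The arithmetic behind that is simple. With purely illustrative numbers (not measured WS-Discovery figures), a small request producing a large reflected response gives an amplification factor like this:

```python
# Illustrative amplification-factor arithmetic with made-up numbers.
request_bytes = 100          # tiny spoofed discovery probe
response_bytes = 15_000     # large reflected response sent to the victim

amplification = response_bytes / request_bytes
print(f"amplification factor: {amplification:.0f}x")  # prints: amplification factor: 150x

# With reflection, the attacker's bandwidth multiplies accordingly:
attacker_mbps = 10
victim_mbps = attacker_mbps * amplification
print(f"{attacker_mbps} Mbps sent -> {victim_mbps:.0f} Mbps arriving at the victim")
```

Multiply that by hundreds of thousands of reflectors and you get the overwhelming flood described above.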
So, as for where WS-Discovery lives: by default, and this is what Michael said earlier, we proxy HTTP and HTTPS traffic to begin with.
If it's outside of that, then we will just drop it. And it doesn't run on the standard ports.
This attack doesn't run on 443 or 80, so Cloudflare will just drop it.
Now, people like to target Cloudflare for DDoS. So they're probably going to try and send this traffic to a Cloudflare edge or a Cloudflare IP, where we obviously have to drop it.
And as in our last discussions, we have products that we love, called Gatebot and dosd.
And we have signatures, as Michael said earlier, to identify what this traffic is and just drop it at our edge.
So we drop it automatically.
So if you were to use Spectrum, or if you were to open this port, then yes, Cloudflare has the protections built in by default.
But if you're just using this for standard HTTP, HTTPS, nothing really to worry about.
The other thing, and let me just read my notes again: it runs on port 3702 UDP. So if you have an origin and you don't expect WS-Discovery traffic, just use iptables to drop anything on that port, or make sure that port's not open.
And that sort of protects your origin as well if people are just firing things out into the open.
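For example, on a Linux origin, a one-line iptables rule along these lines would drop that traffic (a sketch; adapt it to your own firewall tooling and rule ordering):

```shell
# Drop inbound WS-Discovery probes (UDP port 3702) at the origin.
iptables -A INPUT -p udp --dport 3702 -j DROP
```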
Yeah, no, that makes a lot of sense.
So essentially, Michael, unless you're using that specific protocol, there's nothing to worry about. And even if you are, an amplification attack using that protocol will be protected against out of the box, right?
Awesome. Cool. So we've got seven minutes. So this one's from Sierra, and it's a short question, but an amplified one, because there's a lot to discuss.
Do we have Magic Transit in China? I like the China questions; they're interesting. So for those of you who don't know, when you sign up to Cloudflare, you get access to a very large network of points of presence we have worldwide, with the exception of the points of presence within mainland China.
And we have many points of presence there as well. But because of government regulations and the way the network works within China, that is an add-on that you need to purchase separately.
And you actually need to follow a number of required steps for us to be able to proxy and serve your traffic from our points of presence within China mainland.
And that is also because of some government regulations, as you would need a specific license to be able to serve your application within China.
Now, from a user perspective, even if you have the China network turned on or you're leveraging the China network, the dashboard doesn't change, right?
Part of our strong point is everything is easy to use. It's just a toggle.
Once we enable for you to switch on the toggle, you can switch it on, and nothing else changes in the dashboard.
However, some of the features are indeed not available within China, simply because of the architecture there.
Now, that doesn't mean we're not going to provide those features in the future.
The plan is for us to have perfect feature parity between outside of China and within China.
But Magic Transit, as of today, is one of those features that we do not offer within the China network.
I want to quickly clarify, if you're not using the China network, it does not mean that if you have users in China that they cannot access your website.
That is separate. The user within China will still be able to access your website or your applications, but they will just be routed to a point of presence outside of China, right?
So, turning on the network just allows you to be routed within China.
Now, to answer your question, Sierra, the answer is no. We do not provide Magic Transit today within China.
For the rest of the audience, what is Magic Transit, I guess?
So, I'm going to keep this one short as well.
We've been speaking about proxying HTTP, HTTPS protocol as the default on the Cloudflare platform.
The number of protocols we can proxy is expanded when customers use a feature we call Spectrum, which allows for UDP- and TCP-based protocols.
With Magic Transit, we're going a layer even further down, and we can basically proxy any IP traffic, including VPN traffic or IPsec, etc.
The way it normally works, it's mainly used by larger companies: they would actually bring their own IP ranges onto the Cloudflare network, and then we would advertise those from our points of presence.
And any request that goes to those IPs, we would proxy.
And for our customers to be able to do that, we've built a product which we call Magic Transit.
Great. So, we have three minutes. So, let's see.
We have one last question. If you want to keep it really short, then you can close us off, Matt.
Yeah, that's fine. And I'll just probably screen share this to be clear.
So, the question is from George, and it is, do you have a Magento integration in addition to the security performance extension?
And then George linked us to a link on our website about a Magento integration.
So, do we support Magento?
So, yes, Magento is a CMS, and there is a way that it's integrated and it's detailed, as we said, with that integration, Cloudflare.com slash integration slash Magento.
The other things that we do out of the box for Magento: so, as we said, in managed rules, like the Microsoft and the DDoS rules, we also have Magento rules that are designed specifically for it.
So, you can see here blocking a number of CVEs, vulnerabilities that have come up before, and you can see the default mode, which you can do.
So, if you are running Magento and you have that ruleset, you can literally turn it on.
And then, obviously, Magento is a CMS, so the broader rules apply too: our Cloudflare Specials would help protect against cross-site scripting, SQL injection, and a number of other things.
So, there's definitely security things and performance.
The integration addresses not the simple cases, but the ones you get out of the box.
So, yeah, like, there is more that you can add.
The integration is a great starting point. But within the dashboard, you can tune that integration further.
You can do caching. You can turn on the other features, and then maybe use Cloudflare Workers, which we haven't touched on today, and other features and functionality, to make that more powerful and protect your application further.
So, yeah, that's it in a quick nutshell.
I think I've got 1 minute and 10 seconds left. Michael opened up with, you know, how to send and submit questions.
Please continue to do so. There is, I think, on the TV screen, if you go down, there is a way of submitting questions, which you can do throughout this.
If we've missed anyone, we'll hopefully get to those next time, if they allow us to do Episode 3.
But, yeah, thank you for joining us.
Thank you for, again, submitting questions. If there is anything you want us to touch on, let us know.
Yeah, hopefully see us again next week. Thank you very much, everyone.
Thank you for joining us. Bye-bye. Bye.