Ask a Solutions Engineer
Presented by: Kabir Sikand, Frank Taylor
Originally aired on June 30, 2020 @ 5:00 PM - 6:00 PM EDT
Get ready for a live Q&A session with Cloudflare's Solutions Engineer team, who will be ready with answers, expertise — and unparalleled whiteboarding skills. Send technical questions about Cloudflare products (or the Internet in general!) to [email protected]
English
Q&A
Technical
Transcript (Beta)
Welcome to live Ask an SE. I'm Kabir Sikand. I'm a solutions engineer here at Cloudflare.
I've been here for a little north of a year now, and I'm definitely excited to go through some of the questions we see posted across various boards, both internally at Cloudflare and from customers externally, or folks just using the platform.
So this is Ask an SE.
I'm joined by Frank Taylor. Frank, I'll let you do a quick introduction.
Yeah, I actually want to ask you: what was your experience before Cloudflare?
Oh, yeah. So coming into Cloudflare, I did a little bit of computer science.
And that naturally fed into a software engineering and software development role.
Yeah, cool. So my name is Frank, Frank Taylor. I'm also a solutions engineer at Cloudflare.
I used to be a front-end programmer; I had a crowdfunding application that I created with some friends in college that did okay.
And then I came to Cloudflare mainly inspired by the Cloudflare Workers service, which was just in beta at the time, and we'll definitely be focusing on that today.
So yeah, I'm happy to be hosting this segment.
Yep. So Frank, I guess I'll start by sharing my screen. We have a few questions that came in from around the Cloudflare community.
I'll post these up; I put them into a quick slideshow so that you can read along as we're going through them.
One moment while I share my screen.
All right.
So first question that we got here.
I'll direct this one over to you, Frank. How does Cloudflare's KV storage replicate keys and values across so many data centers?
Yeah, sure.
So I'll start on this one. Before getting into how Cloudflare KV operates, I guess I'll take a step back.
For those of you who are not familiar with Workers KV, we have a so-called edge storage service, where you can create arbitrary key-value pairs in a database, similar in some ways to Cassandra or Amazon's DynamoDB.
When a value is created at the edge at a Cloudflare data center — you will typically use a Cloudflare Worker to create it — that Worker writes a certain key-value pair, say user1: Frank.
That value is written at the data center where the connection was made.
Then it is sent back to one or two central storage locations in the central US.
And from there, it replicates out to other Cloudflare data centers upon the first request for that key.
So, similar to how Cloudflare's cache works, Workers KV uses a centralized storage model as the source of truth.
It then allows those keys to replicate out to the various other Cloudflare data centers.
The major advantage here is that even for so-called cold keys — keys that either have not been accessed in a very long time or are being accessed for the first time — Cloudflare has a huge number of interconnects, and we now also operate private fiber across the Atlantic Ocean, so we can send that key from, say, Singapore to the central US in about 250 milliseconds.
So that's an ongoing advantage we have: we're able to offer this globally replicated key-value store while keeping the source of truth in a single place, because the quality of our links is so strong. So that's how it works.
Yeah, thanks for that. And it is kind of interesting that we've built this technology very similarly to how a cache works, and to how our caching functionality actually works at Cloudflare.
And so as a result, you're getting really quick read times on data you store within Workers KV, and eventually consistent writes.
So it's really useful across a multitude of scripting use cases on the Workers platform at Cloudflare.
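To make that concrete, here's a minimal sketch of reading and writing Workers KV from a Worker. It assumes a hypothetical KV namespace binding named USERS configured for the script; the key and value just follow the user1: Frank example above.

```javascript
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  // USERS is a hypothetical KV namespace binding for this script.
  // The write propagates back to central storage, then replicates
  // out to other data centers as the key is requested.
  await USERS.put('user1', 'Frank')

  // Reads are served from the nearest data center once the key has
  // replicated; a cold key is fetched from central storage first.
  const value = await USERS.get('user1')
  return new Response(`user1 = ${value}`)
}
```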
Yeah — before we move on to the next question, I can also pull up a brief slide for people, just so that we have a baseline for describing how the Cloudflare Workers service works in the first place, for anyone who has not previously used it.
Yeah, that would certainly be useful. Do you want to share that right now?
Or do you want to hop on to the next question real quick?
Let me just pull it up for you.
I think the next question we have is this one — I'll wait until we get to the workers section in a few slides.
Yeah. Perfect. So this is a question that came in fairly recently, but we get something similar to it, in various forms, fairly often.
So I'm using another vendor for specific CDN services.
And I'd like to use Cloudflare security services. Can Cloudflare be used with another CDN vendor?
So the answer here is a little philosophical, but it's very much technical as well.
The short version of the answer is yes, of course — Cloudflare can be used with any other kind of CDN vendor or service that you want to put behind us.
What Cloudflare really is designed to do is sit at the network edge.
And so as a result, when you are designing architectures where you have multiple CDNs, multiple security services, maybe multiple load balancers — any sort of stacked or layered architecture — it's best to put Cloudflare at the full network edge.
And that's because for our security services to run at the highest level of efficiency they can, we need direct access to the connecting clients.
As an example, imagine a user wanted to implement something like an IP block. If we didn't have direct access to the connecting client, it's not guaranteed that the IP the client is connecting from actually makes it through the other layered services placed in front of Cloudflare, reaches our service, and gets blocked properly.
So as a result, we just ask that we sit at the network edge; that's what the system is designed to do.
Our Anycast network is globally distributed; we're north of 200 cities at this point in terms of where our data centers are.
And we're about as close to the user as you can get with any sort of a global network.
So from that perspective, you're going to get a lot of performance benefits by putting Cloudflare at the edge, but you're also going to get the full suite of our security services — as many as you want — things like our globally Anycast DDoS solution, our IP reputation databases, our web application firewall, custom firewall rules you implement on top of those, the OWASP security rule sets, rate limiters and bot management solutions, and so on and so forth.
So there are certainly a lot of security services that we can run on top of any other vendors or solutions you have.
Those can include caching and CDN performance-related features from on-prem or cloud-hosted CDNs, an on-prem or cloud-hosted load balancer, or just a series of microservices, depending on your architecture.
If you have questions about exactly how your architecture fits into the Cloudflare ecosystem, that's where opening up a conversation with folks on our sales teams, and really our solutions engineering teams, would be useful.
Folks like myself and Frank would definitely be more than happy to help understand what your architecture looks like today and what it could look like with Cloudflare instrumented at the edge.
Yeah, awesome.
Thanks a lot. So thank you, Kabir. I'm going to move into more of the workers segment here.
I'll share my screen here.
Yeah, you actually have to stop sharing first.
You have to let me take it over.
All right, let's see here. So everyone sees this — the basics. Kabir, am I good?
Great. So the Workers service at Cloudflare encompasses two revenue-generating products for us, and three teams. We operate the Workers serverless runtime environment.
In the next slide, I will show what is available in that runtime. Then we have Workers KV, which I talked about at the beginning — the distributed key-value store that you can use as the corresponding database for your Workers.
And then finally, we also have a robust set of dev tools, which include features like automatic code bundling, and let you preview locally what your code will do once it's running on Cloudflare's network, and so on.
So, oops, here.
So the advantage of using Cloudflare Workers over other traditional serverless solutions is that once you write the Worker function, it is automatically deployed, in between five and 30 seconds, to all 200 of our data centers, including the data centers that we have in China.
So I guess that makes it closer to 250.
So if you write a Worker once, it will be deployed at all of Cloudflare's data centers.
Looking at the grid here on how it compares to other existing services, such as Lambda@Edge: the key thing over Lambda and Fastly's Lucet, which is currently in beta, is that Workers is both faster and can support more operations per second than both of those.
You'll also notice that the language support for Workers is actually quite strong compared to the other attempts at a wide-scale deployment of a single runtime.
Workers is not optimized for some things you might do on Azure Functions or Google Cloud Functions, which allow for longer-running functions that use more resources. But the huge advantage is that when you write a Google Cloud Function or an Azure Function, it only runs in one place, or you have to deploy a copy of it to run in multiple places.
Cloudflare's service allows you to run that code at the edge, everywhere, all the time.
So just as a benchmark, for webpage response times at the 95th percentile, Cloudflare is much faster than Lambda.
So while Lambda may be good for other things, like running background tasks, Workers is meant to be used to actually serve full-on web applications on Cloudflare's network.
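For anyone who hasn't seen one yet, a minimal Worker is just an event listener. This is a sketch with an arbitrary greeting, not code from the slides:

```javascript
// The same handler runs in every Cloudflare data center once deployed;
// there is no per-region deployment step.
addEventListener('fetch', event => {
  event.respondWith(
    new Response('Hello from the edge!', {
      headers: { 'Content-Type': 'text/plain' }
    })
  )
})
```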
So I'm going to move on to some other little tricks you can do with Workers in the next slide here.
Let me just change the share.
Can you show your screen briefly, just for the next question?
Oops, I got to let you do it.
Yep.
One moment. All right.
So we have a question specifically around the Workers environment: why would you use something like event.waitUntil instead of just using await?
And certainly some background on this particular question, for those who aren't very familiar with the ins and outs of JavaScript: await is a way to, as the name would suggest, wait for a promise to resolve.
And that's a pattern that's very inherent to the kind of JavaScript that might be written in an environment like Workers.
So I'll stop that share and maybe, Frank, you can show a little bit more.
Yeah, sure. I'll just take that back over. Right, here we are.
So here's how a Worker usually operates.
You listen for an event handler — the fetch event — which is triggered every time a request comes into the Cloudflare zone on which the Worker is running.
And so typically, if you were to just run this right here, await fetch(request), you would just be passing the request through. We can change this instead to point at, say, google.com.
If I just return this response and we look at the right-side editor here, even though I go to this website, I'm going to end up getting a response from Google.
But you can see I'm still at tutorial.cloudflareworkers.com.
So Workers allows you to make a request for any web resource, modify it, and then serve it back to the browser or the client.
A more common pattern would be this: we might say let url = new URL(request.url).
And then you might say url.hostname = 'yahoo.com'.
And then you would just be rewriting the hostname here; you would do fetch(url, request).
And this would — well, Yahoo isn't going to let us through today.
So we'll try something else.
I'll bring this back to google.com. There we go.
Google.com. Awesome. So the focus of this question — what's happening here is that when you call event.respondWith, you're saying: okay, this event came in.
And in order for this event to complete, it must fulfill the promise, handleRequest, that is passed to the event.respondWith method.
And in this case, that's going to return a response.
And that will typically terminate the function. However, you can modify this slightly: you can actually pass the event directly.
So you can change this to take event, and say const request = event.request.
And the clever thing about Workers is that it doesn't actually have to end execution when this returns. You can use a special method called event.waitUntil, and what event.waitUntil allows you to do is wait for a separate operation to run.
So what would typically be done is something along these lines: an async function, say goGetSomeLongRunningTask.
And this might just await a fetch.
Let's pretend that this was a long-running task here.
And what will happen is, say, let response = await fetch(...), then response = await response.text(), and then console.log(response).
So what's going to happen here is that this function is going to return the same page it did before, but what will be logged to the console is the response that comes back from this URL here.
So if we run this, you'll see that I got an error response back right here.
And this comes from this URL here.
The difference between using event.waitUntil and using await here is that await will actually block the function from continuing to run here.
So when you use event.waitUntil, you're telling the JavaScript runtime to continue along its course: this operation could take some amount of time, but it shouldn't hold up the rest of the function. It is typically used for longer-running network operations, like inserting an item into Workers KV, inserting an item into Cloudflare's cache, or sending data to a remote endpoint like Google or Heap.
So when you use await in this context, you have to wait for this response to come back before you can finish the function.
The benefit of using event.waitUntil is that it does not stop the function from finishing, but it ensures that this task does complete in the background after the response is sent back to the client.
So you can end the function and send the response without waiting on that background task.
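Put together, the pattern Frank is describing looks roughly like this. It's a sketch, assuming a hypothetical logging endpoint at logs.example.com; the shape of the handler is the part that matters:

```javascript
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event))
})

async function handleRequest(event) {
  // Awaiting here blocks: nothing is returned to the client
  // until this fetch resolves.
  const response = await fetch(event.request)

  // waitUntil does not block: the response below goes back to the
  // client immediately, and the runtime keeps the Worker alive
  // until logRequest() settles in the background.
  event.waitUntil(logRequest(event.request))

  return response
}

async function logRequest(request) {
  // logs.example.com is a hypothetical logging endpoint.
  await fetch('https://logs.example.com/collect', {
    method: 'POST',
    body: JSON.stringify({ url: request.url, ts: Date.now() })
  })
}
```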
Yeah, that's pretty interesting. I think you brought up a good point at the end there, Frank: it's not something you're always going to use, only when you explicitly don't want the script to block on a particular function.
And some use cases there that you mentioned: inserting items into KV — as we mentioned at the beginning of this segment, KV is eventually consistent.
Or inserting something into Cloudflare's cache, such that any subsequent requests can benefit from our CDN functionality.
Some other items, as you mentioned — Heap. For those who are unfamiliar with Heap, it's a kind of data collection tool that lets you tie user actions into an analytics tool you can then use to make business decisions down the line.
Other things you can do with this would be sending data to a logging service, like Google or AWS — any sort of place where you might store logs and then action on them later.
And that's fairly useful in the context of Workers, because maybe there's data associated with what's going on in the Worker, or user-specific data, that you might want to action on at a later time.
So it's definitely really useful to use both; they're kind of like your left and right hands with regard to asynchronous tasks.
Yeah, exactly. And another thing I just did here, which may be more intuitive: you can actually wait for an indefinite period of time on this.
So while other serverless environments eventually enforce a timeout, a single Worker instance can actually run for something like 24 hours.
That's because, during the time it's waiting for this response to complete, or simply waiting for this timer to end, it isn't billed — Workers is billed based on CPU milliseconds.
So as long as you aren't using active CPU, this method allows a Worker to possibly run for a very long time, which makes it suited for some applications — for example, SSE, server-sent events, where the Worker can actually send data to a browser without being asked for it over a long period of time.
Yeah. And what makes that possible is that in the standard request-response model, the browser itself might time out, or the service the browser is requesting from might eventually time out.
But in this case, we've actually responded to the user in a timely fashion.
So the user is able to continue doing their thing.
And our worker is able to continue kind of waiting on whatever other task needs to get done before it finishes execution.
So thanks for that, Frank. That was pretty interesting.
I'm going to switch back to the questions here and we can kind of continue chugging along.
So the next question we got: for legacy reasons, I have two domains that I need to support from a single cache.
Is that possible in Cloudflare? To dig a little bit into this question, really what the person who asked this was asking was whether we could serve a single asset from a single cache on two separate domains.
And so to outline a way this could be possible, I made a little example.
I'll run that in a moment. But with regards to having two separate zones on Cloudflare and trying to serve assets from a single cache, it's not something we would do if you, for example, had site1.com/dog.jpg and site2.com/dog.jpg.
On site1 and site2, those assets could very well be the exact same image, or they could be completely different.
Maybe a more interesting example here, one you might run into in the wild, would be two different installations of WordPress.
A lot of those WordPress sites will have the exact same assets.
If Cloudflare were to pull assets from a single cache and you had two different WordPress sites, you would certainly fall into a lot of pitfalls around which site a specific WordPress asset was loaded from.
And so as a result, a different model that might allow for something like this is having a shared assets subdomain.
assets.careers.com/dog.jpg might be where I pull all of my dog images from.
And then, for my vanity sites that might be pulling that exact same dog image, I'm benefiting from using a single cache, but without running into some of the pitfalls of cache poisoning across different zones.
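One way to wire that up, as a sketch: a Worker running on each vanity zone that rewrites asset requests to the shared zone. The assets.example.com hostname and the /assets/ path prefix here are both hypothetical conventions, not from the question:

```javascript
// Deployed on both vanity zones; asset requests are rewritten to the
// shared assets zone so they all populate and hit one cache.
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const url = new URL(request.url)
  if (url.pathname.startsWith('/assets/')) {
    url.hostname = 'assets.example.com' // hypothetical shared assets zone
    return fetch(url.toString(), request)
  }
  return fetch(request)
}
```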
So that one's pretty interesting. I think we can always go back to an example of that one later.
But let's kind of go through some of the other questions that we have here.
So back to you, Frank, for this one: how does WARP work?
And why does it make my Internet faster?
Yeah, absolutely. So for those of you who aren't familiar with WARP, Cloudflare also has a consumer-facing service called WARP, also known as 1.1.1.1, which refers to the DNS resolver that is used with WARP.
WARP is actually a VPN that is built on top of the WireGuard protocol, which was added to the Linux kernel in March.
WireGuard, which is sort of the basis of WARP, is a lightweight VPN protocol that runs over UDP.
And the general way to conceptualize WARP is like this — let me take over the screen share here.
Can I grab the screen from you?
Yep.
Great. So, as of today, WireGuard is used for Cloudflare's VPN service; it is also used for other VPN applications.
Most people don't think of a VPN as being set up in a configuration like this, but WireGuard uses so-called peers, and each peer has a very simple setup: they all have a public key that is effectively the unique identifier for that peer — that machine — on the WARP or WireGuard network. And WireGuard allows you to basically specify: I am going to allow these particular peers to connect and send traffic to me, and I will forward their packets through to the destination they are asking for.
This can actually go both ways; both ends of the connection can agree to use IP forwarding to send packets through each other.
So in this case, if this peer here were to connect to peer-a.cfiq.io, it would be able to send packets through peer A. And if we look over here at peer C, you can see that they have different endpoints and they are exchanging traffic with those various endpoints.
The way that WARP works is that it allows these clients to all connect to a single set of Cloudflare IPs, and those Cloudflare IPs are advertised from all 200 of our data centers. And because of the massive footprint that Cloudflare has, what will often happen is that when you connect to the Internet, say through Comcast, or whoever your ISP is, we will be peered with that ISP.
So as soon as the connection gets to Comcast, it will go over a direct link to Cloudflare.
If the request that you make is destined for a website that is fronted by Cloudflare — not hosted on, but fronted by — then what we will do is actually send you over our Argo network.
The way that Argo works is that it monitors network congestion along the various pathways that exist between Cloudflare's colocations, and if we find an outage, or a slower or a faster route, we can actually redirect the packets coming through the WARP VPN across a faster route to those Cloudflare sites.
And because we handle roughly 10% of all traffic on the web today, that benefits a lot of sites.
So while WARP does not make the Internet faster for sites that are not on Cloudflare, it does increase the speed if you are trying to access a site that is on Cloudflare, because you're given privileged access to this special network that Cloudflare operates.
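For a feel of what Frank means by peers being a very simple setup, here's an illustrative WireGuard config for one peer. Every key, address, and endpoint here is a placeholder, not anything from Cloudflare:

```
[Interface]
# This machine's private key; the matching public key is what
# identifies it to other peers.
PrivateKey = <this-peer-private-key>
Address = 10.0.0.2/32

[Peer]
# The peer we forward traffic through, identified by its public key.
PublicKey = <peer-a-public-key>
Endpoint = peer-a.example.com:51820
# Destinations we are willing to route via this peer;
# 0.0.0.0/0 sends all IPv4 traffic through it, VPN-style.
AllowedIPs = 0.0.0.0/0
```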
Perfect.
Thanks for that, Frank. Let's go back to some of the questions that we had come in this week.
So, moving a little bit onto the security side of things, if a request gets through a rate limit rule, will the request be challenged by IUAM or firewall rules?
And for those new to the Cloudflare platform, IUAM stands for I'm Under Attack Mode.
So I think this is a pretty interesting question, because it hits a little bit on how Cloudflare's suite of security services works and in what order things get executed.
So to dive a little bit into that: as with many things, depending on what suites of services you're using, any given request can go through different parts of our security stack in different orders.
And that's based on settings that you might implement as a user at Cloudflare.
So internally, for folks like our solutions engineers, we have some visual maps of what this might look like.
But to walk you through it: effectively, as your request gets into a Cloudflare data center, it actually wouldn't hit a rate limiting rule early on.
In general, mitigations such as firewall rules and I'm Under Attack Mode tend to run in front of Cloudflare's rate limiter.
So often those requests will have already passed any I'm Under Attack Mode or firewall rules mitigations before actually hitting the rate limiter and being counted as such.
So in a standard request flow, a request might go through a series of sanity-check services, then IP firewalls.
And within the IP firewall, a Cloudflare user could set up specific allow rules that let an IP address past the rest of our security services.
And that allows you to really easily say: hey, I need to punch a hole in Cloudflare's suite of security services for an IP address that I know is trusted.
And that's often the use case there. Beyond that, you'll go through our DDoS mitigation services at L7.
Next in the stack, in that pipeline, would be something like our firewall rules engine.
And that's what a user might use to configure specific custom rules.
So, something like maybe on my login page, I'm going to look for certain user agents and block them.
Those capabilities are built into the firewall rules engine itself.
So, things like: look out for traffic that looks like it's coming from an automated system.
If it's not an automated system that I know about, then issue a captcha challenge to that potentially automated system, ensure a human is actually there, and then continue on through the rest of the security services.
And the rest of those security services include things like user agent blocks, zone lockdown, browser integrity check, and the IP reputation database, which also powers I'm Under Attack Mode.
And then finally you hit things like the rate limiter and the web application firewall, which includes Cloudflare's managed rule set and the OWASP rule set.
And so, going through the stack there — you could rephrase this question in different ways: if I had a request that got through a firewall rule, would it be challenged by I'm Under Attack Mode or rate limiting rules?
And the answer is likely yes, unless you've specifically put an allow rule in place, or a bypass of certain other features, within your firewall rules.
So there's definitely a lot to unpack there, but I do like this question, because it's important to understand how any given request goes through Cloudflare's security services, so that you can build out the most robust security solution for your business.
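As a concrete illustration of the firewall rules engine mentioned above, a custom rule is just an expression plus an action. This is a hypothetical example in Cloudflare's firewall rule expression language — the path and user agent are made up:

```
# Expression: match login-page requests from a scripted client
(http.request.uri.path eq "/login" and http.user_agent contains "python-requests")

# Action: challenge (captcha) -- or "block" to deny outright
```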
Yeah, great.
Thanks, Kabir. So I believe the next question is also related to rate limiting.
Oh, you actually skipped this one; we're going to come back to it later.
Just not sure if we have enough time to cover it. Yeah, about rate limiting API keys.
So this comes up a lot. And the short answer is that you cannot explicitly rate limit API keys with Cloudflare.
However, for those customers that are on higher paid plans, there is something you can do — hang on a second here.
Can I just quickly share something, Kabir?
Yep, absolutely. So if you are on one of the higher paid plans for Cloudflare — and maybe this will inspire you — you can rate limit based on the presence of a certain response header.
So if you were to, say, send back a 401 Not Authorized to the same IP address twice, or send back a certain cookie value that would perhaps give that user permission to access the resource in the future, you can use the response header condition to rate limit on various characteristics of the interaction between the client and the server.
And this is actually one of the most popular reasons that people do upgrade to higher paid plans.
While it would be convenient to rate limit a given API key, in practice, the IP address used is a much more useful piece of data.
And you can easily provide more information about how that IP is interacting on the network, including using Cloudflare's own tools: for example, if you were to block a request with the WAF, and you set this box here to a 403, you could actually rate limit based on that client just hitting a given WAF rule on Cloudflare.
So while we cannot explicitly rate limit API keys, we can usually help you meet that need and protect your API in that way.
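For a sense of shape, here is a rough sketch of such a rule — ban an IP that triggers repeated 401/403 responses on an API path. The URL is hypothetical, and the field names follow my recollection of the legacy rate limiting API, so treat them as an assumption:

```json
{
  "match": {
    "request": { "url": "*api.example.com/v1/*", "methods": ["POST"] },
    "response": { "status": [401, 403] }
  },
  "threshold": 5,
  "period": 60,
  "action": { "mode": "ban", "timeout": 600 }
}
```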
Thanks, Frank.
So, going through some of the other questions here: this is from a customer using Edge Side Includes and Varnish.
And what is the equivalent in Cloudflare Workers?
So this is an interesting one to talk through.
Effectively, Edge Side Includes is a standard, proposed by a subset of companies, that allows for caching a page while retaining some dynamic portion of it.
So what you can do is process some of that Edge Side Include markup with JavaScript in Workers, and retrieve that dynamic content from remote endpoints, basically async within the Worker.
Does that mean it's like the equivalent of what you can do in workers?
No — I think Workers itself is a much more powerful tool than simply something that fetches dynamic content for parts of a page.
You know, as we've shown through some of the other questions that have come in today and this week, Workers can be used for a variety of use cases.
That includes things like bringing assets in from various third-party origins, pulling them together, and then constructing a response.
It could be modifying parts of a request or conditionally routing based off of certain request information that's coming in.
So, certainly, a lot more powerful.
Folks are now also starting to use workers to build out full fledged applications or API gateways.
And as the Workers platform itself starts to mature, you'll see more and more of these use cases come out.
But certainly, I think the answer to this question is that you can definitely do a lot of the same things with workers.
Yeah, thanks. I would also say that Workers can provide full feature parity with Edge Side Includes if that is a need.
Just in the same way that Edge Side Includes is used to retrieve different fragments of a page from different endpoints, Workers can be used to accomplish the exact same thing.
And perhaps as an advantage, Workers can grab all of those fragments in parallel.
I believe it's up to 50 at once. And those 50 different requests for various ESI modules can all be fetched at the exact same time and then assembled, or even streamed back to the client, to avoid any added latency at all.
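A sketch of that parallel-assembly pattern, with hypothetical fragment endpoints standing in for the ESI includes:

```javascript
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  // Hypothetical fragment endpoints, one per ESI include.
  const fragments = [
    'https://fragments.example.com/header',
    'https://fragments.example.com/body',
    'https://fragments.example.com/footer'
  ]

  // Fetch all fragments at the same time, then stitch them together.
  const parts = await Promise.all(
    fragments.map(url => fetch(url).then(res => res.text()))
  )

  return new Response(parts.join('\n'), {
    headers: { 'Content-Type': 'text/html' }
  })
}
```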
Yep.
So moving on to some of the other questions here, I'll shoot this one over to you, Frank: are you able to proxy AMQP?
Yes, we are able to proxy AMQP. So for those who are not familiar with AMQP, it's typically used for pub-sub messaging.
So for example, if you are sending a message about a given event on your home-built cloud platform, you might use AMQP to submit messages based on some series of events. Perhaps you need 10 people to approve an image before the image is allowed to be pushed out onto social media; each time that somebody approves the image, you might send a message over AMQP acknowledging that the next approver in line needs to submit their own approval.
AMQP is a TCP protocol, and as with any TCP or UDP protocol, we have a service called Cloudflare Spectrum that acts as a proxy for any protocol running over UDP or TCP, on any port from 1 to 65535.
And you can use Cloudflare Spectrum to do port translation: you can accept connections on, say, port 631 and send them to port 442 in the background.
We can also provide the SSL termination on the front end, and Cloudflare's DDoS and IP firewall services can also be used.
So the answer is yes: we can proxy any TCP protocol, including AMQP.
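For reference, a Spectrum application for AMQP boils down to a small piece of configuration, roughly this shape — going by my recollection of the Spectrum API, with a hypothetical hostname and origin (5672 is AMQP's standard port):

```json
{
  "protocol": "tcp/5672",
  "dns": { "type": "CNAME", "name": "amqp.example.com" },
  "origin_direct": ["tcp://203.0.113.10:5672"],
  "tls": "off"
}
```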
Thanks, Frank.
So here's another question that came in. If I have an EC2 machine in a private subnet in AWS, can I still create a tunnel successfully?
So this is particularly about Cloudflare's Argo Tunnel service, also known as cloudflared.
What Argo Tunnel allows you to do, a little more broadly, is this: any machine that can install cloudflared can make outbound requests over to Cloudflare, and effectively create an outbound tunnel from the instance to a set of Cloudflare data centers around the globe.
And so in the case of this EC2 machine in a private subnet: as long as that EC2 machine is able to access the Internet and make that outbound connection, then yes, you can create a tunnel simply by installing cloudflared and going through some configuration.
What this actually allows you to do is use servers that are not completely open to the public Internet.
You can start to lock those servers down so that they accept nothing other than Cloudflare traffic.
It also takes care of a lot of the certificate exchange headache that you might otherwise have to deal with — issuing, installing, maintaining, and renewing certificates on your servers.
So as a result, it's a secure, reliable way to communicate between Cloudflare data centers and your origin servers.
And the only caveat when you run a tunnel like this is that you'd want to make sure the tunnels are registered as services, with functionality like auto-start and resumption set up, so that if your server restarts, or you scale your infrastructure up or down, cloudflared starts back up as well.
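In practice that's a couple of commands on the instance — a sketch with a hypothetical hostname and local port:

```
# Authenticate cloudflared against your Cloudflare account (opens a browser):
cloudflared tunnel login

# Expose a service listening on localhost:8000 at a public hostname,
# with no inbound ports open on the instance:
cloudflared tunnel --hostname app.example.com --url http://localhost:8000

# Register cloudflared as a system service so the tunnel survives reboots:
sudo cloudflared service install
```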
So this ties pretty nicely into the next question.
So can the team tell me if Cloudflare Access can work with a Linux or Windows machine that uses custom ports, specifically non-web-based or non-HTTP ports?
I'll turn that one over to you, Frank.
Yeah, great. Just a second here. Let me pull this up for you; I'm going to share my screen again.
Just a second here.
Kabir was mentioning cloudflared; cloudflared stands for Cloudflare daemon.
You can use cloudflared with our corresponding service called Cloudflare Access, which handles authentication into various machines — usually virtual machines — that you operate.
You can either use IP whitelisting, or you can use an integration with Google login, or Okta login, or SAML.
All of these work, and here's how: if you look in my terminal here, and suppose I'm on a given machine, you can run cloudflared access with a number of different commands.
You can run cloudflared access curl. If I just do this — oops, cloudflared...
In this case it didn't really do much, because there wasn't anything I could actually get, but you can use cloudflared in front of either curl or ssh, so you can use cloudflared to run an outbound tunnel from your machine using ssh or any other protocol.
So if I wanted to use this here, I would run this command, and this would allow Kabir to access my machine at ssh.offsite.com.
What happens is that it will spawn a browser window, and as long as he is able to authenticate through one of the means that I mentioned before, he would be able to access my machine over ssh.
This works for any TCP protocol, and as Kabir mentioned, you don't actually have to open up any ingress ports on the virtual machine, because it sets up an outbound tunnel over a WebSocket.
So once the WebSocket is set up, you can proxy any kind of binary data, provided that data uses TCP as its transport.
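On the client side, the usual pattern is to route ssh through cloudflared with a ProxyCommand — a sketch using the ssh.offsite.com hostname from Frank's example:

```
# In ~/.ssh/config: tunnel this host through Cloudflare Access
Host ssh.offsite.com
  ProxyCommand cloudflared access ssh --hostname %h

# A plain ssh then triggers the browser-based Access login:
ssh user@ssh.offsite.com
```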
So yeah, I would happily show a full demo, but I think our session today has just about expired, so you'll have to tune in next week for that.
Yep, thanks for that quick demo there, Frank.
So I think up next in about a minute here we have Cloudflare for HR Tech, so definitely tune in if you have the time.
Thanks for tuning in today. I'm Kabir, joined by Frank — that was Ask an SE. If you have other questions, feel free to look at the email below on Cloudflare.tv and submit questions for next week.
Thanks and have a good day. Bye guys, have a good day.