Welcome to AI Week
Presented by: Sharon Goldberg, Kenny Johnson, Taylor Smith
Originally aired on August 25 @ 12:00 PM - 12:30 PM EDT
Welcome to Cloudflare AI Week 2025!
There's barely a company or a startup not focused on AI right now. Companies' entire strategies are shifting because of this incredible technology.
From August 25 to 29, Cloudflare is hosting AI Week, dedicated to empowering every organization to innovate with AI without compromising security.
Tune in all week for more news, announcements, and thought-provoking discussions!
Visit the AI Week Hub for every announcement and CFTV episode — check back all week for more!
Transcript (Beta)
Good morning, good afternoon, or good evening, depending on where in the world you're joining from.
My name is Kenny Johnson, and welcome to AI Week 2025. I'm part of the product management team for our Zero Trust product suite.
I'm joined by Sharon Goldberg and Taylor Smith.
Sharon, do you want to go ahead and take a moment to introduce yourself?
Hi, I'm Sharon. I'm one of Kenny's colleagues on the product team and I lead some of our AI security posture management products inside our SASE platform.
Awesome. Good to have you here. Taylor, you want to introduce yourself?
Hey there, I'm Taylor Smith. I'm one of the product managers for our developer platform, specifically Stream, but I'll be representing Workers and media as well today.
Awesome. Thank you for joining us. It's no secret that conversations around AI are completely dominating the business landscape.
We're seeing things like support reps responding to 10 times the tickets that they normally could have.
Salespeople are able to completely eliminate all the day-to-day busy work of data entry and get back to focusing on relationships.
Software engineers are being turned into reviewers of AI generated code, as opposed to actually producing code themselves.
It's really, really exciting to see this. But even though this feels really magical, there are not many tools in place to govern and control these AI experiences.
And there's a real potential for security issues, as well as employee misuse.
Sharon, since you've got a really deep background in cybersecurity, I'd love if you could talk about some of the specific attacks and accidents we're already seeing and kind of why they occur.
Yeah, thanks, Kenny.
So before I get into specific incidents, I just wanted to mention that we have all been in this business for a while, and there's all sorts of security issues with all sorts of different applications, but somehow AI feels a little bit different.
And there are a few reasons why it's different. One of them is that the temptation to put a lot of information into an AI tool is really, really high if it's going to analyze your financial results or write some code for you.
Why would you dump your source code into a random third-party software if it wasn't going to do something useful?
But all of a sudden, we have these tools that are producing really valuable things when they're given a lot of information.
And so the information flow into these tools is much larger than what we're used to.
Another issue is that these tools can sometimes train on user data, and depending on how you're interacting with them and what form of license you have, that creates additional risks.
And then third, there are a whole lot of regulation and compliance frameworks being developed right now, as we speak, to govern AI usage.
And so if you're in an enterprise and you're needing to comply with these rules, this is all very new, and it's very challenging to make that all work for you.
And so we're going to be talking about a bunch of features and a bunch of projects that we're launching this week to help make that easier for our customers.
Before we get into that, I just wanted to mention, like Kenny said, there have been a lot of incidents that highlight why some of these different issues actually can create a lot of problems.
So I can give you a couple of examples.
When Microsoft Bing Chat came out (it's now called Microsoft Copilot), users discovered that they could craft prompts that got the chatbot to reveal its internal system instructions.
For example, its code name, which was Sydney.
We've had researchers demonstrate that they could use Slack AI to summarize private information from private channels.
A while ago, research also showed that Google's AI could be tricked into encoding a user's personal data into an image URL, which could then be sent to an attacker.
All of these issues have been patched, but it just shows you that these tools are new.
There are going to be security challenges as we wrap our arms around dealing with these tools.
There's a lot of different challenges that IT and security teams need to deal with.
We're going to talk about some of the things that we're building here to help make that easier.
Back to you. Yeah, I completely agree. Thank you for that overview, Sharon.
I agree that this is a real step change. It's a completely different class of applications compared to what we've previously seen in the B2B SaaS landscape.
I'm really excited to have everybody here to be able to talk through what some of the main themes are going to be throughout the week.
We've got four core areas that we're going to focus on this week.
The first one is securing AI usage by your employees, as well as your end customers.
The second is actually developing and building your own AI-based experiences, but with security baked in, as a batteries-included style setup.
Then we're going to talk about giving content creators control back over their original content from AI scrapers and AI bots and training models.
And then finally, we'll touch on how we're actually embedding AI directly into Cloudflare to turbocharge the platform.
I'll go ahead and turn it over to Sharon to talk about the first couple of days where we're thinking about AI security for employee usage.
Right. There are a couple of different threads here. Today's material has already come out, and then we have more coming tomorrow.
Let me just get into some of the high level view of what's being launched.
First of all, we are committed to making our SASE platform the best place to secure your employees' access to AI.
We already have a lot of features in there that are useful for this.
For example, you can block access to undesirable providers of AI if you don't like them.
You can put remote browser isolation on top of them to prevent copy and paste of information to providers that you don't want to share that much information with.
We have redirects, so you can redirect traffic from, say, the consumer version of an AI provider to the enterprise version. That, by the way, is a really, really good thing to do: the consumer versions tend to train on user data, while having the provider not train on employees' data is exactly what enterprises are generally paying for.
If you're controlling employees' access to AI, you definitely want to make sure they're not using the consumer version of a provider when you've already paid for the enterprise version, which has much better security properties.
That's already been there for a while.
That's why we're really building on this SASE platform to make employee access to AI more secure.
Today, if you haven't already seen them, you can take a look at two announcements that came out. One of them was around securing employee access to AI by detecting shadow AI and shadow IT.
We have a new report that provides data-driven analytics to give you an understanding of which AI applications your employees are using, and lets you mark each one with a status: approved, unapproved, or in review.
Then you can write policies, for example, saying that all unapproved AI applications should be blocked, or maybe we should redirect traffic to our approved AI provider or something like that.
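The approval-status policy described above boils down to a small decision rule per application. The sketch below is a hypothetical illustration of that rule (the "isolate" action for in-review apps is an assumption on my part, echoing the remote browser isolation option mentioned earlier), not the product's actual policy language.

```python
from enum import Enum

class Status(Enum):
    APPROVED = "approved"
    UNAPPROVED = "unapproved"
    IN_REVIEW = "in review"

def decide(app_status: Status) -> str:
    """Map a discovered AI app's review status to a gateway action."""
    if app_status is Status.APPROVED:
        return "allow"
    if app_status is Status.IN_REVIEW:
        return "isolate"   # assumption: open in remote browser isolation while under review
    return "block"         # unapproved AI apps are blocked outright
```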
That's already out today.
Another really, really cool feature that we are launching is AI prompt protection that allows you to actually detect and log the prompts and responses that your employees have with conversational chatbots, and also to provide guardrails.
For example, you can write a guardrail that says: block this request because it contains personally identifiable information, or source code, or financial information.
All that is in line and can be run on traffic right now.
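To make the guardrail idea concrete, here is a minimal sketch of scanning an outbound prompt for PII-looking patterns before it reaches a chatbot. Real DLP engines use far richer detections and validation; these two regexes are illustrative only.

```python
import re

# Illustrative PII detectors: a US SSN shape and an email address shape.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def guardrail(prompt: str) -> tuple[str, list[str]]:
    """Return ('block', matched_types) if PII is found, else ('allow', [])."""
    hits = [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]
    return ("block", hits) if hits else ("allow", [])
```

The in-line product applies this kind of check (plus logging) on live traffic between the employee and the AI provider.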
That's just some of what we're doing on the SASE platform.
I haven't even told you what's coming tomorrow. I'm not going to tell you because you have to wait until tomorrow, but we are doing things around, how can I say it without giving it away?
We are doing things that are not necessarily in line and in band, like what I described just now.
We're also doing things that will help you get a handle on how risky different AI providers may be.
That's all coming tomorrow.
The other thing that I wanted to mention, the second thread from this AI security thread is around MCP, Model Context Protocol.
If you've been paying attention in the last six months, MCP has exploded everywhere, including at Cloudflare, but also across the Internet.
It's a way to allow AI agents to interact with resources or SaaS services on the user's behalf.
Instead of having an AI give you instructions on how to set up an invoice on PayPal, you can have your MCP server actually create the invoice for you.
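Under the hood, MCP messages are JSON-RPC 2.0. A client asking a server to run a tool sends a "tools/call" request like the one below; the "create_invoice" tool name and its arguments are hypothetical, but the envelope follows the protocol's request shape.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical example: an agent asking an invoicing MCP server to act.
msg = make_tool_call(1, "create_invoice", {"amount_usd": 250, "payee": "acme@example.com"})
```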
We at Cloudflare are really excited about MCP.
Months ago, we launched our remote MCP server platform, which allows you to build MCP servers in the cloud, with Cloudflare hosting the server for you. That's really, really cool, and good for security: it's much better to have your MCP servers hosted in the cloud, where your IT administrators can manage them, than to have people download them onto their laptops without you knowing they're there.
We have that feature. About a month ago, we also very quietly launched an integration between our remote MCP servers and our Cloudflare Access product. Kenny is nodding because that's his feature and I'm talking about it here, but it's a really great way to do authentication and authorization and apply Zero Trust principles to access to MCP servers.
That is a really, really cool feature that's already out.
Then there's something even cooler coming tomorrow that's going to be built into our SASE platform, around managing lots and lots of MCP servers in your enterprise.
That's all I'll say there. Finally, we have our Firewall for AI product, which is on the application services side of the platform.
Firewall for AI sits in front of your website and protects it from malicious requests.
Now, Firewall for AI can do a lot of things. One of the cool things it can do is detect when the application it's protecting actually has a large language model inside of it, so it can find prompts in the request.
It's been doing that for a while.
It's also been able to find PII in the request. It's been doing that for a while, but there's other cool stuff that it's going to start doing too that we're announcing this week.
We continue to improve the quality of the detections in our Firewall for AI product.
That's basically the overview of everything we're doing for AI security this week, but I've omitted a lot of details, so please follow along and check out the blogs for all of them.
With that, I'll pass it over to Taylor to tell us about the Wednesday and Thursday releases.
Hey, Sharon.
Thank you. Those are some really exciting updates. Thanks for sharing.
I am excited for Wednesday and Thursday this week when Developer Platform is going to be releasing a lot of new features and announcements.
I'll start with AI Gateway on Wednesday.
If you use AI Gateway as a way to build and integrate AI models into different applications, you may be familiar with how you can use it to select which models to use. It provides a unified API, so you can make a simple request that can go to multiple models, across the different schemas of different providers.
Our improvements to AI Gateway are going to help you with steering between different models and services, secrets management, and some cost control.
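Model steering can be pictured as trying a preferred model first and falling back on failure. This is an illustrative sketch of the concept, not AI Gateway's actual configuration syntax; the `call_model` function stands in for whatever provider call the gateway makes on your behalf.

```python
def steer(prompt: str, models: list[str], call_model) -> tuple[str, str]:
    """Try each model in order; return (model_used, response) on first success."""
    last_error = None
    for model in models:
        try:
            return model, call_model(model, prompt)
        except Exception as err:  # e.g. rate limit or provider outage
            last_error = err
    raise RuntimeError(f"all models failed: {last_error}")
```

A gateway doing this centrally means each application gets failover and cost-aware routing without implementing it itself.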
Those are really exciting features coming out, and some of that also supports some of the work that SASE is providing this week.
Looking a little further forward, Cloudflare Realtime and Workers AI are going to announce an exciting partnership that provides a pipeline for some very exciting interactivity for building new AI-powered experiences.
Then we have a couple of new models coming to Workers AI that I'm very excited about, particularly in the media space, as well as some new features and a partnership for AutoRAG, and a new feature for images with a technical explanation of how we got there.
I'm very excited to work with them on that.
And then on Thursday, we'll have some of our latest news for creators and publishers from our AI Audit and pay-per-crawl teams.
You may remember some big launches that we made back in July in this space, and we've got a lot to share about what we've learned and where we're going from here.
And then also a little bit later in the day on Thursday, we'll be talking about two big technical explainers on how we do AI inference and model catalog management at the edge.
Those have been really educational for me to read, and I'm really excited to share all of that with you.
And so that's the big couple of days coming for developer platform and media AI this week.
And I guess I will turn it back over to Kenny. Awesome, Taylor.
Thank you so much. A quick plug for the security-focused area of the week: Sharon, who is a professor of cybersecurity, and a number of our best sales engineers, who aggregated what we're seeing from some of our most advanced customers, have put together a best-practices guide for AI security.
This also was made in partnership with our own internal security team at Cloudflare, who I think is one of the best in the business.
So please keep an eye on that. That is really going to be an invaluable guide for an emerging space.
So, bringing it home: the final theme is some of the areas where we're going to supercharge Cloudflare and continue to expand AI usage in the platform.
This will cover some of the trickiest things to actually debug today.
And that will include things like explanations in our cloud email security product of why specific emails were blocked as spam, malicious, and so on.
Same thing with host names and DNS records.
So for any specific filtering you might have in place, if something gets blocked as a botnet or a phishing threat, we're going to add additional explainers.
Traditionally, that was a bit of a gray area, a black box.
It was hard to understand our categorization. And then finally, being able to debug and understand what's going on with both the WARP client, the on-device client we use to on-ramp traffic from your device to Cloudflare, as well as our digital experience monitoring.
Those are going to have an MCP server available to explain and debug what's potentially going on with the user's network or a user's device, which is a very, very common challenge.
If you have ever had to deal with this as a network engineer, or dealt with somebody on your network engineering team trying to debug why a specific policy isn't working or a user isn't able to connect, usually you're staring at hundreds of lines of debug output, trying to piece together what's happening.
That is exactly what LLMs are really, really good at: spotting patterns and then applying that to give a suggested fix.
So we're very excited to begin to make that available here at the end of the week.
Excellent. Well, we have a really, really exciting lineup.
Please stay tuned to Cloudflare TV. Keep an eye on the blog. We're going to have a bunch of stuff going out on social as well.
And the really exciting thing is that most of these features are going to be available this week in the dashboard for you to get your hands on.
And we really want to hear what you guys think. We want to hear your feedback.
Please let us know how using these features goes. And with that, let's go have an amazing AI Week 2025.
Thank you all so much. Sharon, thank you for being here.
Taylor, thank you for being here. And with that, we'll go ahead and wrap up.
Thank you so much.