Cloudflare TV

🎂 AI Deep Dive with Demos

Presented by Matt Silverlock, Ricky Robinett
Originally aired on 

Welcome to Cloudflare Birthday Week 2023!

2023 marks Cloudflare’s 13th birthday! Each day this week we will announce new products and host fascinating discussions with guests including product experts, customers, and industry peers.

Tune in all week for more news, announcements, and thought-provoking discussions!

Visit the Birthday Week Hub for every announcement and CFTV episode — check back all week for more!


Transcript (Beta)

Welcome to Cloudflare TV. You've got myself, Matt Silverlock, Director of Products here at Cloudflare, and I'll let Ricky introduce himself in just a second.

If you've been following the announcements today, you probably know what's coming: a look at a lot of our amazing AI and machine learning announcements.

If you have not, you're in for a treat. We're going to show you some demos, talk through a little of what we've been working on, show you how it all works, and then go from there.

So before we get any further, I'll let Ricky do a quick introduction of himself as well.

Yeah, hey everyone. My name is Ricky.

I have the privilege of leading the Developer Relations team here at Cloudflare, and I am based out of the greatest city in the world, New York City, as is Matt, typically.

Yeah, I'm normally based out of New York, which I love, but I am in the Cloudflare Lisbon office at the moment, where a lot of the folks who've been working on our new AI products are actually based as well.

And we're always hiring in Lisbon, for anybody that's watching. So to dive into it: one of the things we want to do is talk a little about what we're actually going to demo, what we announced today.

And then we're going to dive in and show you how it works, and help you understand what this actually looks like when you want to run code against it or against the models we have available.

But let's step back for a second.

So we announced four major things today, really under this big AI theme.

First, we have Workers AI, which is fundamentally our new GPU-powered inference offering, running on what we like to call Region Earth.

So you can run it everywhere.

It's not just stuck in a cloud region in US East.

So that's really, really powerful. And we'll talk a little more about how that works in a second.
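To make that concrete, here's roughly what calling Workers AI from a Worker looks like. This is a minimal sketch following the launch docs; the `AI` binding name and the prompt are illustrative:

```js
import { Ai } from '@cloudflare/ai';

export default {
  async fetch(request, env) {
    // "AI" is the Workers AI binding configured in wrangler.toml.
    const ai = new Ai(env.AI);
    const answer = await ai.run('@cf/meta/llama-2-7b-chat-int8', {
      prompt: 'What is Cloudflare?',
    });
    return Response.json(answer);
  },
};
```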

We also have Vectorize, which is our vector database offering.

If you don't know a ton about vector databases, we'll get into that as well.

But the shortest version: to scale up a lot of machine learning and AI use cases and actually operate them in production, you can't just call the model again and again and again.

So in many cases, you want a vector database to step in and help you scale that out.
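As a sketch of what that pattern looks like (the `MY_INDEX` binding name and the vector values here are placeholders, not from the talk):

```js
export default {
  async fetch(request, env) {
    // Store embeddings once, up front...
    await env.MY_INDEX.upsert([
      { id: 'doc-1', values: [0.12, 0.45, 0.31 /* ... */], metadata: { url: '/docs/1' } },
    ]);

    // ...then each lookup is a fast similarity query against the index,
    // instead of another round trip through the model.
    const queryVector = [0.13, 0.42, 0.3 /* ... */];
    const matches = await env.MY_INDEX.query(queryVector, { topK: 5 });
    return Response.json(matches);
  },
};
```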

There are two other announcements around this as well. The first is our AI Gateway.

One thing we've heard a lot from people is that they're calling other AI APIs.

They're calling things like OpenAI. They're calling other tools.

It gets expensive fast, right? They want to be able to put rate limits in.

They want to be able to protect against abuse. They want to understand where those calls are going.

They want analytics. They want logs.

And so we announced and launched AI Gateway. Again, we'll talk through a little more of the details, but it's a really powerful tool for protecting your AI-based applications and making that really easy.

So you don't have to kind of figure out six or seven different products, right?

You can put AI gateway in front and it just works.

And last but not least, and we'll try to squeeze in a little time today to talk about it more, is our support for WebGPU in Cloudflare Workers as well.

It's a much lower-level API, and a really, really powerful browser API.

If anybody watching is familiar with graphics APIs like OpenGL, the more mobile-oriented flavors of OpenGL, or things like Apple's Metal, you can think of WebGPU as very similar to that, right? Instead of writing against a raw 3D API, you're writing in a shader language and actually building things.

And there's some really, really cool stuff out there, like the 3D demos and 3D product experiences people have built in WebGPU using those APIs.

And so we're really excited to support those as well.
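For a flavor of that, here's the standard browser-style WebGPU bootstrap with a WGSL compute shader. This is just the generic API surface; the talk doesn't go into the Workers-side specifics, so treat the setup as a sketch:

```js
// Standard WebGPU setup: get an adapter, then a device.
const adapter = await navigator.gpu?.requestAdapter();
if (!adapter) throw new Error('WebGPU not available');
const device = await adapter.requestDevice();

// Work is expressed in WGSL, WebGPU's shader language.
const module = device.createShaderModule({
  code: /* wgsl */ `
    @compute @workgroup_size(64)
    fn main(@builtin(global_invocation_id) id: vec3<u32>) {
      // ... kernel body goes here ...
    }
  `,
});
```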

But one thing we get asked a lot is: hey, you've got this really cool Workers AI platform.

What models are built in?

What can I actually do with it, right? Can I build a chat app? Can I do image classification?

How do I do all those kinds of things? Like what does it actually look like?

And so Ricky is the expert in this space. He's going to talk us through, and hopefully show you, a little of what you can actually do in Workers AI.

Yes. Thank you, Matt, for the introduction. I'm going to share my screen and I'm going to show you all some demos.

So the first place I'm going to start is the new Cloudflare AI landing page, ai.cloudflare.com.

And I think this page is beautiful.

It truly brings me a lot of joy to see this page. But what I'm going to go down to and show you all is what we call the playground.

So this is where you can try some of the stuff out right now.

And so I'm going to show you some of these.

So the first model on Workers AI is text generation. This is the one I think a lot of us are answering questions about from our friends and family.

What's that AI thing they're talking about? So this is Llama 2 chat.

And we're going to just ask a question. Why do so many people like pizza?

Which, you know, I have my own opinions on, but Llama 2 says it's the flavor.

I agree. Bread and cheese and sauce, savory and sweet. Yeah, it's convenience.

Pizza is convenient. And it's the social aspect. Pizza brings people together, which I, you know, grew up with the Ninja Turtles and they really built their entire life around pizza.

So we'll talk a bit more about text generation in a second.

But that is how that one works. And you can go play with it right now. We have speech recognition, which is with Whisper.

I'm going to not demo that just because it's less fun to demo an audio thing on a live stream.

What I want to demo is a ResNet.

And what I have is an image of pizza, right? And you can see if we upload an image of pizza, it is pretty confident this is pizza.

But those of you watching at home know this isn't the true test of an image classification model.

The real test we are going to do live. I didn't even try this, though it's inspired by Matt's tweet: we are going to go on Google Images, find a picture of a chihuahua, then find a picture of a blueberry muffin, and we're going to try both of them and see what happens.

So let me refresh. I've preloaded a chihuahua just to save us some time.

And I want to try to find a very blueberry muffin-y looking chihuahua, which actually I feel like this one here, right, Matt, this kind of like, yeah.

Yeah, I would definitely confuse that just as a human being for like a blueberry muffin at a distance.

So I think that one works.

Yeah, yeah. So I am going to save that. I am going to go back here and go to our image classification.

So the first test, the chihuahua test. Let's see what happens.

Chihuahua with a high confidence score, not even, not even any pastry in here at all.

So are you shocked, Matt? Or is this exactly what you expected? I mean, I was kind of spoiled because this is the first thing I tested.

What's actually kind of cool is how fast it runs.

I think that's like kind of the cool part. It's like you upload the image and like, if you're watching and you're like, is this actually the real product?

This is all just the same code that any of us would write to use Workers AI.

It's the same ai.run, and there are a ton of examples in the docs.

But there's nothing super fancy here. We don't give ourselves special priority or anything like that.

It's the same production-platform Workers code that anybody would write here as well.

So, Ricky, when you're dragging and dropping these images and it's running that inference and doing that image classification, that's just production Workers AI.

Yeah. I mean, it's near real time. And again, no mistakes on the chihuahua.

I even tried a gluten-free blueberry muffin just to throw it off, and it got that it was something from a bakery.

So yeah, the speed I think is really incredible.

And yeah, this is just running what you would run.
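For reference, the same classification the playground is doing looks roughly like this inside a Worker. A sketch, reusing the `ai = new Ai(env.AI)` setup from earlier; the score in the comment is illustrative:

```js
// ResNet-50 takes raw image bytes and returns ranked labels.
const bytes = await request.arrayBuffer();
const labels = await ai.run('@cf/microsoft/resnet-50', {
  image: [...new Uint8Array(bytes)],
});
// => e.g. [{ label: 'Chihuahua', score: 0.97 }, ...]
```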

Another model we have is around text classification, so we can look at the sentiment of what someone says.

So we could say: this pizza is amazing.

A pretty positive statement.

We could say: this blueberry muffin is dry. And that's a very negative sentiment, right?

So you can even take things that have opinion in them, and it senses those.
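In code, that sentiment check is a single call. A sketch using the DistilBERT sentiment model from the Workers AI catalog, with the same `ai` setup as before:

```js
const sentiment = await ai.run('@cf/huggingface/distilbert-sst-2-int8', {
  text: 'This blueberry muffin is dry.',
});
// => POSITIVE and NEGATIVE labels, each with a confidence score.
```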

We also do embeddings. Again, I won't demo this because it's more useful inside an application, but if you've used OpenAI embeddings or any other embeddings model when you're working with LLMs, you can use us for that.
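A minimal sketch of that, using the BGE embedding model from the catalog and the same `ai` setup:

```js
const { data } = await ai.run('@cf/baai/bge-base-en-v1.5', {
  text: ['Pizza brings people together.'],
});
// data[0] is a numeric vector you can store in Vectorize or compare directly.
```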

And then lastly, translation. So I will say: I'll have a big pizza with my buddies, the Ninja Turtles.

And I'm not sure there's really a way to translate that into actual French, Ricky.

I was going to ask if you spoke French.

Yeah, I'm not even going to try, but this looks like the French version of "I'll have a big pizza with my buddies, the Ninja Turtles."

So these are the models you can use now. Very stoked.

Text generation, speech recognition, image classification, text classification, text embedding, and translation.
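The translation model follows the same pattern. A sketch with the m2m100 model from the catalog; the language names follow the docs:

```js
const { translated_text } = await ai.run('@cf/meta/m2m100-1.2b', {
  text: "I'll have a big pizza with my buddies, the Ninja Turtles.",
  source_lang: 'english',
  target_lang: 'french',
});
```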

I'm going to show y'all a little bit more of a full application, if that works for you, Matt.

Yeah, that'd be awesome.

I think one of the cool things is, and I know these are kind of fun examples, but translation is super powerful, right?

You can use it for support tickets. You can use it as part of your website, particularly if you've got a bunch of customers in different locations, right?

Do a quick first pass and maybe hand that to a translator, just to get that baseline translation out the door very quickly.

Right. And have that done for all your content on the fly.

There's a lot of really cool stuff here. I suspect a lot of folks will use text generation as they're experimenting and playing.

Because, like you said, Ricky, we're in the world of ChatGPT and things like that.

And obviously with the Llama model, it's super fun and really, really powerful for a lot of Q&A chatbot stuff.

But we see a lot of text classification, translation, and embedding use cases.

Those are the kinds of things that people take to production a lot, because those solve problems you had yesterday.

So I'm really excited to see how people deploy those. Yeah.

I think all of this stuff is so much fun, but the really exciting thing is when you stop and think about how practical and impactful these things can actually be.

So I do have a fairly standard chat application, using React, Next.js, Tailwind, and Workers AI, that I was going to show y'all.

So you can see, I'm going to say hi here, and it's going to say: hey, hi, it's nice to meet you.

You know, what can I help with? I'm going to say: what can you help with?

Very Socratic. And we should get a response here listing the different things the bot can do.

So this is using Workers AI, specifically the Llama 2 chat text generation model.

The thing that I actually wanted to show here: this is a playground that lets you play with the system message of what we're requesting from Workers AI.

And historically I've kind of used LLMs out of the box.

I suspect a lot of you all have too.

And one of the things I found with Llama 2 is that part of the power is playing with the system message. The system message is like the pre-message you give the model about what you want it to do.

So there are a few things I found that really work here.

First of all: giving your chat bot personality.

This is one of my favorites: I'm going to make it a grumpy chat bot.

So if I say, you are a grumpy AI chat bot, and then say hi, it will now take on this personality that is annoyed that I am even having this conversation.

And, you know, I suspect it's not going to say it's doing well today.

So it's grumpy and it stomps its foot.

And so giving some sort of personality in your system prompt is really powerful.
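Mechanically, the system message is just the first entry in the messages array sent to the model. For example, a sketch with the same `ai` setup as earlier:

```js
const response = await ai.run('@cf/meta/llama-2-7b-chat-int8', {
  messages: [
    // The system message shapes every reply that follows.
    { role: 'system', content: 'You are a grumpy AI chat bot.' },
    { role: 'user', content: 'Hi!' },
  ],
});
```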

Let me refresh. Matt, anything you want to add while I'm showing this off?

Oh, this is a good one. A funny thing would be to ask for everything in, like, a limerick.

You only speak in limericks. Yeah. Your responses will be in limericks. Right.

Sorry, I was going to say: it's always fun to come up with the really silly examples, but in practice, and anyone following the AI space will have seen this, when folks leave their system prompt configurations in the open source configurations for their models, you see a lot of really useful instructions. Things like: not only will you be helpful, but you won't talk about A, you will talk about B, and your answers will tend to be in this format.

They won't be longer than X. It can be really, really powerful to use a system prompt to shape those answers.

Right. And I think what we often see is that it's an iterative process.

You'll build up a few guidelines and see how the model reacts to them.

You'll add on a few more, and you'll make sure the responses it gives are shaped to the use case.

Right.

Because the more context you can give it, the more useful it's going to be to you.

So for anyone watching and thinking about experimenting here: configuring the system prompt is totally worth the time.

Yeah, it's funny. Before I started working with these kinds of LLMs, I thought I'd spend most of my time trying to coach the user on how to have a good interaction, when actually I'm spending most of my time over here, writing a prompt for what the chat bot should act like and what it can do.

So a more practical example: we can say, you are a writer's assistant, right?

You help brainstorm and edit content. Whenever you start a conversation, make sure you let the user know what you can help with.

Right. And now, let's see if this resets the system prompt. We'll see; I may have to refresh and copy this.

Yeah. So this is now "adjusts their monocle."

I don't know why it got a monocle from this, but that's great.

And it says: I'll help you brainstorm ideas for your next story, edit and refine your content. Right.

In all of this, you can see how very quickly you can create a personality, but also be helpful to the user, through the system prompt versus the conversation itself.

Yeah. I think it took that writer's assistant as, like, a Tumblr blog from 2009.

It's very... but I like it. I think it kind of works.

Again, I think people get to see there's a little bit of fun in this kind of stuff, and you can actually bring that levity.

And if you want more or less of that, depending on your use case, you could say: I need you to be serious and professional, and pretend you're an editor at the New York Times.

And, you know, I'm sure it would respond really well to that as well.

Yeah. Let's find out right now. You are a serious and professional chat bot.

You pretend you work at the New Yorker. The New Yorker it is, but, you know, it's a serious publication.

Yeah. A very serious publication.

Right. And let's see what happens. We're doing it live here, which is how we like to do it.

Right. So before, we got the monocle and the humor, and now we get more of a professional, serious "what can I help with?"

So you can see how these things happen pretty quickly.

I'm going to stop sharing my screen for a second, because I'm a developer and what I actually care about is the code, right?

I'm like: okay, this is cool, but how does this work? So let me share my code.

And really, there are a few things that happen when you make a request to Workers AI.

First, we take the messages that are being sent, and we send those over as a messages array.

That contains all those user messages and also that system prompt.

And then, truly, it's one API request that goes to our Llama 2 endpoint.

You pass your API key, and you get some JSON back. It's a really, really straightforward API, super easy to integrate and drop in. Which is, I think, the most exciting thing for me: I took this sample application that was using OpenAI, which I love, and I wanted to see, can I swap in Workers AI?
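Sketched out, that request looks something like this. ACCOUNT_ID, API_TOKEN, systemMessage, and userMessages are placeholders, not the app's exact variable names:

```js
const res = await fetch(
  `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/ai/run/@cf/meta/llama-2-7b-chat-int8`,
  {
    method: 'POST',
    headers: { Authorization: `Bearer ${API_TOKEN}` },
    body: JSON.stringify({
      messages: [{ role: 'system', content: systemMessage }, ...userMessages],
    }),
  }
);
const json = await res.json();
// json.result.response holds the model's reply.
```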

And it was really like maybe a 10, 15 minute project to swap things in.

I think I might even have you beat.

I was writing documentation and examples for our vector database.

And so, for those of you who have used vector databases, you know OpenAI's embeddings API is a really powerful option: transforming text into what is effectively a machine learning model's representation of that text, how it thinks about and represents it, is probably the best way to put it.

And I was like: cool, I can rewrite this to use Workers AI.

It was like a two-line code change.

It took no time at all. It just worked. I took out the five or six lines I had for OpenAI and replaced them with ai.run and the name of the embedding model, which is awesome.

And it just kind of works right away.
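Roughly, that before-and-after looks like this. The commented-out lines are a typical OpenAI v4 SDK embeddings call, not the exact original code:

```js
// Before: OpenAI's embeddings API.
// const res = await openai.embeddings.create({
//   model: 'text-embedding-ada-002',
//   input: text,
// });
// const embedding = res.data[0].embedding;

// After: Workers AI. Same idea, text in, numeric vector out.
// (The vector dimensions differ between the two models, so re-embed
// your corpus rather than mixing them.)
const { data } = await ai.run('@cf/baai/bge-base-en-v1.5', { text: [text] });
const embedding = data[0];
```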

So if you've been experimenting, like a lot of folks, with OpenAI or a lot of the other providers in this space, and you're thinking: this is cool, but I want it to run in more places.

I want it to be potentially faster and better for real-time use cases or building an app.

It really is easy to just change a couple of lines.

Right. The performance for a lot of these is very similar, and you can also test them side by side to make sure you're happy with it. But like you said, Ricky, it's nice not to have to go and rewrite a large part of the application around some of these APIs. It should be pretty easy to use.

Yeah. My main takeaway from this Cloudflare TV session is that you're a better developer than I am, right?

It took me 15 minutes; it takes Matt just... I think I just had a simpler application than you, but I'll take the compliment.

Let's talk about one-or-two-line changes real quick, if you're up for it, because I think it's the perfect segue into the other thing I wanted to plug.

I won't do a live demo of it, but I'll show a screenshot: AI Gateway.

Actually, Matt, you're so good at giving the high-level view. Do you want to talk about AI Gateway real quick before I show what it looks like?

Yeah. So we talked about this at the beginning of the session, but to go a little deeper: one thing we've seen from people already using Cloudflare for AI applications, using Workers, using our databases, using KV, building those applications and talking to existing AI and ML APIs elsewhere, is they say: this is cool, but I'm really worried about racking up a huge bill.

I'm worried about hitting rate limits across my application because of one or two users, whether they're abusive or just maybe over-eager.

They want to understand how those calls are working, and to get better logging and analytics.

And, candidly, we have a lot of those powerful features at Cloudflare, but it's not as fun to have to go and set all of them up one by one and know which ones you need.

And so what we said was: how do we make this really easy?

How do we give you rate limiting? How do we give you analytics?

How do we give you logging? How do you put that in front of any third-party AI API without having to be an expert in how those things work?

Literally, you change an API call: instead of calling OpenAI's or GCP's AI endpoints, it calls AI Gateway.

And that can be in your Workers code.

It could be in code running on another serverless platform. It could be code you run locally.

It could be in any deployed application, right?

You can use AI Gateway from any of those. It routes through the Cloudflare network and applies the rate-limiting rules you define, protecting your resources, protecting your bills, and making sure you're not hitting your providers' rate limits.

So you can smooth things out across all your users.

And sometimes it's easy to trivialize dashboards, but I think it is really powerful to be able to go and see: okay, great, here are my traffic patterns for my backend API calls, and which ones are being used.

If you've got multiple models for different parts of your application, say text generation in one place and classification tasks elsewhere, you can protect those separately through AI Gateway.

So again: all the pieces you typically have to glue together to get observability, rate limiting, and security in front of your AI applications, fundamentally, that's what AI Gateway does.

And it just makes it really, really easy. So yeah, Ricky, I'll let you pick it up from here, but hopefully folks have a better understanding.

Yeah. I feel like, oh my gosh, I don't even need to show it now.

You did such a good job describing it.

And I will say: the worst time to find out how your app is getting used is when your bill shows up.

It's so much more fun to be able to look at a dashboard every day.

And so this is what AI Gateway looks like: you set it up, and then you're able to see your requests, your cached results, and your cost, which I think is also a super interesting part of it.

So you can see in real time how your costs are going.

And I feel almost silly showing the docs, but the proof of this is, as Matt said: if you're using OpenAI, for example, all you have to do is change this one line to use the base URL for the AI Gateway you create.

And to the point about the implementation: I set this up in 30 seconds. I just swapped out the base URL and suddenly got all these insights.
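With the OpenAI Node SDK, that one-line change is the baseURL option. The gateway URL format below follows the docs, with ACCOUNT_TAG and GATEWAY_NAME standing in for your own values:

```js
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  // The only change: point the SDK at your AI Gateway instead of api.openai.com.
  baseURL: 'https://gateway.ai.cloudflare.com/v1/ACCOUNT_TAG/GATEWAY_NAME/openai',
});
```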

And it's similar if you look at Hugging Face or Replicate, right?

So a really, really awesome implementation, and I'm excited to see what people pull out of these insights.

Yeah. And as you can probably see, or as you saw, we have a lot of built-in setups for popular providers: Replicate, OpenAI, Hugging Face.

What's really important, though, is that you can use this in front of any provider.

And if there's a particular AI or ML endpoint that you want to see built in and made a little bit easier, let us know. Reach out, ping us on Twitter or on the developer Discord.

This is just where we're getting started.

So if you're expecting to see a particular set of ML API providers built in and you wish it was one click, let us know.

It won't stop you getting started, obviously, but we always want that feedback so we can make it easier for you.

It'll make it easier for the next folks as well.

But yeah, that list of known providers is just the beginning.

And of course you can still put us in front of any other provider. Yeah.

Yeah. And I just got a question on another call about what's next for AI Gateway.

We touch on this some in the blog post. So many people I talk to are worried about sending their data to an LLM, and what happens to it.

We talk in the blog post about starting to do some of that data loss prevention work within the gateway, which I think opens up a whole new level: if you can trust that when users do something they're not supposed to, the gateway will catch it, you can give users a little more freedom and not have to lose sleep at night over what's getting leaked out into the world.

Yeah, no, very much.

I think the cool thing about everything we announced today is that it's in beta.

And I actually think that's a fun thing, because the best thing to take away from it is: we are just getting started.

We've all been working on this and talking to a ton of folks: people who are starting to play with this, people who have already been building things on Cloudflare, and people running into different kinds of challenges with other providers in some cases as well.

And so what's amazing, I think, is everything anyone here watching sees today across Workers AI, across Vectorize, across AI Gateway, and even WebGPU, which unfortunately we won't really have enough time to talk through.

But you should expect the amount of documentation to more than double over even the next month alone.

The same goes for the number of built-in models, particularly for more use cases and some of the more advanced models.

We're always considering what models to add, so expect that model catalog to continue to grow.

And at some point we expect to support BYO models as well.

Then there are capabilities like the ones Ricky mentioned around AI Gateway, both in terms of security and DLP-related features.

So: protecting organizations from worrying about corporate data leaking out through models and things like that, and how to protect and trace those use cases a little bit better.

So a lot of this is changing, both from things we've been hearing for a long time and obviously want to build, and a lot of it will also change as people give us more feedback.

And so again: the developer Discord.

discord.cloudflare.com is the best way to jump into that.

We're pretty responsive on Twitter with Cloudflare and Cloudflare Dev. I'll let you chase down Ricky's and my accounts if you want to ping us directly, but the Cloudflare account is the best way to get in touch.

Our community forums as well.

We pay attention to a lot of this stuff, even if we don't always reply.

We really try to pay attention to where people are running into friction versus where they want to be.

I was just talking to someone earlier today who was asking about more text embedding models for different kinds of use cases, and image embedding models as well.

So with the vector database, which is near and dear to my heart, as you can probably tell, you could do image embedding and then say: great, I have embeddings for these hundred thousand images that represent something in my catalog. Maybe they're product images, maybe I want to de-duplicate uploads from users, maybe it's some form of content moderation, things like that as well.

I can turn that image corpus into embeddings and use vector search to do comparisons against it, without having to run it through the model every time.

It's going to be much faster with the vector database.

So, just because you see a really powerful but small set of models today, know that a lot more will come in the future as well.

I think we're really excited to keep fine-tuning that as well.

Again: embedding models, more variety. And we're also trying to pick, and I think we were talking about this internally with a couple of engineers, models that are best in class.

I think that's the cool part.

It is fun when there are a billion models to experiment with, but it's also nice when you can look at the model catalog we have and know that no matter which model you choose, you'll get a pretty good basis to build an application on, and anything from there is a bit of fine-tuning.

You're not having to sort through 400 different text generation models, or image classification models, to try to find the right one.

There's a lot of really powerful stuff.

This whole space is changing a lot, and our goal is to help curate it a little bit more, so there are a lot of tasks you can do, but you're not stuck trying to decide between 50 different models for image classification: which should I be using? What's the performance like?

Is it going to be cost-effective?

Is it going to be fast enough? We try to take care of that as much as people want us to.

Yep. And Matt, I think our time is up.

Your computer made it, so I'm very excited about that. Hope folks come join us on Discord and Twitter, and we can't wait to see what everybody builds with this.
