Cloudflare TV

Developer Focus

Presented by Alexander Braunreuther, Andreas Kaiser, Kassian Rosner Wren
Originally aired on 

Cloudflare developer advocate Kassian Rosner Wren speaks with Alexander Braunreuther and Andreas Kaiser, co-founders of Ninetailed.io, on how they are using Cloudflare Workers.

Ninetailed.io is a next-gen personalization platform for B2B companies.

English
Interviews

Transcript (Beta)

Welcome to Developer Focus. I'm here with the founders of Ninetailed.io, and we're going to ask them some questions about, you know, what it's like in the space they're in and how they're using Cloudflare Workers and Workers KV.

So first, why don't you all introduce yourselves?

Okay, well, first of all, hi from Berlin.

My name is Andy Kaiser and, to put it in a sentence, I'm like a marketing-product techie. This is my co-founder, Alex.

Yeah, hi Kass, nice to meet you. I'm Alex, also from Berlin.

I'm a software architect and co-founder of Ninetailed.io. That's fantastic.

So tell us about Ninetailed.io. What problems are you solving?

Okay, the first thing is Ninetailed.io tries to solve not just one problem. It really tries to meet the expectations of today's B2B buyers, because right now a whole new generation is coming into the B2B buying process and decision making.

And yeah, the numbers don't lie.

They say around 73% of today's B2B buyers are millennials.

And what does it mean that this high percentage are millennials? First of all, they are digital-first, they are education-driven, and, most importantly, they really expect individual experiences when buying.

This expectation really comes from the B2C side, from companies like Amazon, which started this whole kind of personalization in the buying process.

And I think all of us, even if we're not millennials, expect this kind of individual experience.

And in the current situation especially, it's really all about understanding the product or the services you're going to buy, having this educational part.

And the problem we have seen is that there are still a lot of B2B businesses which use these old sales strategies.

And that's really the problem or the challenge we are trying to solve.

That's awesome. So what got you started in B2B personalization?

What made you think, wow, this is the field, this is what we need to tackle?

Well, we really started with the idea of personalization because we had experience with landing page personalization.

We built a product which, integrated with Google Ads, generated thousands of static HTML landing pages just from an Excel list.

And that was like our first iteration. After that, we really spoke with a lot of people.

We spoke with different companies, with agencies.

And in this research stage, what we noticed is that there were a lot of tech challenges related to personalization.

There were a lot of data challenges also, like everyone wants to use data.

Everyone loves data, but nobody really integrates the data.

And we also saw a lot of issues with performance and with scaling these kinds of solutions.

And it's interesting because, like I said, I'm more a product marketing person, but I started my career as a designer and a web developer.

And understanding what was happening on the page was always something important to me.

But then some years ago, when Google Tag Manager appeared, I started putting all possible scripts into it.

And I remember last year, I was complaining all the time that Andy is making the site slower.

This is the kind of situation we have seen in a lot of companies: okay, marketeers want the data, they want you to put third-party pixels everywhere.

And then developers really invest a lot of time, a lot of effort, in optimizing just 100 milliseconds.

Okay, 150 milliseconds, that's not so much. Awesome.

So what about serverless computing seemed like a good fit for your company's tech needs?

What made you think, okay, serverless is going to be our way to go, at least for some tasks, not all of them?

Yeah, I have to say, from my side, I've been a very early adopter of serverless all in all.

So I never liked this approach of renting servers.

Having to rent an EC2 instance just to deploy a web server was always annoying to me, because I wanted to build products and not focus on how to host a database or how to keep a server running or whatever.

I want to deploy something and use it and make it work. And I believe in 2014, when AWS Lambda came out, I met some people at a meetup and I was like, whoa, that's so super cool.

It was totally overhyped and everybody was saying, yeah, you're young, right?

Whatever. Now it's completely common. And Workers, I think, came out two or three years ago.

I moved onto it at the end of 2018.

And I loved it immediately, because I was using Google Cloud Functions before a lot.

But the cold start times were always a problem, and that kind of stuff. And so I came to Workers.

But as I said, I always was an early adopter. So I'm using serverless, and not only serverless functions: managed databases, managed queues, managed file storage, everything in the serverless stack.

So just to be faster.

And that's what I think is very important for an early-stage startup: to be faster than the other ones.

And not to sit around thinking, hey, how could I put together my Kubernetes cluster or something like that.

So what drew you to Cloudflare Workers specifically?

Yeah, that's interesting, because when we started to analyze this topic of personalization for B2B, what we really had in mind was that we wanted it in real time.

And we wanted it in real time for everyone, even someone in the last town of the world.

It was really about having a global solution.

And having a global solution and speaking about real time, it's all about latency.

And that's, we started really also to think about different solutions.

We tested like having different CDPs from different data centers.

And in the end, it was like, no, we are just trying to reinvent the wheel.

And then this research and this finding came together with Alex's experience around serverless.

And I think we have been users of Cloudflare for years.

And yeah, it was like, we found it.

And both of us were like, whoa, we have to do it with Workers. So basically, the technical side was very interesting to me.

I don't know if you know Gatsby or Next.js, for example.

So, for example, there I first tried to build in A/B testing.

That was before we built the personalization engine. It was just on our website, where I tried to build in A/B testing with Google Optimize, for example.

And it was super, super hard, because you have this challenge of having a statically rendered site and also client-side rendering, which always gave me these flickers, or it didn't load at all, and all that stuff.

And it was always a challenge for me. And then I found, somewhere in the depths of Gatsby's GitHub issues, a ticket from somebody who said, hey, you could do something edge-side.

He called it ESI, edge-side includes. I think it comes from Varnish Cache, where he had tried to use it.

And he said, hey, we could think about workers. And that was everything I read about it.

And then I made a proof of concept where I included links in the Gatsby code, pumped it through a Worker, and used this HTMLRewriter.

And what was insane for me was that I received HTML which already had the right personalization or the right A/B tests included.

And I didn't have to use any pixel on the client-side.

It was like, oh my God, it simply worked. And that was great.
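That proof of concept can be sketched in plain JavaScript. A real Worker would use Cloudflare's streaming HTMLRewriter API on the response; this self-contained version just swaps placeholder elements in a string, so the end-to-end idea, picking a variant at the edge and serving HTML that already contains it, stays visible. The placeholder element, its data-slot attribute, and the variant copy are all hypothetical, not taken from Ninetailed's actual implementation.

```javascript
// Minimal sketch of edge-side personalization by HTML rewriting.
// In a real Cloudflare Worker you would use the streaming HTMLRewriter API;
// here a string replace stands in for it so the idea is runnable anywhere.

// Pick the variant for this visitor. A simple deterministic hash ensures the
// same visitor always sees the same variant (no flicker, no client pixel).
function chooseVariant(visitorId, variants) {
  let hash = 0;
  for (const ch of visitorId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return variants[hash % variants.length];
}

// Replace every <personalize data-slot="..."> placeholder with the content
// chosen for that slot, before the HTML ever reaches the browser.
function rewriteHtml(html, slots) {
  return html.replace(
    /<personalize data-slot="([^"]+)"><\/personalize>/g,
    (match, slot) => slots[slot] ?? match
  );
}

const html = '<h1><personalize data-slot="hero"></personalize></h1>';
const variant = chooseVariant("visitor-42", ["a", "b"]);
const slots = { hero: variant === "a" ? "Ship faster" : "Scale further" };
console.log(rewriteHtml(html, slots));
```

Because the substitution happens before the response is sent, the browser receives finished HTML, which is exactly why no client-side flicker occurs.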

So that was the point, when we got to the personalization thing, when we said, OK, that's the technology we'll build it on.

And it's way better than anything that exists right now.

That's awesome to hear. Yes, I can definitely see how, for latency, Workers would work really well, because we've got over 200 data centers all over the world.

And your Workers run in every one of them. So no matter where your user is, they'll reach the worker at the closest location.

And then, yeah, the HTML rewriting capabilities of workers, I'm constantly amazed with the new demos that they come out with on the docs.

I'm like, well, we could do that. It's really cool to see. So I'm really glad to hear that.

So what problems are you solving with workers? What are you tackling that the workers are helping you solve?

Yeah, basically, as I said, one of the most important points is the HTML rewriting.

That's a big part of our software.

But we are using workers as a reverse proxy. So we are basically having Cloudflare as a reverse proxy.

And we are running a reverse proxy on a reverse proxy, because we found a feature in the Workers docs called resolveOverride.

So you can control the host, and we have built a setup with the CNAME setup feature, which we are going to offer to our clients.

So our customers use a CNAME, and we route it to their origin A record. So we proxy their whole domain and inject all our stuff.
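As a rough sketch, assuming a hypothetical customer-to-origin table, the routing core of such a proxy might look like the following. In a real Worker the rewritten hostname would be passed to fetch via the cf.resolveOverride option so the request is sent on to the customer's origin; here only the URL rewriting, the testable part, is shown.

```javascript
// Sketch of the reverse-proxy routing a Worker could do. In a real Worker
// you would call fetch(request, { cf: { resolveOverride: origin } }) so the
// request resolves to the customer's origin; this sketch just computes the
// origin URL. The host-to-origin table below is entirely hypothetical.

const originByHost = {
  "www.customer.example": "origin.customer.example",
};

function resolveOrigin(requestUrl) {
  const url = new URL(requestUrl);
  const origin = originByHost[url.hostname];
  if (!origin) return null; // unknown host: nothing to proxy

  // Keep the path and query intact and swap only the hostname,
  // mirroring what resolveOverride effectively does at the DNS level.
  const rewritten = new URL(requestUrl);
  rewritten.hostname = origin;
  return rewritten.toString();
}

console.log(resolveOrigin("https://www.customer.example/pricing?ref=ad"));
```

Keeping the original Host while redirecting resolution is what lets the proxy sit invisibly in front of the customer's whole domain.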

And that gives us a lot of opportunities, which are quite interesting, which you never can do as a marketing technology, which is based on the client-side pixels.

Because the first thing is, our technology is insanely fast that way.

So we add an overhead of only 50 to 100 milliseconds doing the whole personalization, loading the profile of the customer, and so on.

So we are doing a complete thing in nearly no time. Another thing which is great is we can inject all our scripts.

So, like, we have analytics scripts, where the events and so on are pumped into our backend data store.

And, for example, a lot of browsers are blocking third-party scripts right now.

Our software injects scripts with hashes and so on, so a browser cannot know what this script is.

That's a big opportunity. And it's also way faster, because you don't have to do all these extra SSL handshakes and so on.

Yeah, I think that's one of the most important parts.

In the end, it's really about this latency challenge, and about getting the right data, having the right data for every user.

So we can show the user the right content he's looking for.

So, like you said, we work like a reverse proxy.

I think it's also funny to say, okay, we are back in the 90s or 80s.

A reverse proxy, like we're going backwards in Internet time. But no, it's really about having the origin, having the edge, and what we call the near edge, which in the end is the user agent, the browser of the user.

And we work on those three levels, or tiers, to optimize the data handling, to optimize all the information.

And in the end, we really want to improve the customer experience and make life easier for marketeers and developers, because that's also what we try to do.

It's to bring developers and marketeers together so that they understand their customers.

That's awesome. So you also use Workers KV. Tell me about how that impacts your infrastructure and how you use it.

Oh, that was also, I would say, it's somewhat of a breakthrough.

So I think we implemented it two months ago. So as I said, we're acting like a reverse proxy.

So we somewhere have to store, for example, which IP or which domain we have to proxy, or we have our segments and our experiences, which we put into the HTML.

And in the beginning, we simply put it into a SQL database somewhere in a data center in Frankfurt, which was okay for our testing environment, because we were going through the worker in Frankfurt and accessing it there, and that was okay.

But yeah, I think two months ago we acquired our first international customer, from New York, and then we tested it out, and the performance was okay for them, but we wanted to make it better.

And it was super easy to just put our data into KV and have it propagated worldwide.

So it's very interesting because the caching works pretty much like a normal cache, just you don't have to propagate it on your own.

So that's pretty good for us because it simplifies everything a lot.
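A minimal sketch of that kind of lookup, assuming a hypothetical CONFIG KV binding and key scheme: a KV namespace exposes async get and put, so an in-memory stub with the same interface makes the logic runnable outside a Worker.

```javascript
// Sketch of reading per-customer config through a Workers-KV-style store
// instead of a single-region SQL database. The binding name (CONFIG) and the
// "config:<hostname>" key scheme are assumptions for illustration only.

// In-memory stub mirroring the async get/put shape of a KV namespace binding.
function makeKvStub() {
  const store = new Map();
  return {
    async get(key, type) {
      const value = store.get(key);
      if (value === undefined) return null; // KV returns null for missing keys
      return type === "json" ? JSON.parse(value) : value;
    },
    async put(key, value) {
      store.set(key, typeof value === "string" ? value : JSON.stringify(value));
    },
  };
}

// Look up which origin to proxy for a hostname, as a Worker might per request.
async function getCustomerConfig(kv, hostname) {
  return kv.get(`config:${hostname}`, "json");
}

async function demo() {
  const CONFIG = makeKvStub();
  await CONFIG.put("config:www.customer.example", { origin: "origin.customer.example" });
  const cfg = await getCustomerConfig(CONFIG, "www.customer.example");
  console.log(cfg.origin);
}
demo();
```

Because KV replicates the values to every Cloudflare location, the same read is a local cache hit wherever the Worker runs, which is the point being made above.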

And what I, as the less technical one, find really wonderful is that it's really distributed around the world.

So it's really about, we have customers from the US, we have customers from Southern Europe, Northern Europe, and really the information is there instantly.

And that's really, I think, the greatness of Workers and of KV.

Also, what we are thinking about for the long term, and I know it's not possible right now, but we hope it will be.

So there will be some more query capability on KV.

So we think about storing event data there, so that the profiles of our users are exactly there.

The query time to KV, as we tested it, is like 10 milliseconds.

To our CDPs it's a bit longer, and then it could be way easier than it is right now.

Because the way our system scales right now, we have to deploy our backend software to multiple data centers so that it is accessible very fast.

And, as I said in the beginning, we like to use serverless software so we can move much faster.

We are not interested in doing that kind of stuff.

Cool. Well, we have time for one more question. And so my question is, where do you see serverless heading in the future?

Do you see it taking over everything?

Do you see it staying as something that solves certain problems but not others?

What do you see happening with serverless in the future?

I think the first thing is really about what will happen in the future, or in the near future, related to edge computing, which is a topic where there are a lot of different interpretations of what edge computing really is.

At the end of the day, for example, with all this 5G, we will have really very, very low latency in the future.

But in the end, that's the latency to the edge, because from the edge to the origin there's still a higher latency.

And it's interesting how far we've come in how we built these systems. It used to be like, okay, we have all the servers in the office.

I'm not so old, but I've heard about it.

We had the servers in the office, then servers somewhere in a data center, then the cloud.

And I think it's really interesting how a lot of data processing, a lot of data handling will go to the edge.

Because we have this near edges, it could be the browser or a car or whatever, sending information to the edge.

There, a lot of the processing will happen, instead of pushing it all to the cloud and distributing it from there.

But in the end, I really think there will be, in the next couple of years, a lot of use cases around edge computing.

Because if you can have it on the edge, why do you want to replicate it everywhere?

That's, I think, where serverless kicks in. Because as a normal company, you could not build edge computing software that is distributed to so many data centers; it wouldn't make any sense to build it on your own.

And so in this area, for example, I see no other choice than to count on serverless, or at least on an infrastructure provider.

But sure, in the normal serverless space there are some old-school hosting providers which are still interesting.

But all in all, I think most of the workloads are able to be hosted on serverless.

But that's my opinion.

No, no, I think we really believe in it. Because we had this evolution over the last half year, really interacting with the technology on a daily basis, where you see all the benefits this technology brings, how it evolves, etc.

And how it solves your use cases. In the end, I think it's something which will be, let's say, the next wave or the next change.

And I think it will happen in the next couple of years, because there will be really fast adoption.

That's fantastic. All right.

Well, thank you so much, you two. I really appreciate it. And I'll get you in contact with product management about that TV option.

Thank you so much for your time today.

And I hope you have a lovely rest of your day. Okay. Yeah. Thank you for your time and see you soon.

Yeah. Bye. Bye-bye.

Like many other retailers in the industry, Falabella is in the midst of a digital transformation to evolve their business culture to maintain their competitive advantage and to better serve their customers.

Cloudflare was an important step towards not only accelerating their website properties, but also increasing their organization's operational efficiencies and agility.

The IT decision was also a business decision.

I mean, the faster we can deliver the data to our customers, the less loading time, the more we can improve our site.

And the business has internalized that. I mean, the business really understands that performance, that a second in the loading of a page, is a sale.

I mean, a loss in customer data is a loss of trust. So I think we are looking at better agility, better response time in terms of support, better operational capabilities.

Earlier, for a cache purge, it used to take around two hours.

Today, it takes around 20 milliseconds, 30 milliseconds to do a cache purge.

Home page loads faster. Your first view is much faster. It's fast. Cloudflare plays an important role in safeguarding customer information and improving the efficiencies of all of their web properties.

Cloudflare is a perfect illustration for me of how we can deliver value to our customers quickly.

The big challenge ahead is to start building the culture and building the foundations to allow teams, whoever they are, in five or ten years, to be able to do their work.

With customers like Falabella and over 10 million other domains that trust Cloudflare with their security and performance, we're making the Internet fast, secure, and reliable for everyone.

Cloudflare, helping build a better Internet. The release of Workers Sites makes it super easy to deploy static applications to Cloudflare Workers.

In this example, I'll use create-react-app to quickly deploy a React application to Cloudflare Workers.

To start, I'll run npx create-react-app, passing in the name of my project.

Here, I'll call it my-react-app. Once create-react-app has finished setting up my project, we can go into the folder and run wrangler init --site.

This will set up some sane defaults that we can use to get started deploying our React app.

wrangler.toml, which we'll get to in a second, represents the configuration for my project, and workers-site is the default code needed to run it on the Workers platform.

If you're interested, you can look in the workers-site folder to understand how it works.

But for now, we'll just use the default configuration.

For now, I'll open up wrangler.toml and paste in a couple configuration keys.

I'll need my Cloudflare account ID to indicate to Wrangler where I actually want to deploy my application.

So in the Cloudflare UI, I'll go to my account, go to workers, and on the sidebar, I'll scroll down and find my account ID here and copy it to my clipboard.

Back in my wrangler.toml, I'll paste in my account ID, and bucket is the location that my project will be built out to.

With create-react-app, this is the build folder. Once I've set those up, I'll save the file and run npm run build.
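Put together, the wrangler.toml for a setup like this might look roughly as follows; the project name, the account ID value, and the exact keys (Wrangler 1.x era, matching this walkthrough) are illustrative placeholders, not a definitive configuration.

```toml
# Hypothetical wrangler.toml for the create-react-app demo (Wrangler 1.x keys)
name = "my-react-app"
type = "webpack"
account_id = "0123456789abcdef0123456789abcdef"  # placeholder: copy yours from the dashboard
workers_dev = true                                # deploy to the workers.dev subdomain

[site]
bucket = "./build"            # create-react-app's build output folder
entry-point = "workers-site"  # default Worker code generated by `wrangler init --site`
```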

create-react-app will build my project in just a couple seconds, and once it's done, I'm ready to deploy my project to Cloudflare Workers.

I'll run wrangler publish, which will take my project, build it, and upload all of the static assets to Workers KV, as well as the necessary script to serve those assets from KV to my users.

Opening up my new project in the browser, you can see that my React app is available at my workers.dev domain, and with a couple of minutes and just a brief amount of config, we've deployed an application that's automatically cached on Cloudflare's servers, so it stays super fast.

If you're interested in learning more about Workers Sites, make sure to check out our docs, where we've added a new tutorial to go along with this video, as well as an entire new Workers Sites section to help you learn how to deploy other applications to Cloudflare Workers.