Cloudflare TV

Latest from Product and Engineering

Presented by Jen Taylor, Justin Raczak, Gurjinder Singh Batra
Originally aired on 

Join Cloudflare's Head of Product, Jen Taylor and other members of Cloudflare's engineering and product teams for a quick recap of everything that shipped in the last week. Covers both new features and enhancements on Cloudflare products and the technology under the hood.

English
Product

Transcript (Beta)

Hey, I'm Jen Taylor, Chief Product Officer at Cloudflare, and I'm excited to be doing another episode of Latest from Product and Engineering.

Sadly, Usman, our head of engineering, is unable to join us today.

So I'm going to try to hold both ends of our bargain in this conversation.

But I'm very excited today to be joined by the folks on our onboarding team and dig in a little bit on the work that you guys are doing.

Why don't you go ahead and introduce yourselves. Justin, go ahead. My name is Justin.

I'm on the onboarding and UI platform teams here at Cloudflare. Hey guys, I'm Gurjinder.

I am the engineering manager for onboarding team and UI platform at Cloudflare.

So it's interesting. You know, one of the things that we've talked about internally quite a bit is that calling this the onboarding team is maybe a little bit of a misnomer, in that a big part of what you guys really do is think about the connectivity between the marketing experience and the product experience itself.

And you're really kind of the glue in that connective tissue.

It's, you know, depends on how you think about onboarding.

And I think the point that you're getting at is there's a way to think about it very narrowly, which is congratulations, you've given us an email and a password, you are now onboarded to the platform.

But you know, the larger the product and the larger the platform, the more we can do to help a customer actually be onboarded.

And we tend to think about that broader definition.

Yeah, I think when I joined Cloudflare, that was the whole problem that we were trying to solve.

But onboarding was thought of very narrowly. And when I asked some basic questions, like, does anybody know how many people onboard every day?

We didn't have a number; we didn't have charts on it. And I think over the course of these almost two years now, we have been working on tracking all of that, presenting it in a meaningful way, figuring out our stats, and building metrics around it.

All that work the onboarding team did gave us more visibility into what to ship, what is working for our customers, and what is not.

And I think that has helped define Justin's roadmap for this onboarding product as a whole.

Yeah. No, most definitely. I mean, the two of you guys coming together really, I think, have helped us as an organization make a step function improvement in the way that we really think about and manage this aspect of our business.

You kind of took us from a place where we were thinking about and having feelings about how we were doing to actually being able to quantify that.

When you started that process, though, where did you start? I mean, how did you even start kind of unpacking that challenge?

So I think, for me, one of the first order of things I did after joining late 2019 was start to kind of dig around in the data a little bit and see what I could figure out on my own.

And conversion funnels is a very product manager thing to look at.

So I started to try to construct some, and I think, as Gurjinder pointed out, that's where I started to find some of the, you know, hey, customers who go through this flow, how many actually make it through?

Well, I don't think we can see over there, so we're not entirely sure, but maybe it's roughly this many.

And so, obviously, the most performant organizations think about the impact of the things that they do, and so measurement is super, super critical.

You know, there are kind of two ways to think about the work. One way is just checking boxes.

The other way is, did we actually have the outcome that we were after?

And I think Cloudflare is definitely oriented toward the latter there, right?

We'd rather do things that drive the impact. And so we started with parts of the experience that our own team owns, so that kind of core onboarding flow when a new customer comes to Cloudflare and they have a site that they want to protect and accelerate.

So, okay, well, let's get our own flow well instrumented here, and we kind of viewed that as the beachhead, you know?

So, thinking in the kind of lead by example model, let's do a really, really great job of instrumenting our parts of the experience and building great visibility through charts and measurements through what we have, share that with other teams, and kind of, hey, look how great this is.

I can answer all the questions you have about our experience.

We want to give you the tools to kind of do the same thing, and so we kind of started with a narrow slice.

And now I think we're the kind of de facto champions of making sure that makes it out to the rest of the organization, empowering everyone to be able to say, hey, are we having the impact that we're after with the work that we're doing?

Yeah, and I think one big concern about making any changes to this onboarding flow is that this flow has existed forever; people have been used to it forever.

And anything that we introduce into that flow brings friction. How do you know that friction is working for customers, or whether it's having a negative impact on the business as a whole?

And without tracking and without all those instrumentation, it was impossible to know.

And I think we were literally throwing darts; we didn't know whether this product was going to be useful.

Like if somebody like Matthew comes and asks us a question like, hey guys, you introduced this, how is this working out for our customers?

Like we had no way to tell that.

And I think that's where having the instrumentation, building that whole pipeline was so crucial for us to define our success, to even know like we are in the right direction or to maneuver in case there's something required to be done differently.

I think it totally helped us in that way. Well, you know, it's interesting; it's kind of a classic example. Change in and of itself is hard, but just because it's hard doesn't necessarily mean that it's bad.

But in the absence of actually having data one way or another to say this is good or this is bad, this is having a positive or negative impact, like it's just impossible to tell.

Absolutely. So where'd you start? In measuring our experience? Yeah, so, again, for anyone who's gone through the Cloudflare onboarding experience, I think one of its greatest attributes is that it is very short.

And I think that's usually one of the principles you try to have for a signup or registration experience, ask only the questions that you actually need and just get the customer on to what they're doing.

So I think, to our credit, or to the credit of teams past, I should say, I think we've done a great job with that.

It is just email and password and you're in. But the platform itself is rather large.

And so through the onboarding experience, there are different outcomes that customers are after.

Some are coming to us to onboard a site onto Cloudflare for infrastructure.

Some are just going to Cloudflare.com and they have something else in mind.

They're trying to protect internal applications or remote workers using Cloudflare for Teams, or they're ready to build a serverless application with workers.

And so those are the outcomes that we want to be able to measure.

And so understanding where those goalposts are. Hey, for some customers, it's that they've brought monkeysandbananas.com onto Cloudflare successfully, versus a customer who's actually made it to deploying their first Workers application, or customers who have set up Gateway or Access on the Teams side.

And so we started with that core flow.

And so does a customer who comes to us interested in onboarding a site onto the platform, are they able to set up their DNS records?

Are they able to point their name servers to Cloudflare?

Do they actually make it to the dashboard where they are done so that the next time they come back, we're protecting and we're accelerating what they've brought on?

And then, for anyone that's worked with conversion funnels, you know the magic of this: when you visualize it, you get to see immediately where we lose people, here and here.

So there's something confusing about this experience, or something that's not matching customers' expectations over here.

And that helps the team focus on where we should spend our time thinking of experiments that we should run on the flow.

Or it tells us, you know, hey, we should go try to talk to customers who might have hit this and see what their experience was, see what they were expecting versus what they actually ran into and why that led to them ultimately not finding success with us, which, of course, for our team is what we're particularly focused on.
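
The funnel analysis Justin describes can be sketched in a few lines. This is a hypothetical illustration, not Sparrow's actual code; the step names and event shape are invented: count distinct users reaching each step of the onboarding flow so the drop-off between steps is visible at a glance.

```javascript
// Given a stream of (userId, step) events, count distinct users who reached
// each step of a funnel, plus the conversion rate from the previous step.
function funnel(events, steps) {
  const reached = steps.map(() => new Set());
  for (const { userId, step } of events) {
    const i = steps.indexOf(step);
    if (i !== -1) reached[i].add(userId);
  }
  return steps.map((step, i) => ({
    step,
    users: reached[i].size,
    // conversion rate relative to the previous step (1 for the first step)
    rate: i === 0 ? 1 : reached[i].size / (reached[i - 1].size || 1),
  }));
}

const report = funnel(
  [
    { userId: "u1", step: "signup" },
    { userId: "u2", step: "signup" },
    { userId: "u1", step: "add_site" },
    { userId: "u1", step: "update_nameservers" },
  ],
  ["signup", "add_site", "update_nameservers"]
);
// report[1] -> { step: "add_site", users: 1, rate: 0.5 }
```

Visualizing `rate` per step is exactly what surfaces the "we lose people here and here" insight.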

So Gurjinder, you know, Justin is like, let's go. I mean, what tools do you reach for?

I mean, do you pull in a third party tool? Do you, you know, I mean, how do you tackle this?

Yeah, initially, it was a lot of clutter.

And being a security company ourselves, we want to reduce the clutter, because the more the clutter, the more the chances of something going bad.

So we came up with this whole cool framework, we call it Sparrow.

It's our internal tool for tracking.

So on our dashboard side, on the UI side, we just have integration with Sparrow, which is our internal library.

It has the internal integration with our security scrubber.

So none of the information that we call PII, or sensitive customer information, makes it through that pipeline at all.

So we have the scrubbers in that pipeline right over there.
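
A privacy scrubber of the kind described here could look roughly like the sketch below. The key names and the email pattern are assumptions for illustration, not Sparrow's actual rules: drop properties that are known secrets outright, and redact values that look like PII before the event leaves the pipeline.

```javascript
// Hypothetical scrubber: blocked keys are dropped, email-like values redacted.
const BLOCKED_KEYS = new Set(["email", "password", "token", "certificate"]);
const EMAIL_RE = /[^\s@]+@[^\s@]+\.[^\s@]+/;

function scrub(event) {
  const clean = { name: event.name, properties: {} };
  for (const [key, value] of Object.entries(event.properties || {})) {
    if (BLOCKED_KEYS.has(key.toLowerCase())) continue; // drop secrets outright
    clean.properties[key] =
      typeof value === "string" && EMAIL_RE.test(value)
        ? "[redacted]" // redact anything that looks like an email address
        : value;
  }
  return clean;
}
```

Running every event through `scrub` before it reaches any third-party destination is what guarantees the "nothing sensitive makes it through" property.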

So the cool part of this whole pipeline was how we could have one library used on the UI side to channel events through this backend pipeline, and utilize the pipeline itself as an orchestration layer, sending the events out to different destinations.

Like our marketing guys, they are heavily dependent on Google Analytics.

And for Justin right now, he wants to use Amplitude, which gives him more insight into how the product is doing as a whole.

So instead of having the dashboard talk to Google Analytics directly, we have this pipeline that we built on the backend to orchestrate the data coming from our dashboard out to GA, to Amplitude, or, say, to any other tool required by our PMs or other teams.

We could totally have that integration.

And yeah, the data flows seamlessly over those destinations.
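
The fan-out orchestration described here can be sketched as a tiny dispatcher. Destination names and the registration API are invented for illustration; real senders would be HTTP calls to GA, Amplitude, and so on, stubbed here as plain functions.

```javascript
// One pipeline receives each event and fans it out to every registered
// destination; a failure in one destination must not block the others.
function createPipeline() {
  const destinations = new Map();
  return {
    addDestination(name, send) {
      destinations.set(name, send);
    },
    dispatch(event) {
      for (const [name, send] of destinations) {
        try {
          send(event);
        } catch (err) {
          console.error(`send to ${name} failed`, err);
        }
      }
    },
  };
}
```

Adding a new analytics tool then becomes a single `addDestination` call on the backend, with no change to the dashboard client at all.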

And along the way, we hit bigger snags. One big snag was, hey, we might be receiving more data than we are sending out to our backend systems.

We had no way to prove that. Like, hey, is this pipeline working as expected?

Say it's receiving, for example, 100,000 events every day; it might be sending only 80,000 events to Amplitude.

Like, how do we know that this whole thing matches?

Are we getting the expected amount as the input, and giving the expected amount as the output?

So along the way, we made this a little bit smarter by starting to log the events as raw data in our internal systems before we send them out to those third-party analytics tools.

And we kind of have a debug view where our developers can see what's moving through this pipeline end-to-end and make a decision about whether something is wrong or not.

That has helped us reduce so many bugs in this whole pipeline, helped us to make it more durable, more reliable.
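
The reconciliation check behind "100,000 in, how many out?" can be sketched with a pair of counters. This is a toy illustration with invented names, not the real pipeline's bookkeeping: record every raw event on the way in, count what each destination accepts on the way out, and flag destinations that lag.

```javascript
// Count events entering the pipeline and events delivered per destination,
// so input and output can be compared instead of guessed at.
function createAudit() {
  const counts = { in: 0, out: {} };
  return {
    recordIn() {
      counts.in += 1;
    },
    recordOut(destination) {
      counts.out[destination] = (counts.out[destination] || 0) + 1;
    },
    // destinations whose delivered count is below the input count
    lagging() {
      return Object.entries(counts.out)
        .filter(([, n]) => n < counts.in)
        .map(([name]) => name);
    },
  };
}
```

With this in place, "Amplitude only got 80,000 of 100,000" becomes a visible, provable fact rather than a suspicion.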

And at one point, we were only able to use this pipeline in our Cloudflare dashboard. People on different teams, like our www.cloudflare.com site, could not adopt it, because this whole library was coupled to our dashboard.

So we moved it out eventually.

And we also made tweaks for our marketing people. They want to know about their spending on ads, for example.

They're spending on Google Ads.

And of course, people click on those Google Ads, and sometimes there's a conversion that gets us a customer.

We never supported all of that in the pipeline, and it was not working as intended.

And we spent a lot of time understanding, what does this Google Tag Manager do?

What do the UTM properties do?

All that stuff. And we integrated it into our interface, such that this one interface can be used by any of our clients, be it our dashboard for Teams, www.cloudflare.com, or the Cloudflare dashboard itself.

Everybody can use the same library, which ingests the data into the same destinations to gather it in one place.

And I think that's a huge win. This is a great success story: a project we started internally to solve problems for the onboarding team, and it evolved to solve problems across Cloudflare, where people are dealing with many different things we never had a use case for.

You mentioned Sparrow just a minute ago.

And I'm kind of curious, when you guys were like, hey, all these JavaScript snippets and cookies we're taking from these third-party measurement services.

One, there are too many of them. Two, they're kind of creepy. Three, they create security vulnerabilities.

Four, they slow the pages down.

That's a lot of problems to solve. Let's just step back for a minute and talk about the origin of Sparrow.

Where did you guys start? What tools were you using when you were thinking about building it?

How did you approach that problem? Yeah, so I think the main motivation for building Sparrow was to reduce the clutter.

And we had some incidents where we were using some third party libraries in our dashboard.

If those libraries get hacked, our customers might get compromised.

And we are a security company; we cannot let that happen. I think those were the motivations for getting off the model where we have third-party libraries in our Cloudflare dashboard.

And rather have all that processing done on our backend where we authenticate the traffic.

We know it's a Cloudflare dashboard traffic.

We understand the client and go from there. And this whole pipeline is built on Workers, which is another tool that we are dogfooding, because of the scale of events we ingest across the globe; so many customers use it.

We have probably tens of thousands of new signups every day.

Hundreds of thousands of customers log in every day. Everything that customers do generates an event.

And we want it to be scalable. We want it to be reliable.

We want it to be fast, so that dashboard performance is not impacted.

And that's where a cool product like Workers came in handy. It solved all those problems.

Every time there's a new event, we spin up a Worker near the colo, the data center closest to where our customers are logged in.

And that gave us a lot of the advantages Workers was intended to provide, and it fit pretty well into our model of being more reliable.

Capturing all those events and not dropping them, so that we know that whatever customers are doing on the Cloudflare dashboard is being captured, and we understand what's going on.

So there are no dropped events. Yeah, all that stuff Workers is being used for right now, customers like the Workers product for a reason.
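
A Workers-style ingest endpoint of the kind described could look roughly like the sketch below. It is written as a plain handler so it runs anywhere; in a real Worker the same logic would live inside `export default { async fetch(request) { ... } }`. The header name, client identifier, and event shape are all assumptions, not Sparrow's actual API.

```javascript
// Hypothetical ingest handler: authenticate the client, validate the event,
// then enqueue it for the fan-out pipeline rather than dropping it silently.
function handleIngest(request, enqueue) {
  // only accept traffic we recognize as coming from our own dashboard
  if (request.headers["x-sparrow-client"] !== "dashboard") {
    return { status: 403, body: "unknown client" };
  }
  const event = request.body;
  if (!event || typeof event.name !== "string") {
    return { status: 400, body: "malformed event" };
  }
  enqueue(event); // hand off to the backend fan-out pipeline
  return { status: 202, body: "accepted" };
}
```

Returning `202 Accepted` quickly and doing the fan-out asynchronously is one way to keep the dashboard's perceived performance unaffected by analytics.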

And we piggybacked on it, dogfooded it, and made sure we took advantage of all of that as a company as well.

Yeah, I mean, that's one of the things I really love about the work that we do at Cloudflare is, you know, we build and use so much of our own technology and the delivery of our service that we really kind of kick the tires.

And I think workers and the Sparrow example is a fantastic one.

You know, frankly, you know, workers for event logging and management is something that we've actually seen a bunch of different customers, you know, attempt and I think Sparrow is a great and really robust example.

But I want to come back then. Okay, so now you've got Sparrow up, and you've got it to a place where it can pass these events along.

You know, Justin, I'm kind of curious of two things. One is how do you decide which events to track and how do you decide which systems need to get those events?

I love this question. There's no right answer to it. So everything from here forward is kind of our philosophy that we landed on.

And even as a team, while we were making a lot of these architecture decisions in the first half of last year, we were having debates internally and attending webinars where other folks were having the same debates.

And there are kind of two primary schools of thought on what to track at a meta level.

And the first is: track everything, you never know when you're going to need it, which I think is the very Google-y, Facebook-ad-driven approach to it.

And then the other is to kind of be more purposeful.

And we ended up going in that direction because, again, for us, measurement is about making sure that we're having the outcomes that we want.

And that's usually particularly for our team.

You know, we're so dashboard experience focused.

We're usually trying to do things to the dashboard that make the customer's experience better.

And so we have pretty pointed questions that we're asking.

And so in that kind of spirit of preserving privacy wherever possible, we only want to track the things that are relevant for us to answer those questions.

And so, for example, below the event level, we're not interested in which particular user is doing which particular thing.

So we don't track any of that data. We are interested in whether a customer who starts this thing is successful in getting to the other thing they were after, using the experience that we've built.

I don't need to know if it was Jen Taylor or Gurjinder or one of Justin's dogs; it's not relevant to us.

And so we did take that approach of being purposeful in our tracking.

Again, lively debate in the industry. I think you'll find people split right down the middle.

Vendors kind of have to decide too; they've got to put content out there, right?

Your customers want to know what should we track?

And so everyone's kind of having to decide where they come down in that debate.

And it does add a little bit of extra work for you. Adding in a library that just tracks everything means you do it once and it's done.

But the cost of that, again, is you have very little control over what it is that you're tracking.

You kind of have to start from a block list rather than explicitly saying what it is you're trying to answer.

And then the data that you've captured can be a little bit unwieldy, again, if you don't know what's in there or how it's named, because it's auto-named and auto-cataloged.

The name of the game to me for this is about empowering everyone in the company to be able to go answer these questions with data.

Most of us have some part of our organization that has a powerhouse of analysts there.

But those really, I think, are best for the really hard challenges that have to draw on seven or eight different data sources.

That's what they're great at and that's what we should let them do.

But for some of these more straightforward questions, we want to make it so anyone in any part of the company can say, I wonder how this thing is performing or I wonder what the customer experience looks like here.

And they can just go in and answer that question for themselves.

So that's what I think is very powerful about what we call explicit versus implicit tracking.

The data is clean. It's clear. It's organized around particular questions that we were trying to answer when we implemented the measurement in the first place.
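
The explicit-tracking approach can be sketched as a declared event registry. The event names and property lists below are invented examples, not Cloudflare's real schema: every event must be declared up front with the exact properties it may carry, so nothing unplanned leaves the product.

```javascript
// Hypothetical event registry: only declared events with declared
// properties are allowed through; everything else is rejected or dropped.
const SCHEMA = {
  "onboarding.site_added": ["plan", "step"],
  "onboarding.nameservers_updated": ["step"],
};

function track(name, properties = {}) {
  const allowed = SCHEMA[name];
  if (!allowed) {
    throw new Error(`undeclared event: ${name}`);
  }
  const payload = {};
  for (const key of allowed) {
    if (key in properties) payload[key] = properties[key];
  }
  // undeclared properties are silently dropped rather than shipped
  return { name, properties: payload };
}
```

Because the schema doubles as documentation, anyone in the company can read it to see exactly what is measured and why, which is the empowerment point made above.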

Yeah, I think it's good for security as well. When you know what you're tracking, there's less chance something unintended goes through these pipelines and ends up in third-party tools.

Like Google Analytics, where it becomes increasingly difficult to know what is in there and then follow up with them to delete the data.

If we know what we are tracking, we have control over it; we know the very specific information that flows in.

We might have some kind of scrubbing tools on top of that.

Like, hey, accidents happen. A developer tries to do something and accidentally puts data in there, and we catch it.

When we open the floodgates, it becomes harder and harder to catch those kind of things.

And we deal with all kinds of stuff.

Like, if you look at our dashboard, we handle security tokens, we handle customer certificates, a lot of sensitive customer data. We don't want that to end up in our third-party tools.

We absolutely don't want that. So that's the reason I think having this explicit approach works: teams make an explicit choice about what they are tracking, what they want to track, and have control over it.

I think that works for us as a security company as well. I should emphasize, you know, we talked about how it was cool to build some of our own internal tooling.

But I think, again, given our security and privacy concerns as a company, right, we believe part of that mission of helping build a better Internet is preserving our customers' privacy, but also helping them preserve their customers' and their end users' privacy.

I think you'll see in the models for a lot of third party tools that we might have used, we basically have to pay for the features that would enable us to protect our customers' privacy and their customers' privacy.

And that felt a little wonky to us. Right. And so we thought, well, this is actually a benefit of going in this direction.

Like, this is something we want to embody as a company.

And so we can build this into our tools and we don't have to pay to improve our customers' experience again, because this is something that's so paramount to us.

And I think that's been really key. Yeah.

So, you know, say, for example, I mean, Gurjinder used the example of ad tracking and ad conversion.

You know, how are you guys collaborating with the rest of the organization on kind of defining what needs to get measured and sharing the information?

How does that flow work? Okay, I can take that. So, yeah, it was a really challenging problem for us to solve for.

Given that we don't use Google Analytics libraries, how do we even capture the tagging and attribution?

Like we kept hearing like, hey, this thing is not working.

Sparrow is not able to tackle that.

But the specifics, like what exactly was not working, were unknown to us as an organization.

And Google Analytics is very different. It's different than any other analytics tool.

It needs so much special handling. So we paired up with our friends on the marketing team who spent a lot of time poking around in Google Analytics.

They have a lot of connections and a partnership with Google as a whole, because we are their customers.

So our onboarding team engineers started collaborating with the folks from the marketing team.

And we started experimenting on why this was not working, debugging it together, looking at the data.

As I said earlier, we now have the capability to see end-to-end what data is coming in, how the data flows within our pipeline, and how the data goes out.

So that debugging helped our marketing people and us understand exactly what data flows through when it comes from Google Ads.

And how are we processing it? And how is it going up in Google Analytics?

And I think a lot of that research, based on the data and our experimentation, led us to tweak a few things.

One of the big things that we realized: we generate our own ID. But when the data comes from Google Ads, an ID has already been generated, because you first land on www.cloudflare.com, then you go to the dashboard and get converted as a customer. The IDs were mismatching, and that's how we were able to know, because now we were debugging.

We have the raw data in front of us. We know this is a problem because we can see it now.

And that's how we partner with them to solve problems one by one and trying to understand what exactly are our issues.

And then we checked the boxes off one by one and reached a level where now we have 100% confidence.

Like, hey, this is working as we intended. We are able to see the data in Google Analytics, as well as the GTM and UTM patterns that we needed.
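
The ID-mismatch fix described above can be sketched simply. All names here are invented for illustration: the marketing site generates an anonymous ID when the ad click lands, and the dashboard reuses that ID (carried on the signup URL) instead of minting its own, so the ad click and the conversion join up in analytics.

```javascript
// Reuse an anonymous id already present on the URL (e.g. forwarded from the
// marketing site); otherwise mint a fresh one for this visitor.
function landingId(url) {
  const params = new URL(url).searchParams;
  return params.get("anon_id") || `anon-${Math.random().toString(36).slice(2)}`;
}

const adClick = landingId(
  "https://www.cloudflare.com/?utm_source=google&utm_medium=cpc"
);
// the marketing site forwards its id when linking to the dashboard signup
const signup = landingId(
  `https://dash.cloudflare.com/sign-up?anon_id=${adClick}`
);
// adClick === signup, so the conversion attributes back to the ad click
```

Without the forwarding step, each property mints its own ID and the two halves of the journey can never be stitched together, which is exactly the mismatch the team found in the raw data.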

And I think that support from the marketing folks, and our partnership with those teams coming together to solve a common problem, really is a success story.

And I'd say we're kind of on chapter two here. To the original point, this kind of started as a project to plug a somewhat urgent and important need within product engineering.

And to the credit of this engineering team, it was kind of a side project to the core of what the onboarding team does.

This is the same team we're talking about that works on the customer onboarding experience that built this really powerful measurement platform.

I think there was a line we grew past, likely my miss, where we saw those inbound requests.

Hey, we needed to do this thing. We needed to do that thing. And we were kind of dealing with them ad hoc.

And then there was this moment where we realized we should probably be thinking about the marketing team as one of our customers.

I think we would see these requests coming in differently if this was a customer who's got great example use cases for us.

I don't think Cloudflare's marketing organization is too much different than most other tech companies.

So, okay, well, if we think about it that way, we would probably have a little bit of a different engagement model here.

Any PM and engineer would love to sit down with their customers and have them say, I wish it did this, and I wish it did that.

And to understand why they're asking for those things, because when we understand why we're building the functionality we're building, we can actually deliver the right solution, rather than just, hey, I need UTM tracking.

Hey, we need UTM tracking because it's this really critical part of making sure that we're spending our advertising dollars the right way, which anyone who works on the marketing side can appreciate the importance there.

And I think we are in that kind of chapter two right now.

And it's been a very fruitful shift in the way that we've been thinking about that work.

Yeah, I love the notion of the marketing team as an internal customer.

Because, again, it kind of comes back to what we were talking about a few minutes ago about sort of we use our own technology.

We kind of, quote unquote, dog food our own technology as we build it.

You know, a lot of what's happening here is that servicing our internal customers also gives us insight into ways this technology and these solutions could service broader customer needs.

I also really appreciate, Justin, the thing that you just highlighted.

Because I think for me, this is sort of quintessential, like, what we do as product managers: really thinking about what problem they're looking to solve and making sure we understand it. Because if somebody just sends you a ticket that says, add UTM, you're like, I don't know.

Like, I added it. Am I adding the right stuff? Am I adding the wrong stuff?

Did it not come through? Did it come through correctly? And stuff like that.

Back to that first point about measurement is about making sure you accomplish what you set out to accomplish.

And so when it was, here's the very specific request, without internalizing that, hey, it's to achieve this objective.

There was no really checking back to make sure, you know, yes, checkbox.

We did that. We did the thing. Have we accomplished the objective? Can the folks who are working on how do we advertise Cloudflare to the world, can they see what it is that they were after seeing?

And so, yeah, I think it's been an interesting ride.

Yeah, that's what I like about being in this job anyway, because with all those things, we're not boxed into just solving problems for ourselves.

Like, as a company, we start with something which might be a pet project.

And then we all realize the potential of how this can grow, and start collaborating with multiple stakeholders within the organization to take their feedback, and then go full-fledged on how every other part of Cloudflare can start consuming the same thing, start dogfooding the same thing, and be successful together.

I think that's the big message that I pass on to my candidates when I interview them for my teams.

It's like that's how Cloudflare culture is.

That's how we start something small, and it becomes big with the help of everybody else within the organization.

So one team. Yeah. So we just have a little over two minutes left.

Really quickly, first top thought off the top of your head when I ask this question, what comes next?

Again, I can go first, but I want to hear from Gurjinder also.

So in the spirit of empowering our internal teams and thinking about internal Cloudflare teams as a customer: today, again, this is a very engineering-driven project so far, and so a change is, hey, we need a new source, or we need a new destination, or we need to add some logic to our privacy scrubber to make sure we're not capturing certain things.

That's all done in code. So it goes through a process I'm sure most can empathize with.

We've got to prioritize. We've got to scope. We've got to get it onto the schedule, and we're usually pretty quick, but it's a non-zero amount of time from, hey, a thing needs to be done to the thing being done.

And so in that spirit of empowering everyone to kind of be involved in that process and get to what they're after more quickly, we're looking at how we wrap this into a UI, more like a traditional web application.

And so the internal customer in marketing says, we're bringing this thing in and the field is named a little bit wrong.

Instead of filing a ticket and waiting, we can give them the tools and say, just go fix it right here.

You can do the confirmation there, press the Save button, and now it'll be fixed in Google Analytics for you.

And, hey, that didn't actually matter to us, so we're not going to make changes in any of the other downstream systems that product engineering might use.

And so I'm pretty stoked about that. The team is all collaborating on this a bit, and I'm really excited to see some of the rubber hit the road on that.

Yeah, exactly. I think from the engineering side as well, there are so many things, because this was intended as our own internal project.

We made a lot of tweaks along the way for the things that work for us, which means we were very focused on our stuff, not the stuff that marketing might rely on, or how they perceive Google Analytics or want to make it work for them.

So for a lot of that stuff, we want to make a UI where people can customize; a team can go and define their own dimensions for Google Analytics, for example, and define the data that they want to flow through, log the data that they want to flow through.

All those things are next steps in our pipeline: to make it more user-friendly, to put it out there so that our internal customers can use it the way they intend to, rather than asking us all the time.

So I can't believe it. We've got seconds left.

Honestly, I could keep going on this for another hour. Fantastic conversation. Really appreciate you guys making the time.

I really appreciate what you've done with Sparrow, and I look forward to continuing to work with you on it.

Thanks for having us, Jen.

Thanks. Have a great weekend. Thanks. Bye.