💻 Using React and Serverless to Build Interactive Sites: How Dig Delivers Fresh Food and Fast Sites
Presented by: Jen Vaccaro, Kayla Geigerman, Sue Malomo, Matt Weinberg
Originally aired on April 12, 2021 @ 12:30 PM - 1:00 PM EDT
Join us for a conversation on how to spin up interactive websites with better developer experience by using React and serverless technology.
We will explore how the digital agency Happy Cog delivered a big win for their client, Dig, a large restaurant chain in the US Northeast. Their interactive ordering site helped grow their online business just in time for the pandemic.
English
Developer Week
JAMstack
Transcript (Beta)
Hi everyone. Thanks for tuning into our Developer Week segment of Cloudflare Cloudversations.
If you're new to this segment, this is a series on Cloudflare TV where we meet with different customers, shine the spotlight on them, learn about what they do, how they do it, discuss industry-related topics, best practices, and their Cloudflare use cases.
Today we are joined by Matt and Sue from Happy Cog, who were instrumental in the building and development of the Dig restaurant sites.
So we're going to learn all about that.
But before we do, let's do some intros. I'll start.
I am Kayla. I am on the customer advocacy team at Cloudflare and I am joined by Jen if you want to introduce yourself.
Hey everyone. I'm Jen Vaccaro. I'm the product marketing manager for Cloudflare Workers, Cloudflare Pages, and a couple of the other products that fall under that suite.
Matt. I'm Matt Weinberg. I am the president of technology at Happy Cog and I'm also a co-founder here.
Thank you for joining.
I'm Sue and I'm a developer at Happy Cog and the tech lead for the Dig projects.
Awesome. Thanks Sue. So Matt, why don't you tell me a little bit about Happy Cog and the work that you guys do there.
Sure. So Happy Cog is a full service interactive agency.
We do development, web, and mobile. We do design and branding, and we do a lot of marketing as well, paid media, search engine optimization, that kind of thing.
We do a lot of business applications and big data type stuff too.
And how long have you guys been doing that for? About 20 years.
Wow. That's a long time. Where are you located? We have a headquarters here in New York City where I am, but we have people distributed all around the United States.
Awesome. What kind of customers are you guys serving today? It's a very big mix.
We have everyone from Fortune 50 companies to small startups, restaurant groups, everything in the middle, higher education, big law, nonprofits, B2B.
It's a huge range.
Awesome. And so since we're specifically talking about your work with Dig restaurants today, why don't you tell me a little bit about how long you've been working with them and how that came to be?
Sure. We've been working with Dig, also known as Dig Inn, for about three years at this point.
They came to us because we had been doing work for another similar group called &pizza, which is located in New York City, Washington, and a couple of other cities.
They referred us over to Dig.
Dig needed help with their mobile app and a whole bunch of their web work.
And so they hired us and we've had a really long relationship with them ever since.
Awesome. And for those people who, you know, may not be familiar with Dig, I certainly am because I lived in the Northeast for a while and it's the best lunch spot by far.
But could you at least kind of explain what they do and where they are? Sure.
So Dig is, again, also known as Dig Inn, depending on where you are. They are fast casual, kind of healthy, natural foods.
You know, it's all about good quality.
You kind of walk down a line and you choose if you want rice or vegetables or kind of what your base is, what your protein is, what your toppings and sides are.
But everything they do is grown ethically, naturally healthy. It's really an amazing place and it's super, super popular, especially here in New York City for lunch.
Exactly. If you are in New York City and you like broccoli, I would go there.
I mean, that's just my opinion. So for Happy Cog and your customers as a whole, what are the main technical challenges you see your customers facing today, especially during the pandemic?
And Sue, feel free to join in as well, as I know you have a lot of technical experience.
Sure. So I'll start and then pass it over to Sue.
You know, just in general, I think our customers are trying to figure out a digital transition that was accelerated by many years.
You know, plans that they always had, that were maybe five- or ten-year plans back last year, became 18-month plans or 12-month plans, or in some of our restaurants' cases, three-month plans.
So that's big. Sue? Yeah, I think in our case, Dig was kind of ahead of the curve.
They already had online ordering. They already had pickup and delivery.
So that kind of infrastructure was already in place.
But still in the early stages of the pandemic, you know, looking back a year ago now, they needed to be able to get information out very quickly to customers.
Things were changing sometimes on a day-to-day basis, things like contact-free delivery and being able to allow customers to choose that and pass along the information of where it should be dropped off or whatever special instructions they might need to make that happen, you know, very quickly.
So we had a lot of pivoting and a lot of, we were talking to them on a constant basis in the early stages.
Awesome. So you did mention this a little bit, but let's talk about it some more. You know, with everyone kind of working from home and not taking the normal lunch breaks where people actually go out to lunch.
Can you talk a little bit about the increase in need for online ordering during COVID-19 and how that directly impacted Dig?
I mean, I know they already had an online ordering site, but I'm sure it did shift for them.
Sure. So there's a couple of reasons, a couple kind of related aspects of this.
You know, Dig has about 40 locations and a lot of them are here in New York.
They have a bunch in other cities as well.
As Sue mentioned, they always had a lot of online ordering, but they also had a lot of people walking the line, just going in and walking the line.
And also even with online ordering, you would go and you would kind of wait and you would see if your order was ready and you'd pick it up.
And there would sometimes be a little bit of a crowd, especially during the peak times.
So first of all, nobody or very few people wanted to walk the line anymore.
Almost everybody wanted to order.
But also from Dig's point of view, they couldn't have 50 people like all rushing in and standing there together, waiting.
Because first of all, no one wanted to do that.
Second of all, that's against the rules and the laws that were kind of put into place.
And third, it just wasn't very organized and nobody would be able to find anything.
So kind of at the same time, they had two competing things.
On the one hand, tons more people ordering online. On the other hand, they had to restrict how many people could actually pick up their food at the same time.
So that's a big technology challenge. And I think that really plays into better systems to kind of manage order flow and communications with customers, because they need to be able to tell customers, instead of saying your order will be ready anytime from 10 to 30 minutes from now, it's actually a lot better if they can say your order will be ready in 18 minutes or your order will be ready in 12 minutes.
So you kind of just show up when you need to be there and you get your food and you leave.
Not that they want to rush you, but just it's more organized for everybody.
And they need to do that across dozens of locations.
So technology and online ordering became a lot more intertwined in that way.
Yeah, that's awesome. That actually sounds incredibly helpful. So Jen, why don't you go ahead and talk a little bit specifically about how Happy Cog and Dig used Workers to kind of do this?
Yeah, yeah. So just off your point, Matt and Sue, regarding this new connection between online ordering and needing a very precise time for people to show up.
And then also I know, Matt, you've previously in our conversations talked about how, particularly with the restaurant industry, and then even more so after COVID, there would be these major spikes in when traffic would come in.
And so like during the lunch hour, or maybe during the dinner or breakfast hour, there would be huge spikes in traffic.
And the hope would be that there wasn't any sort of like overloading of the servers and that nothing would crash with these unpredictable spikes.
So kind of from that background, I'm curious from both Matt and Sue, from a technology perspective, how have you gone about it, or what are the different frameworks or ways you've thought about solving some of these issues, whether it's React?
I know we've talked about Workers, Jamstack.
I'm curious how you've gone about solving some of these challenges.
I think there's a business question and then a technical question. So I'll start with the business.
And I think Sue can definitely talk about the technical side. On the business side, restaurant margins are traditionally very small, right?
And costs even went up during COVID because they have to do extra shifts and extra cleaning and things like that.
And so we want a case where we're not overpaying for hardware and overpaying for capacity.
On the other hand, as you said, you have these huge spikes.
If you're a big lunch restaurant, if you're a big dinner restaurant, again, I'm not telling you anything secret, you can go and stand outside and look.
You kind of have a couple hours where it's very slow. You have a huge rush for lunch and the couple hours that everybody eats, and then it's kind of slow again.
And if you're a big dinner restaurant, you have a couple hours right then.
So how do you architect something that can handle those huge spikes and those huge rushes while also not overpaying for capacity when you don't need it?
If you're not open at 2 a.m., you don't want that kind of capacity. At the same time, if you're really popular one day, every minute that you're down, that your site is down or your online ordering is down, it's losing thousands of dollars.
So that's the kind of business case there.
And then there are technical aspects, of course, as well.
Yeah, from a technical standpoint, I've never had to worry about the infrastructure with Cloudflare Workers.
Once everything was set up, we deploy and it just works.
Like Matt said, we don't have to worry about those rushes or handling site traffic or anything.
It's very solid. It's very consistent. And it's one less thing we have to worry about from a technical standpoint.
And I'm curious, when you were thinking about the different possibilities of solving some of these challenges, were you mainly looking at serverless?
Were you looking at other alternatives out there in the market?
Or what was your deciding process? Sue, would you like to answer that, or do you prefer I do?
You can handle it from the business decision, if you'd like.
Sure. So again, I think there's a business side and a technical side, right?
From the business side, first of all, they're a restaurant. It's a restaurant group, but they're a restaurant.
Their business is selling food.
So the last thing they want to be thinking about is servers and infrastructure.
And they don't necessarily want to be having big conversations about all the back end and the choices.
And so I think we were, first of all, looking for something that would be easy, like easy for us to explain, easy for us to host on.
Also something that was proven, that we had used before.
And it needed to be proven because we didn't want...
We always want to push the envelope. We always want to experiment.
But we also don't want to experiment with something that is potentially going to be down from 11 AM to 1 PM and lose all their lunch sales.
So we'd worked with Cloudflare a lot.
We've been working with Cloudflare for at least five, six years at this point, if not more.
So we were very convinced about the uptime and the stability of the products.
And also just from a cost point of view, it was extremely, extremely reasonable and very easy for us to tell them, like, these are your costs.
It's not going to be unpredictable. These are going to be your fixed costs.
If the cost has to scale, it's because your traffic is scaling a lot.
I think it's very easy for them to understand. Yeah.
And from a technical standpoint, the deploy process could not be any easier. Like Matt said, once we set it up, we don't really have to think about it.
It's not something we need to explain a lot to them.
It really allows the developers to focus on adding functionality, adding features, and not worrying about spending time on the deployment process or making sure the server's up or any kind of infrastructure worries.
Great. Great. And I'm also curious if you could elaborate. Maybe, Sue, this is more particularly for you.
But I know that you've been using Cloudflare, previously with Cloudflare Workers Sites.
It's now moved over entirely to Cloudflare Pages, which went into GA, General Availability, today, which we're really excited about.
And Pages is a Jamstack platform, or Jamstack compatible.
It works with a lot of the frameworks like Gatsby, Hugo, and then in particular as well, React.
And I know that's what you both had been using particularly for Dig.
So, I'm wondering if you can explain a little bit about the technical implementation of using React and if you could detail that a little bit, maybe why you chose it and then some of the ways that you went about implementing it.
Sure.
Well, we were using GitHub Actions. Again, when we were using Cloudflare Workers, we were using the Cloudflare Wrangler GitHub Action to handle the continuous integration and continuous deployment, which made it super easy.
We had a huge deployment where thousands of lines of code were being changed.
And because of that integration, I was able to prep a pull request and we did this major deploy just by clicking the button to merge the branches in.
And it just works. It takes minutes, and it can't be overstated how much better that makes the development experience and the client's experience.
A huge deploy like that can be very stressful and it can be potentially complicated and sometimes you're worrying about is it going to bring the site down?
Is anything going to happen?
But in this case, that integration was so seamless, so easy that the deployments are very, very quick and very easy to do.
And as far as React, which is very compatible with Jamstack, although maybe not exactly in the Jamstack world, have both of you played around with some of the other frameworks in Jamstack or what might make React particularly useful for this type of use case?
I can help.
I can kind of answer some of that. So, first of all, the way the system is architected is that we have our front end files, which now live on Cloudflare Pages, but previously Workers Sites.
And there's kind of a backend system that acts as the central hub of all the ordering and the, you know, predicting timelines and kind of knowing when an order is done and when someone's picked it up, all of that.
Using React allowed us to separate those a little bit because, you know, the website is not the only way to order.
As I said before, we work on their mobile app and mobile app is another way to order and there's third parties, you know, like Grubhub or Seamless or Uber Eats or whatever it might be.
So, all of them talk to this central system.
So, a nice reason for using React on the front end is because it let us have that separation of concerns.
We could have this whole front end system that was their online ordering interface and abstract away the details of the implementation to the backend where all the orders are actually processed.
Another thing to note is that their app had been built in React Native.
So, just internally for us, you know, we have a lot of React experience.
We do a lot of React work.
It felt like it would be good for them to kind of have more of their stack on, you know, React and React Native, similar technologies.
And with Workers, again, we originally did this on Workers Sites and have moved to Pages now.
It allowed us to do routing very easily.
Meaning, like, you know, really it's a single page application, right?
If you're clicking around, it's only loading in some of the new content.
With Workers, we can kind of easily route all the requests to our single application logic.
And then based on the URL, it kind of shows you the right content.
So, that's really nice. And some other systems, you kind of have to fake that with 404 error page handling, which can be kind of annoying and give you weird results with SEO and things like that.
So, that was pretty much it.
We felt like it was good, you know, it's a good, popular framework. We use React Native already for mobile.
And it works really well on Workers. That's great.
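As an aside for readers who want a concrete picture of the routing Matt describes, here is a minimal sketch of what it can look like on a Workers Sites project using Cloudflare's @cloudflare/kv-asset-handler package. The route-matching rule and file names are illustrative assumptions, not Dig's production code.

```js
// Minimal Workers Sites sketch (service-worker format): serve static assets,
// but send any app route to the single-page app's shell so React handles it.
import { getAssetFromKV } from '@cloudflare/kv-asset-handler'

addEventListener('fetch', (event) => {
  event.respondWith(handleEvent(event))
})

async function handleEvent(event) {
  return getAssetFromKV(event, {
    // Any path without a file extension (e.g. /order/checkout) is treated as
    // an app route and rewritten to the SPA shell; real assets such as
    // /static/main.js fall through unchanged.
    mapRequestToAsset: (request) => {
      const url = new URL(request.url)
      if (!url.pathname.includes('.')) {
        url.pathname = '/index.html'
      }
      return new Request(url.toString(), request)
    },
  })
}
```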
That makes a lot of sense. One thing I want to talk a little bit about, and I think you were, Matt, explaining this when you were discussing sort of the backend and the frontend and wanting to have React to really focus on the frontend.
And so, particularly with Workers Sites and then now Cloudflare Pages, today it's really for static sites.
And I'm wondering if we can dive in a little bit more on like how you integrated that dynamic element as well into the whole ordering process.
Sure. So, you want me to talk about that? Would you like to talk about that?
Do you want to start and then I can fill anything in? Sure. So, again, for us, it's really important to be able to scale kind of easily and quickly because we have these very kind of quiet times followed by these huge rushes where everybody orders at the same time.
So, we knew that in the frontend, having as much as possible be static, just as a base case would be great, right?
Because we can do static hosting.
We can use worker sites or we can use Pages. And there's pretty much no issue with scaling static like that.
We're never hitting a database or anything.
So, let's say 90% of what you're seeing there is static. But of course, there's some dynamic information, right?
When you're starting to create a cart, when you're starting to create your order, that's all dynamic.
When you actually place your order, if you're logging in and you need to pull back your saved credit cards, your saved addresses, your favorite orders, that's all dynamic as well.
So, the way that works is that, as I said, most of the content is static.
And then it's just basically Ajax, asynchronous JavaScript requests to the backend API to kind of effectively hydrate with dynamic data where it's needed.
So, you get your whole static page, no database calls, no...
Actually, we let Cloudflare worry about that.
We don't have to worry about it at all. And then just a couple API calls to the backend brain system to get the dynamic data where it's needed.
But again, we try to be really light and performant with those.
We just grab the data we need to be dynamic and pull it in as needed.
This way, we keep load off the backend where we can.
I think as Matt said, the application itself is pretty lightweight.
And that makes it super responsive and super fast.
Even as you're going through the different locations, which may have different menu items, all of that is handled through the API, which makes it really responsive and fast.
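As a rough illustration of the "static shell plus a couple of API calls" pattern Matt and Sue describe, here is a minimal React sketch. The endpoint URL and response shape are hypothetical placeholders, not Dig's actual API.

```jsx
// Hypothetical sketch: the page shell ships as a static asset from
// Pages/Workers Sites, then one small request hydrates it with the
// location-specific menu from the backend ordering API.
import { useEffect, useState } from 'react';

function LocationMenu({ locationId }) {
  const [menu, setMenu] = useState(null);

  useEffect(() => {
    // This is the only call that touches the backend; everything else is static.
    fetch(`https://api.example.com/v1/locations/${locationId}/menu`)
      .then((res) => res.json())
      .then(setMenu)
      .catch(() => setMenu({ items: [] })); // fail quietly rather than block the page
  }, [locationId]);

  if (!menu) return <p>Loading menu…</p>;

  return (
    <ul>
      {menu.items.map((item) => (
        <li key={item.id}>{item.name}</li>
      ))}
    </ul>
  );
}

export default LocationMenu;
```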
The other thing I guess I should mention is we're also using Cloudflare for what we call the marketing site.
Just if you go to digin.com, that's the CMS-driven site.
You could see menus and about information and other kind of non-ordering type information.
We use Cloudflare for that too.
People, as you would not be surprised to hear, they want to see the menus right before they order.
Sometimes they just go to the marketing site or they're looking for jobs or whatever.
That's all CMS powered, but we cache that very heavily on Cloudflare's just kind of standard caching network and standard CDN.
So 99.9% of hits to their main marketing site never even touch our origin.
And that's helpful too, again, on these big spikes.
And we can have a pretty cost-effective backend server because we know we're caching it on the Cloudflare side.
And you're using the Cloudflare cache API, right, for that?
Oh, yes. We are. So we cache everything. And then in the content management system, when their editors change any information, we use the Cloudflare cache API to just break the cache of the specific URL or URLs that would be impacted by that change.
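For context, that kind of targeted purge can be done with Cloudflare's purge-by-URL API for a zone. A minimal sketch of the idea follows, with placeholder zone ID, API token, and URL; the CMS "after save" hook that calls it would vary by system.

```js
// Hypothetical CMS after-save hook: purge only the URLs affected by an edit,
// so the next visitor gets fresh HTML while the rest of the marketing site
// stays cached at the edge.
async function purgeUrls(urls) {
  const response = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${process.env.CF_ZONE_ID}/purge_cache`,
    {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.CF_API_TOKEN}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ files: urls }),
    }
  );
  const result = await response.json();
  if (!result.success) {
    throw new Error(`Cache purge failed: ${JSON.stringify(result.errors)}`);
  }
}

// e.g. after an editor updates the menu page:
// await purgeUrls(['https://www.example.com/menu']);
```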
Very interesting. So there are two sites.
There's the marketing site that's CMS-based, and you're mainly using the Cloudflare cache API and CDN caching there.
And then you have the ordering site, which has both the dynamic and static elements that you said.
And that was previously run on Cloudflare Workers Sites and is now being hosted on Cloudflare Pages.
Exactly right.
Great. And so now that we've talked a little bit from the React side, I'm wondering if we could talk maybe about Cloudflare Workers and Workers Sites in particular.
And I know you said you're using React as the framework for the online ordering site.
And I'm wondering if you can talk a little bit, particularly focusing on your experience with Workers Sites.
And if you've started using Cloudflare Pages or have familiarity with some of the updates there, if you could comment on some of those changes or how that experience has been for you.
Sure. We currently have two environments set up so that we have a live site for ordering the production side, and then we have a sandbox so that we can test out new features, new functionality and things like that.
And it's super easy to manage with Cloudflare Workers, and now with Pages, having two separate environments.
The client can test out new features. We can run test orders. We use a separate API for that.
So we're not affecting the live restaurant locations at all.
And when we switched over to a new loyalty program for them, we were able to very quickly spin up a third environment to test a totally separate API.
So we had three separate environments kind of running all at the same time, running different code bases or using a different API, which made it very, very easy for the client and on our side also just to be able to run these all using one infrastructure, which was great.
Yeah, I think on the Pages side, one of the reasons that we were excited to move over to that was because we get some of those kind of preview abilities out of the box for free.
We get the ability to have a sandbox and the additional loyalty program and all of that, as Sue just said.
The nice thing about Pages is you're kind of getting that automatically with your GitHub branches.
So every time we branch, we can get an automatic, just what we call a pull request app, or I know that different development teams call them different things.
So that's really nice. Like the preview deployments are very, very nice.
Just from a GitHub integration point of view, again, Sue mentioned the Wrangler action on GitHub.
That was super easy, but with Pages, we don't even have to bother with the GitHub Action and we don't have to use our GitHub Actions minutes.
It's kind of just, again, integrated for free and included out of the box.
So that's really nice as well. So there's some nice features that are helpful for us.
That's really great. You probably haven't had a chance to get to this this morning because I know we've been preparing for this, but with some of the new features with Pages going into GA, one of them is that we have an integration with Cloudflare Tunnel.
So it used to be Argo Tunnel, now it's Cloudflare Tunnel.
And what's pretty cool with that is that you're able to share your localhost.
So if you're running your code, you have your files up in VS Code or whatever you're using, you can share your localhost with another collaborator.
And you can be making live updates, kind of like a Google Doc, where you both can be making live updates, and you can view it without even having to commit any of the changes or having it go live anywhere.
So that's something kind of exciting where we've been able to integrate with some of the other Cloudflare products and really help make that collaboration easy.
And then something else that's new from when Pages was in beta and now going into GA, that maybe will be interesting for your team, is that the preview links are all going to be automatically protected behind Cloudflare Access.
So after you have committed your changes and you're pulling up the preview, those will not have any risk of going out onto the worldwide web.
They'll be protected behind Cloudflare Access.
So those are two things that maybe as it goes into GA, you'll also be able to start taking note of.
That's great. I also saw redirects. There was an announcement around redirects as well.
Yes. Redirects is the other big one. I'm trying to think what the other big ones are...
Yeah, I think support for redirects, Access, and Cloudflare Tunnel are the three biggest ones, although there might be something else I'm missing.
Yeah. I think it's been really interesting to see you all increase your developer tooling over the last couple of months around Workers Sites and Workers, and Pages now.
And honestly, developing in some of the other systems, sometimes there's a very long iteration cycle between when you write something and when you see a preview; you have to upload it to the edge function or whatever it might be, and you have to wait for it to go.
And that can be very hard when developing. So I really like that you all have helped get logs quickly, helped spin up previews quickly, the console, the tunneling.
It's just, that's one reason we chose Cloudflare over some of the other options.
That's really great to hear. We're happy that... Yeah, it's been quick.
Pages, our team has been working on it over the last maybe year or so.
And yeah, just the amount that we've been able to iterate on some of the changes has been really fast.
I know we're wrapping up here. We only have a few more minutes.
Kayla, I saw you unmuted. I don't know if you were going to say something, but I was just going to ask them about some of the best practices that they would share with other users, whether they're using Workers, Cloudflare Pages, or React. Anything you can share with folks who are just getting started that you might want to tell us here in the last couple of minutes.
You read my mind, Jen. I would love to hear that as well.
Great. Sue, all yours. Thanks. Yeah, I think as far as best practices go, the great thing with the integration is that if you have a good setup for your GitHub repos, then you're kind of all set.
It's automatic with Cloudflare Pages.
But one thing I do want to point out: the documentation for Cloudflare is some of the best I've seen for any infrastructure, which from a developer standpoint, I really appreciate.
It's very easy to get up and running, to get started, and to find answers to anything that you might have questions about during the setup process or anything like that.
So, yeah, that would be the best place to start would be the Cloudflare documentation.
I guess I just have one more thing to add to that, which is if you're doing a React app or something similar on Workers Sites, which is a single page app, there's a really nice helper inside Cloudflare's asset handler package.
I think it's called serveSinglePageApp. It's one or two lines you add to your Worker, and it helps a lot with all the single page app routing, especially if you're using React or similar.
So, take a look at that as well.
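The helper Matt is referring to appears to be serveSinglePageApp from Cloudflare's @cloudflare/kv-asset-handler package, which does the index.html rewrite from the earlier sketch for you. A minimal example:

```js
// Using the built-in helper instead of a hand-rolled mapRequestToAsset:
import { getAssetFromKV, serveSinglePageApp } from '@cloudflare/kv-asset-handler'

addEventListener('fetch', (event) => {
  event.respondWith(
    // serveSinglePageApp maps any non-asset path to /index.html, so React
    // (or React Router) can handle the routing client-side.
    getAssetFromKV(event, { mapRequestToAsset: serveSinglePageApp })
  )
})
```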
That's a really good point as well. Great, great. So, I know we only have two more minutes.
Kayla, did you have anything you wanted to say to wrap it up?
Or maybe Matt and Sue, any final words to share here about your experience, whether it's with Dig, Workers, React, or anything like that, as maybe some folks tuning in are also wanting to build their own interactive websites, particularly to respond to the pandemic and our changing technology needs.
Yes, I would love to hear that, especially any plans that you have going on with Dig and any other restaurants that are dealing with something similar.
I think a lot of our restaurant clients are dealing with something similar, and our biggest thing is just trying to find ways to scale, as I said, scale at different pieces of the site, of the systems individually, while making sure that we can keep costs down during the off-peak times.
So, not only Workers, right, but using Cloudflare Polish to have better SEO and better performance and increase conversion rates, and using the caching API and CDN and Argo and all of that is very helpful for us.
Awesome.
Since we are just about out of time, I wanted to thank you guys so much again for joining us today and telling us about Dig and the work that you guys do with them and all the great things that you're doing with Workers and Cloudflare Pages.
So, thank you so much. For those watching who want even just a little bit more information on this, we do have a great new case study with Happy Cog and Dig on Cloudflare.com, so definitely make sure to check that out, and enjoy Developer Week and all the great things that are coming up.
So, have a great day, everyone, and thank you again.
Thanks for having us. Bye.