💻 Incremental adoption of micro-frontends with Cloudflare Workers
Join Cloudflare Sr Director, Engineering Igor Minar, Principal Systems Engineer Peter Bacon Darwin, Systems Engineer Carmen Popoviciu, Systems Engineer Dario Piotrowicz, and Systems Engineer James Culveyhouse to learn more about what launched today for Developer Week.
Read the blog post:
Visit the Developer Week Hub for every announcement and CFTV episode — check back all week for more!
Hi, everyone. My name is Carmen.
I'm a systems engineer here at Cloudflare.
I'm on the Pages team, and I'm really excited today to have Pete, Dario, and Igor join me to talk about incremental adoption of micro-frontends with Cloudflare Workers.
Would you like to introduce yourselves first? Pete's going to go first.
Go ahead, Pete. Hello, everyone.
So, my name's Pete Bacon Darwin. I've been at Cloudflare for almost a year now.
In about a week or two's time I'll have been here a year, and I'm a principal engineer on the Workers Experiences team.
I'll go next. Hi, everybody. I'm Igor Minar, engineering director here at Cloudflare, working on developer experiences and a lot of web research, as we'll see in a bit.
And I'm Dario. I've been at Cloudflare for more or less half a year now, and I'm working on the Workers Experiences team.
Thank you. So before we dive into it, I was thinking that maybe we first want to go through some basic concepts, so that we bring everybody along with us.
So, first of all, maybe we would like to start by explaining what micro-frontends are.
Why do we need them?
Where do we use them?
Things like that.
Who would like to take this one?
I can take this one.
Sure. So many of us are building web applications, and this segment is specifically focused on an audience that is interested in building frontends, and especially big frontends.
Traditionally, many of the frontend applications today are built with client-side technologies, whether it's React, Angular, Vue, Svelte, Solid, or many others.
And one of the things that we see is that as you start building bigger and bigger applications, especially if you're in the government sphere or the healthcare, banking, insurance, or travel industries, you have big applications that you need to build, and building them using technologies that are very client-side focused results in very monolithic, big applications, and also applications that are difficult to iterate on, because you often have dozens, hundreds, sometimes even thousands of engineers working on a single monolithic application.
And any mistake by any of the engineers in this monolithic codebase can affect the user experience for the entire application.
And this is why, when you go to many of these big web applications, the applications take a long time to render or start, and are sometimes fragile, sometimes broken.
You see things that just don't make sense.
And it's not because the developers don't know what they are doing.
It's just difficult to build these big applications with a monolithic, client-side-first architecture.
And this is where micro-frontends come in. A micro-frontend is the frontend take on microservice architecture, translated to the frontend.
It allows you to take big monolithic application and break it up into smaller applications.
These applications can be independently developed and deployed, but to the user, once the user is looking at the UI, all these mini applications, they feel like a single cohesive application.
So they don't feel like they are looking at five or ten applications.
To them, it's a single application, but in reality, organizationally the application is structured as a set of small applications.
Often independent teams own and iterate on these mini applications, and decide on the schedule for deploying them to production.
And this really allows the applications to scale.
And this is why we've been researching this area and are proposing some new architectures that are server side focused.
They allow you to do server-side rendering but still enable you to take advantage of micro-frontends and split up these big applications.
Anyone want to add something to that? Yeah.
So the thing that's really interesting about our approach is that we're actually building on top of Cloudflare Workers, and these Workers give us the ability to run stuff in the cloud, very near to the user, in a really scalable way.
So whereas maybe traditional SSR would be running in a big data center farm somewhere, we're able to scale this out and actually provide an experience to the user which is even faster than could have been done otherwise with previous technology.
So it's kind of this mix of moving away from doing everything on the client, but also not just necessarily putting everything miles away from the user, which will take a long time and use up lots of resources.
So it's an interesting hybrid approach that I think could make a real difference in how people develop apps in the future.
So, the things that you're referring to right now: I think the three of you wrote a blog post not so long ago, and you had this very interesting demo app, the Cloud Gallery.
Could you tell me a little bit about how that is different from how micro-frontends look right now?
Like, what problems does it solve and how does it solve them?
And maybe we could even demo that to show folks like to have it as an example.
- Yeah, so... we built a demo of an application that's been built from micro-frontends.
I could, I could actually show the screen and give you a quick demonstration of what it looks like now.
So this was our original blog post, and this was about introducing this concept of micro-frontends and the particular architecture that we devised, which we're calling a fragment-based architecture, because each micro-frontend is a fragment, which can also contain other fragments.
And so I recommend going and reading this blog post if you get a chance.
But in that blog post, we talk about a demo app that we built called the Cloud Gallery.
A lot of this was built by one of our colleagues, James Culveyhouse, who couldn't actually make it today.
He was going to join us to talk about this, too.
But we'll try and make do without him.
So this is a really simple app.
All it does is it shows you some different clouds.
You can filter them by typing in here.
So if I start typing in, it will come up with a list of options and then I can select one and it will filter them based on the tag that we've chosen.
So you can see that there is various interactive parts.
There's some data being shown.
If I click on this button here, I can actually show that this whole application is actually made up of a number of these fragments.
There's a header fragment, there's a body.
The body, interestingly, contains two other fragments, like the filter and the gallery here.
And then right down at the bottom you'll see we have a footer. And each of these is actually built as its own separate application that is being hosted by a Cloudflare worker.
So each of these runs independently of each other.
So for instance, if I click on this button here, you'll be able to see that... except the little overlay that shows me I'm sharing is in the way, so I couldn't click on that tab.
So here you can see.
That this Cloudflare gallery header is a separate application; it's running at a separate URL from the other ones. And you can do the same with the other ones here, and they're actually completely self-contained.
And this is one of the points of a micro-frontend: it exists in its own space, and if it interacts with other micro-frontends, it has to do so in a very well-defined way.
So for instance, if we go to the filter tab, you can see that this actually will operate as we expect.
And then if we click on one of those, it will update the URL.
But obviously nothing else is happening here, because there's no other micro-frontend for it to talk to.
So this is all well and good. Did you want to jump in?
There you go. I was just going to say that the important thing here is that all of these fragment micro-frontends are server-side rendered.
So maybe you want to jump into the network tab and just see how this page loads.
Because what you'll see is that when the browser makes a request to the server, to the entry URL, that URL actually returns HTML that is used to render this page instantly.
In this demo, we used the Qwik framework to build each of these fragments, but the architecture we are proposing is actually not Qwik-specific, and you could use it with other frameworks, whether it's React, Vue, Solid, or a bunch of others.
And in the second demo that we'll show a little bit later, we'll actually show you some fragments that were built with other technologies.
I wanted to ask something.
So all these small fragments, how are they all orchestrated together?
Like, how do they communicate with each other on the page? How does that work?
So when you access this main page, there is a Worker which will go and make requests to all of the other fragments, ask them to stream their HTML to it, and then it will patch them all together and stream them down dynamically.
Then once they're on the client, then they can start interacting by sending messages to each other.
But that's easier to see in that second demo that we'll talk about later.
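To make that orchestration concrete, here is a rough TypeScript sketch of the composition step. The function name `composePage` and the `<!--fragment:id-->` placeholder convention are invented for illustration; they are not the demo's actual code, and the real gateway streams the fragments' HTML rather than buffering it as this simplified version does.

```typescript
// Illustrative sketch only: fetch every fragment's SSR output in parallel
// and patch it into the shell HTML, like the gateway Worker described above.
type FragmentRenderer = () => Promise<string>; // each fragment SSRs its own HTML

async function composePage(
  shell: string, // shell HTML containing <!--fragment:id--> placeholders
  fragments: Record<string, FragmentRenderer>,
): Promise<string> {
  // Ask every fragment for its HTML in parallel.
  const rendered = await Promise.all(
    Object.entries(fragments).map(
      async ([id, render]) => [id, await render()] as const,
    ),
  );
  // Patch each rendered fragment into its placeholder in the shell.
  let html = shell;
  for (const [id, body] of rendered) {
    html = html.replace(`<!--fragment:${id}-->`, body);
  }
  return html;
}
```

In the real architecture, each renderer would be a `fetch` against the fragment's own Worker, and the combined HTML would be streamed to the browser as it is produced.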
But one of the things that we'd quite like to show off here is that we can artificially slow down the streaming of the HTML, in particular for this gallery.
So we can slow down how quickly the gallery fragment streams down its content.
So it will keep passing HTML down, but it won't finish the response instantly.
So if we put a little delay in here and then I hit refresh, you'll see that the images start to appear one by one.
This is not because the images are taking a long time to come down.
It's because we changed the fragment so that it will actually send the HTML one block at a time.
But what's really amazing about this is that because all of these other fragments are self contained, they're already interactive as soon as they appear.
So if I start running this, I can start typing in here, and you can see that this is already interactive, even before all of these images have appeared.
So the HTML stream is still sending the response, but the user gets to start interacting early.
So if you have, for instance, a large application where you were having to stream down a large amount of data or content, and that was taking a long time, other parts of the page can already be interactive, and the user can start to interact and make changes to what's going on.
So this is one of the really nice features of micro frontends in this regard in that we can separate out pieces and have them being useful to the user, even if there are other pieces of the application which might take longer to load.
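The delayed gallery can be pictured as an async generator that flushes its HTML one block at a time. This is a hypothetical sketch, not the demo's implementation; the names and markup are made up.

```typescript
// Sketch of the slowed-down gallery fragment: the response stays open and
// each image's markup is emitted as its own chunk after a delay, while the
// other fragments on the page are already interactive.
async function* streamGallery(
  imageUrls: string[],
  delayMs: number,
): AsyncGenerator<string> {
  yield '<ul class="gallery">';
  for (const src of imageUrls) {
    // Simulate a slow data source behind this one fragment.
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    yield `<li><img src="${src}"></li>`;
  }
  yield "</ul>";
}
```

Because the header and filter fragments arrived in earlier chunks, the user can already type into the filter while this stream is still draining.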
And I'm assuming that has quite a positive effect on the core Web vitals scores as well, right?
Yes. So in this case, this particular app scores like a hundred across the board on Lighthouse.
Its time-to-interactive is really, really low, because these first early pieces are ready and interactive really early, really upfront.
To give you an idea of what this looks like in terms of the actual architecture behind the scenes:
If you go down the blog post, you can see basically that the client browser makes a request to a Worker, and then that Worker actually makes separate requests to the Workers that are responsible for doing the SSR of the particular HTML fragment that they own.
There's a better one here.
So you can see here, this is the one where the main page contains the header, the body, and the footer.
The body actually contains further fragments, and you can imagine that each of these fragments could be cached: if they are quite static and don't depend on the current data, you could cache them, so you wouldn't even need to run server-side rendering every single time.
And that would result in a much improved time to first byte, and also a much better user experience.
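A minimal sketch of that caching idea, using an in-memory variable; in an actual Worker you would more likely reach for the Cache API, and the names here are illustrative.

```typescript
// Wrap a fragment's SSR function so static fragments are rendered once per
// TTL instead of on every request.
function cachedRenderer(
  render: () => Promise<string>,
  ttlMs: number,
): () => Promise<string> {
  let cached: { html: string; expires: number } | undefined;
  return async () => {
    const now = Date.now();
    if (cached && cached.expires > now) {
      return cached.html; // cache hit: skip the SSR work entirely
    }
    const html = await render();
    cached = { html, expires: now + ttlMs };
    return html;
  };
}
```

A mostly static footer fragment, for example, could be wrapped this way so only the first request in each TTL window pays the rendering cost.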
How granular would you design these fragments per application?
So would this be something that you would design per page, or per route?
Like, how would that work? What's your idea about that?
You can take this one.
So, the main thing to remember is that micro-frontends are typically a solution to organizational problems.
So if your organization is too big and it's getting too difficult for people to collaborate on a single codebase and stick to a single release process, that's when you start breaking the organization down into several teams, and you can use micro-frontends to enable these teams to work on a scope of the application that is meaningful to the team, but also to the user.
So in this demo that we showed, we're actually taking the granularity to kind of an extreme, where even the header is a very small fragment.
If you were working at Google on the Google search bar, it would make total sense to build a micro-frontend just for the search bar.
However, in a typical application, that is probably overkill.
The way I would structure this application is if the application was bigger and had several routes, I would start carving out fragments in the form of a route.
So imagine that we have an eCommerce application that has product detail, product listing and checkout flows.
Those could be three different fragments that are built by independent teams.
Again, you should reach for these solutions once your organization needs them.
Premature optimization is never good, but when I see these big applications, where you are working with bigger teams, often these boundaries become very clear.
Often it's based on route, but sometimes it does make sense for the header or the footer or a navigation to be a separate fragment, because there is a lot of complexity in those parts of the UI, even in these enterprise applications.
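For the route-based carving described above, the gateway's routing can be as simple as a prefix table. The paths and Worker origins below are invented for the e-commerce example, not real endpoints.

```typescript
// Hypothetical route table mapping URL prefixes to the Workers that render
// each team's fragment (product detail, product listing, checkout).
const fragmentRoutes: Array<{ prefix: string; origin: string }> = [
  { prefix: "/product/", origin: "https://product-detail.example.workers.dev" },
  { prefix: "/catalog", origin: "https://product-listing.example.workers.dev" },
  { prefix: "/checkout", origin: "https://checkout.example.workers.dev" },
];

// Pick which fragment team's Worker should render a given path, if any.
function fragmentFor(pathname: string): string | undefined {
  return fragmentRoutes.find((r) => pathname.startsWith(r.prefix))?.origin;
}
```

Each team owns its own entry in the table, so adding a new flow means adding a route rather than touching a shared monolith.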
Thank you for that.
So I'm wondering, assuming you would want to move to this model, like, say you have like a very large application, like a monolithic one.
How do you migrate to something like to this architecture?
Because I'm assuming not a lot of people can do it like a big bank, right?
That is a common challenge. These big applications are just too big to rewrite, or too big to make any major changes to.
And this is why today we published the second chapter of our research, which is focused on incremental adoption of micro-frontends: incremental adoption of the fragments architecture, which allows you to take an existing legacy application, no matter how big, built with the traditional technologies that are commonly used these days, whether it's React or Vue or Angular or whatnot, and carve out the most valuable pieces of the UI into fragments.
Because you can approach a migration in many different ways, but the way you want to do it is incrementally.
It's very rare that you can afford to rewrite the application.
But what does it mean to incrementally rewrite a legacy monolithic application, especially an application that many people are afraid to touch? The reason you want to do anything with it at all is that the application is hard to work with.
So you want a low-risk way, a low-risk strategy, but a strategy that allows you to have a return on investment as early as possible.
And the proposal that we shared today is focused on carving out the most important or the most valuable part of the UI and turning those into fragments that are then embedded back into the legacy application.
Typically, migrating the shell of the application is actually quite an undertaking, because in the shell of the application, where you have the navigation bar among other things, you need to deal with authentication, and you need to deal with authorization.
If you have any experiments, those are all usually handled at the root of the application, within the application shell, along with localization.
Many other things like these are very big concerns that are very difficult to tackle in an incremental way.
So that's why we are thinking, well, maybe we can actually leave the shell as is and focus on the core, the most valuable parts of the UI.
I think Pete is going to show the second demo that we built. Yeah.
So I'm just sharing my screen, because I thought it would be useful to see this diagram here, which is in the blog post Igor is talking about.
You are sharing an empty screen right now.
Sharing the wrong screen.
Okay, let me try that again.
While you're trying that, I just wanted to say: basically, what you're saying is that you would migrate bottom-up.
Like, any application can be modeled as a tree, and there are different migration strategies, top-down and bottom-up, and we think that in many, many cases, in these big enterprise applications, a bottom-up strategy actually makes a lot of sense.
On the screen we have here, we have a legacy shell that is built with React.
And then within that, we dedicated a piece of the UI to a piercing outlet, which is the place where fragments will be pierced through, or embedded.
And then we extracted three different UI parts of the legacy application and turned them into independent fragments.
And just to prove that this approach is framework-agnostic, we actually implemented it using three different solutions.
So we have some fragments built with Qwik, some fragments built with Solid, and then we have some with React.
So you actually... Sorry.
Sorry, Pete. I was just going to say: here's the actual demo, which we can show in a second.
Yeah, please go ahead.
I'll ask my question at the end. Cool.
So, the last app was mostly built by James.
This app actually was mostly built by Dario.
And he could probably talk a little about this, but as I'm driving the screen, I'll just point at things, so.
This app is designed to demonstrate exactly the comparison between what it would be like before you take this approach, and after.
So when we have this "piercing enabled" toggle turned off, we get what would be the normal feeling of the legacy application before migration.
Yeah. And so this whole app would be like a big React application.
For instance, in this case, we're going to add this delay, which is basically simulating how long it takes to boot the app.
I mean, if it was a really big app, it could take multiple seconds.
So if I refresh, you'll see that we have to wait 4 seconds before we see anything on the screen, before we get to interact with anything at all.
By taking some of these pieces here.
And if I click "show seams", you can see that we have two fragments on the screen.
If I turn "piercing enabled" on and refresh again, this time what you'll see is that these two fragments are instantly available.
And not only are they instantly available, they're instantly interactive.
So even while the legacy app is booting in the background, I can actually click on these things and start to edit and add items.
These are totally working applications, micro-frontend applications, that are sitting inside this legacy application.
Then once the application has booted, then we take these two fragments and we make sure that they arrive in the right place in the application so that they can then exist as though they were just part of the normal application.
That's very cool.
And considering that each of these fragments was implemented with a different framework, they actually still communicate with each other, right?
Yeah, that's right.
So this fragment here is built with Qwik. This fragment here is built with React.
We have another one on the news page here, which is built with SolidJS.
And you can see that when I click on this one, it changes here.
But also this fragment is also updating.
And we built a tool called the message bus, which is a framework-agnostic way of passing messages from one fragment to another, or even between fragments and the legacy app.
And so when these two are interacting, they don't actually require the main app to be bootstrapped at that point.
One other thing to mention is that Dario actually did an amazing job of extracting a lot of this stuff out into an abstract, reusable form.
And if you read the blog post, you'll see that a lot of the work that's gone into doing this has been abstracted into what we call a piercing library.
It's not actually a production library that we're shipping at this stage.
It's part of the proof of concept.
But if you go and look at the code, you'll see that it means that we can build these things quite quickly and wire them up quite easily.
And the message bus that's used to communicate between these things is part of that.
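In spirit, the message bus is a small publish/subscribe object shared by all fragments. The sketch below uses an assumed API, not the actual library's; replaying the latest value is one plausible way to let fragments that boot late still see the current state.

```typescript
type Handler = (payload: unknown) => void;

// Framework-agnostic pub/sub: fragments dispatch to named topics and listen
// on them without knowing which framework (Qwik, React, Solid...) is on the
// other end.
class MessageBus {
  private handlers = new Map<string, Set<Handler>>();
  private latest = new Map<string, unknown>();

  dispatch(topic: string, payload: unknown): void {
    this.latest.set(topic, payload);
    this.handlers.get(topic)?.forEach((h) => h(payload));
  }

  listen(topic: string, handler: Handler): () => void {
    if (!this.handlers.has(topic)) this.handlers.set(topic, new Set());
    this.handlers.get(topic)!.add(handler);
    // Replay the last value, so a fragment that appears after the legacy app
    // boots still receives the current state immediately.
    if (this.latest.has(topic)) handler(this.latest.get(topic));
    return () => this.handlers.get(topic)!.delete(handler); // unsubscribe
  }
}
```

This is why the fragments can interact before the legacy app finishes bootstrapping: the bus lives outside any one framework's lifecycle.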
There are a few bits of magic going on which are quite hard to see just by looking at this demo.
But one of the things is that when you first boot this up, there is no legacy app.
There is no HTML from the legacy app; it's going to be built at client time. But these fragments already exist.
So they actually exist at the top level in the DOM.
If we look in the elements panel and we refresh here, you'll see that there are these two things called piercing fragment hosts, and those are where we host the actual two fragments.
But then when the app has booted, you'll see that they disappear. And one of the cool things about what we built is that it will take these fragments and actually move them into the correct place inside the application.
So if I inspect this now, you'll see that this one is now sitting inside what's called an outlet.
And so these outlets specify where in the legacy application the fragment should appear.
And the tooling that we built will actually take these fragments from the top level and move them to wherever they need to be in the application.
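That reparenting step can be modeled as follows. The types are structural stand-ins for DOM elements, with invented names; the point is that each host is moved rather than cloned or re-rendered, so its live state and listeners survive.

```typescript
// Minimal stand-ins for DOM nodes so the sketch stays self-contained.
interface HostLike { fragmentId: string }
interface OutletLike { fragmentId: string; children: HostLike[] }

// Once the legacy app has rendered its outlets, move each pre-rendered
// fragment host from the top of the document into the outlet that declares
// the same fragment id. Hosts with no outlet yet stay at the top level.
function pierceFragments(
  hosts: HostLike[],
  outlets: OutletLike[],
): HostLike[] {
  const unplaced: HostLike[] = [];
  for (const host of hosts) {
    const outlet = outlets.find((o) => o.fragmentId === host.fragmentId);
    if (outlet) {
      outlet.children.push(host); // reparent: move, don't clone or re-render
    } else {
      unplaced.push(host);
    }
  }
  return unplaced;
}
```

In the real page this would be a DOM `appendChild` of the live host element into the outlet, which is what lets the already-interactive fragment carry on untouched.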
So before we run out of time, my question is, why were workers a good solution for this?
We should let Dario tackle this one.
He did a lot of the implementation.
Pete mentioned that the piercing library is mostly his work.
So tell us a little more about how Workers play a role in all of this.
And maybe you mention some of the details about the library.
So basically, I think, as we mentioned before, Workers run very close to the user, so they have very low latency.
And this makes the whole user experience much snappier, better: we can fetch and immediately show the fragments. For the initial page, the initial view, we can stream from Workers very quickly and get responses streamed in very fast.
So again, the user gets an interactive view very quickly.
And yes, so basically they are an instrument we can use to quickly manage those fragments.
And the structure, I think we also mentioned this, is similar to what we had with the Cloud Gallery, but here we have a more direct abstraction of a piercing gateway, which is a Worker that manages all the requests from the browser and redirects them to the correct fragments.
So basically, when you initially load the application, in the single gateway Worker we start fetching the legacy application's assets alongside the data.
We also start taking the streams from the fragments and streaming them to the browser through the legacy application's HTML.
And when everything gets done, thanks to the client-side code we created, we wire everything up together in the browser.
So that's the gist of it.
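One simplified way to picture the gateway's streaming, with invented names: flush the pre-rendered fragment streams as early as possible, with the legacy application's HTML streaming in behind them. The real implementation splices the fragment streams into the legacy HTML rather than simply concatenating, so treat this as a sketch of the ordering only.

```typescript
// Sketch: emit each fragment's SSR output first, so the fragments paint and
// become interactive while the legacy bundle is still arriving, then let the
// legacy shell's HTML stream in behind them.
async function* pierceResponse(
  fragmentStreams: AsyncIterable<string>[],
  legacyHtml: AsyncIterable<string>,
): AsyncGenerator<string> {
  for (const fragment of fragmentStreams) {
    yield* fragment; // pre-rendered fragment hosts, as early as possible
  }
  yield* legacyHtml; // the legacy application's HTML follows
}
```

Once the legacy app boots on the client, the wiring-up code moves those early fragment hosts into their outlets, as shown in the demo.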
Yeah, I think we'd definitely recommend going and looking at the code.
It's all available on GitHub.
If you look in the blog post, there'll be links to those, and hopefully it will excite your taste buds and get you interested in trying some of this stuff out.
And we'd really love to have some feedback on the ideas and the implementation and ways that we could move this forward.
Yeah, and also reach out to Pete and Dario and Igor with questions.
Yeah, I think that kind of wraps it up, right, Pete?
There's so much more to talk about with this. We could do like another half an hour of the places we'd like to take this and the difficulties that people are going to find when they're trying to build these kinds of things and the work that we're doing to overcome those.
So maybe we'll come back on another day.
But, yeah, it's been great talking to you.
Thank you. Thank you to the three of you so much for explaining everything.
Thank you, and thank you for your work.