Microapps – way to go for easier workflow and faster delivery of new functionalities

27.09.2022 | Łukasz Nowak, Senior Frontend Developer

In the beginning there was a monolith

I am curious how many of you remember your first developer training projects. Their aim was to teach the basics of diverse topics: connecting to a database, handling it safely on the backend, or designing a neat UI/UX. Even if you went a step further and tried to apply layer separation in your application, I am pretty sure it was all stored in a single project repository.

At that time you probably had no idea that you were witnessing the birth of another monolith: a structure of tightly coupled elements making up the whole application. Of course, that was not a problem at all, as the project was meant to teach you something new; the time for architectural knowledge was yet to come. Nevertheless, many years ago the foundations of almost every application were laid in a similar way. Over time, more and more developers worked on such monolithic codebases, and projects grew to exorbitant sizes.

First microservices

For that reason, at the beginning of the twenty-first century, the idea of microservices started to emerge. It is an architectural style in which the application consists of many services, each with its own narrow, well-defined purpose. Usually, each service operates within a single business domain.

As a result, larger applications started to resemble a collection of modules, and development departments split into domain or product teams. This had a positive impact on the ease of software development and on the efficiency of individual teams, whose work became almost completely independent of one another. Developers could finally focus on their part of the business without worrying about the functionality of the whole.

Hard-rock monoliths

Unfortunately, despite the many advantages of such architecture, many large organizations still carry the burden of monolithic structures in one way or another. Usually these are the oldest and most important parts of the system. Once such foundational elements are in place, they rarely need modification; all they require is periodic maintenance, such as updating the software versions in use.

Of course, most of the products we work on at HL Tech are new functionalities. However, from time to time, certain teams encounter code written ten or fifteen years ago. In such cases, our job is to carve off part of the monolith and safely transfer it (rewrite it) into a separate module. It is a bit like open surgery on a living organism, which is why, before we set about rewriting such code, we start with an in-depth analysis of the product. And here comes the bonus: through contact with such old code we get a chance to trace many historical decisions in the IT world, the good ones but also those that, in hindsight, seem misguided. It is always an instructive experience, as observations like that usually cannot be made on projects developed from scratch.

Frontend granularity

At HL Tech we also apply this divide-into-smaller-modules approach in my specialization: frontend development. Just think about it: the end users of a system can also be divided into several groups, and each of them will use a different part of it. Doesn't that sound like a good way to split the whole platform into modules? This brings us to micro-frontend architecture. About four months ago, Łukasz Fiszer and I had a chance to present it in more detail at Dev.js Summit 2022.

No doubt micro-frontend architecture greatly improves the everyday work of developers. For clients and their needs, however, it is invisible on its own: it does not change the application's accessibility or overall performance. Luckily, following news from the developer world, I can honestly say that emerging technologies address those needs as well. Let's have a look at some of them.

Download only what the user currently needs

In general, modern frontend applications use an approach called Client Side Rendering (CSR). It means that when a user enters a specific URL, the browser first downloads the whole JavaScript code and only then renders the page. Unfortunately, the production builds of most apps written in React are just a couple of large JavaScript files. This forces the client to download all of the code, including pages they may never visit. As a result, unnecessary network traffic is generated, and the extra computation only delays displaying the view. One of the simpler and more popular improvements to this process is an approach called Lazy Loading. It rests on the principle that when someone opens the website, the browser downloads only what is needed to display it: the framework and the code of the current view. This way, it is possible to speed up an application's start even by several hundred milliseconds.
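The core idea can be sketched without any framework. Here is a minimal, illustrative helper (all names are made up) that wraps a view's loader so the expensive download runs only on first use, the way a dynamic `import()` behind a lazy route would:

```javascript
// Wrap a module loader so it runs at most once; later calls reuse the result.
function lazyOnce(loader) {
  let cached; // holds the loaded module after the first call
  return async function load() {
    if (cached === undefined) {
      cached = await loader(); // e.g. () => import('./views/reports.js')
    }
    return cached;
  };
}

// Simulated module loader standing in for a real dynamic import():
let downloads = 0;
const loadReports = lazyOnce(async () => {
  downloads += 1; // in the browser this would be a network request
  return { render: () => '<section>Reports</section>' };
});
```

In a real React app the same effect is typically achieved with `React.lazy(() => import('./Reports'))`; the sketch above only shows the caching principle behind it.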

Rendering run through every case

Once it was confirmed that fewer JavaScript computations speed up an application's start, a question arose: does the display of every single element really have to be the result of a script running? It turns out it does not. The process can be improved with an approach called Island Architecture, in which the view of every page is divided into isolated areas. Take an application's footer as an example. It is independent of the rest of the page and, on top of that, it does not change for months; it looks and works the same way on every subpage. That opens the way to Server Side Rendering (SSR) in its most efficient form: the server generates the static HTML once, keeps the output in memory, and serves it to every client. Thanks to such a design, the client's browser does not have to run time-consuming JavaScript computations; it receives the final code, which can be displayed on the screen right away.
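The server-side part of that idea fits in a few lines. This is a simplified sketch (names and markup are illustrative, not any framework's API): a static island such as the footer is rendered to HTML once, kept in memory, and served as-is afterwards:

```javascript
// In-memory cache of rendered islands, keyed by island name.
const islandCache = new Map();

let footerRenders = 0;
function renderFooter() {
  footerRenders += 1; // the expensive rendering work runs only once
  return '<footer>© HL Tech</footer>';
}

// Return cached HTML if present; otherwise render once and remember it.
function serveIsland(name, render) {
  if (!islandCache.has(name)) {
    islandCache.set(name, render());
  }
  return islandCache.get(name);
}
```

Every client asking for the footer after the first request gets the cached string, with no rendering work at all.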

The above approach can even be extended to whole pages, especially those that do not require continuous communication with the backend. It works great for blog-like areas such as articles, FAQ sections, or product presentation pages. In such cases, the HTML is generated for the whole page, and time-consuming JavaScript computations on the client side are reduced to almost none. This method is called Static Site Generation (SSG).
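A build step for SSG can be pictured like this (routes and markup below are made up for illustration): every static route is rendered once, at build time, into a complete HTML document that clients later receive unchanged:

```javascript
// Map of static routes to their render functions.
const staticPages = {
  '/faq': () => '<h1>FAQ</h1><p>Answers to common questions.</p>',
  '/about': () => '<h1>About us</h1>',
};

// Render every route into a full HTML document, once, at build time.
function buildSite(pages) {
  const output = {};
  for (const [route, render] of Object.entries(pages)) {
    output[route] = `<!doctype html><html><body>${render()}</body></html>`;
  }
  return output; // a real build would write these to .html files on disk
}
```

Real SSG tools add routing, data fetching, and asset handling on top, but the essence is this loop: render once at build time, serve plain files forever after.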

Other ways for better performance

Lazy Loading shortens the time needed to display a page by skipping the download of unnecessary data. However, when a client wants to open a different page, its code still has to be downloaded from the server. Fortunately, current trends allow us to speed that process up so much that it becomes almost unnoticeable.

An interesting, multilayered solution has been introduced by a framework called Remix. For instance, its creators noticed that a couple of hundred milliseconds pass between the moment a user moves the cursor over a button and the moment they click it. If that button is a link to another page, we can start downloading its code already on hover rather than, as before, on click.
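The mechanism boils down to a deduplicated fire-and-forget fetch. Below is a simplified sketch of the idea, not Remix's actual implementation (the helper name and fetcher are hypothetical):

```javascript
// URLs whose code has already been requested.
const alreadyPrefetched = new Set();

// Start fetching a linked page's code; fetch each URL at most once.
function prefetchOnHover(url, fetcher) {
  if (alreadyPrefetched.has(url)) return false; // already in flight or done
  alreadyPrefetched.add(url);
  fetcher(url); // fire-and-forget; the browser caches the response
  return true;
}

// In the browser this would be wired to a link, for example:
// link.addEventListener('mouseenter', () => prefetchOnHover(link.href, fetch));
```

In Remix itself the same behavior is exposed declaratively, via the `prefetch` prop on its `<Link>` component, rather than hand-written listeners.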

The framework also gives easy control over advanced caching. It allows static content (such as images, CSS, and HTML) to be stored at Content Delivery Network (CDN) access points. With providers that have extensive networks, access points can be located in several places across a country. This means that once one client opens a specific web page, every other client in their area can access the same content within a dozen or so milliseconds.
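In practice, this control comes down to the `Cache-Control` headers a response carries, which tell CDN edge nodes how long they may keep a copy. A minimal sketch of such a policy (the values below are illustrative choices, not Remix defaults):

```javascript
// Pick a Cache-Control header based on what is being served.
function cacheControlFor(path) {
  // Fingerprinted static assets never change, so edges may keep them a year.
  if (/\.(css|js|png|svg|woff2)$/.test(path)) {
    return 'public, max-age=31536000, immutable';
  }
  // HTML carrying live data: always revalidate with the origin server.
  return 'no-cache';
}
```

The key design point is the split: long-lived, immutable caching for static assets, and revalidation for anything that may change between requests.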

If, however, we need to display “live” data requiring constant communication with the server, Remix falls back on the already mentioned Island Architecture. Each part of a page issues the data query required to render its own area. The queries are initiated immediately and run in parallel, and the downloaded data is displayed independently.
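The parallelism described above can be sketched with plain promises (the island names and loaders here are made up): every island fires its own query right away, and none of them waits for the others:

```javascript
// Run all island loaders in parallel and collect their results by name.
async function loadIslandData(loaders) {
  const entries = Object.entries(loaders).map(async ([island, load]) => {
    return [island, await load()]; // each promise starts immediately
  });
  return Object.fromEntries(await Promise.all(entries));
}

// Fake loaders standing in for real backend queries:
const islandLoaders = {
  balance: async () => ({ amount: 120 }),
  news: async () => ['New savings product launched'],
};
```

In a real UI each island would render as soon as its own promise resolves; gathering everything with `Promise.all` here is only for demonstration.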

All of the above share one goal: to decrease the time between entering a web page and seeing the data it holds. Wherever possible, the client receives pre-generated static content, while functionalities that require server-side calculations are significantly accelerated, so the time spent processing such queries holds back only small parts of the application.

Split up and be efficient

The solutions above show how splitting things into smaller scopes and responsibilities improves overall efficiency. Interestingly, the rule seems to hold everywhere.

At the organizational level – when I work within a single domain, I do not have to know and understand every business rule of the company. When starting a new project, I always try to divide its production process into stages, which in turn are split into smaller tasks. Thanks to this method I can easily understand every aspect of them, reliably estimate the time required to complete them, and fit the work into regular sprints.

At the specialist level – as a frontend developer I do not need to know all the backend, DevOps, security, or database management details. I can focus on high-quality specialization within my technological field.

At the developer level – while writing small applications, I can quickly get a grip on their code. Narrow responsibility and a modest level of functional complexity have a positive impact on both efficiency and security. That way I can deliver new solutions and content to our clients in less time.
