Microapps – the way to go for an easier workflow and faster delivery of new functionality
In the beginning there was a monolith
I wonder how many of you remember your first developer training projects. Their aim was to teach the basics of diverse topics – for instance, connecting to a database, handling it safely on the backend, or designing a neat UI/UX. Even if you went a step further and tried to apply layer separation in your application, I am pretty sure it was all stored in a single project repository.
At that time you probably had no idea that you were witnessing the birth of another monolith – a structure of tightly coupled elements making up the whole application. Of course, that was not a problem at all, as the project was meant to teach you something new; the time for architectural knowledge was yet to come. Nevertheless, many years ago, the foundations of almost every application were laid in a similar way. Over time, more and more developers were involved in writing monolithic code, and projects grew to exorbitant sizes.
For that reason, at the beginning of the 21st century, the idea of microservices started to emerge. It is an architectural concept in which the application consists of many services, each with its own narrow purpose. Usually, each of them operates within a single business domain.
As a result, larger applications started to resemble a collection of modules, and development departments a collection of domain or product teams. This had a positive impact on the ease of software development and on the efficiency of individual teams: their work became almost completely decoupled, and developers could finally focus on their part of the business without worrying about the functionality of the whole.
Unfortunately, despite the many advantages of such an architecture, many large organizations still carry the burden of monolithic structures in one way or another. Usually, these are the oldest and most important parts of the system. Once such foundational elements are in place, they rarely need to be modified; all they require is periodic maintenance, such as updating the software versions in use.
Of course, most of the products we work on at HL Tech are new functionalities. From time to time, however, some teams encounter code written ten or fifteen years ago. In such cases, our job is to carve off part of the monolith and safely transfer (rewrite) it into a separate module. It is a bit like open surgery on a living organism, which is why, before we set about rewriting such code, we pay particular attention to an in-depth analysis of the product. And here comes the bonus: through contact with such old code we get a chance to trace many historical decisions in the IT world – the good ones, but also those that, in hindsight, seem misguided. It is always an informative experience for us, as observations like these cannot usually be made on projects developed from scratch.
At HL Tech we also apply this divide-into-smaller-modules approach in my specialization – frontend development. Just think about it: the end users of a system can also be divided into several groups, and each of them will use a different part of it. Doesn't that sound like a good way to split the whole platform into modules? This brings us to micro-frontend architecture, which Łukasz Fiszer and I had a chance to present in more detail about four months ago at Dev.js Summit 2022.
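One simple way to realize such a split is to route each top-level area of the platform to an independently built and deployed micro-frontend. The sketch below is a minimal illustration of the idea; the area names and bundle URLs are made up, not a real HL Tech setup:

```typescript
// A shell application that only knows which remote bundle owns which route.
// Each bundle is built and deployed by a separate team. (Illustrative only.)
const microFrontends: Record<string, string> = {
  "/invest": "https://cdn.example.com/invest-app/remoteEntry.js",
  "/pensions": "https://cdn.example.com/pensions-app/remoteEntry.js",
  "/advice": "https://cdn.example.com/advice-app/remoteEntry.js",
};

function resolveMicroFrontend(pathname: string): string | undefined {
  // Match the longest-known prefix; the shell never needs to understand
  // what happens inside the micro-frontend it hands control to.
  const prefix = Object.keys(microFrontends).find(
    (p) => pathname === p || pathname.startsWith(p + "/"),
  );
  return prefix ? microFrontends[prefix] : undefined;
}
```

The shell stays tiny: deploying a new version of the pensions area touches only the pensions bundle, not the rest of the platform.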
There is no doubt that micro-frontend architecture greatly improves the everyday work of developers. For clients, however, it is invisible on its own, because it does not by itself improve the application's general accessibility or overall performance. Luckily, following news from the developer world, I can honestly say that emerging technologies address those needs as well. Let's have a look at some of them.
Download only what the user currently needs
Rendering run through every case
Other ways for better performance
Lazy loading shortens the time required to display a page view by skipping the download of data that is not yet needed. However, when a client wants to open a different page, its code still has to be downloaded from the server. Fortunately, current trends allow us to speed that process up to the point where it becomes almost unnoticeable.
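The core mechanism behind lazy loading can be sketched in a few lines: the code for a view is fetched only the first time it is actually needed, then reused. The `lazy` helper below is an illustrative stand-in for what a bundler's dynamic `import()` does for us in a real application:

```typescript
type Loader<T> = () => Promise<T>;

// Wrap a loader so the underlying download happens at most once,
// and only when the module is first requested.
function lazy<T>(load: Loader<T>): Loader<T> {
  let cached: Promise<T> | undefined;
  return () => {
    if (!cached) cached = load();
    return cached;
  };
}

// Usage: pretend this loader is `() => import("./SettingsPage")`.
let downloads = 0;
const loadSettingsPage = lazy(async () => {
  downloads += 1; // counts how many times the "chunk" was fetched
  return { render: () => "<settings page>" };
});
```

Until the user navigates to the settings view, its code never leaves the server; once fetched, repeated visits cost nothing extra.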
An interesting, multilayered solution comes with a framework called Remix. For instance, its creators noticed that a couple of hundred milliseconds pass between the moment a user moves the cursor over a button and the moment they click it. If that button is a link to another page, we can start downloading its code already on hover, rather than – as was the case until now – on click.
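Remix exposes this declaratively via its `<Link prefetch="intent">` prop; the imperative sketch below shows the underlying idea, with made-up helper names (`fetchRouteChunk` is not a Remix API):

```typescript
// Cache of in-flight or completed prefetches, keyed by route URL.
const prefetched = new Map<string, Promise<string>>();

// Stand-in for fetching a route's code/data (assumption for illustration).
async function fetchRouteChunk(url: string): Promise<string> {
  return `chunk for ${url}`;
}

function onHover(url: string): void {
  // The few hundred milliseconds between hover and click are enough
  // to get the download started.
  if (!prefetched.has(url)) prefetched.set(url, fetchRouteChunk(url));
}

function onClick(url: string): Promise<string> {
  // By click time the request is usually already in flight, or finished.
  return prefetched.get(url) ?? fetchRouteChunk(url);
}
```

From the user's perspective, the navigation feels instant, because the wait was hidden inside the hover-to-click gap.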
The framework also gives easy control over advanced caching. It allows static content (such as graphics, CSS, and HTML) to be stored at Content Delivery Network (CDN) access points. With providers that have extensive networks, access points can be located in several places across a country, which means that once one client opens a given web page, every other client from their area can access the same content within a dozen or so milliseconds.
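In Remix, this kind of caching is controlled per route through a `headers` export that sets the standard `Cache-Control` directives the CDN honors. A minimal sketch (the concrete max-age values are illustrative, not recommendations; in a real Remix route module this function would be exported):

```typescript
// Per-route cache policy: the browser may keep the response for 5 minutes,
// the CDN edge for an hour, and the edge may serve a stale copy for up to
// a day while it revalidates in the background.
function headers(): Record<string, string> {
  return {
    "Cache-Control":
      "public, max-age=300, s-maxage=3600, stale-while-revalidate=86400",
  };
}
```

The `s-maxage` directive is what lets the shared CDN cache hold content longer than individual browsers do, so nearby clients get edge-cached responses.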
If, however, we need to display "live" data that requires constant communication with the server, Remix leans on the previously mentioned Islands Architecture. Each part of the page issues the data query required to render its own area; the queries start immediately and run in parallel, and the downloaded data is displayed independently of one another.
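The parallel, per-section loading described above can be sketched as follows; the dashboard sections and their fetch functions are hypothetical examples, not real endpoints:

```typescript
// Each "island" on the page has its own query (made-up stand-ins here).
async function fetchAccount() {
  return { owner: "A. Client" };
}
async function fetchQuotes() {
  return [{ symbol: "HLT", price: 42 }];
}
async function fetchNews() {
  return ["markets open higher"];
}

async function loadDashboard() {
  // All three requests are fired at once, not one after another:
  // the total wait equals the slowest single query, not the sum of all.
  const [account, quotes, news] = await Promise.all([
    fetchAccount(),
    fetchQuotes(),
    fetchNews(),
  ]);
  return { account, quotes, news };
}
```

Because each section renders from its own result, a slow news feed delays only the news panel, not the account summary next to it.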
All of the above share one goal: to decrease the time between entering a web page and seeing the data it holds. Wherever possible, the client receives already generated static content, while functionalities that require server-side calculations are significantly accelerated, and the time spent processing such queries holds back only small parts of the entire application.
Split up and be efficient
The solutions above show how splitting things into smaller scopes and responsibilities positively affects overall efficiency. Interestingly, the rule seems to hold everywhere.
At the organizational level – when I work within a single domain, I do not have to know and understand every business rule of the company. When starting a new project, I always try to divide the delivery process into stages, which in turn are split into smaller tasks. Thanks to this, I can easily understand every aspect of them, reliably estimate the time required to complete them, and fit the work into regular sprints.
At the specialist level – as a frontend developer I do not need to know all the backend, DevOps, security, or database management details. I can focus on high-quality specialization within my technological field.
At the developer level – when writing small applications, I can find my way around their code quickly. Narrow responsibility and a modest level of complexity both have a positive impact on efficiency and security. That way, I can deliver new solutions and content to our clients in a shorter time.