Modular Monoliths using API Gateway, BFF pattern and Monorepo

In this article we will talk a bit about the problems we had with our legacy architecture and what we were trying to solve. Then we will cover how we approached those problems, what we proposed as a solution, and the key goals we wanted to achieve. We had to distil those key goals out of the hundreds of goals and ideas we collected as we constantly brainstormed and added to our “wish list”.

We will touch a bit on microservices and monolithic architecture, and then jump into a discussion of what a Monorepo actually is and what it offers as an architectural solution.

We will then talk about the BFF (Back-end for Front-end) pattern, and throughout I will share our experience implementing these architectures. Lastly, we will summarise what we have achieved from our implementation so far.

The idea behind the conference talk and this article is to inspire all or some of you to take a more pragmatic approach to problem solving, as software engineers or in any role in the industry. Try to understand your problems better, so that you can come up with a solution that really works for you and your problem, instead of using the latest technology that might solve it but adds many other complexities and problems because it’s not the right fit for your team size, resources, time or architecture.

And the most important message of this article: never leave anything to “Magic”.

So, let’s jump into the journey. We will first talk a bit about what we had, or as we call it now, “The Old World”.

Our “Old World” consists of a full-stack monolithic application in one solution using WebMVC + Razor + AngularJS. Within the WebMVC project we have multiple applications/domains, and each one has an AngularJS front-end.

We also have another full-stack monolithic solution using ASP.NET WebForms + JS + jQuery/Ajax, and within this project we also have a few applications/domains.

And at the end we have an ASP.NET Web Forms API application that serves a native iOS mobile application.

You can see visually how this looked and how tightly coupled everything is. If you want to make a code change in one place, you worry about everywhere else it might have an impact, so you have to get approval from the whole team even for the slightest change to the codebase, to make sure it won’t bring down the tower :D

That was the main factor of concern, but not the only one: the old code needed a complete re-write for many different reasons.

The codebase was almost untestable in the way it was statically written, and logic was hardcoded and duplicated in many places. The front-end was not reactive and relied on page refreshes, and the thing that frustrated everyone most when coding daily on these monoliths was the time it took to re-build after even a small change.

Building the projects individually, it takes around 1.8 min to build our Web Forms web project, our admin project takes 2.1 min, MVC takes 20 sec, and the API is the fastest… A full deployable build (running unit tests, all projects together, around 10-12 apps) takes a full 10-15 min.

 So, it was time for a change.

We considered going microservices, and although this offered the best outcome for scalability, our current infrastructure was never going to support it fully, and our biggest challenge was our monster of a database. We have a single database shared across all domains/applications, and sharding/splitting it into smaller data sets was impossible, as everything was very tightly coupled and tables, sprocs and functions referenced each other heavily. We had some domain schema separation in our “newer Web MVC” projects, but that wasn’t enough. Even our projected size for the next few years was never going to cope with the complexities that come with this architecture, so the more we thought about the pattern, the more problems and complexities arose, and we figured it was not the right approach for us.

Even though the benefits (strong module boundaries, single responsibility per service, technology diversity, full ownership per team) sounded exciting, we still had to consider the high operational complexity: distribution policies, code consistency, how we version smaller API services, the need for an automated deployment process, a shared resilient message broker for async communication between services, splitting ownership per domain, and so on. This approach looked more and more unachievable for our team and what we were facing.

The goal was to realistically move and de-couple services, and de-couple the front-end and back-end, while still keeping the deployment process similar. The key word here is Realistically.

We needed this because our team was, and still is, very small, and we can’t handle a full-scale microservice architecture: it comes with a lot of complexity, baggage, maintenance and deployment processes that we simply can’t take on.

So, to achieve the plan, we set ourselves a goal to slowly migrate the codebase, service by service, controller by controller, from our monolith to a different, modular architecture, whilst still allowing the old and new worlds to work simultaneously.

Here is how we envisioned to slice and dice or modularise our back-end.

We’ve created 4 key projects that are the centre/skeleton for all our applications.

  1. A Monorepo workspace that holds all our client applications.
  2. A BFF project in .NET 6 that provides routing for our front end, provides API Gateway URIs for our future domain-specific APIs, and serves as the bootstrapping core BFF for the monorepo (handling SPA integration in .NET 6).
  3. An API project that holds all our REST endpoints and domain controllers. (The plan is to split these up as we grow and route towards them through our API Gateway.)
  4. And finally, a Shared Class Library in .NET Standard 2.0 that contains our shared data access and entities, so that it can communicate with both .NET Framework 3-4.7 and .NET 6 (as we are doing a progressive re-write).

The database remains the same: multi-tenanted, multiple schemas per application where applicable, and an instance for each client together with an instance of our applications.

This is how it looks visually, and you can immediately see how it is now more separated and modular compared to what we had previously. The client apps are wrapped in a monorepo, which allows them to be built in isolation and worked on in isolation; they can even use different tech, but all live in one repository.

The BFF and API Gateway project is central, as it provides bootstrapping for our front-end apps, as well as routing and data sharing. It also handles authentication (acting as our OAuth2 authorisation server), which allows our client apps from the new world to talk to the API, and now handles authentication for our “Old World” as well.

The REST API project is literally that: it has a single responsibility to expose internal and public API endpoints that serve our client apps. We will have multiple APIs as we slowly re-write our domains into this modular architecture.

And the apps are in separate projects, which all share a .NET Standard 2.0 library that holds our shared data entities and data contexts.

Lastly as we mentioned previously, we continue to use a single database for all our domains.

We’ve used OpenAPI documentation when designing this API project, so that it is properly versioned, all endpoints are RESTful, and documentation is built through Swagger tooling. It serves only to provide communication between the client and our back-end servers, as an API should.

Excellent. Now we also needed to decide what to do with our front-end, as our old tech stack was mostly based around AngularJS, or Angular 1.x as many of you know it. We prototyped a bit in React, Blazor and (at the time) Angular 8 to decide the best way forward, and we concluded that for our team the best solution was to re-write the front-end in Angular 8+, and that’s what we did.

We also decided to take advantage of the Webpack module bundler to control our bundling processes, and to introduce a Monorepo approach, as we were still aiming for a single repo and a single deployment process for all our apps. The general aim was to decouple everything internally and treat each app as a separate project, which leaves room for further separation of projects if we want to scale them out in the future.

So, as I said, we wanted to keep everything in a single deployment process, and you might ask how we ended up hosting our Angular applications via our Core project. Well, remember how at the beginning I said we don’t want to use any magic, and that we aim to understand and write our own code to solve our problems so that we are in full control of how our processes operate?

Well, that’s what we did, kind of. We first tried to host our Angular applications simply by using the existing SPA extension from Microsoft. That way we could configure each application’s path, output folder etc. in the start-up class of our Web API project, and everything would work as it should.

As you can see visually, when the .NET 6 application starts, a SPA Dev Server is started too. It acts as middleware that checks whether the SPA application is running at the configured URL, and starts the dev server if it isn’t. Requests are then handled by the SPA dev server and forwarded to the back-end through a proxy from the SPA dev server to the ASP.NET Core host.

This feels clunky.

The biggest problem we saw was that we were not controlling the deployment process. The SPA extension acted as a proxy using the Angular dev server instead of IIS, which we intended to use in production. Control over bundling, minification and loaders is minimal, and our dev and prod servers would be slightly different.

On top of that, Microsoft had a fun time releasing new versions of the SPA integration extensions, repeatedly contradicting themselves about supporting and upgrading this feature, and about whether it would get LTS support.

This is something we didn’t feel comfortable including in our production codebase, and we knew we had to write our own way of bootstrapping/starting our Angular applications without relying on “Magic”.

So, we decided to host our apps through the IIS server using our BFF, and this is how we approached it.

As we were going with the idea that our front-end would have multiple applications, our BFF Core project provides a controller for each application in our front-end layer.

Each controller is then decorated with the appropriate authorisation attribute, as well as a routing attribute that handles access to the app and URL routing via the built-in .NET router.

This way, each controller has a view (dynamically built by Webpack from the front end, via a build command run on startup) that injects the Angular app it needs to serve.

Per-app routing is handled by extending the original routing functionality in .NET 6 to recognise which controller serves an Angular app and route to its default page, so that from there Angular routing can take over.

This also allowed us to extend our BFF Core application further to serve multiple types of applications (Razor, Angular, Blazor, etc.) if we wanted to.

We’ve configured Webpack to use an HTML plugin that outputs a CSHTML Razor view and injects it directly into the folder where our controller expects a view to be.

Through the Webpack configuration we also set up our minification and bundling processes for both dev and prod environments, bringing them very close together so that development and production don’t differ much from one another.

We were also in full control of how our assets were loaded and served, using file loaders and custom rules in the Webpack config.

When our BFF project starts, it runs a build command on our Nx monorepo, which chains a command to build all our individual Angular applications through Webpack. Webpack looks at a generic template CSHTML file we provide, builds and bundles each Angular application, and spits out a CSHTML view per application, placed in our BFF Views folder where the controllers are configured to look. This way our back-end serves our Angular apps directly through IIS, without even knowing it is serving external Angular applications.
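The Webpack side of this can be sketched as follows. This is a minimal, illustrative configuration, not our exact setup: the entry point, template path and view location are hypothetical, and it assumes the standard html-webpack-plugin, which can render a template file (including a CSHTML one) and emit the result wherever you point it:

```js
// webpack.config.js (sketch; all paths and names here are illustrative)
const path = require('path');
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  mode: 'production',
  entry: './apps/admin/src/main.ts',
  output: {
    path: path.resolve(__dirname, 'dist/apps/admin'),
    filename: '[name].[contenthash].js',
  },
  plugins: [
    new HtmlWebpackPlugin({
      // Generic CSHTML template shared by all apps; bundles are injected into it.
      template: 'tools/templates/app.cshtml',
      // Emit the generated view into the BFF's Views folder (path is relative
      // to output.path), where the matching controller expects to find it.
      filename: '../../Bff/Views/Admin/Index.cshtml',
      inject: 'body',
    }),
  ],
};
```

With a config like this per app, the BFF’s startup build command only has to trigger the Webpack builds; the controllers then pick up the emitted views with no further wiring.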

Ok, so we’ve explained how we modularised our back-end and how we wired everything up, but you might ask: what about the actual front-end?

Our thinking process for creating our new front-end stack was similar to what we discussed for the back-end: we had to think again about our current resources, team size and what we wanted to achieve long term.

  • We knew we needed to have fully modular multi project architecture
  • We knew we needed a way to manage and version all our projects under 1 roof
  • We knew we needed a way to deploy all projects in one go together with our BFF
  • We needed our codebase to be split into apps and libraries for code reusability

We again considered micro-frontends. Micro-frontends are pretty much the new trend, and there is a big hype around them, because it is a cool technology with its own advantages, like independent deployments and the use of different technologies per micro-frontend.

But the disadvantages are often not mentioned, and the most concerning one is the increased complexity, to the point that this architecture should not be considered for many applications and smaller team sizes.

The reasoning behind choosing a micro-frontend architecture is usually not a technical decision but an organisational one, made by large companies with many front-end teams that want to work independently.

That’s why most people stick with the usual monolith, where all modules and features live in one big application separated into domains/features. Although it is portrayed as old-school, it is still a very valid architecture for most applications, because the complexity is very low, deployment and bundling are straightforward, and therefore the speed-to-market is excellent.

The main disadvantage is that it does not enforce any boundaries and can lead to unstable software in the long term, which is hard to maintain and extend. Scaling apps like this also becomes a problem once they reach a certain size, and if you expand to multiple front-end teams, collaborating on the same codebase becomes a nightmare.

So, here comes the Mono-repo pattern to the rescue.

Mono-repos are a hot topic for discussion these days. There are plenty of articles about why you should or should not use a monorepo, so I will try to explain why and how we chose this pattern for our front-end tech stack.

A monorepo is an architectural concept that allows managing isolated code parts inside one repository (so you get the benefits of micro-frontends within a single repo).

You might say that storing multiple projects in a single repo makes it a monolith, right? Well, not really: in a monorepo with tools like Nx, each application is in fact an independent project and can be operated in isolation when wanted or necessary.

As you can see, the most common approach in the industry right now is to create one monolithic application and serve multiple domains through that one SPA. This way you have one big app, a shared codebase and a single deployment, but scaling becomes a problem as it grows to enterprise size.

The multi-app approach, or single app per repo, is the other popular option when you want to separate your apps completely and treat each domain as its own application. This way the apps are decoupled, each team can own an application and choose its own tech stack, etc. The complexity comes in the orchestration: when multiple apps form one big project, deployments need to be automated, versioning and control over projects need to be handled, cross-app communication protocols need to be established, and so on.

So lastly, we look at the monorepo, which gives us the benefits of both without much of the complexity of the second approach. You can have multiple applications that are independent from one another, introduce shared reusable libraries that all apps can use, and easily introduce communication patterns between the apps, as they share the same workspace. They can be versioned and deployed either individually or all together, as the monorepo gives us a single project config file that handles all apps and can be configured in one central place.

To help us on this path, we chose Nx as our monorepo tooling.

So, let’s talk about what NX.Dev is.

Nx was created by the Angular team at Google, and the core members then started a company (Nrwl) providing the Nx toolchain, consulting and education.

Nx is a TypeScript-based monorepo tool, built on top of the Angular DevKit (CLI and Schematics). It provides a workspace, a CLI, local and cloud-based computation caching, and great language-level IDE support.

Nx manages all of your npm dependencies in a single root package.json, no matter how many projects/applications you have inside the monorepo. This decision is driven by Google’s single version policy.
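As a small illustration of that single-version policy (the package names are real, but the version numbers and workspace name are only illustrative), every app and library in the workspace resolves its dependencies from one root package.json:

```json
{
  "name": "our-workspace",
  "private": true,
  "dependencies": {
    "@angular/core": "~14.2.0",
    "rxjs": "~7.5.0"
  },
  "devDependencies": {
    "@nrwl/workspace": "~14.8.0",
    "webpack": "~5.74.0"
  }
}
```

If two apps needed different versions of @angular/core, this model forces the conflict into the open instead of letting the apps drift apart silently.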

This makes version control very easy once you scale your monorepo to have multiple complex and enterprise projects.

This alone was a massive win for us, but let’s also look at what else Nx offers.

Nx can also show dependency graphs of your monorepo. The dep-graph command shows the dependency graph of a workspace, so you can visualise which libraries are used by an app, or which apps depend on a library.

Nx uses a combination of projects and implicit dependencies in nx.json (you might know this as workspace.json) to identify dependencies; this is also used by a companion CLI command called affected.

The goal of the affected command is to only build, lint, test and deploy apps affected by changes in the repository. Nx simply uses git diff to identify the changes, then finds the affected apps by looking at the dep-graph.

Here is a simple example of how the dep-graph works. It shows, for all three applications, which libraries are referenced, and the affected command then knows which components and libraries to build depending on which ones are affected by the targeted change.
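The idea behind affected can be sketched in a few lines. Given a dependency graph and the set of projects touched by a git diff, everything that transitively depends on a touched project is affected. This is an illustrative simplification of the concept (with made-up project names), not Nx’s actual implementation:

```javascript
// Sketch: compute the set of affected projects from a dependency graph.
// `deps` maps each project to the projects it depends on.
function affectedProjects(deps, changed) {
  // Invert the graph: for each project, who depends on it?
  const dependents = {};
  for (const [proj, usedLibs] of Object.entries(deps)) {
    for (const lib of usedLibs) {
      (dependents[lib] = dependents[lib] || []).push(proj);
    }
  }
  // Walk "upwards" from every changed project.
  const affected = new Set(changed);
  const queue = [...changed];
  while (queue.length) {
    const current = queue.shift();
    for (const dep of dependents[current] || []) {
      if (!affected.has(dep)) {
        affected.add(dep);
        queue.push(dep);
      }
    }
  }
  return affected;
}

// Three apps sharing two libraries, as in the example above.
const deps = {
  'app-admin': ['ui-lib', 'data-lib'],
  'app-portal': ['ui-lib'],
  'app-reports': ['data-lib'],
  'ui-lib': [],
  'data-lib': [],
};

// A change in data-lib affects only the apps that use it, not app-portal.
const result = affectedProjects(deps, ['data-lib']);
```

Running this, `result` contains data-lib, app-admin and app-reports, so only those two apps need rebuilding; app-portal is untouched.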

But that’s not all. Nx also supports a powerful concept called computation caching: a form of caching that avoids re-building, re-linting, re-testing and re-deploying anything that has already been done once.

This concept is similar to “build once, run anywhere”, often mentioned in the Docker community. Nx computation caching focuses on minimising the development loop in a monorepo environment, where the codebase can grow significantly in size and you want to save every second that can affect the team’s productivity.

Nx computation caching happens naturally in the local environment, and you can feel the difference every time you run build, lint or test. Nx takes it a step further and saves computation across development machines and the CI environment by caching the outputs of your computations on Nx Cloud.

Here we can see how this looks in a real-world application, when we perform a normal rebuild on a change versus an incremental build of only the affected files using the Nx affected command.

A full re-build of our apps (three in this instance) generally takes around 62-68 seconds in our scenario. With nx affected on a file change, the process takes 0.1 to 0.3 seconds, a massive gain in development time.

So, to summarise Nx at a high level.

We have less configuration/boilerplate: a monorepo removes a lot of the redundant configuration and scripts found in a multi-repo strategy, as all configuration and scripts are available across the apps, and you can easily parameterise them because the code structure is predictable.

We promote modular architecture: thanks to generators, you can effortlessly create a shared library that is instantly available to all apps.

Shared components can evolve naturally: you can first create a component in an app and easily promote it to a library, including models, interfaces, validations and domain-specific business logic.

Nx works well for any technology

So, let’s close the circle and see how our architecture looks now compared to what we had previously. We started with monolithic Web MVC and Web Forms projects where everything was tightly coupled and unscalable.

And we ended up here:

  • The REST API project holds the REST APIs
  • The Core BFF project holds the routing, authentication and controllers for our Angular apps in the monorepo
  • The front-end Nx monorepo holds all our Angular projects in a single modular repo/workspace following DDD
  • Webpack is configured to create a CSHTML view that gets loaded into the BFF project and is used by the controllers
  • .NET 6 change detection sits on top, so any change to the Angular code generates a view on the fly, and a change in the view refreshes the page straight away

So what was achieved:

  • A unified modularised project that can be deployed as a single solution
  • Modular Backend that can be split up into independent deployable services
  • Modular Frontend that can be split up into independent deployable services
  • No Magic!

I hope this article has inspired you to take a more pragmatic and innovative approach to your codebase, and to always understand and solve your problems in the most efficient and reliable way suited to YOUR needs as a team/company, not following trends for the sake of following trends.

If you want to read more about these topics, I’ve left links to my personal website and blog here (if you want to explore more about API gateways, BFF patterns of work, etc.). You can also find a lot of information on the NX.Dev website, and for architecture you can learn a lot from Microsoft’s Azure Architecture Center.