Project organization is like a computer network: when done right, it gets out of the way and nobody notices it. But when done wrong, everyone suffers. If you’ve had the unfortunate experience of working on a poorly organized project, you’ll understand what I’m talking about. When you feel like you’re spending more time fighting with the environment than getting work done, it’s demoralizing and counterproductive.
In this post, I’ll focus on the low-level aspects of organization: where should the code go? How should files and folders be broken down? I intend to go into more detail in follow-up posts, but I felt this was a good place to start because it influences everything else, such as the developer environment, testing, and deployment.
The project I’ll be discussing is a large React application with a healthy dose of legacy Angular code. But this post will directly apply to any project that can use the Yarn package manager and might also prove useful if you use a different technology altogether.
When the word came that Flexera’s suite of SaaS applications needed to be combined into the single, unified front-end that would become Flexera One, I knew it was just the beginning. The already large project would keep growing over time. It was a good opportunity to lay a foundation that would support the team for years to come.
I had the following goals in mind:

- Fast onboarding of new team members
- Low-friction collaboration across the team
- Simple, reliable releases
- Keeping technical debt in check
Before detailing the solution that achieved those goals for us, let’s take a look at some other options and why they weren’t chosen.
The most straightforward way to go would be to use a tool like create-react-app and start one big project with a single node package. This was a non-starter for us because we already had several projects using different frameworks (AngularJS, Angular, React), and there was no upside to combining them this way.
I also wanted to ensure that our code was modular with clear boundaries so the more generic parts could be treated as libraries and potentially used outside of the main application. Putting everything in one package would have prevented this.
On the other end of the spectrum, we could break everything into many small repositories, one per package, and publish everything to a registry such as npm. The application would reference all those libraries using a version range in its dependencies.
This has the advantage of clearly separating the code into isolated modules that can be tested and updated independently. At first sight, this is an attractive option but the flaws to this approach become apparent very quickly.
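To illustrate, with this multi-repository approach the application’s package.json would pin each internal library at a published version (the package names here are hypothetical):

```json
{
  "name": "app",
  "dependencies": {
    "@acme/ui-components": "^2.4.0",
    "@acme/api-client": "^1.7.2",
    "@acme/utils": "^3.0.1"
  }
}
```

Every change to one of those libraries requires publishing a new version to the registry before the application can pick it up.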
First, the more you break things into small modules, the more modules you have to work on at the same time. This is especially true at the beginning of a project, when everything still needs to be built. However, there is no efficient way to work on several interdependent repositories and npm modules. Tools like npm link and yarn link can help in local development, but they quickly cause headaches because dependencies remain local to the linked packages. This is fine for Node projects, but not so much when you use a bundler for the front-end.
Then releasing becomes a nightmare: you need to cut and release a new version of each package, following your dependency tree from leaves to trunk. This is a huge overhead if you’re working alone; with a team, it’s simply unmaintainable.
This has been a bit of a buzzword in recent years, although as usual in the front-end world, there is no explicit standard. In general, a micro front-end architecture aims to facilitate the organization of very large projects by breaking them down along team lines. One team works on the core infrastructure while other teams each work on what are essentially separate applications, with their own release schedule and sometimes different tech stacks. The full application is composed at runtime from those various parts.
While some companies have been successful with this approach, I find that it has significant drawbacks that are worth considering before you commit to a micro front-end solution.
In our case, we have a single UI team responsible for everyone’s front-end, so breaking things into micro front-ends would have added large and unnecessary overhead.
Now that we’ve examined what didn’t work, let’s take a look at what did work for us: the monorepo.
If you haven’t heard of a monorepo before, the idea is to have all of the code in a single repository. So this takes us back to the first option of a monolithic front-end right? Well, not exactly, because we still break things down into packages to enforce a modular architecture. So it’s a mix of Options 1 and 2 then? Again, not exactly, because when using a single repository, we can start taking advantage of Yarn’s workspaces functionality. And that’s really the key to success here.
Workspaces solve the problem of local package dependencies by making all local and most 3rd party packages available through a node_modules folder at the root of your main package. This makes it easy for a bundler to reference all files used to build a project while avoiding duplication.
There is plenty of information out there on how to use Yarn workspaces, so I won’t go too deep here. Our setup has a package.json file at the repo root with the following configuration for Yarn:
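A minimal version of such a root package.json might look like this (the package name is illustrative; `"private": true` is required by Yarn for a workspace root):

```json
{
  "name": "frontend-monorepo",
  "private": true,
  "workspaces": ["packages/*/*"]
}
```

The `packages/*/*` glob is what enables the two-level folder layout described next.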
This implements a two-level folder structure in the form of packages/<category>/<package>, which allows us to group multiple packages by function.
So how does this setup accomplish the goals I outlined at the beginning of this post?
Having a single repository greatly helps here. When a new team member joins, they clone one single repository and are ready to work on any part of our front-end. The organization into packages allows them to more easily find what they’re looking for.
With the whole team pushing code into one single repository, one might expect a high frequency of merge conflicts. In practice, the breakdown into packages helps mitigate this because developers tend to own and update different areas of the code.
On the other hand, each change, no matter how large, only results in one pull request, so there is very low friction. Furthermore, anyone on the team can review the changes because each package follows a standard pattern. It’s easy to spot changes that could affect other parts of the app.
Integrating the monorepo with a CI solution makes releases really easy: whenever we’re ready to push a new version, we simply merge a pull request from our development branch into our release branch. The new commits to the release branch trigger a build job which automatically deploys all of our front-end to our CDN.
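As a sketch of that trigger, assuming a GitHub Actions-style CI (the branch name and the build/deploy scripts are hypothetical), the release job looks something like:

```yaml
# Runs on every push to the release branch, i.e. whenever the
# development -> release pull request is merged.
name: release
on:
  push:
    branches: [release]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: yarn install --frozen-lockfile
      - run: yarn build          # builds every workspace package
      - run: yarn deploy:cdn     # hypothetical script that uploads the artifacts to the CDN
```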
CI is also used to run our test suites on our feature and development branches, so before a release we have a high level of confidence that everything will be good.
Technical debt is a real problem and the risk increases with the number of developers on a team. This is where yarn workspaces become really useful: there is virtually no overhead to creating a large number of packages in a monorepo. What this means is you can really push your team to break things up into a highly modular design where each module follows the single responsibility principle.
It still requires discipline from the team, but by using many small and single purpose packages, you improve code navigation and reusability. Those small packages are also easier and faster to test in isolation.
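As an illustration of such a small, single-purpose package (names hypothetical), a workspace package’s package.json can depend on a sibling package directly:

```json
{
  "name": "@acme/date-format",
  "version": "1.0.0",
  "main": "src/index.js",
  "dependencies": {
    "@acme/locale": "1.0.0"
  }
}
```

Because the version matches, Yarn resolves @acme/locale to the local workspace copy, so the two packages can be developed and tested together without ever being published.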
Keep in mind that things evolve and you’re unlikely to have the perfect setup from the start. Sometimes you are constrained by an existing code base, sometimes priorities change. So it’s important to keep an eye out for points of friction and tackle them as they appear.
For example, an issue that remains in our setup is that I’ve not found any decent tool to manage 3rd party packages across our monorepo, so we’ll be writing our own in the future.
But overall it’s proven to be a great move and in future posts I will go over some great productivity enhancements we’ve been able to build on top of our monorepo architecture.