The Azure SDK team generates many client libraries across multiple languages, so it should come as no surprise that we constantly think about the most efficient mechanism for storing, writing, and distributing those libraries. In an ongoing series, we’re going to talk about some of the decisions we made and the implications of those decisions.
The Azure SDK is distributed across multiple repositories, split along language/runtime lines: a .NET repository, a Java repository, a Python repository, a JavaScript/TypeScript repository, and so on. Each repository contains numerous libraries that ship on separate cadences but otherwise share a common engineering system. In effect, we are managing multiple mono-repositories (mono-repos), which gives us plenty of experience dealing with the challenges of mono-repos.
Non-monolithic monoliths
One of the key things about the Azure SDK is that each library ships on its own cadence, which means that library version numbers are updated at different times. Let's say we made a major update to our App Configuration library; it wouldn't make sense to also bump the major version number of the Storage library. After all, it's possible that nothing in Storage changed.
We have a mono-repo per language, but we operate the build and release pipelines for each logical grouping of libraries as independently as possible. This generally works well until you run into dependencies between those libraries. For example, the Event Hubs library takes a dependency on the Storage library to support state synchronization in scaled-out event processing scenarios. If Event Hubs depends on Storage, how can we ship them independently? The answer lies in the way that we manage dependencies.
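To make that concrete, here is a simplified sketch of what such a dependency looks like in a .NET project file (the package version is illustrative):

```xml
<!-- Sketch of the Event Hubs processor's project file; version is illustrative -->
<ItemGroup>
  <!-- Depend on a specific, already-published build of the Storage library -->
  <PackageReference Include="Azure.Storage.Blobs" Version="12.4.0" />
</ItemGroup>
```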
Source composition vs. binary composition
We realized pretty early on in this journey that building against the latest version of an internal dependency (e.g., the Storage library that Event Hubs depends on) could silently introduce a dependency on unreleased features. We wouldn't detect the issue until after we shipped, at which point customer code would break.
Within the team, we use the terms source composition and binary composition to differentiate between dependencies taken against a version of a library in source and dependencies taken against a version that has been built into a package and published to a public registry (such as NuGet or npm).
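In .NET terms, for example, the two modes map onto MSBuild's two reference types; a project would use one or the other (the paths and version here are illustrative):

```xml
<ItemGroup>
  <!-- Binary composition: pin an explicit, immutable, published package version -->
  <PackageReference Include="Azure.Storage.Blobs" Version="12.4.0" />

  <!-- Source composition: build against whatever is in the sibling project right now -->
  <ProjectReference Include="..\..\storage\Azure.Storage.Blobs\src\Azure.Storage.Blobs.csproj" />
</ItemGroup>
```

Other ecosystems have equivalents, such as npm's file: dependencies versus pinned registry versions.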
While binary composition does introduce some rigor to dependency management (every dependency you take must have an explicit, immutable version), it starts to create problems for inner-loop productivity, such as when you're working on our core HTTP pipeline logic and the downstream libraries that consume it at the same time. To reduce that friction, we publish nightly builds from each of our build pipelines into a package registry. Those "nightly" packages can then be used as dependencies.
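Consuming a nightly build might look something like this (a sketch; the feed URL is illustrative):

```xml
<!-- NuGet.config: register the nightly feed alongside nuget.org (URL is illustrative) -->
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="azure-sdk-dev" value="https://example.com/azure-sdk/dev/nuget/v3/index.json" />
  </packageSources>
</configuration>
```

A project can then pin a specific nightly build by a date-stamped prerelease version, such as `Version="12.5.0-alpha.20210226.1"`, which keeps the dependency explicit and immutable.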
Even with the nightly package builds, the friction can still be too high. In some scenarios, a particular pull request needs to operate in source composition mode, and that leads to problems downstream when you go to land a pull request that spans libraries built by different pipelines.
Shippable unit and cadence define everything
One of the things our recent experience has taught us is that what you ship, and when you ship it, has an outsized impact on the way you structure your build and release pipelines and on how the tooling those pipelines automate must function.
If you have a mono-repo with one large component, or a set of components that all ship together, then you are generally going to have one set of pipelines, and tooling will tend to operate in aggregate across the codebase.
However, if you have a mono-repo with multiple independent components, then you are going to have a pipeline per component, and your tooling is best designed to scope its operation to just the components shipping through that pipeline. This applies not only to building and testing, but also to things like static analyzers and report generation.
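As a minimal sketch of that scoping with Azure Pipelines (the directory layout and build steps are illustrative, and real pipelines would typically be built from shared templates):

```yaml
# sdk/eventhub/ci.yml — only changes under this component's directory run this pipeline
trigger:
  branches:
    include:
      - main
  paths:
    include:
      - sdk/eventhub/

pr:
  branches:
    include:
      - main
  paths:
    include:
      - sdk/eventhub/

steps:
  # Build and test only the Event Hubs libraries, not the whole repository
  - script: dotnet test sdk/eventhub --configuration Release
    displayName: Build and test Event Hubs libraries
```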
Micro-repo or mono-repo? The pain is the same
Ultimately, we've learned that the mono-repo vs. micro-repo debate comes down to what the tools in your ecosystem actually support, and to preference. The number of repos is almost irrelevant; what matters most is how you ship your components. If you have a mono-repo with many independently shipping components, you are going to end up treating it like a set of micro-repos, with the only upside being the reduced number of actual repositories. The integration pain remains the same.
Micro-repos allow for local variation (if that is even desirable) and can make things like servicing a bit easier to manage. But as before, if the aggregate codebase is moving quickly, you are going to have integration pain.
Considerations for repo structuring
When it comes to picking mono-repo versus micro-repo, you need to consider the following:
- Granularity of shippable unit
- Level of interdependence
- Stability over time
- Servicing & support model
- Toolchain support
If you have a huge lump of code that ships all at the same time, then a mono-repo is going to be a lot less work. If, on the other hand, you have lots of small libraries that ship independently, a mono-repo doesn’t provide you with any substantial benefits. You end up needing to do binary composition and deal with the same kind of integration pain you would have with micro-repos.
Whether you use that as a catalyst to break out of a mono-repo probably depends on the level of interdependence. If you have a lot of independently shippable libraries but comparatively few internal dependencies, then you could more easily adopt a micro-repo model. Sometimes internal dependencies start off painful but lessen over time (for example, if you are developing a core library that later stabilizes).
Finally, the way your team wants to support the libraries matters. On the Azure SDK team, we really want developers to be able to come to one place for their language and log issues with the SDK without us bouncing them around repositories. On the flip side, if you do need to create a servicing release, that can be more challenging in a mono-repo: you effectively share the engineering system across many components, and you have to support all the different iterations of that engineering system in order to ship hotfixes.
There isn't a simple answer here, unfortunately. We've picked the users of our libraries over our own convenience: we'd rather make it easy for you to log issues and find the source code you're looking for than have a simpler engineering system for shipping the SDKs. In short, we take on the complexity so you don't have to.
Want to hear more?
Follow us on Twitter. We'll be covering more best practices in cloud-native development as well as providing updates on our progress in developing the next generation of the Azure SDK.