Unraveling the Plethora of Gaffes in a Microservices Odyssey

Shailendra Bhatt
14 min read · Nov 23, 2019
  1. Introduction
  2. The Microservices Onset Enigma
  3. Microservices Metamorphosis
  4. Socio-Technical Approach to building Microservices
  5. Steer Clear of Concorde Effect Fallacy
  6. Singular Source of State
  7. Concern Based Modularization
  8. Less Entropy and More Negentropy
  9. Organizational Perception Shift
  10. Concluding thoughts

Introduction

Microservices have in the last few years become a hackneyed concept and the modus operandi for building applications that drive digital transformations. Adopting a modern Hydra with many tentacles to replace the legacy one-eyed Cyclops seems to be what most organizations strive for when strategizing modernization, cloud adoption journeys, or Agile and DevOps ways of working.

After spending a few years learning and understanding a platform revamp that moved away from a bulky monolithic commerce platform to a service-based architecture, I have learned myriad ways in which IT organizations bungle the journey and at times come full circle.

There is no direct method to this madness, but there certainly are best practices, lessons from earlier blunders, and possibilities to circumvent hidden traps and pitfalls.

The Onset Enigma

The inception of the microservices journey is typically to combat imponderable conundrums pertaining to “Slow time to market”, “Complex and heavily customized applications”, “Testing nightmares”, “Scalability and stability issues”, “Velocity”, “Technology binding”, “Adopting modern stacks”, “Continuous Integration and Continuous Delivery”, “Cost factors”, “Vendor lock-in licensing models”, and the list goes on.

Microservices approaches differ from organization to organization, and there is no ready-made panacea. Time and again, teams merely replicate these approaches, incurring insurmountable technical debt.

Before embarking on a microservices journey, it’s essential to comprehend what issues one is trying to deal with and where one needs to start.

A lot of initial time has to be spent understanding the “complexities of the existing monolith”, “core issues with the legacy”, “technology forecast”, “business limitations”, “future business requirements”, “organization processes”, “strengths and weaknesses”, “team sizes”, “operational readiness”, “ways of working”, “cultural standpoint”, “integration dependencies”, and several other factors before choosing an approach.

It’s pretty common to see modern IT teams yearn to re-architect existing monolithic applications and move away from the mundane system to a more service-oriented modern stack and technology.

But before even attempting the journey, teams need to evaluate whether it is worth the endeavor. Typical questions teams need to ask themselves are:

  • Maybe the release cycles of the monolith are so infrequent that just leaving it alone is the best solution?
  • Maybe the monolith is of a manageable size and is not expected to grow further?
  • Maybe the organization is not yet ready to begin the journey from an operational, technology, process, or even cultural point of view?
  • Maybe there are way too many features in the monolith that are more crucial from a business and cost standpoint?
  • Maybe the monolith has a short lifespan and is getting replaced?
  • Maybe the organization is not yet agile or has not yet adopted DevOps, which is pivotal for a microservices journey?
  • Maybe breaking the monolith is way too complex, and it is easier to rewrite the code from scratch using new software?
  • Maybe building new features is a higher priority than breaking out new services?

Microservices Metamorphosis

A typical transformation to microservices can be categorized as Brownfield, Greenfield, or Bluefield development.

Understanding these approaches and their pros/cons is very critical in the nascent stage of the microservices journey.

The Brownfield approach is one where the monolith and the new system co-exist for the entire journey. The end customer sees no major impact, as facades redirect traffic to either the new or the legacy platform.
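Such a facade is often implemented as a thin routing layer in front of both platforms. Below is a minimal sketch of the idea, not any particular product: the handler functions and the list of migrated routes are hypothetical stand-ins.

```python
# Minimal facade sketch: requests whose routes have already been carved
# out go to the new platform; everything else still falls through to the
# legacy monolith. Handlers and route names are illustrative only.

MIGRATED_PREFIXES = {"/catalog", "/search"}  # routes already moved to new services

def legacy_handler(path: str) -> str:
    return f"legacy:{path}"      # stand-in for a call into the monolith

def new_service_handler(path: str) -> str:
    return f"new:{path}"         # stand-in for a call into a new microservice

def facade(path: str) -> str:
    """Redirect to the new platform only if the route has been migrated."""
    if any(path.startswith(prefix) for prefix in MIGRATED_PREFIXES):
        return new_service_handler(path)
    return legacy_handler(path)
```

As more routes are carved out of the monolith, only the set of migrated prefixes changes; the end customer never sees the switch.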

This approach is apt when the legacy application is not that complex and it’s easy to create a demarcation between the legacy core and the new services built around it.

If the core domains of the monolith are not tampered with and new services are built around the perimeter of the monolith, quick implementation of new features and enhancements is ensured. The downside, however, is reduced flexibility, as the core domains are hard to modify.

If the organization’s priority is faster time to market and velocity, maybe this is not a suitable approach.

In a Greenfield approach, the whole application is rebuilt from scratch on completely new software, with no dependency on the existing platform. Teams are most fearful and hesitant about this approach, especially with cost factors in mind.

This is the most straightforward choice if the monolith is pretty complex and there is a need for a new strategy or direction for the business. Software development has changed in the last decade, and with the advent of the cloud and innumerable SaaS-based platforms, this may be a pragmatic approach.

The third approach is the Bluefield approach, an amalgamation of Brownfield and Greenfield. It fundamentally means building the new microservices-based system from the ground up while dismantling the existing legacy application. This is the most complicated, and also the most common, of all the approaches that teams end up implementing.

One of the prevalent hurdles in this approach is pressure from business teams, who expect IT organizations to deliver new functionality irrespective of the underlying system being replaced.

In reality, the major proportion of the effort of developing new microservices is spent on understanding, migrating away from, and negotiating integrations with the existing monolith rather than on delivering actual value or functionality.

This is where IT organizations typically come full circle, and a major proportion of them are compelled to go back and adopt the Greenfield approach.

So it’s essential that all stakeholders in the transformation journey are on the same page, planning out every nut and bolt of the journey. Otherwise, organizations will end up running multiple platforms doing the same damn thing.

Socio-Technical Approach

Once the initial approach is selected, the next challenge is defining teams and managing the key challenges of velocity and sizing within them.

A socio-technical framework is used more and more when building complex systems, especially those that deal with the principles of working in teams and agile environments where adaptability is key. Below are a few common questions that teams need to dwell upon and ask themselves:

How big does a microservice have to be? Should it be a micro, macro, or even a mini service?

How to keep context in one’s head without swapping out, referring to documentation, or human intervention?

How to embrace change in the application? How to build software with a faster time to market?

How to bring in a new person and have them developing without any wait time?

How to adapt to changing technologies and train teams?

The concept of “Cognitive Load” is used quite often when sizing microservices. The concept is borrowed from psychology, where cognitive load refers to the information one keeps in one’s head without referring to something else. It’s a universal concept used for learning and to help maintain an optimum load on the working entity. It can be thought of as temporary working memory, like a cache or RAM.

“Producing maintainable software depends on readable information be it code, modules, or documents, and readable information depends on controlling Cognitive Load.”

There are three types of Cognitive Load defined:

  • Intrinsic load relates to the complexity of the information a team is paying attention to and processing. It covers the skills and demands of the specific task or technology one is grasping and the difficulty of learning the task itself. In building microservices, this can be the initial load required to create an end-to-end service.
  • Extraneous load is completely unrelated to the learning task. It is all the negative distractions and additional effort imposed on the team or system.
  • Germane load is the mental processing effort that goes directly towards building schemas and long-term memory. In building software, it is the accumulated declarative knowledge of how a piece plays its part in a more complicated system.

Intrinsic and germane load are considered “good”, and extraneous load is “bad”.

Teams should always try to identify and offload the extraneous loads affecting them.

The extraneous loads identified in teams include environmental issues, complicated subsystems, unnecessarily complex business logic, complex code, misfit teams, team meetings, etc.

Many a time, it could also be as simple as time spent on unnecessary documentation that no one reads, over-complicated code, or hours spent contemplating method and object names.

Too much Germane load makes learning a new task harder and slower.

Microservices have many moving parts, and the more moving parts there are to begin with, the more complicated it gets. Teams need to make systematic tradeoffs when building services, and they have to communicate and learn from each other.

The Intrinsic load should reduce over time as new lower-level knowledge and experience are formed and documented.

Always try to complete that initial simple microservice end to end and deploy it to production with a database, CI/CD, alerting and monitoring, testing strategies, and version control in place. This makes subsequent services easier to carve out. After a few services, spinning up new ones becomes a cakewalk.

As experience increases, handling cognitive load becomes more mature and achievable.

Steer Clear of Concorde Effect Fallacy

One major point of argument among developers in every team when building microservices is whether to reuse the existing code or rewrite it.

When building new functionality, the majority of the time it makes sense to rewrite the capability and not the code. This may be time-consuming, but the monolithic platform already carries a lot of redundant code.

Rewriting the capability gives an opportunity to improve the granularity of the service, revisit business functionality, and maintain a clean codebase.

Business and IT teams always spend a lot of money on solutions that are deemed to be the right way. But many a time, the only reason these extravagant solutions are still running is that a lot of money is being spent on them.

In most organizations, the business spends a huge amount of licensing money every year on proprietary solutions. This software is typically pretty tedious and very slow to build functionality around.

One way to circumvent this problem is to try out something small: a quick POC that in a few weeks could rebuild a small part of the same core functionality with open-source software. This opens up a surfeit of opportunities for the business team to learn from and improve on.

Be rational in decision making and avoid sunk cost fallacy situations.

Stay practical about what needs to be rewritten and what can be reused. Sometimes the latter is a good choice, as not everything in the monolith is a throwaway.

Also, it’s never too late to pull the plug when building services. In situations where certain core functionalities don’t change much, just reuse them; maybe a good code review is all that’s required.

Singular Source of State

Each microservice needs a data model that is completely decoupled from the rest of the system landscape. The only way other systems or services should access that dedicated state is via the API the service provides; avoid shared or back-door interfaces.
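This “API-only” access rule can be illustrated with a toy sketch. All the names below (the inventory and order services, their methods) are hypothetical; the point is only that one service owns its state privately and every consumer goes through its public API, never its tables.

```python
# Toy sketch of a singular source of state: InventoryService is the sole
# owner of its data, and OrderService depends on its API, not its database.
# Class and method names are illustrative, not from the original article.

class InventoryService:
    def __init__(self):
        self._stock = {}  # private state; no other service reads this directly

    # Public API: the only way in or out of this service's state.
    def set_stock(self, sku: str, qty: int) -> None:
        self._stock[sku] = qty

    def get_stock(self, sku: str) -> int:
        return self._stock.get(sku, 0)


class OrderService:
    """A consumer that couples to the API contract, never to the schema."""

    def __init__(self, inventory: InventoryService):
        self._inventory = inventory

    def can_fulfill(self, sku: str, qty: int) -> bool:
        return self._inventory.get_stock(sku) >= qty
```

Because the order service never touches the inventory tables, the inventory team can reshape its schema freely without breaking anyone, which is exactly the coupling problem the shared-database approach below ran into.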

When decomposing the monolith, if the core services are clear, always try splitting out the schema first while keeping the services together, before splitting the application code out into microservices.

If the services are too coarse-grained, they will later have to be split into smaller services, creating yet another data migration.

The initial approach in one of the teams I was part of was to decompose the monolith platform into standalone microservices using multiple ORMs. This would have been the simplest approach to adopting microservices: it’s easier to maintain one large database with several schemas and a firm partition between each microservice’s data. However, the following challenges were seen:

a) Monolith core modules were not modular enough

b) A lot of unclear module dependencies for application packages

c) Tight coupling of custom modules and libraries

Another disadvantage of this approach is that even if the monolith modules were made loosely coupled, the microservices still shared the same database. Once multiple services were decomposed, reading and writing the same data introduced tight coupling.

If a database table was updated, all the services had to change, and this created dependencies between development teams. Furthermore, testing the applications became a nightmare.

Concern Based Modularization

Modularization is another indispensable prerequisite to begin with when designing microservices. Without modules defined, I have seen teams building services in a wild goose chase, fulfilling the anti-patterns of complex data interoperability across distributed systems, tight coupling of services by maximizing dependencies, minimal composability and abstraction, and the list goes on…

Uncle Bob once famously said

“Gather together the things that change for the same reasons. Separate those things that change for different reasons”.

It’s essentially an abstraction of the concerns of business capabilities: understanding and defining the exact core services or modules of an application landscape. It is the same basic concept followed when designing any monolithic application.

What teams fail to understand is that with an improper composition of core components or modules, all you are doing is shifting complexity into more gray areas. This is the whole reason the monolithic application ended up a haphazardly structured application in the first place, otherwise famously known as a “Big Ball of Mud”.

“Deciphering the concern based modularization of an enterprise is an essential requisite to amalgamate the intricacies of the microservices system.”

In one microservices journey, I was part of a team that wanted to jump-start creating services and define their core modules in parallel. In doing so, they ended up with the same core source information in multiple systems, which resulted in pretty complex data interoperability across multiple services. Fixing things at that point is tedious and impacts several applications.

Another issue seen is the over-creation of core services, which is a mess. It leads to several unnecessary layers and just adds complications to the application landscape, with every service depending on another without any clear realm of responsibility.

Get the core and sub-core concern components baselined at the earliest.

Domain-driven design (DDD) should be adopted to help choose domain boundaries, or bounded contexts.

Applications need to be divided and conquered to identify organically sized chunks of these components.

Core services, by definition, are services that mainly focus on persistence and are purely data-centric. Each isolated core service will be the future master of its information and the discrete source of truth.

Once the core services are nailed down, detailing out the Peripheral and Composite services becomes much easier.

Less Entropy and More Negentropy

A microservices journey is all about gradual overhaul: every time you make a change, you need to leave the system in a better state than, or at least the same state as, it was in before.

Carving out services can happen either vertically or horizontally. Clearly nail down the horizontal services that need to be common overall. Then identify the vertical services; the best way to migrate them is typically to move the data first, then the business logic, and finally the front end.

As a first step, always aim to create a macro service until the core services are demarcated. Once the demarcations are clear, it is easy to split further into microservices.

In the beginning, teams have less operational maturity; during this phase, minimize the number of microservices to reduce the cognitive load.

The simplest services to begin with, in my experience, are read-only applications, especially ones that are business-centric and change very often. This allowed us to gain the business team’s confidence that the teams could move faster and deliver those features rapidly.

The team I was part of initially depended a lot on the monolith application. The services were deployed multiple times a day, whereas the monolith was deployed once a week. The services should never have to wait for the monolith to be deployed.

Also, every time changes went into the monolith, it was ensured that they were behind feature flags. This gave developers the leeway to revert and test changes in case all hell broke loose.
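At its simplest, a feature flag is just a guarded branch with a kill switch. A minimal sketch, with a hypothetical flag name and an in-memory flag store standing in for whatever config or flag service a team actually uses:

```python
# Minimal feature-flag sketch: new monolith changes ship dark behind a
# flag, so a bad release is "reverted" by flipping the flag rather than
# rolling back a deployment. Flag name and store are illustrative.

FLAGS = {"new_pricing_engine": False}  # e.g. loaded from config at runtime

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)  # unknown flags default to off (stay dark)

def compute_price(base: float) -> float:
    if is_enabled("new_pricing_engine"):
        return round(base * 0.9, 2)  # new code path, guarded by the flag
    return base                      # old, proven code path
```

The old path stays intact until the flag has been proven in production, which is precisely the leeway the developers relied on above.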

Do not add dependencies on the monolithic platform. Ensure that the new services never call the monolithic application directly; always access it via an anti-corruption layer.
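An anti-corruption layer is essentially a translation boundary: the new service works only with its own clean model, and a thin adapter maps the monolith's representation onto it. A hedged sketch, where the domain model, the adapter, and the legacy field names are all invented for illustration:

```python
# Anti-corruption layer sketch: the new service never consumes the
# monolith's data shapes directly. An adapter translates the legacy
# record into the service's own model, so legacy quirks do not leak in.
# All names and the legacy payload shape here are illustrative.

from dataclasses import dataclass

@dataclass
class Customer:
    """The new service's clean domain model."""
    customer_id: str
    email: str

class MonolithCustomerAdapter:
    """Translates the monolith's legacy record into the new model."""

    def __init__(self, fetch_legacy_record):
        self._fetch = fetch_legacy_record  # injected call into the monolith

    def get_customer(self, customer_id: str) -> Customer:
        # e.g. the monolith returns {"CUST_NO": "42", "EMAIL_ADDR": "a@b.c"}
        raw = self._fetch(customer_id)
        return Customer(customer_id=raw["CUST_NO"], email=raw["EMAIL_ADDR"])
```

If the monolith's record format changes, or the monolith is eventually retired, only the adapter is touched; the new service's model and callers stay untouched.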

Organizational Perception Shift

A core principle of building microservices is enabling teams to deliver at a constant, incremental pace, not in a big-bang or spasmodic approach. Teams must be trained and prepared for the disruption of adopting agile methodologies and processes.

Microservices require teams to follow the core agile principle of working at a constant pace, which in turn enables them to deliver at a constant pace.

A DevOps and DataOps culture needs to be embedded into teams. Teams must be enabled to have greater control over their environments and make multiple releases into production with a fail-fast, fail-forward approach.

Don’t get into a situation where certain teams are more agile than others. This can lead to a lot of slowdown, especially in cross-team communication and integration.

Containers should be a standard part of any agile infrastructure, especially when working with legacy platforms that require specific infrastructure and lengthy installations. Adopt service containerization to take advantage of flexibility, speed of deployment, configurability, and automation.

Do spend resources and time on the monolith and its improvement. Too often, teams just start concentrating on building new-age technology solutions and hardly spend time understanding the legacy system. Without understanding the monolith, it’s foolhardy to even attempt breaking it.

Concluding thoughts

A microservices journey is complex, and organizations have seldom been successful at it. If you have gone ahead and started that journey, do take intermittent checks on where you stand and compare it to where you started from.

  • If after building services you are in a situation where all developers congregate to fix production issues
  • If teams require several developers and umpteen days to fix issues
  • If your applications have several hours of downtime
  • If your services cannot be tested as a single entity
  • If teams fear making changes to the code when adding new features
  • If you are reverting code and releases instead of failing fast or failing forward
  • If you are building services that access the same database
  • If functionality is spread across multiple services and teams
  • If your applications take multiple teams and several people to deploy changes to production
  • If there are too many services depending on each other
  • If your teams are still writing and performing manual tests
  • If you are building services with hundreds of classes, each with hundreds of lines of code
  • If you have several services that have not been modified for several months

Maybe, you have ended up building another monolith.


Shailendra Bhatt

Sr Architect. The stories are purely my personal views and not reflective of any organizations that I've been part of. Follow me at https://www.shailendrabhatt.com