The Single Responsibility Principle in Microservices
Design Each Microservice to Address One Business Capability

Most software teams do not start with pain. They start with a monolith that quietly collects responsibilities until everything is coupled together. Product keeps making new requests, and each one lands in the same codebase and release train. Releases slow down, risk grows, dependencies multiply, and support becomes stressful. The root problem is not the size of the application. It is that one deployable unit carries many reasons to change.
I look to the Single Responsibility Principle as a way out. A unit should have one reason to change, tied to one actor or role that asks for change. When you scale that idea up from the code level to service design, it becomes the guiding rule for service boundaries. A service should align to one business capability and one primary audience for change. For example, when a change to your profile fields forces changes to payment code and notification templates, SRP is being violated at the service level.
Break it up! Split responsibilities into independently deployable services that each map to a clear capability. Most people look to containerization for this. Whether you host on AKS or Elastic Beanstalk, you will be able to deploy services without interrupting the others. The payoff is real. But the reality is you don’t need those technologies to do it. Microservices aren’t technology dependent. Teams can ship on their own timeline because contracts are stable and data ownership is clear. Exceptions and faults are contained to one slice of user value rather than the entire product. The flow of your services matches the flow of the business. One service, one reason to change.
You can slice a monolith into twelve angry services and still end up with the same headaches. Size is not the goal. Responsibility is.
Where SRP Comes From and Why It Scales Up To Services
I’m not a particularly clever guy, and I feel like I keep going back to the fundamentals that have been around forever because they work. I apply Uncle Bob’s SOLID principles every day in my life, even away from work. If I didn’t, I’d be storing paper towels in the refrigerator.
One of the SOLID principles is the Single Responsibility Principle. Uncle Bob explained it as “a module should have one reason to change,” and clarified that “reason” maps to an actor or role that asks for change. If different roles push different kinds of changes, you have multiple responsibilities bundled together. Each “reason” is a responsibility, hence the Single Responsibility Principle.
Martin Fowler’s description of microservices points in the same direction. Services should be built around business capabilities and be independently deployable. Those two ideas together are what we need to keep in mind when we design our systems. When a service is organized around a single capability, it should change for that capability and not for others. When we deploy the service, it shouldn’t require a change to any other!
Sam Newman has become the go-to guy for microservices. He pushes the same message in his book “Building Microservices”: put boundaries on business concepts so teams can move on their own. When you slice your monolith up, prefer business capabilities over technical layers. You want your programs to represent the real world in which they will operate.
Just Don’t Get It?
A Profile Service should change when the definition of a profile changes. That might include a new display name rule or a profile picture. It should not require a deployment when you decide to add Apple Pay, rotate payment keys, or change your invoices. Those are payment concerns, and possibly billing concerns. If profile functionality must ship whenever finance or marketing makes a change, your service owns more than one reason to change and is tightly coupled. We already know this behavior is frowned upon in our code thanks to the SOLID principles; now it is time to go one level higher and give each capability its own independent service.
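To make the boundary concrete, here is a minimal sketch of two services that each carry one reason to change. All class and method names here are hypothetical, not a real API:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    user_id: str
    display_name: str

class ProfileService:
    """Changes only when the definition of a profile changes."""
    MAX_DISPLAY_NAME = 30  # a new display-name rule lands here and nowhere else

    def update_display_name(self, profile: Profile, name: str) -> Profile:
        if not name or len(name) > self.MAX_DISPLAY_NAME:
            raise ValueError("display name must be 1-30 characters")
        return Profile(profile.user_id, name)

class PaymentService:
    """Changes only when payment concerns change (gateways, keys, invoices)."""
    def charge(self, user_id: str, amount_cents: int) -> str:
        # Adding Apple Pay or rotating keys would change this service only.
        return f"charged {amount_cents} cents to {user_id}"
```

A new display-name rule touches `ProfileService` and ships alone; a new gateway touches `PaymentService` and ships alone.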
Before and After: A Boundary Story
Before: Just Shove Everything in a Monolithic Service
This approach isn’t easy to live with.
Every unrelated change rides the same release train. If finance switches payment gateways, you rebuild and redeploy the single artifact that also holds your profile, auth, and email controllers. That means long coordination windows, wide regression risk, and hotfixes that carry extra passengers. Managing the source code gets harder and harder as the cognitive load piles up. If the email controller isn’t ready, your payment controller isn’t going out either. Deal with it.
Fowler makes it very clear - independent deployability is a defining trait of microservices precisely because the monolith ties unrelated concerns together.

Change amplification. A small tweak to email templates forces a full retest cycle across user flows that had nothing to do with the change. Your cycle time stretches, and your teams avoid changes because they know the blast radius is unpredictable. Again, it is all in the same artifact.
Cognitive load. The on call person needs to be fluent in identity, billing, and notifications to ship a fix at 2 a.m. That is not a badge of honor. It is a boundary problem.
After: Introducing Boundaries and the SRP
What improves and why
Assume each application has its own deployment. Teams will deploy on their own schedules because the services are independently deployable units with separate pipelines and artifacts. Payment can ship a new gateway adapter and rotate secrets without rebuilding profile or auth. This is the concrete benefit of aligning services to business capabilities and keeping contracts stable.
The blast radius is smaller. The notification service can be as good as dead, and users can still update profiles and pay. Teams can recover one capability at a time instead of everything at once. This aligns with Newman’s guidance to design for team autonomy and failure isolation.
Ownership is clean. Each team owns a single primary data model. Profile owns profile data. Payments owns payment state and reconciliation records. That clarity keeps responsibilities from leaking across boundaries and keeps SRP intact at the service level.
Anti Pattern: The Distributed Monolith
You know you have a distributed monolith when services live in different processes but move as a herd. Synchronized releases, shared databases and resources, or failures cascading from one service to the next are some of the usual tells. Andre Newman (not Sam) puts it bluntly: it is deployed like microservices but designed like a monolith.
Here is an extreme example of what it can look like:
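Sketched in code, it might look like two “independent” services that both reach into the same table, so the database schema itself becomes the contract. All names here are hypothetical:

```python
import sqlite3

# Anti-pattern sketch: both services share one database and one users table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id TEXT PRIMARY KEY, name TEXT, card_token TEXT)")

class ProfileService:
    def rename(self, user_id: str, name: str) -> None:
        db.execute("UPDATE users SET name = ? WHERE id = ?", (name, user_id))

class PaymentService:
    def store_card(self, user_id: str, token: str) -> None:
        # Payments writes directly to Profile-owned rows. If Profile renames
        # or drops a column, Payments breaks and must release in lockstep.
        db.execute("UPDATE users SET card_token = ? WHERE id = ?", (token, user_id))
```

Two processes, two pipelines, one schema: they move as a herd.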
Why this blocks progress:
A schema change for one service becomes a breaking change for other services that are coupled with it. Independent deployability disappears because contracts are not respected at the boundary and the database is the contract. Zhamak Dehghani warns that this kind of coupling invites a painful mix of distributed complexity.
Coordinated testing and releases slow everyone down. The calendar becomes the main constraint and teams start avoiding change. At that point you pay the cost of distributed systems and still release like a monolith. You’ve got the worst parts of monoliths and microservices.
Are you even getting the true benefits of a Microservice by doing some of these things?
How do you get out of the above situation?
Give each service its own data store or its own schema slice with strictly versioned interfaces that are owned by the service. Use events or APIs for the management and retrieval of your data.
Treat changes as contract evolution. Version your payloads and provide a sunset policy. When a consumer needs something different than what is being offered, they ask for a new version rather than reaching across your boundary into your tables.
Pull shared code into real libraries with stable interfaces instead of sharing internal models. Keep these interactions stateless, and share capabilities rather than internals.
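Contract evolution from the second point can be sketched like this. The versions, field names, and sunset date are all illustrative assumptions:

```python
# Consumers ask for a contract version; producers publish versioned payloads
# and announce a sunset for the old ones instead of breaking them in place.

SUNSET = {"v1": "2025-12-31"}  # illustrative sunset policy for the old contract

def profile_event(version: str, user_id: str, display_name: str) -> dict:
    if version == "v1":
        return {"version": "v1", "userId": user_id, "name": display_name}
    if version == "v2":
        # v2 splits the name field; v1 keeps working until its sunset date.
        first, _, last = display_name.partition(" ")
        return {"version": "v2", "userId": user_id,
                "firstName": first, "lastName": last}
    raise ValueError(f"unknown contract version {version}")
```

A consumer that needs the split name asks for v2; nobody reaches into your tables to get it.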
Keep it Copacetic - What Is It Like with Clean Boundaries?
Product asks for a new profile display name rule. Only the profile service changes. It updates validation and schema, publishes a non-breaking version bump on the API, and ships a new container. Payments and notifications feel no change or stress. Consumers of the “ProfileUpdated” event get the new record and react on their own time. Oh, but the payload that comes back has an additional field! Doesn’t matter; that is not a breaking change.
This is “one service, one reason to change” in action.
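The consumer side of that story is the “tolerant reader” pattern: pick out only the fields you need and ignore the rest, so additive fields never break you. The event shape and field names here are assumptions, not a real schema:

```python
# Tolerant reader: the consumer depends only on the fields it actually uses,
# so a producer adding new fields is a non-breaking change.

def on_profile_updated(event: dict) -> str:
    # Unknown extra fields (e.g. a newly added "pronouns") are simply ignored.
    return f"cache refresh for {event['userId']}"

old_event = {"userId": "u1", "displayName": "Pat"}
new_event = {"userId": "u1", "displayName": "Pat", "pronouns": "they/them"}
assert on_profile_updated(old_event) == on_profile_updated(new_event)
```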
A week later finance selects a new payment gateway. Payments swaps an adapter behind a stable interface, flips secrets, and deploys. The deployment takes significantly longer than expected, but the Profile and Notification Services keep running. In fact, Profile and Notification Services don’t care what is being used in your payment gateway because it is hidden away.
You did not need a freeze. You did not need to gather everyone alive to fix it. Independent deployability at its finest.
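The “adapter behind a stable interface” move can be sketched in a few lines. The gateway names are hypothetical stand-ins, not real provider SDKs:

```python
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """Stable interface the rest of the payment service codes against."""
    @abstractmethod
    def charge(self, amount_cents: int) -> bool: ...

class OldGateway(PaymentGateway):
    def charge(self, amount_cents: int) -> bool:
        return amount_cents > 0  # stand-in for the old provider's SDK call

class NewGateway(PaymentGateway):
    def charge(self, amount_cents: int) -> bool:
        return amount_cents > 0  # new provider, hidden behind the same interface

def checkout(gateway: PaymentGateway, amount_cents: int) -> bool:
    # Callers (and other services) never learn which gateway is in play.
    return gateway.charge(amount_cents)
```

Swapping `OldGateway` for `NewGateway` is a change inside one service; `checkout` and every other consumer are untouched.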
Practical Boundary Heuristics
Anchor names in capabilities. Names like User Profile, Payments, Catalog, and Order Routing cue product owners and engineers to the single audience for change. Names like SharedService or UserUtil are warning signs. Newman’s capability slicing guidance is a good check during reviews.
Keep APIs cohesive. A profile API should talk about profile concepts. If you expose card capture calls from the profile API “for convenience,” or because there’s a deadline, you just crossed the line and invited tight coupling.
Own one primary data model. A service should own its data. Read models in other services are copies that can be refreshed or rebuilt. Joining data across services, or sharing tables, raises the likelihood of synchronized releases and drifts toward a distributed monolith. You shouldn’t care how other services store their data. It should be hidden from your service.
Design for independent deployability. Think about independent versioning, separate repositories, pipelines, and runtime identities. In AKS, map services to namespaces and deploy with separate charts. In Azure Container Apps, use separate container apps and per service revisions. The platform gives you the levers, but the boundary is what makes those levers useful.
Treat cross cutting concerns as platform concerns. Identity policy, logging, and tracing should be shared through gateways, libraries, or sidecars, not copied into every service as a second job. Keep domain services focused on their capability.
What This is Not
This is not “small services everywhere.” A service can be small and still violate SRP if it changes for unrelated actors. I’m not even opposed to rolling with a monolith in many cases. Fowler’s “Monolith First” essay is a great read. Start where you are, tighten boundaries, and split only when the benefits are clear.
This is not Continuous Deployment, where every change flows straight to production and you deploy multiple times a day. This is closer to Continuous Delivery, where there are gates but frequent deployments.
Early Warning Signs
Is there a single audience for change for this service, such as a product owner or business role for that capability? If requests routinely arrive from unrelated groups, you probably need to revisit your boundaries. This follows the SRP idea that responsibilities map to actors.
Can this service ship without waiting for other teams? If not, identify the coupling. Shared databases, shared release pipelines, or unstable contracts are the usual causes. Independent deployability is a signature trait of microservices.
Does the service own its own data model? If it depends on another service’s tables, or leans so heavily on another service’s data that you are tempted to just hit their database, you are drifting toward a distributed monolith.
Are the APIs and events cohesive and versioned? Are you leveraging gateways? Contract evolution supports independence. Publishing events for changes in your capability lets other teams adapt on their own timeline. Having your team and application operate autonomously is why this whole idea works.
Hold That Thought For Now
SOLID started as design guidance for classes. I use the Single Responsibility Principle as a reliable compass for service boundaries by connecting “reason to change” to business capability and team ownership. Do that, and your services become small units that ship when they should, fail safely, stay observable, and stay understandable. There’s a lot to microservices, but without the idea of one service having a single responsibility, you’re always going to struggle.
And credit to anyone who read this and picked up my Local H references.




