<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Broken Intellisense]]></title><description><![CDATA[Larry Gasik's Technical Leadership findings.]]></description><link>https://brokenintellisense.com</link><generator>RSS for Node</generator><lastBuildDate>Thu, 09 Apr 2026 10:05:49 GMT</lastBuildDate><atom:link href="https://brokenintellisense.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Stop Guessing - Automated Unit Tests Tell You How Your Code Behaves
]]></title><description><![CDATA[Automated Unit Tests, or AUT, are a concept that most developers do not initially see as beneficial. When I was introduced to AUT, my reaction was, “I’m going to write buggy code, to test my buggy]]></description><link>https://brokenintellisense.com/stop-guessing-automated-unit-tests-tell-you-how-your-code-behaves</link><guid isPermaLink="true">https://brokenintellisense.com/stop-guessing-automated-unit-tests-tell-you-how-your-code-behaves</guid><category><![CDATA[unit testing]]></category><category><![CDATA[Automated Testing]]></category><category><![CDATA[Continuous Integration]]></category><category><![CDATA[continuous delivery]]></category><dc:creator><![CDATA[Larry Gasik]]></dc:creator><pubDate>Wed, 18 Mar 2026 03:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/67eaf273ede8d8809306d073/67d37bd4-e926-46d3-94e2-e1007d1ae605.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Automated Unit Tests, or AUT, are a concept that most developers do not initially see as beneficial. When I was introduced to AUT, my reaction was, “I’m going to write buggy code, to test my buggy code.” It takes time to see the true benefit of AUT. As time has gone on, I've become a huge proponent of AUT.</p>
<p><strong>The true purpose of AUT is to allow the developer to be sure the code behaves as expected.</strong></p>
<p>Burn that bold text into your head because it will be the theme of everything here.</p>
<p>On the surface, it sounds like I just said the same thing about buggy code testing buggy code. But that is not really what is happening. I am able to test what happens inside my code. I am able to go through different scenarios in milliseconds. I am able to verify success cases, failure cases, edge cases, and exception paths without waiting on a database, a file system, another service, or human interaction. That is where the value starts to show up.</p>
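<p>To make that concrete, here is a minimal sketch of what those cases look like as tests. I'm assuming xUnit here, and <code>PriceCalculator</code> is an invented helper, not code from a real project:</p>

```csharp
using System;
using Xunit;

// Invented helper used only to illustrate the idea.
public static class PriceCalculator
{
    public static decimal ApplyDiscount(decimal price, decimal percent)
    {
        if (percent < 0 || percent > 100)
            throw new ArgumentOutOfRangeException(nameof(percent));
        return price - (price * percent / 100m);
    }
}

public class PriceCalculatorTests
{
    [Fact] // happy path
    public void ApplyDiscount_TenPercent_ReducesPrice() =>
        Assert.Equal(90m, PriceCalculator.ApplyDiscount(100m, 10m));

    [Fact] // edge case
    public void ApplyDiscount_ZeroPercent_ReturnsOriginalPrice() =>
        Assert.Equal(100m, PriceCalculator.ApplyDiscount(100m, 0m));

    [Fact] // exception path
    public void ApplyDiscount_NegativePercent_Throws() =>
        Assert.Throws<ArgumentOutOfRangeException>(
            () => PriceCalculator.ApplyDiscount(100m, -5m));
}
```

<p>Happy path, edge case, and exception path, all verified in milliseconds, with no external dependencies.</p>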
<p>A few years ago, a friend reached out to me saying that he needed to write AUT for his coding assessment, but he did not fully understand what the true purpose of AUT was. “Why do I need to write code to show that 2 == 2?” I’m a huge proponent of AUT, and my friend did not see the value. We went back and forth over some of the different challenges that come with testing. Some of my questions were:</p>
<ol>
<li>How do you test your classes based on what is returned from a third party service? What if that service is down?</li>
<li>How do you simulate exceptions?</li>
<li>How do you test interacting with the System namespace?</li>
<li>When do you test?</li>
<li>What if you are maintaining code that you did not write?</li>
</ol>
<p>I got some rather lame answers, like, “It doesn’t make sense to test for a service being down, what are the chances it is going to be down?” or “Why test exceptions? I’d just write a catch for it so it doesn’t get thrown to the end user.” If you have ever worked in a distributed environment, you know systems are unavailable from time to time and it is out of your control. Networks fail. DNS entries change and nobody is told. Tokens expire or APIs throttle. Databases go offline or move. File shares disappear. Cats and dogs living together!!! Somebody rotates a secret and forgets to tell you. I am in awe that it works at all sometimes. But it was clear that an example was needed.</p>
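<p>Simulating a service being down is exactly the kind of thing a unit test handles well. Here is a sketch, assuming xUnit and Moq, with invented types (<code>IRateService</code>, <code>Pricer</code>) standing in for the third-party dependency:</p>

```csharp
using System.Net.Http;
using Moq;
using Xunit;

// Invented abstraction over a third-party service.
public interface IRateService
{
    decimal GetExchangeRate(string currency);
}

// Code under test: falls back gracefully when the service is down.
public class Pricer
{
    private readonly IRateService _rates;
    public Pricer(IRateService rates) => _rates = rates;

    public decimal PriceInCurrency(decimal usd, string currency)
    {
        try
        {
            return usd * _rates.GetExchangeRate(currency);
        }
        catch (HttpRequestException)
        {
            return usd; // degrade gracefully: bill in USD
        }
    }
}

public class PricerTests
{
    [Fact]
    public void PriceInCurrency_ServiceDown_FallsBackToUsd()
    {
        // No real network involved: the mock *simulates* the outage.
        var rates = new Mock<IRateService>();
        rates.Setup(r => r.GetExchangeRate("EUR"))
             .Throws(new HttpRequestException("service unavailable"));

        var pricer = new Pricer(rates.Object);

        Assert.Equal(10m, pricer.PriceInCurrency(10m, "EUR"));
    }
}
```

<p>The outage that “will never happen” is now a one-line setup, and the fallback behavior is pinned down forever.</p>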
<h2>What are Automated Unit Tests?</h2>
<p>You do not test a bridge by driving a single car over it right down the middle on a clear, calm day. You do extreme things. You add load. You check wind conditions. You make sure the supports are deep enough. You make sure parts do not shift when the weather changes, especially where there are freezing temperatures. You verify those little reflectors don't pop out of the road when hit by a truck. You look for the weak points before the public uses the bridge.</p>
<p>That is what unit testing is doing for your code. You are not proving your code works in one happy path under perfect conditions. You are deliberately looking for the places it can bend, crack, or do something you did not intend.</p>
<p>What I’m highlighting are edge cases. Edge cases are the conditions that sit at the extremes of what your code is supposed to handle. They are the places most likely to show breakdowns, different behavior, and exceptions in your code. You are simulating stress on your solution, much like you were simulating stress on your bridge.</p>
<p>A unit test is meant to exercise a small piece of code, avoid external infrastructure, and run fast enough that developers can execute it frequently. These unit tests shouldn't depend on databases, file systems, or network resources. <a href="https://martinfowler.com/bliki/UnitTest.html">Fowler</a> similarly describes unit tests as small in scope and fast enough to run constantly while coding.</p>
<p>These tests should be automated, quick, repeatable, and consistent. You are testing the smallest practical path in your codebase. It may be an orchestrating method. It could be a validation rule. It may be a helper method. The point is that you can go through a single path and hit the cases you need to hit.</p>
<p>It also has a great side effect. Writing unit testable code tends to push you toward better design. If a class is miserable to test, that is often a design smell. Maybe it has too many responsibilities. Maybe it reaches into too many dependencies. Maybe it hides logic behind framework calls or static state. Testing has a way of shining a light on that.</p>
<h2>What good unit tests look like</h2>
<h3>Automated</h3>
<p>Automated Unit Tests are automated - it is right there in the name. They require no manual intervention, no setup gymnastics, no babysitting, and no clicking around a UI. Kick off your tests and get a result.</p>
<p>That matters because the real value is not just writing the test once. The real value is rerunning it every time you make a change. A test that needs a human to prepare the environment is already losing its value.</p>
<h3>Quick</h3>
<p>A good unit test should execute in milliseconds. There's no network traffic, no real database work, no disk operations, and no waiting on external systems. I can be off the VPN, off the LAN, and without internet access, and still execute my tests. Good unit tests are fast, isolated, and repeatable, and Fowler makes the same case for keeping them small and fast enough to run constantly during development.</p>
<p>That speed changes developer behavior. If your tests run in milliseconds, you will actually use them. If they take twenty minutes, your team will start asking questions about whether you really need to run them. That is where quality and value start slipping.</p>
<h3>Repeatable</h3>
<p>Because unit tests do not interact with unstable outside systems, they become repeatable. Data does not need to be manually configured. A network does not need to be available. The same test can run over and over and produce the same result.</p>
<p>That is a massive benefit during refactoring. When the result changes, you know it is because something changed in the code or the test, not because the planets aren't in line or a shared environment had a bad morning.</p>
<h3>Consistent</h3>
<p>Consistency is what separates a useful test suite from a noisy one. A flaky test that passes on one run and fails on the next without any meaningful change is not a safety net. It is background noise. Non-deterministic tests become effectively useless because teams stop trusting failures once they become unreliable.</p>
<p>That is why isolating system behavior matters so much. If your code depends on the current date, abstract the clock. If it depends on file access, wrap the file system. If it depends on an external service, introduce an interface. Then your tests can simulate exactly what you need, and they can do it the same way every time.</p>
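<p>As a sketch of abstracting the clock (the types are invented for illustration; newer versions of .NET also ship a built-in <code>TimeProvider</code> abstraction for exactly this purpose):</p>

```csharp
using System;
using Xunit;

// Abstract the clock so tests control "now".
public interface IClock
{
    DateTime UtcNow { get; }
}

// Production implementation: the real system clock.
public sealed class SystemClock : IClock
{
    public DateTime UtcNow => DateTime.UtcNow;
}

// Test implementation: a frozen, fully predictable clock.
public sealed class FixedClock : IClock
{
    private readonly DateTime _now;
    public FixedClock(DateTime now) => _now = now;
    public DateTime UtcNow => _now;
}

// Invented rule that depends on the current date.
public class TrialChecker
{
    private readonly IClock _clock;
    public TrialChecker(IClock clock) => _clock = clock;

    public bool IsExpired(DateTime trialStartUtc) =>
        _clock.UtcNow > trialStartUtc.AddDays(30);
}

public class TrialCheckerTests
{
    [Fact]
    public void IsExpired_ThirtyOneDaysLater_ReturnsTrue()
    {
        var clock = new FixedClock(new DateTime(2026, 2, 1, 0, 0, 0, DateTimeKind.Utc));
        var checker = new TrialChecker(clock);

        Assert.True(checker.IsExpired(new DateTime(2026, 1, 1, 0, 0, 0, DateTimeKind.Utc)));
    }
}
```

<p>This test produces the same answer today, next month, and in a CI container with a skewed clock.</p>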
<p>This allows red flags to be raised immediately when a test fails. I can't tell you the number of times I've seen a team write poor tests, watch them fail, and just continue on because the failures are expected.</p>
<h2>Why we use Automated Testing</h2>
<p>Unit tests make sure the developer understands the behavior of the code. In a good codebase, the business rules are not just buried in business logic. They are also reflected in tests that sit side by side with the code and explain what is expected to happen.</p>
<p>The first time I really saw value in an Automated Unit Test was when I was writing validation logic. A client would pass in details to the back end that needed to be verified based on existing information in the database. If the submission was valid, process the update. If not, return an error and do not update the database.</p>
<p>I wrote the code and it worked. Then I went back and refactored the conditionals. In the process, I introduced a bug that always updated the database, even when the request was invalid. Had I had unit tests verifying that invalid requests never call the update method, I would have caught it immediately.</p>
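<p>That missing test would have looked something like this. The types here are invented to mirror the scenario, and I'm assuming xUnit and Moq:</p>

```csharp
using Moq;
using Xunit;

// Invented repository that the validation logic guards.
public interface IAccountRepository
{
    bool Exists(int accountId);
    void Update(int accountId, string details);
}

public class AccountUpdater
{
    private readonly IAccountRepository _repo;
    public AccountUpdater(IAccountRepository repo) => _repo = repo;

    public bool TryUpdate(int accountId, string details)
    {
        if (!_repo.Exists(accountId))
            return false;          // invalid request: never touch the database
        _repo.Update(accountId, details);
        return true;
    }
}

public class AccountUpdaterTests
{
    [Fact]
    public void TryUpdate_UnknownAccount_NeverCallsUpdate()
    {
        var repo = new Mock<IAccountRepository>();
        repo.Setup(r => r.Exists(42)).Returns(false);

        var result = new AccountUpdater(repo.Object).TryUpdate(42, "new details");

        Assert.False(result);
        // The check that would have caught the refactoring bug:
        repo.Verify(r => r.Update(It.IsAny<int>(), It.IsAny<string>()), Times.Never);
    }
}
```

<p><code>Times.Never</code> is the key: the test fails the instant a refactor lets an invalid request reach the update path.</p>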
<p>That is a big part of the real value. They are there to catch the moment you accidentally break behavior that used to work. Regression protection is one of the major benefits of unit tests, and it will help you when you come back to a module, or when someone new is tinkering around.</p>
<p>They also serve as living documentation. A well named test tells the next developer, including future you, what the code is supposed to do. Sometimes reading a good test is faster than tracing through production code. If you ever get the chance, check out the names of some of the methods for my unit tests.</p>
<h2>When do we execute our tests?</h2>
<p>Run your unit tests all the time. The sooner you get feedback, the sooner you can correct the problem. Unit tests should be part of your normal workflow, not a special event. One day, I'm going to write about <a href="https://www.jetbrains.com/help/dotcover/Continuous_Testing.html">continuous testing in dotCover</a>.</p>
<h3>During development</h3>
<p>As you build a feature, run your tests. Fowler’s guidance on unit tests is very direct here. Fast tests are valuable because they can be run constantly while programming, often after every meaningful change.</p>
<p>The faster the feedback, the easier it is to locate the defect. If I break something and find out thirty seconds later, I know roughly where to look because it is fresh in my mind. I'm not going to remember what I did two weeks ago, or even yesterday.</p>
<h3>During refactoring</h3>
<p>If you are making changes to the codebase, you need a quick way to verify that you did not break behavior. That is where a good test suite is valuable. And don't act like you're going back to write unit tests for large chunks of code after it is in production.</p>
<p>Refactoring without tests is like a surgeon operating without monitors on the patient's vitals. They may complete the changes, but they have no idea if they broke something in the process.</p>
<h3>In Continuous Integration</h3>
<p>The whole point of Continuous Integration is fast feedback. When code is pushed, you want an automated compilation and a test run telling you whether the application still behaves as expected. The commit suite that CI runs commonly includes all of the unit tests, because their speed and scope make them ideal for that layer of feedback. Keep in mind, these are not integration tests. Those are slower and serve a different purpose than what you are after here. Because unit tests are cheap and fast, they belong in CI. They catch regressions while the change is still fresh in the engineer’s mind.</p>
<h2>Misconceptions about Unit Tests</h2>
<p>I grow frustrated when people get the wrong idea about unit tests. Unit tests are a tool, not a silver bullet. I've found myself fighting some of the same battles over and over.</p>
<h3>1. Unit tests result in bug free code</h3>
<p>You're still going to have bugs in your code. Unit tests reduce risk. They increase confidence. They catch regressions. They absolutely do not guarantee bug free software. A unit test only tells you whether a specific behavior matched a specific expectation under a specific condition. It is a big reason why you need many tests.</p>
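<p>That is also why parameterized tests are so handy: one test method can sweep many specific conditions cheaply. A sketch using xUnit's <code>[Theory]</code> and an invented validation rule:</p>

```csharp
using System.Linq;
using Xunit;

public static class UsernameRules
{
    // Invented rule: 3-20 characters, letters and digits only.
    public static bool IsValid(string name) =>
        !string.IsNullOrEmpty(name)
        && name.Length >= 3
        && name.Length <= 20
        && name.All(char.IsLetterOrDigit);
}

public class UsernameRulesTests
{
    [Theory]
    [InlineData("larry", true)]       // happy path
    [InlineData("ab", false)]         // too short
    [InlineData("", false)]           // empty edge case
    [InlineData("has space", false)]  // illegal character
    public void IsValid_CoversTheEdges(string name, bool expected) =>
        Assert.Equal(expected, UsernameRules.IsValid(name));
}
```

<p>Each <code>InlineData</code> row is one specific condition with one specific expectation, and adding a newly discovered edge case is a one-line change.</p>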
<p>We still need integration tests, system tests, exploratory testing, and plain old human judgment.</p>
<h3>2. Unit tests are difficult to maintain</h3>
<p>Bad unit tests are difficult to maintain. Good unit tests are usually a reflection of good design.</p>
<p>When production code follows sane design principles, especially explicit dependencies and separation of concerns, the tests are easier to write and easier to keep. Microsoft’s ASP.NET Core testing guidance leans on dependency injection and explicit dependencies specifically because those patterns make code testable.</p>
<p>Honestly, if you follow the SOLID Principles, AUT is really easy.</p>
<h3>3. AUT is the same thing as Test Driven Development</h3>
<p>This statement can make me red in the face. TDD is a development practice. Unit testing is a testing technique. They overlap, but they are not the same thing. Fowler has also written about how people often confuse self testing code with TDD, even though TDD is only one path to getting there.</p>
<p>You can write good unit tests without following a strict red, green, refactor cycle. From my experience, there aren't many shops following TDD as designed. You can tell when someone is doing TDD because their code is a bit different.</p>
<h3>4. More tests means more quality</h3>
<p>Garbage tests are a waste of time. Garbage tests are often a result of a misunderstanding of what unit tests are supposed to do, or a need to fill a vanity metric.</p>
<p>A hundred fragile, shallow, badly named tests do not make a codebase healthy. They make it noisy. Quality comes from meaningful tests that verify behavior people actually care about.</p>
<p>I don't fully support a mandate around Unit Test Code Coverage. Coverage can be useful as a signal, but it is not the goal. You can hit a coverage number and still miss the real business rules. The inverse does hold, though: you can't have a good test suite and a low coverage percentage.</p>
<h3>5. Unit tests eliminate the need for manual testing</h3>
<p>They do not.</p>
<p>Manual testing, especially exploratory testing, still matters. Unit tests are excellent at fast, repeatable checks of expected behavior. They are not great at discovering confusing workflows, odd usability problems, or the kinds of real world chaos people create the minute your software lands in front of them.</p>
<h2>Common challenges</h2>
<h3>1. Writing testable code</h3>
<p>This is usually the first real hurdle. If your code reaches directly into the current time, the file system, static state, configuration, HTTP calls, and database access all in one method, testing it is going to hurt.</p>
<p>That pain is often telling you something useful about the design. Take a step back and redesign your code.</p>
<h3>2. Legacy code</h3>
<p>Legacy code often means tightly coupled code with very few seams. That makes introducing tests harder, but also more valuable.</p>
<p>You may not be able to drop in perfect unit coverage on day one. Sometimes the first step is characterization testing, writing tests around current behavior so you can make changes without guessing.</p>
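<p>A characterization test can be as blunt as this sketch, where <code>LegacyPricing</code> stands in for existing untested code and the expected values are simply whatever the code does today (xUnit assumed):</p>

```csharp
using Xunit;

// Stand-in for existing legacy code you inherited, not code you would write today.
public static class LegacyPricing
{
    public static decimal Calculate(decimal unitPrice, int quantity)
    {
        var total = unitPrice * quantity;
        return quantity >= 10 ? total * 0.95m : total;
    }
}

public class LegacyPricingCharacterizationTests
{
    // Pin down what the code does *today*, right or wrong,
    // so refactoring cannot change it silently.
    [Theory]
    [InlineData(100, 1, 100)]
    [InlineData(100, 10, 950)]  // observed: a 5% bulk discount kicks in at 10
    [InlineData(0, 5, 0)]
    public void Calculate_MatchesCurrentBehavior(decimal unit, int qty, decimal expected) =>
        Assert.Equal(expected, LegacyPricing.Calculate(unit, qty));
}
```

<p>Notice the expected values came from running the code, not from a spec. Once the behavior is pinned, you can refactor with a safety net and fix genuine bugs deliberately, one at a time.</p>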
<h3>3. Mocking and dependency injection</h3>
<p>There is a learning curve here. Developers who are new to interfaces, dependency injection, and test doubles often feel like this is extra ceremony with no benefit.</p>
<p>In practice, these design patterns let you replace unstable collaborators with predictable ones. That is exactly why you need to isolate your unit tests. Dependency injection allows swapping implementations for testing, including mocked services in controller tests.</p>
<h3>4. Balance</h3>
<p>How many tests do we write? How many assertions in one method are too many? What do we verify?</p>
<p>Those are real questions, and there is no magic number. I would rather have one sharp test that verifies a meaningful rule than five vague tests that mostly restate the code.</p>
<h3>5. Continuous Integration</h3>
<p>Not every shop is doing Continuous Integration, and it can be hard to bring to the team if your build process is convoluted. It can be hard to introduce testing into an existing CI process, especially if teams are already used to slower, unstable test suites. That said, this is exactly where unit tests shine, because they are the cheapest automated feedback you can add to the pipeline.</p>
<h3>6. Skill gap</h3>
<p>Effective unit testing requires shared understanding. The team needs to agree on what a unit test is, what belongs in one, what does not, and what “good” looks like. Otherwise, one person writes isolated tests, another person writes mini integration tests and calls them unit tests, and then everybody argues about testing while the code rots.</p>
<p>Getting everyone on the same page is the biggest challenge.</p>
<h3>7. Knowing what to test</h3>
<p>What if you're coding and you don't have the true requirements defined yet? Sometimes the logic is unclear. Sometimes the code is unclear. Sometimes product doesn't even know what is supposed to happen.</p>
<p>Another hidden benefit of testing brought to light: writing tests forces clarity, because engineers have to ask questions. “What is this thing actually supposed to do?” If you can't get answers, you're in bad shape.</p>
<h2>Hands On is the way to be</h2>
<p>Look, I’ve been writing unit tests for over 15 years. I’ve helped others start writing unit tests. I’ve struggled, and I’ve seen others struggle to write them too. It isn’t about 2 == 2. It’s about giving the developer confidence that the code behaves as expected. You’re testing happy paths, failures, edge cases, and exception handling in milliseconds. You’re building confidence in your codebase.</p>
<p>Focus on the most concerning areas first. You’re going to have to isolate dependencies, and if you’ve thrown things together quickly, it will take time to get it right. Go incrementally, and as you do, treat your tests as documentation for what you’ve built.</p>
<p>In an upcoming post, I’m going to walk through some code samples and patterns that I’ve used. I’ll cover testing exceptions, code coverage, mocking, and all the good stuff. Hands-on is the way to go.</p>
]]></content:encoded></item><item><title><![CDATA[What's going on? 2026-03-09]]></title><description><![CDATA[I haven't posted anything in a month, and I'm finally coming back to it. It feels as if February flew by. There have been a few projects at work that required additional attention, and I think they to]]></description><link>https://brokenintellisense.com/what-s-going-on-2026-03-09</link><guid isPermaLink="true">https://brokenintellisense.com/what-s-going-on-2026-03-09</guid><dc:creator><![CDATA[Larry Gasik]]></dc:creator><pubDate>Tue, 10 Mar 2026 01:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/67eaf273ede8d8809306d073/6c6b73be-9e14-4ca7-a61c-50869f937335.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I haven't posted anything in a month, and I'm finally coming back to it. It feels as if February flew by. There have been a few projects at work that required additional attention, and I think they took some mental capacity away from personal development. As part of my regular routines, I noticed I was slipping a bit, so I need to make that a bigger priority.</p>
<p>That doesn't mean I have been stagnant.</p>
<ul>
<li><p>I spent a lot of time with Mermaid and understanding the syntax better. I've used it on the blog before, but now I have a better understanding of what is possible and when to use different approaches. Obsidian is also something I've been exploring more lately.</p>
</li>
<li><p>I have come to the conclusion that I need a blog series on Microservices, SOLID, and UML. I wrote about SOLID more than ten years ago for work, but it is not here anymore, and that just feels wrong.</p>
</li>
<li><p>I've also looked into leaving the Hashnode platform. With recent shifts in my goals and my working style, it feels somewhat inevitable. But where would I go? What is important to me? What is really missing that is causing this desire to shift?</p>
</li>
</ul>
<p>Here are a couple of things I saw recently that I found interesting.</p>
<ul>
<li><p><a href="https://discord.com/blog/getting-global-age-assurance-right-what-we-got-wrong-and-whats-changing">Discord and Age Verification</a> - I have a lot of mixed emotions about this. I don't like the idea of handing out my ID to any corporation. I even get annoyed when the doctor's office scans it. At the same time, I believe there is some responsibility to protect kids. But is that Discord's responsibility?</p>
</li>
<li><p><a href="https://www.tomshardware.com/tech-industry/memory-spot-prices-climbed-again-in-february-nand-wafer-costs-surge-25-percent">Memory Prices</a> - They are still insane. I wonder if the resistance toward AI might help bring prices down in the next couple of years. Everything I read suggests the next 18 months could be difficult since most of the chips that are going to be produced are already allocated.</p>
</li>
<li><p><a href="https://finance.yahoo.com/news/apple-now-has-a-macbook-for-everyone-and-that-should-worry-google-and-microsoft-150018709.html">MacBook Neo</a> - This feels like a smart move by Apple. You can walk into classrooms today and see rows of Chromebooks. That is what kids are learning on. The Neo makes Apple devices more accessible to the average user. It reminds me of the phrase, "The best ability is availability." Now more people have access to the platform.</p>
</li>
</ul>
<p>On a personal note, I spent some time really enjoying the Winter Olympics. I find the Winter Olympics more creative and impressive than the summer games. I am also a big ice hockey fan, which helps. I started reading a bit of fiction as well, mostly to give my brain a break from constantly analyzing technical things and following the endless news cycle.</p>
<p>There was also a seemingly endless cold spell across the United States that drained a lot of my energy. I also need to finish my office renovation that I started back in November. Once that is done, it should help me stay more focused here.</p>
<p>It is important to step back and reflect. Sometimes you need to course correct, and that may mean something else has to change as well. That applies not only to your personal life, but also to the projects you lead.</p>
]]></content:encoded></item><item><title><![CDATA[Dependency Injection Scopes]]></title><description><![CDATA[I had a discussion with a colleague recently about fundamentals of software engineering. If you know me, you know I come back to SOLID constantly. This conversation touched on dependency injection, inversion of control, and the Singleton pattern and ...]]></description><link>https://brokenintellisense.com/dependency-injection-scopes</link><guid isPermaLink="true">https://brokenintellisense.com/dependency-injection-scopes</guid><dc:creator><![CDATA[Larry Gasik]]></dc:creator><pubDate>Wed, 04 Feb 2026 05:01:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/UAsyRieP47A/upload/a0307be0ea3d236b2023422143923288.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I had a discussion with a colleague recently about fundamentals of software engineering. If you know me, you know I come back to SOLID constantly. This conversation touched on dependency injection, inversion of control, and the Singleton pattern and whether it violates <a target="_blank" href="https://en.wikipedia.org/wiki/SOLID">SOLID</a>. There is a lot buried in those topics, but I want to narrow the focus here to service lifetimes and scopes, using <a target="_blank" href="https://www.nuget.org/packages/microsoft.extensions.dependencyinjection">Microsoft.Extensions.DependencyInjection</a> as the entry point.</p>
<p>We're going to explore those lifetimes through a concrete proof of concept: something anyone can download, run, and modify to see the behavior for themselves. My experience lately is that Microsoft has made dependency injection so central to modern .NET development that if you are not using it, or do not understand how it actually behaves, you are leaving capability and correctness on the table.</p>
<p>Before getting there, we need to align on a few foundational ideas, specifically how SOLID, inversion of control, and dependency injection relate to one another.</p>
<p>The D in SOLID stands for the <a target="_blank" href="https://objectmentor.com/resources/articles/dip.pdf">Dependency Inversion Principle</a>. At a high level, it says that high level modules should not depend on low level modules. Both should depend on abstractions. Indirectly, it also means that changes in low level implementation details should not force changes in higher level policy or orchestration code. Abstraction exists to reduce coupling and to localize change.</p>
<p>Without inversion of control, a class typically constructs its own dependencies. The class will new up instances directly, decide which concrete implementation to use, and bind itself tightly to that decision. That works in trivial cases, but it quickly becomes a maintenance problem. Construction logic spreads, and replacing behavior requires invasive changes. Good luck writing automated unit tests against that!</p>
<p>With inversion of control, the class still declares what it needs, but it does not decide how those needs are satisfied. Something external takes responsibility for choosing and supplying the implementation. The class depends on an abstraction and trusts that the system will provide a valid instance.</p>
<p>Dependency injection is the most common and most approachable way of achieving inversion of control. If you have ever injected a concrete implementation into a constructor that expects an interface, you have used dependency injection. That is how most of us first learn it. The consuming class does not know or care which implementation it receives, it is coded to the contract. Dependency injection is not the principle itself. It is a mechanism that enables inversion of control.</p>
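<p>In sketch form, with invented types, that constructor injection looks like this:</p>

```csharp
// The consumer is coded to the contract, not the implementation.
public interface IGreetingService
{
    string Greet(string name);
}

public class EnglishGreetingService : IGreetingService
{
    public string Greet(string name) => $"Hello, {name}!";
}

public class Welcomer
{
    private readonly IGreetingService _greetings;

    // Dependency injection: something external supplies the implementation.
    public Welcomer(IGreetingService greetings) => _greetings = greetings;

    public string Welcome(string name) => _greetings.Greet(name);
}

public static class Demo
{
    public static void Main()
    {
        // The composition root decides which implementation is used.
        var welcomer = new Welcomer(new EnglishGreetingService());
        System.Console.WriteLine(welcomer.Welcome("Larry"));
    }
}
```

<p><code>Welcomer</code> never names a concrete type, so swapping in a different greeting service, or a test double, requires no change to <code>Welcomer</code> at all.</p>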
<p>If dependency injection and inversion of control are so closely related, are there other ways to achieve inversion of control? Yes, but they depend on the shape of the system. Callbacks, events, and message based systems all invert control in different ways. In distributed systems especially, control often flows through asynchronous messaging rather than direct calls. That is a deeper topic and one I plan to write about separately. What keeps pulling me back here is that when you strip those systems down to their fundamentals, it all comes back to SOLID.</p>
<p>With that foundational understanding in place, we can move into dependency injection in modern .NET and how little friction Microsoft has introduced to make it usable. To do that, I built a simple WebAPI that exposes a few endpoints. Each endpoint demonstrates how service lifetimes behave when resolved under different scopes. The services themselves are intentionally trivial so the lifecycle behavior is the only thing you are observing.</p>
<h2 id="heading-what-are-scopes">What are scopes?</h2>
<p>In the <a target="_blank" href="https://learn.microsoft.com/en-us/dotnet/core/extensions/dependency-injection/guidelines">Microsoft dependency injection model</a>, scopes describe the lifetime of a service instance. Choosing the correct lifetime is not a stylistic preference. It is a correctness decision. Using the wrong lifetime can introduce subtle bugs, state leakage, and concurrency issues.</p>
<p>There are three lifetimes, <a target="_blank" href="https://learn.microsoft.com/en-us/dotnet/core/extensions/dependency-injection/service-lifetimes">Singleton, Scoped, and Transient</a>.</p>
<ul>
<li><p><a target="_blank" href="https://learn.microsoft.com/en-us/dotnet/core/extensions/dependency-injection/service-lifetimes#singleton">Singleton</a> - A single instance is created and shared for the lifetime of the application. This is appropriate when the service holds immutable or read only state that is safe to share across all consumers. A common example is a service that loads reference data that never changes. The critical requirement is that the service must be thread safe. Don't you dare hold request specific information in here either!</p>
</li>
<li><p><a target="_blank" href="https://learn.microsoft.com/en-us/dotnet/core/extensions/dependency-injection/service-lifetimes#scoped">Scoped</a> - A single instance is created per logical scope. In a Web API, the default scope is an HTTP request. All resolutions within that request receive the same instance. This is useful for units of work that span multiple services, such as transactional operations. Scoped services are isolated per request and are not shared across concurrent requests.</p>
</li>
<li><p><a target="_blank" href="https://learn.microsoft.com/en-us/dotnet/core/extensions/dependency-injection/service-lifetimes#transient">Transient</a> - A new instance is created every time the service is resolved. This is appropriate for stateless services or short lived operations such as validators or formatters. Because no instance is shared, there is no risk of state bleeding across resolutions.</p>
</li>
</ul>
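<p>Here is a minimal, self-contained console sketch of those registrations (it only needs the Microsoft.Extensions.DependencyInjection package; the <code>Stamp</code> type is invented for illustration). Swap the registration line between the three lifetimes and watch the IDs change:</p>

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

public interface IStamp { Guid Id { get; } }

// Each instance gets a unique Id at construction time.
public class Stamp : IStamp { public Guid Id { get; } = Guid.NewGuid(); }

public static class LifetimeDemo
{
    public static void Main()
    {
        var services = new ServiceCollection();
        services.AddSingleton<IStamp, Stamp>();   // try AddScoped / AddTransient here

        using var provider = services.BuildServiceProvider();
        using var scope1 = provider.CreateScope();
        using var scope2 = provider.CreateScope();

        var a = scope1.ServiceProvider.GetRequiredService<IStamp>();
        var b = scope1.ServiceProvider.GetRequiredService<IStamp>();
        var c = scope2.ServiceProvider.GetRequiredService<IStamp>();

        // Singleton: a, b, and c all share one Id.
        // Scoped:    a == b (same scope), but c differs.
        // Transient: all three Ids differ.
        Console.WriteLine($"{a.Id} {b.Id} {c.Id}");
    }
}
```

<p>The scope here is created manually; in a Web API the framework creates one scope per HTTP request for you.</p>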
<p>Understanding these lifetimes and their implications is essential. Many of the most common dependency injection bugs come from violating lifetime boundaries, especially when a longer lived service depends on a shorter lived one.</p>
<h2 id="heading-lets-take-a-look-at-some-code">Let's take a look at some code!</h2>
<p>For the proof of concept, I created a Web API using Microsoft.Extensions.DependencyInjection. The service itself is a simple utility that produces a timestamp when constructed and when invoked. The controller, named ScopeDemonstrationController, accepts multiple instances of the same service through constructor injection. Each instance is keyed with a different lifetime.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770181742437/cd846e40-303c-4c31-a88f-b4b61ac4d872.png" alt class="image--center mx-auto" /></p>
<p>In Program.cs, the services are registered using keyed registrations, all pointing to the same concrete implementation. This makes it explicit which lifetime is being resolved at each injection point. Inside the controller methods, I return the timestamps along with labels so you can clearly see when each instance was created and whether instances are shared.</p>
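<p>For readers who just want the shape of it, the keyed registrations look roughly like this fragment (names are illustrative, the real code lives in the repository, and keyed services require .NET 8 or later):</p>

```csharp
// Program.cs sketch: same concrete type, three keyed lifetimes.
builder.Services.AddKeyedSingleton<ITimestampService, TimestampService>("singleton");
builder.Services.AddKeyedScoped<ITimestampService, TimestampService>("scoped");
builder.Services.AddKeyedTransient<ITimestampService, TimestampService>("transient");

// Controller sketch: each injection point names the lifetime it wants.
public class ScopeDemonstrationController : ControllerBase
{
    public ScopeDemonstrationController(
        [FromKeyedServices("singleton")] ITimestampService singleton,
        [FromKeyedServices("scoped")] ITimestampService scoped,
        [FromKeyedServices("transient")] ITimestampService transient)
    { /* capture and compare creation timestamps */ }
}
```
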
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770172294942/371342aa-2bdd-4098-95d0-0ce9c1995220.png" alt class="image--center mx-auto" /></p>
<p>The <a target="_blank" href="https://github.com/LarryGasik/DiScope">repository</a> is available on GitHub. To be honest, it is probably easier to explore there. You can clone it, run the API locally, modify the registrations, and observe how the behavior changes. In fact, I encourage you to do just that.</p>
<h2 id="heading-lets-run-it">Let's Run It!</h2>
<p>I started the API and hit the endpoints using Postman. The screenshots below show the results in the order the calls were executed. Keep the definition of each lifetime in mind as you look at the timestamps of these screen captures.</p>
<h3 id="heading-transient">Transient</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770181780310/4cc604bf-35af-4395-a4e0-f72c39aab15c.png" alt class="image--center mx-auto" /></p>
<p>Transient is the simplest lifetime to reason about. Each resolution produces a new instance. In the output, you can see that each injected service has a distinct creation timestamp. Even within the same request, no instances are shared. This confirms that transient services are never reused.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770172361707/0a2e6884-a5d3-4338-9c12-b6a5ce37a5cc.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-scoped">Scoped</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770181804065/32aab4c5-15ec-4b5a-8666-d280b9807c60.png" alt class="image--center mx-auto" /></p>
<p>Scoped behavior is more subtle. All resolutions within the same request reference the same instance. The timestamps show that the service was created once, then reused across injections. When a new request is made, a new instance is created. This aligns with the idea of a request scoped unit of work.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770172372916/ada22fa4-d405-4e3d-8fc3-c6bef42aac4f.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-singleton">Singleton</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770181826250/2f95fcd0-a481-4966-80fb-1afc7e976751.png" alt class="image--center mx-auto" /></p>
<p>Singleton behavior is the most visually distinct. The creation timestamp predates all request execution. The instance exists before any controller action is invoked and persists across all requests. This confirms that the instance is created once and reused everywhere. (Note that the container constructs a singleton lazily, at its first resolution, unless you register an already-constructed instance.)</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770172382810/75ca4fa4-39d5-4cfb-a81c-5d808b0ca91a.png" alt class="image--center mx-auto" /></p>
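<p>The three behaviors above can be reproduced without a Web API at all. The following is a hypothetical, hand-rolled container in Python (it borrows nothing from Microsoft.Extensions.DependencyInjection, and the names are invented) that mimics transient, scoped, and singleton resolution:</p>

```python
import itertools

_ids = itertools.count(1)

class StampedService:
    """Stand-in for the demo service: records an id when constructed."""
    def __init__(self):
        self.created = next(_ids)

class Container:
    """Toy container supporting the three classic lifetimes."""
    def __init__(self):
        self._singletons = {}

    def resolve(self, cls, lifetime, scope=None):
        if lifetime == "transient":
            return cls()                 # a new instance on every resolution
        if lifetime == "scoped":
            if cls not in scope:         # one instance per scope ("request")
                scope[cls] = cls()
            return scope[cls]
        if lifetime == "singleton":
            if cls not in self._singletons:
                self._singletons[cls] = cls()   # one instance, ever
            return self._singletons[cls]
        raise ValueError(lifetime)

c = Container()
for request in range(2):                  # simulate two requests
    scope = {}                            # each request gets a fresh scope
    t1 = c.resolve(StampedService, "transient")
    t2 = c.resolve(StampedService, "transient")
    assert t1.created != t2.created       # transient: never shared
    s1 = c.resolve(StampedService, "scoped", scope)
    s2 = c.resolve(StampedService, "scoped", scope)
    assert s1 is s2                       # scoped: shared within the request
    g = c.resolve(StampedService, "singleton")
assert g is c.resolve(StampedService, "singleton")  # singleton: shared across requests
```

<p>Swap the integer ids for timestamps and you get the same evidence the screenshots show: transient ids always advance, scoped ids repeat within a request, and the singleton id never changes.</p>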
<h2 id="heading-going-back-to-fundamentals">Going Back to Fundamentals</h2>
<p>This isn't a groundbreaking idea, and that's why I call it fundamental. Dependency injection, inversion of control, and service lifetimes are not advanced tricks. They sit right next to SOLID in your mind, and they quietly shape whether a system stays understandable or slowly turns into something brittle and surprising. Most bugs caused by lifetimes are not dramatic. They are subtle. They show up months later, under load, during a refactor, or when someone says, “This worked when I tested it.” I love how .NET bakes these good practices right into the platform. The container is there. The lifetimes are explicit. The behavior is observable. You can prove it to yourself with a few endpoints and some timestamps. That is not magic. That is good engineering.</p>
<p>If you take nothing else away from this post, take this: master the lifetimes of your services. They are architectural decisions. Treat them with the same respect you give public APIs, threading models, and data ownership.</p>
<p>And if you ever find yourself saying, “It is fine, I will just make it a singleton and it will go faster,” understand that you may have just introduced numerous design flaws into your system that will have you playing whack-a-mole while debugging.</p>
]]></content:encoded></item><item><title><![CDATA[Reading Will Make Your Career and Life Better.]]></title><description><![CDATA[Last week I shared an article by Ed Wisniowski over at Dirty Fingers titled Reading is Your Executive Secret Weapon. In it, Ed describes how authors shape an idea through their writing, then hand down lessons and mental models through books, not just...]]></description><link>https://brokenintellisense.com/reading-will-make-your-career-and-life-better</link><guid isPermaLink="true">https://brokenintellisense.com/reading-will-make-your-career-and-life-better</guid><category><![CDATA[books]]></category><category><![CDATA[leadership]]></category><category><![CDATA[learning]]></category><category><![CDATA[depth]]></category><dc:creator><![CDATA[Larry Gasik]]></dc:creator><pubDate>Fri, 16 Jan 2026 18:37:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/eeSdJfLfx1A/upload/6b862013c738fb3df14f210686d59bc7.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://brokenintellisense.com/whats-going-on-2026-01-08">Last week</a> I shared an article by <a target="_blank" href="https://www.edyouragilecoach.com/reading-is-your-executive-secret-weapon/">Ed Wisniowski over at Dirty Fingers titled Reading is Your Executive Secret Weapon</a>. In it, Ed describes how authors shape an idea through their writing, then hand down lessons and mental models through books, not just news or blogs. He also points out how much you can learn about someone by asking a simple question: What are you reading?</p>
<p>That question hits home with me because someone’s bookshelf offers insights into a person that the person themselves often cannot articulate. A bookshelf shows curiosity, values, and the kinds of challenges someone is focused on. It also shows whether someone is willing to sit with complexity long enough for it to change them.</p>
<p>Right now I am reading <a target="_blank" href="https://peterattiamd.com/outlive/">Outlive by Peter Attia</a>. The underlying theme is not just living longer, it is extending the quality years of life. The goal is more years of quality health with independence, capability, and energy and not simply more years managed by prescriptions and limitations.</p>
<p>Outlive is dense. It is not my natural wheelhouse, and that is part of why I like it. When I finished the cholesterol section, I could not recite the details like a doctor. But I did come away with a sturdier base understanding for how different lipids relate to risk, and how inflammation and the health of blood vessels factor into outcomes. More importantly, it sparked curiosity. The book does what non-fiction books are supposed to: build a foundation strong enough that you can return to it, question it, argue with it, and layer new knowledge on top of it as your understanding grows.</p>
<p>That is what depth feels like. It is not just information. It is information organized into a system that starts to shape your instincts.</p>
<p>I also noticed something else happen as I read it. The ideas started bleeding into my day to day conversations. I mentioned the book to a friend, and we ended up talking about glucose, wearable health tracking devices, and why some people respond differently to the same inputs. He tied parts of the discussion to his own experience with gluten sensitivity and glucose monitoring. Because we could anchor the conversation to a shared reference point, we were not just swapping opinions. We were building a shared mental map with different opinions and perspectives. The book created a stronger connection, and a more productive conversation.</p>
<p>This is not really about Outlive. It is about how books enable connections and thoughts. Shared reading gives you shared vocabulary. Shared vocabulary gives you speed, precision, and understanding. When two professionals have the same foundational understandings and principles, they can move from “what do you mean” to “what tradeoff are we choosing” much faster.</p>
<p>That matters in technology because our work is not defined by syntax anymore. Syntax is the easy part. The hard part is judgment.</p>
<p>I look at the state of many entry level and mid level engineers today and I see a learning culture built around quick tutorials, bootcamps, and increasingly AI driven approaches. Those tools can be useful, and they absolutely can help someone become productive faster. But there is a trade hiding in that convenience. Tutorials are often optimized for simplified examples, not for building the deeper understanding that explains why it works, when it fails, and what it costs. They rarely account for the enterprise-level demands that a real-world approach has to satisfy.</p>
<p>Cue the old man comments - I remember having shelves full of Wrox books and spending hours with documentation. Not because it was fun in the moment, but because it forced me to build my own mental model. Documentation and books have a way of showing the whole terrain: the boundaries, the constraints, the edge cases, the language the industry uses, and the historical context that explains why the current approach exists at all.</p>
<p>Today, when I talk to engineers about dependency injection, many can implement it quickly. Fewer can explain why it is done, or its impact on coupling, testability, object lifetime, composition roots, or the conditions where a pattern becomes a liability. That gap shows up later, when systems get messy and tradeoffs get expensive. Books close that gap because they force you to think in principles, not recipes.</p>
<p>Ed and I have this in common: we both read widely, and we both treat reading as part of our professional practice, not as a hobby we do when we have spare time. Ten years ago, he lent me <a target="_blank" href="https://www.oreilly.com/library/view/mythical-man-month-the/0201835959/">The Mythical Man Month by Fred Brooks</a>, which I never returned (sorry, Ed). That book is decades old, and it is still relevant because it is not teaching a framework. It is teaching how complex work behaves when humans, schedules, and coordination collide.</p>
<p>The challenges described in The Mythical Man Month still exist today, which is why its lessons remain worth applying across our industry. Teams that internalize those principles tend to build software in a more predictable way. That is another reason books matter for career development: tools change, but principles compound.</p>
<p>Reading also strengthens your ability to focus. In a world where most of our attention is constantly fragmented, the simple act of sustained reading is practice for sustained thinking. That is not a soft benefit in tech. It is a competitive advantage. In <a target="_blank" href="https://calnewport.com/deep-work-rules-for-focused-success-in-a-distracted-world/">Deep Work, Cal Newport</a> makes a similar case in his work on deep focus, arguing that the ability to concentrate on cognitively demanding work is what enables faster learning and higher value output.</p>
<p>And it is not just technical books that matter.</p>
<p>Nonfiction outside your domain expands your problem solving range. Reading in health, psychology, history, economics, leadership, and biography gives you more analogies and more ways to frame problems. It broadens the set of solutions you can imagine. It also makes you more effective in the parts of the job that are not code: influencing, coaching, prioritizing, communicating, and navigating ambiguity.</p>
<p>Fiction matters too. It is one of the few ways adults regularly practice stepping into someone else’s imagination. There is research suggesting that reading literary fiction can temporarily improve theory of mind, which is a way of saying it can sharpen your ability to infer what others may be thinking and feeling. It is a practice of perspective. In leadership, perspective shows up as better collaboration and better decision making across boundaries.</p>
<p>Even the personal benefits are not separate from the professional ones. Reading is associated with reduced stress, and lower stress directly affects the quality of your thinking and decision making.</p>
<p>There is also a basic, underrated mechanism: reading exposes you to more words, more sentence structures, and more precise ways to express an idea. That tends to improve vocabulary and communication over time, which matters when your job increasingly depends on explaining complex ideas clearly.</p>
<p>As a business professional, how does one start to gain the maximum benefit from the ideas that Ed described?</p>
<p>It starts by choosing depth on purpose. It means treating reading like training, not like entertainment you only do when you can get around to it.</p>
<p>It means reading across the entire spectrum. Yes, read blogs and short articles. They are great for discovering ideas quickly, and you can often engage directly with the author. But when something matters, graduate it into a book, a long essay, or primary documentation so you can understand the full shape of the topic.</p>
<p>It means writing alongside reading. Writing is where you find out what you actually believe. If you cannot explain an idea in your own words, you do not own it yet. Reading builds the raw material. Writing turns it into usable judgment. Writing this article alone has made me fully flesh out all of the ways reading has helped me, and forced me to look into how it has helped others as well.</p>
<p>Part of being a leader is modeling the behavior. Ask your teams what they are reading. Share what you are reading. Normalize curiosity. Create space for reflection, even if it is informal. Over time, this builds a culture where people are not just shipping features, they are improving how they think.</p>
<p>I encourage anyone to read. Read books. Read blogs. Read magazines. Read history. Read science. Read novels. Pick topics that are close to your job and topics that have nothing to do with it. Go to the library. If you want career growth, do not just chase the next tool. Build the kind of mind that can evaluate tools, tradeoffs, and ideas with depth. Books are still one of the most reliable ways to do that.</p>
]]></content:encoded></item><item><title><![CDATA[Honesty vs. Transparency]]></title><description><![CDATA[In defining my values as a person lately, I thought about honesty. While honesty matters to me, but it is not the bar I hold myself to as a technology professional. Honesty is telling the truth when asked. It is accurate, but it can still be incomple...]]></description><link>https://brokenintellisense.com/honesty-vs-transparency</link><guid isPermaLink="true">https://brokenintellisense.com/honesty-vs-transparency</guid><category><![CDATA[Honesty]]></category><category><![CDATA[values]]></category><category><![CDATA[technology]]></category><dc:creator><![CDATA[Larry Gasik]]></dc:creator><pubDate>Sat, 10 Jan 2026 17:04:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/FwjWUGbrxOU/upload/647ba2ed821bc3c0210b641492c7592f.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In defining my values as a person lately, I thought about honesty. While honesty matters to me, but it is not the bar I hold myself to as a technology professional. Honesty is telling the truth when asked. It is accurate, but it can still be incomplete. Incomplete truth creates gaps, and gaps create risk. Transparency is what fills those gaps with the context people need to make good decisions.</p>
<h3 id="heading-learning-the-hard-way-early-in-my-career">Learning the Hard Way Early in My Career</h3>
<p>I had just closed on a condo when I lost my job. A few months into my mortgage, I was terrified. I went on a handful of interviews and took the first offer I received, not because it was the perfect fit, but because I needed the job for stability. The timeline still sticks with me. I interviewed on Tuesday, accepted on Wednesday, started on Thursday, and resigned Friday morning.</p>
<p>The interview went well. I focused my questions on what I would build, where my career could go, and the types of systems I would be working on. They mentioned office space on the second floor of the building and gave me a quick walkthrough. It was a small company and the office was quiet that day, but nothing seemed unusual. I assumed I was joining a standard team, in a standard office, doing standard engineering work.</p>
<p>On day one, I arrived and filled out HR paperwork. Then they told me I needed to drive to a consultant office about thirty minutes away. That was the first moment where the ground shifted. This was not a minor detail in my book. It fundamentally changed what the job was. Driving all around Chicago was not something I was up for. There wasn't even going to be reimbursement for using my car!</p>
<p>In hindsight, it is obvious that the travel should have been discussed before an offer was accepted. Not because travel is inherently bad, but because it affects your life, your routine, and your boundaries.</p>
<p>I was young enough to think I should just take the hit, learn the lesson, and keep an open mind. I needed the job.</p>
<p>I arrived at my first assignment and immediately felt lost. I was told to call a phone number with no name attached. I had no clear point of contact, no onboarding plan, and no setup. I was working off my own laptop with my own tooling, which raises questions I did not even know how to ask until later in my career. Heck, I didn't even know what part of town I was in!</p>
<p>After about thirty minutes, someone eventually let me in, walked me to a small office, gave me a quick verbal summary of a project that was supposedly in its early stages, and told me my first task was to code email templates. There were no real requirements, just a rough idea of what they wanted. I did what developers do in ambiguity. I started building with what details I had, and would fill in the gaps later.</p>
<p>About ninety minutes later, he returned and said he wanted me to meet the client so they could explain the project. Before the meeting, he pulled me aside and added context that changed everything. The project was far behind. The client believed it was nearly complete. Then he told me he wanted me to introduce myself as a QA professional who was there to start the QA process.</p>
<p>I clarified that I was an engineer. He understood. He still wanted me to represent myself as QA. It was still my first day.</p>
<p>When I got home, I realized the issue was not simply a rough first day. It was an integrity problem. If this organization was willing to misrepresent progress to a client and misrepresent my role to cover a delivery gap, then I had no reason to believe they would be truthful with me when it mattered. I went along with it in the moment because I felt trapped by the timing and my finances, but I felt disgusted with myself. I needed the job.</p>
<p>Before 8 AM the next morning, I resigned from the job I needed.</p>
<p>I cannot remember the names of the companies involved. I barely knew anything beyond a marketing brochure and the experience of a single day. But that day clarified something fundamental: honesty is a value I will not violate. And over time, I have realized that honesty is also not enough.</p>
<p>That company could claim they were honest. They did have a position open. I would work with clients. They never explicitly promised I would not travel. They simply omitted information that any reasonable person would consider material. Honesty without context becomes a technicality. It can be “true” while still being misleading.</p>
<h3 id="heading-transparency-builds-respect">Transparency Builds Respect</h3>
<p>Transparency is different. Transparency is proactively sharing information that helps other people plan, decide, and manage risk. In a hiring conversation, that would have looked like simple, responsible questions: Do you have reliable transportation? Are you comfortable driving to client sites? Are you willing to move between locations across the city? That is not oversharing. That is respecting the other person’s ability to evaluate the position.</p>
<p>There are limits, of course. Not all information can be shared. Some things are confidential, legally restricted, or inappropriate for broad distribution. Transparency is not the same as publishing everything to everyone. It is the discipline of sharing what is relevant, verifiable, and impactful.</p>
<p>Later in my career, I took a role that was objectively brutal. There were many nights that went until morning, and stretches where I worked for extended periods with minimal rest. I showed up to family events exhausted and missed things that mattered. The overall message from most of my friends and family was simple: quit.</p>
<p>I did not need this job. I did not quit.</p>
<p>The difference was trust, and trust came from transparency.</p>
<p>I believed in my direct supervisor. We shared the same instinct to do right by the customer, and we shared a genuine passion for the craft of building systems that work. I did not yet understand the full value system of the company, but I understood his. He was direct with everyone, and expected it in return. If my work was not up to standard, he said so. If I nailed something, he said so. He did not hide risk, soften reality, or let problems drift until they became emergencies. Everything he asked of me, he demanded of himself. That feedback loop gave me a clear model of expectations, and it gave me the confidence to adjust quickly.</p>
<h3 id="heading-transparency-isnt-always-easy">Transparency isn't always easy</h3>
<p>That is what transparency does in engineering environments. It reduces guessing. It reduces politics. It reduces the cognitive overhead of interpreting what is really going on. When leaders and teams are transparent, engineers can spend their energy on execution and problem solving instead of reading between the lines.</p>
<p>The book <a target="_blank" href="https://speedoftrust.com/">Speed of Trust by Stephen M. R. Covey</a> frames trust as something you can build deliberately through both character and competence, and Covey explicitly calls out “Create Transparency” as a key behavior. The point is not theatrical openness. It is being real, telling the truth in a way others can verify, and avoiding the illusion that things are different than they are. When you apply that thinking to delivery and stakeholder management, the impact is clear: low-trust environments create unnecessary red tape because everyone is compensating for uncertainty. High-trust environments move faster because people can act on reality.</p>
<p>Transparency still requires <a target="_blank" href="https://hbr.org/2016/07/when-transparency-backfires-and-how-to-prevent-it">judgment</a>. If someone is insecure, manipulative, or incentivized to weaponize information, broadcasting it more is probably a bad idea. In technology, there are some rules designed to protect the company, like not exposing production systems or sensitive data “in the name of transparency.” Strong teams pair transparency with good controls, clear access models, and deliberate communication. It can also reduce meetings, but that can be a different post.</p>
<p>When I look back on the day I quit after my first day, and I contrast it with the nights I stayed late for a leader I trusted, the values are clear. I can handle hard work. I can handle ambiguity. I can handle pressure. What I will not tolerate is being asked to operate around deceptive behavior where words are technically true but strategically misleading.</p>
]]></content:encoded></item><item><title><![CDATA[What's going on? 2026-01-08]]></title><description><![CDATA[As I enter the year and start to revisit certain things in my own career, along with having discussions with other professionals, I try to slow down and think about different perspectives. There are challenges in every role, but what feels simple in ...]]></description><link>https://brokenintellisense.com/whats-going-on-2026-01-08</link><guid isPermaLink="true">https://brokenintellisense.com/whats-going-on-2026-01-08</guid><category><![CDATA[durableFunction]]></category><category><![CDATA[news]]></category><category><![CDATA[reading]]></category><category><![CDATA[transferable skills examples]]></category><dc:creator><![CDATA[Larry Gasik]]></dc:creator><pubDate>Fri, 09 Jan 2026 02:34:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/RjqCk9MqhNg/upload/51c4c60c01169379537b33b9863756a5.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As I enter the year and start to revisit certain things in my own career, along with having discussions with other professionals, I try to slow down and think about different perspectives. There are challenges in every role, but what feels simple in one organization can be genuinely difficult in another.</p>
<p>I have also been spending time thinking about setting clearer examples for how I operate and intentionally creating structure that others can interface with, rather than just being reactive.</p>
<p>Here are a few articles I have found interesting lately.</p>
<ul>
<li><p><a target="_blank" href="https://www.cnbc.com/2026/01/07/openai-chatgpt-health-medical-records.html">OpenAI launches ChatGPT Health to connect user medical records, wellness apps</a> - OpenAI launched a health focused chat. I suspect it is going to be a success. I've used AI to help me interpret health metrics, but I didn't just hand it all of my medical records. What happens when they decide to start selling data? What happens when there's a security breach? I'm good, I won't participate.</p>
</li>
<li><p><a target="_blank" href="https://www.edyouragilecoach.com/reading-is-your-executive-secret-weapon/">Reading is Your Executive Secret Weapon</a> - Ed talks about how reading is becoming less commonplace in the work place in favor of things like AI summaries. I mostly agree with Ed, but I am coming from more of a depth perspective when I do so.</p>
</li>
<li><p><a target="_blank" href="https://medium.com/@garciajvincent/transferable-skills-software-development-taught-me-3830b3d3129b">Transferable Skills Software Development Taught Me!</a> - An older article at this point, but Vincent Garcia talks about some of the less technical skills developers need to be successful, and how they're transferable across other experiences.</p>
</li>
<li><p><a target="_blank" href="https://medium.com/@robertdennyson/the-ultimate-guide-to-azure-durable-functions-a-deep-dive-into-long-running-processes-best-bacc53fcc6ba">The Ultimate Guide to Azure Durable Functions: A Deep Dive into Long-Running Processes, Best Practices, and Comparisons with Azure Batch</a> - I've been spending more time on serverless architecture in my AZ-204 training, and found this to be a great introduction to durable functions.</p>
</li>
<li><p><a target="_blank" href="https://hbr.org/2024/04/how-to-get-the-most-out-of-a-one-on-one-with-your-boss">How to Get the Most Out of a One-on-One with Your Boss</a> - One-on-ones tend to be a grind when you have direct reports. But when you are the direct report, there's a lot of preparation that can help you get the most out of it. Make sure it isn't just another stand-up.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[What's going on? 2026-01-04]]></title><description><![CDATA[The holidays are officially over for me. We’re back to work at full bore. Hopefully people haven’t given up on their New Year’s Resolutions so far. This is going to be the first real week of the year for testing people, so start off on the right foot...]]></description><link>https://brokenintellisense.com/whats-going-on-2026-01-04</link><guid isPermaLink="true">https://brokenintellisense.com/whats-going-on-2026-01-04</guid><category><![CDATA[news]]></category><category><![CDATA[AI]]></category><category><![CDATA[leadership]]></category><category><![CDATA[CVE]]></category><category><![CDATA[static analysis]]></category><dc:creator><![CDATA[Larry Gasik]]></dc:creator><pubDate>Sun, 04 Jan 2026 15:18:30 GMT</pubDate><content:encoded><![CDATA[<p>The holidays are officially over for me. We’re back to work at full bore. Hopefully people haven’t given up on their New Year’s Resolutions so far. This is going to be the first real week of the year for testing people, so start off on the right foot. Stay focused.</p>
<p>I’ve got a handful of things I want to write about that are halfway done, but here are some other things I’ve read over the past couple of days that grabbed my attention.</p>
<ul>
<li><p><a target="_blank" href="https://www.herodevs.com/blog-posts/when-no-cves-isnt-a-security-guarantee-what-the-latest-angular-vulnerabilities-reveal-about-open-source-risk">When “No CVEs” Isn’t a Security Guarantee: What the Latest Angular Vulnerabilities Reveal About Open-Source Risk</a> - The conclusion of this article is where the message needs to be. A quiet list of updates or news doesn't mean your software is secure. It means it isn't being examined. Regular audits, security reviews, pen tests and maintenance is critical for the security of your system.</p>
</li>
<li><p><a target="_blank" href="https://www.pcloadletter.dev/blog/craftsmanship-is-dead/">Software craftsmanship is dead</a> - I felt this one. The further in my career I get, the more I see pressure for speed to market taking over. It reminds me of when I was an intern, and Xbox games started getting patches over the internet. My mentor explained how it would result in a greater urgency to ship garbage and fix it later. With vibe coding coming around, won't it get worse?</p>
</li>
<li><p><a target="_blank" href="https://hbr.org/2025/12/the-hbr-charts-that-help-explain-2025">The HBR Charts that Help Explain 2025</a> - How are people using AI in 2025? How do people feel about the economy, and how does that pair with their news source? What about their level of joy in life? There's a great section at the end about when to use different coaching styles that seems like common sense, but is always worth reviewing.</p>
</li>
<li><p><a target="_blank" href="https://www.jetbrains.com/qodana/">Qodana</a> - I've been using this on some of my personal projects as a way to run static analysis on my projects and see what other tools are out there. It is my own personal project, so it may take some time to figure out how good this is for teams.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[What's going on? 2026-01-02]]></title><description><![CDATA[New Years, new goals, new ideas... new fears, another year older. I'm still a believer in consistent change. Show up regularly, do your best, and you're going to see a difference. Personally, I'm gearing up for an intense quarter, and I'm feeling gre...]]></description><link>https://brokenintellisense.com/whats-going-on-2026-01-02</link><guid isPermaLink="true">https://brokenintellisense.com/whats-going-on-2026-01-02</guid><category><![CDATA[news]]></category><category><![CDATA[AI]]></category><category><![CDATA[metrics]]></category><category><![CDATA[obsidian]]></category><category><![CDATA[#entry-level]]></category><category><![CDATA[entryleveljobs]]></category><dc:creator><![CDATA[Larry Gasik]]></dc:creator><pubDate>Sat, 03 Jan 2026 01:19:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/zUVOBK8_LUw/upload/bfbdfe8ad6621c18329157d462c154b7.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>New Years, new goals, new ideas... new fears, another year older. I'm still a believer in consistent change. Show up regularly, do your best, and you're going to see a difference. Personally, I'm gearing up for an intense quarter, and I'm feeling great about things.</p>
<ul>
<li><p>The most durable tech is boring, old, and everywhere - We’re pushed with constant updates to software, new phones, and another AI capability every other day it seems. It can become overwhelming. There are some technologies that may still be here at the end of our careers, which makes you wonder where your training time would be best spent long term.</p>
</li>
<li><p><a target="_blank" href="https://medium.com/@austin-starks/the-death-of-the-code-monkey-why-i-fear-for-the-class-of-2026-13dbf531a76f">The Death of the Code Monkey: Why I Fear for the Class of 2026</a> - Great article about how capable AI is today from an engineering perspective, and how the role of the junior engineer will need to evolve. I love his idea of doing things by hand just to learn, and to identify what is AI slop. The problem is, you’ll have to do it on your own time.</p>
</li>
<li><p><a target="_blank" href="https://youtu.be/z4AbijUCoKU">Give me 15 Minutes. I’ll Teach You 80% of Obsidian</a> - Nick Milo is strong in Obsidian, and as I re-train myself on Obsidian, I find myself going to him quite a bit for help. Obsidian is a knowledge management app that uses Markdown to build your own private knowledge base. In practice, that means instead of keeping your to-do lists in Notepad, you have a tool that organizes your notes and helps you find your ideas.</p>
</li>
<li><p><a target="_blank" href="https://techleadjournal.dev/episodes/241/">Tech Lead Journal - <strong>Your Code as a Crime Scene: The Psychology Behind Software Quality - Adam Tornhill</strong></a> - I’m halfway through this episode, and I love it. Tornhill talks about how there’s no way to identify “good code”, only indicators of bad practices. He goes into how many code health metrics are “vanity metrics”, which made me laugh because I cannot find a better term. Odds are I’m going to write a full article on this one.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[One Database Per Service]]></title><description><![CDATA[I've had to work with engineers across different areas of microservices lately to help them operate more efficiently. Time and time again, I tell them the same thing: a microservice should be responsible for a single business capability, and the boun...]]></description><link>https://brokenintellisense.com/one-database-per-service</link><guid isPermaLink="true">https://brokenintellisense.com/one-database-per-service</guid><dc:creator><![CDATA[Larry Gasik]]></dc:creator><pubDate>Thu, 01 Jan 2026 17:09:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/4QVqSh4VvP4/upload/3209b18bbdf4c8f552d3a3739cff9568.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I've had to work with engineers across different areas of microservices lately to help them operate more efficiently. Time and time again, I tell them the same thing: <strong>a microservice should be responsible for a single business capability, and the boundary should be clear</strong>.</p>
<p>Independent deployments require independent schemas. If two services share a data schema, they share a reason to change. You just coupled change at the data layer.</p>
<p>Fowler echoes this in his writing on microservices, stating that each service should manage its own database. He also highlights <a target="_blank" href="https://martinfowler.com/bliki/PolyglotPersistence.html">polyglot persistence</a>, where an enterprise uses different data storage strategies based on the specific needs of each service. Clear component boundaries let you abstract the storage technology so the rest of the system can interact with data through contracts, rather than through table knowledge.</p>
<p>Chris Richardson makes the same point: services should keep persistent data private and only accessible through the service API, not by direct database access. A shared schema is an integration mechanism, and it is a very expensive one.</p>
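<p>To make that concrete, here is a minimal sketch of the difference. All of the names are hypothetical, and the in-memory dictionary stands in for a real database; the point is only that the second service depends on a contract, not on table knowledge:</p>

```python
# Hypothetical sketch: a service keeps its persistent data private
# and exposes it only through its API, per Richardson's point above.

class ProfileService:
    def __init__(self):
        # Private storage; no other service may query this directly.
        self._rows = {"u1": {"display_name": "Larry"}}

    # The public contract other services are allowed to depend on.
    def get_display_name(self, user_id: str) -> str:
        return self._rows[user_id]["display_name"]


class InvoiceService:
    def __init__(self, profiles: ProfileService):
        self._profiles = profiles

    def invoice_header(self, user_id: str) -> str:
        # Integration happens through the contract, not through a SQL
        # join against ProfileService's tables. Profile can now reshape
        # its schema without breaking invoicing.
        return f"Invoice for {self._profiles.get_display_name(user_id)}"


invoices = InvoiceService(ProfileService())
print(invoices.invoice_header("u1"))  # Invoice for Larry
```

<p>If InvoiceService instead joined on the profile tables, any schema change in Profile would be a breaking change for Billing, which is exactly the shared-schema coupling described above.</p>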
<p>I've had a Post-it note on my monitor for the past few months to work with my team about data ownership, and how it should really be called "<a target="_blank" href="https://learn.microsoft.com/en-us/dotnet/architecture/microservices/architect-microservice-container-applications/data-sovereignty-per-microservice">data sovereignty</a>". That term connects with me because it forces the team to choose what data makes sense for them. For example, if I have a service that stores a list of states and countries, I don't really care about data related to inventory.</p>
<h2 id="heading-using-a-shared-database">Using a Shared Database</h2>
<p>"Alright, Larry. This sounds like a pain and like you're being dogmatic about microservices. I'm <a target="_blank" href="https://microservices.io/patterns/data/shared-database.html">using a shared database</a> anyway."</p>
<p>Okay. Let's play this out.</p>
<p>Initially, this is going to feel really good. It allows the system to behave like one tidy unit. Reports are easy because you can write a single SQL query joining across different domains. Referential integrity can exist across those domains. Good times.</p>
<p>But then comes the coordination tax.</p>
<p>Who owns the schema? Is it now your DBA team? What happens when one backlog item depends on a schema change that also impacts other services? What happens when Service A introduces a change that is perfectly reasonable for its domain, but breaks Service B because Service B was reading Service A tables through a join?</p>
<p>And it gets worse with long running processes.</p>
<p>If one service kicks off a long running process that holds locks in the shared database, another service can get blocked waiting on those same resources. Even if the services are deployed separately, they are now coupled at runtime through database contention.</p>
<p>I'm not even going to get into all the technical approaches and operational strategies for databases here. That could be a blog series on its own.</p>
<p>In this approach, you turned it into a distributed monolith. Great job, you split the application, but because you coupled the data, you coupled the release train. This is another way to get the worst of both worlds.</p>
<p>If your service should have a single reason to change, a shared schema produces many reasons to change for everyone involved.</p>
<h2 id="heading-data-sovereignty">Data Sovereignty</h2>
<p>It is all about data sovereignty. When we say database per service, we mean the service is the sole owner of its data and schema. That service is the only actor allowed to change it. Sure, you can run multiple databases on the same physical server or managed platform, but the real constraint is ownership and access boundaries.</p>
<p>Changes to your data can have an autonomous lifecycle <a target="_blank" href="https://learn.microsoft.com/en-us/dotnet/architecture/microservices/architect-microservice-container-applications/data-sovereignty-per-microservice">when each service owns that domain's data</a>. When you define your domains well, this is usually straightforward. But if one service needs data from another service, you now need to evaluate how integrations behave. That is not always cut and dry.</p>
<h2 id="heading-data-duplication">Data Duplication</h2>
<p>One of the first things I was passionate about early in my career was database normalization. I remember spending a ton of time trying to get to Fifth Normal Form to reduce redundancy, plus leaning heavily on referential integrity. Nowadays, I would struggle to define what is and is not part of Fifth Normal Form from memory. Many mid level engineers have never even heard of normalization.</p>
<p>But in a microservices world, where each service owns its data, the overall system will have duplication. It is the nature of the beast. Often, that duplication is intentional and performance friendly. It can also support autonomy by allowing a service to answer questions without synchronously calling other services on every request.</p>
<p>It took me a while to accept this, because I had "data duplication is wrong" burned into my brain. In distributed systems, the real issue is often coupling, not duplication.</p>
<p>We see eventual consistency all the time in large systems and do not even think about it. Ever unsubscribe from an email list and get a message saying it may take 48 hours to go into effect? In that system, there might be a settings service and a scheduled mail service. The point is not that the mail service checks the settings service every time it sends an email. Updates flow asynchronously, which leads to eventual consistency. How you implement that flow depends on your system.</p>
<p>Keep in mind that when you have independent services with independent data (even if some of it is duplicated), not everything is always in sync down to the millisecond. An <a target="_blank" href="https://microservices.io/patterns/data/event-driven-architecture.html">event driven</a> approach can give you cross service consistency without relying on distributed transactions.</p>
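<p>A toy version of that unsubscribe flow shows the shape of it. The "bus" here is just a list standing in for a real message broker, and the event name is invented; the point is that the mail service keeps its own copy of the setting and converges after the event is delivered, not at the instant of the write:</p>

```python
# Toy sketch of eventual consistency between two services.
# The "bus" is a plain list standing in for a real message broker.

bus = []  # pending events, delivered later by some dispatcher

class SettingsService:
    def __init__(self):
        self._unsubscribed = set()

    def unsubscribe(self, email: str):
        self._unsubscribed.add(email)
        # Publish an event instead of writing to the mail service's data.
        bus.append(("Unsubscribed", email))


class MailService:
    def __init__(self):
        # The mail service's own (duplicated) copy of the data it needs.
        self._do_not_mail = set()

    def handle(self, event):
        kind, email = event
        if kind == "Unsubscribed":
            self._do_not_mail.add(email)

    def can_send(self, email: str) -> bool:
        return email not in self._do_not_mail


settings, mail = SettingsService(), MailService()
settings.unsubscribe("a@example.com")

# Before delivery the two services disagree: this window is the "48 hours".
assert mail.can_send("a@example.com")

while bus:                      # the dispatcher eventually delivers
    mail.handle(bus.pop(0))

assert not mail.can_send("a@example.com")  # converged
```

<p>Notice that neither service ever touches the other's storage; the only coupling is the event contract.</p>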
<p>ACID (atomic, consistent, isolated, durable) transactions generally do not exist across service boundaries. You can get ACID behavior inside a service and its own database. The moment you cross into another domain and another data store, there is no guarantee that everything stays perfectly consistent at the same instant. You design for convergence over time.</p>
<h2 id="heading-cqrs-helps-but-it-is-not-that-simple">CQRS helps, but it is not that simple</h2>
<p>Every application eventually needs something like a dashboard that cuts across multiple domains. You do not want a UI making five network calls and stitching together a view, and you also cannot do a SQL join across service boundaries because you are not allowed to reach into another service's data store. The data inside a service is private and only accessible through its API.</p>
<p><a target="_blank" href="https://martinfowler.com/bliki/CQRS.html">CQRS</a> (Command Query Responsibility Segregation) can help.</p>
<p>Fowler describes CQRS as separating models for reads and writes. In practice, this often means your write model stays focused on enforcing business rules within a service boundary, while a separate read model is optimized for querying.</p>
<p>That read model is frequently built from events published by multiple services. You consume those events and build a projection that is already shaped like the query you want to run. Then your query path reads from that projection instead of trying to query the domain model directly.</p>
<p>Think of it as building a purpose built read model and querying that, rather than querying the domain write model directly. It can be extremely useful, but it introduces more moving parts: event publication, event consumption, projection rebuilds, and debugging eventual consistency issues.</p>
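<p>As a rough sketch with invented event names, a projection is just a consumer that folds events from several services into a query-shaped document:</p>

```python
# Rough CQRS sketch: a read model (projection) built from events
# published by two different services. Event names are invented.

def project(events):
    """Fold a stream of events into a dashboard-shaped read model."""
    dashboard = {}
    for kind, user_id, payload in events:
        row = dashboard.setdefault(user_id, {"name": None, "paid": 0.0})
        if kind == "ProfileUpdated":      # from the profile service
            row["name"] = payload
        elif kind == "PaymentReceived":   # from the payments service
            row["paid"] += payload
    return dashboard


events = [
    ("ProfileUpdated", "u1", "Larry"),
    ("PaymentReceived", "u1", 25.0),
    ("PaymentReceived", "u1", 10.0),
]

# The query path reads this projection; no cross-service joins needed.
view = project(events)
assert view == {"u1": {"name": "Larry", "paid": 35.0}}
```

<p>Rebuilding the projection is just replaying the events, which is also how you recover when the read model's shape needs to change.</p>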
<p>CQRS can be a topic on its own, including when to use it, when not to use it, and how to keep it from turning into accidental complexity.</p>
<h2 id="heading-how-to-do-this">How to do this</h2>
<p>As I said earlier, polyglot persistence in microservices means I do not really care what technologies you choose. You should do what is right for your project based on requirements and team capability.</p>
<p>With the maturity of code first approaches to database design and the availability of schema less databases, your schema can evolve with your application instead of requiring tightly coordinated database changes at the exact moment you need them.</p>
<p>I'm still big on referential integrity and I love relational databases. But I also recognize that I do not always know what my schema will look like on a project. On personal projects especially, I often ingest data I did not fully anticipate. In those cases, I tend to lean toward schema less storage when I am experimenting and learning.</p>
<p>For CQRS read models, document databases can also be a great fit because denormalization is the point. You can evolve the read model over time without constantly fighting relational shape changes in a way that introduces unnecessary coupling.</p>
<h2 id="heading-bringing-it-back-to-srp-for-microservices">Bringing it back to SRP for microservices</h2>
<p>This comes back to <a target="_blank" href="https://brokenintellisense.com/the-single-responsibility-principle-in-microservices">SOLID and SRP for microservices</a>. Once you see it, it should feel obvious why each service needs its own schema.</p>
<p>Service boundaries allow a clean separation of concerns across responsibilities and data ownership. Your API enables interaction with that data while keeping responsibility within the owning application.</p>
<p>If your data is shared in the same database, it is not a real boundary. It is a facade with separate deployment units that still share the same change surface area, and it will become a mess. Stop bringing a mess into your codebase.</p>
<p>You <a target="_blank" href="https://microservices.io/patterns/data/shared-database.html#problem">want</a> loose coupling. This is deliberate data ownership. This keeps teams and systems honest.</p>
<p>I get it. If you're experimenting in proof of concept stages, a shared database might feel like the fastest path. I've just seen too many proof of concepts make their way to production, never get cleaned up, and then nobody wants to pay to fix it because "it works".</p>
]]></content:encoded></item><item><title><![CDATA[What's going on 2025-12-29]]></title><description><![CDATA[Year end things are happening. Reviews, reflection, goal setting… dance recitals, family get togethers, holiday parties… illness, travel, pipes bursting, extreme weather, the Blackhawks are losing… OUR PETS’ HEADS ARE FALLING OFF!!!
Let me take a few...]]></description><link>https://brokenintellisense.com/whats-going-on-2025-12-29</link><guid isPermaLink="true">https://brokenintellisense.com/whats-going-on-2025-12-29</guid><category><![CDATA[update ]]></category><category><![CDATA[Microservices]]></category><category><![CDATA[monolithic architecture]]></category><category><![CDATA[#monolithic #microservices]]></category><category><![CDATA[communication skills]]></category><category><![CDATA[Career]]></category><dc:creator><![CDATA[Larry Gasik]]></dc:creator><pubDate>Mon, 29 Dec 2025 18:21:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/bK9-VLLCpeU/upload/92a7c8f4b1a53280968e1ac0f9dba570.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Year end things are happening. Reviews, reflection, goal setting… dance recitals, family get togethers, holiday parties… illness, travel, pipes bursting, extreme weather, the Blackhawks are losing… OUR PETS’ HEADS ARE FALLING OFF!!!</p>
<p>Let me take a few minutes to brain dump some things I’ve been reading and the thoughts they sparked.</p>
<ul>
<li><p><a target="_blank" href="https://frombadge.medium.com/microservices-vs-monolith-what-i-learned-building-two-fintech-marketplaces-under-insane-deadlines-fe7a4256b63a">Microservices vs Monolith: What I Learned Building Two Fintech Marketplaces Under Insane Deadlines</a></p>
<p>Time, team, process, and domain boundaries are what actually decide whether microservices or a monolith make sense. The author ties it to a soccer analogy at the end, but the real takeaway is that organizational structure drives architecture far more than engineering ideals. This resonates because so many debates pretend the technology exists in a vacuum when it very clearly does not.</p>
</li>
<li><p><a target="_blank" href="https://terriblesoftware.org/2025/11/25/what-actually-makes-you-senior">What Actually Makes You Senior – Terrible Software</a></p>
<p>This came out of a work discussion around responsibility and ownership at different engineering levels. The article frames mid-level engineers as people who can solve well-defined problems, while senior engineers operate more strategically and help define the problem.</p>
<p>I look at it a bit differently. To me, senior engineers require deeper technical and product understanding, along with the ability to communicate clearly and effectively. I also see senior engineers as a critical force multiplier for junior and mid-level engineers. They should be leveling others up, building more maintainable systems, and making decisions that age well. I think a lot of that gets ignored, and it makes me wonder how I would define these roles if I were designing them from scratch.</p>
</li>
<li><p><a target="_blank" href="https://news.yuezhao.coach/p/visibility-and-communication-is-the">Visibility and Communication is The Job – by Yue Zhao</a></p>
<p>I am not sure where this originally came from, but it was sitting in my notes and felt like a natural continuation of the point above. A huge part of being a good engineer is communication.</p>
<p>While regular touch points and updates are valuable, I also think it is important to train people on how to get information from you without constant interruption. During my most productive development years, I was very deliberate about keeping my work items up to date so others could help themselves to status and context. That only works if you are disciplined about it, but when it works, it scales far better than being a perpetual human status endpoint.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Year End Reflection: Responsibility, Growth, and What Comes Next]]></title><description><![CDATA[As the year winds down, a lot of us start doing the same thing. We look back at what we shipped, how we showed up for our teams and our families, and where we want to grow next year. We ask ourselves, what is going on my annual performance review? Am...]]></description><link>https://brokenintellisense.com/year-end-reflection-responsibility-growth-and-what-comes-next</link><guid isPermaLink="true">https://brokenintellisense.com/year-end-reflection-responsibility-growth-and-what-comes-next</guid><category><![CDATA[goals]]></category><category><![CDATA[goal-setting]]></category><category><![CDATA[yearinreview]]></category><dc:creator><![CDATA[Larry Gasik]]></dc:creator><pubDate>Fri, 26 Dec 2025 19:54:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/PAykYb-8Er8/upload/3450763422e4d0d6e8a2fd950f4eb455.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As the year winds down, a lot of us start doing the same thing. We look back at what we shipped, how we showed up for our teams and our families, and where we want to grow next year. We ask ourselves, what is going on my annual performance review? Am I becoming the type of person I want to be?</p>
<p>I am absolutely in that camp.</p>
<p>I took a little time off over the holidays and used it to look in the mirror. Not only at what I delivered at work, but at how I operated through the year as a person. What I found is a familiar pattern: a strong sense of responsibility, very high standards, and a tendency to carry more weight than is actually mine to carry.</p>
<p>I wanted to get this down, as well as where I want to go. It is partly personal reflection and partly a commitment to how I want to operate going forward, both as a professional and as a human being who would like to avoid turning everything into a dumpster fire in my head and letting everything feed off everything else.</p>
<h2 id="heading-owning-responsibility-without-owning-the-universe">Owning Responsibility Without Owning the Universe</h2>
<p>One of my core beliefs is that if something is important to you, you will take care of it. I learned that from having a bike as a child, and it has carried through into my career and my family life.</p>
<p>If I am attached to an initiative in any way, I operate with the belief that it is on me to prepare, anticipate problems, and make sure things go as smoothly as possible. When they do not go smoothly, I have a habit of taking it hard. I want success for all of the projects I am on and every project around me.</p>
<p>I need to recognize that I can care deeply and act responsibly while also accepting that not everything is in my control.</p>
<p>That means a few adjustments in how I process ideas:</p>
<ul>
<li><p>Accepting that not every outcome is a direct reflection of my competence, contribution, preparedness, interest, or effort.</p>
</li>
<li><p>Understanding more of the situation and the accepted risks and outcomes without immediately judging myself harshly.</p>
</li>
<li><p>Separating what I can influence from what is simply the reality of the situation.</p>
</li>
</ul>
<p>Part of me hates that I even have to write this. It reminds me of a phrase I heard from a developer early in my career: “That is not my job,” said in response to fixing something only partially. It set off a few people and could probably be its own post with names changed.</p>
<p>I need to recognize that this is not about lowering the bar. It is about refusing to crush myself under a bar that nobody can realistically hold up. I have seen people try to do it all and watch their lives fall apart. Their relationships suffer, their families see them differently, and their health goes sideways from late nights and terrible food choices.</p>
<p>I would rather not end up there.</p>
<h2 id="heading-evolution-of-broken-intellisense">Evolution of Broken Intellisense</h2>
<p>I genuinely enjoy learning, staying up to date, and sharing what I know. The idea of a “blog” might feel about twenty years out of date, but I also recognize that reading/writing lets you go deeper than surface level posts or quick chats. A blog is not a book, but it does force you to put ideas into a consumable and detailed format.</p>
<p>I have toyed with the idea of doing YouTube videos, a podcast, or weekly shows on Twitch. One thing is clear: I want to do more than drop an article once a month. I want Broken Intellisense to become more of a dumping ground for ideas and articles.</p>
<p>Not everything has to be perfectly polished. I said that <a target="_blank" href="https://brokenintellisense.com/writing-again">when I started writing again</a>. What I do know is that more frequent, honest content will create a more meaningful impact beyond my day to day team.</p>
<p>I will be honest, I do not like surface level knowledge. I like to know everything. To me, that is part of becoming an expert. I recently renewed my love for reading by shifting into a bit more fiction, a genre I hardly ever reach for. That change actually reenergized my interest in reading technical content as well.</p>
<p>Who knows. I do know that I want to experiment more.</p>
<p>I expect more focus related to technical leadership. I am going to be less afraid to “fail” or look silly. In fact, I probably need more things out in the world that I personally view as failures. If you never fail, how do you know you are pushing yourself to your actual capabilities?</p>
<h3 id="heading-in-summary">In Summary</h3>
<ul>
<li><p>Weekly content, even if it is unfinished.</p>
</li>
<li><p>More leadership content than I have done in the past.</p>
</li>
<li><p>A better opportunity for people to interact with me and my ideas.</p>
</li>
<li><p>I am going to push myself to make more mistakes in public.</p>
</li>
<li><p>We are going to read six books on leadership this year.</p>
</li>
<li><p>I am bringing back book summaries like I used to do.</p>
</li>
<li><p>Follow me everywhere, because I will be trying different things.</p>
</li>
</ul>
<h2 id="heading-earn-the-az-204-certification">Earn the AZ-204 Certification</h2>
<p>If you have talked to me in the last seven or eight years, you know I have been toying with the idea of getting a certification. I start, I stall, certifications change, I fall into slumps, and the cycle repeats.</p>
<p>Not this year.</p>
<p>I am close to having this one, and I want to finish it. Help me. Keep me accountable. I am going to get the AZ-204 certification this year.</p>
<p>When I look at the skills outline, a lot of it overlaps with what I already do day to day. I know how to use this stuff. The barrier at this point is intimidation. Early in my career, one of the smartest developers I knew failed a certification exam. That stuck in my head and made certifications feel impossible.</p>
<p>The funny part is that I do not remember what his exam was on, how hard it was, or what his actual preparation looked like. I am also not sure he was as great as I thought at the time. I was an impressionable young lad and his story got lodged in my brain as a cautionary tale.</p>
<p>So this probably ties directly into everything I wrote earlier. I want to turn AZ-204 into a different kind of content stream too. Maybe I will write a whole new series that follows the journey. Maybe I will build a few reference projects and share what I learn along the way. We will see.</p>
<p>What I do know is that there will be content around the path to AZ-204. I will need dedicated time to study that is not just “I skimmed the skills outline for blog material.” There has to be focused learning time that exists for its own sake.</p>
<h2 id="heading-yeah-i-know">Yeah, I Know</h2>
<p>I get it. This is a lot. I know I have the passion for this, and I also know real life exists with all of its demands and complications.</p>
<p>The truth is, I find this fulfilling. I enjoy doing this. This is fun for me.</p>
<p>I like using my head and learning. I recognize that the industry is shifting, and I need to stay up to date. This is how I do it. I am going to have to keep track of all of this. This is how I advance my career. This is how I stay relevant.</p>
<p>I am excited for the new year. I have projects I want to build, ideas I want to explore, and a brain I plan to keep very busy.</p>
<p>Let us see where it goes.</p>
]]></content:encoded></item><item><title><![CDATA[The Single Responsibility Principle in Microservices]]></title><description><![CDATA[Most software teams do not start out in pain. They start with a monolith that quietly collects responsibilities until everything is coupled together. Product keeps sending new requests, and each one lands in the same codebase and release train. Process beco...]]></description><link>https://brokenintellisense.com/the-single-responsibility-principle-in-microservices</link><guid isPermaLink="true">https://brokenintellisense.com/the-single-responsibility-principle-in-microservices</guid><dc:creator><![CDATA[Larry Gasik]]></dc:creator><pubDate>Wed, 19 Nov 2025 06:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763575382720/56cf8ffb-24c8-4d93-b3ae-de6356ed0027.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most software teams do not start out in pain. They start with a monolith that quietly collects responsibilities until everything is coupled together. Product keeps sending new requests, and each one lands in the same codebase and release train. Process becomes slow, risk grows, dependencies grow, and support becomes stressful. The root problem is not the size of the application. It is that one deployable unit carries many reasons to change.</p>
<p>I look to the Single Responsibility Principle as a way out. A unit should have one reason to change, tied to one actor or role that asks for change. When you scale that idea up from the code level to service design, it becomes the guiding rule for boundaries in your services. <strong>A service should align to one business capability and one primary audience for change.</strong> For example, when a change to your profile fields forces changes to payment code and notification templates, SRP is being ignored at the service level.</p>
<p>Break it up! Split responsibilities into independently deployable services that each map to a clear capability. Most people look to containerization for something like this. Whether you are on AKS or Beanstalk for hosting, you will be able to deploy one service without interrupting the others. The payoff is there. But the reality is you don’t need those technologies to do it. Microservices aren't technology dependent. Teams can ship on their own timeline because contracts are stable and data ownership is clear. Any exceptions or faults are contained to a slice of user value rather than the entire product. The flow of the software matches the flow of the business. <strong>One service, one reason to change.</strong></p>
<p>You can slice a monolith into twelve angry services and still end up with the same headaches. Size is not the goal. Responsibility is.</p>
<h3 id="heading-where-srp-comes-from-and-why-it-scales-up-to-services">Where SRP Comes From and Why It Scales Up To Services</h3>
<p>I’m not a particularly clever guy, and I feel like I keep going back to the fundamentals that have been around forever because they work. I apply Uncle Bob’s SOLID principles every day in my life, even away from work. If I didn’t, I’d be storing paper towels in the refrigerator.</p>
<p>One of the SOLID principles is the Single Responsibility Principle. Uncle Bob explained it as “<a target="_blank" href="https://blog.cleancoder.com/uncle-bob/2014/05/08/SingleReponsibilityPrinciple.html">a module should have one reason to change</a>”, and he clarified that ‘reason’ maps to an actor or role that asks for change. If different roles push different kinds of changes, then you have multiple responsibilities bundled together. In his framing, each “reason” is a responsibility, hence the Single Responsibility Principle.</p>
<p><a target="_blank" href="https://martinfowler.com/articles/microservices.html">Martin Fowler’s description of microservices</a> points in the same direction. <strong>Services should be built around business capabilities and be independently deployable</strong>. Those two ideas together are what we need to keep in mind when we develop our system. When a service is organized around a single capability, it should change for that capability and not for others. When we deploy the service, it shouldn’t require a change in any other service!</p>
<p>Sam Newman has become the new go-to guy for microservices. He pushes the same message in his book “Building Microservices”. Draw boundaries around business concepts so teams can move on their own. When you slice your monolith up, prefer business capabilities over technical layers. You want your programs to represent the real world in which they will operate.</p>
<h2 id="heading-just-dont-get-it">Just Don’t Get It?</h2>
<p>A Profile Service should change when the definition of a profile changes. That might include a new display name rule or a profile picture. It should not require a deployment when you decide to add Apple Pay functionality, rotate payment keys, or change your invoices. Those are payment concerns, and possibly billing concerns. If profile functionality must ship when finance or marketing makes a change, your service owns more than one reason to change and has tight coupling. We know this kind of coupling is frowned upon by the SOLID principles at the code level, but now it is time to go a little bit higher and treat each responsibility as its own independent service.</p>
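<p>A tiny, hypothetical sketch of that separation (invented names throughout): each class below changes for exactly one actor's requests, so a display-name rule and a new payment method can ship independently:</p>

```python
# Hypothetical illustration of "one reason to change" at the service level.
# Names are invented; each class answers to a single actor.

class ProfileService:
    """Changes only when the definition of a profile changes."""
    def update_display_name(self, user: dict, name: str) -> dict:
        user["display_name"] = name
        return user


class PaymentService:
    """Changes only when finance changes payment behavior."""
    SUPPORTED_METHODS = {"card"}  # adding Apple Pay changes *this* service only

    def charge(self, user: dict, method: str, amount: float) -> dict:
        if method not in self.SUPPORTED_METHODS:
            raise ValueError(f"unsupported method: {method}")
        return {"user": user["display_name"], "amount": amount}


# Profile ships a naming rule; Payments ships a gateway change. Neither
# deployment forces the other, because neither owns the other's reason
# to change.
user = ProfileService().update_display_name({"display_name": ""}, "Larry")
receipt = PaymentService().charge(user, "card", 9.99)
assert receipt == {"user": "Larry", "amount": 9.99}
```

<p>If both responsibilities lived in one UserService, finance and marketing would both be pushing changes into the same deployable unit, which is exactly the smell described above.</p>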
<h2 id="heading-before-and-after-a-boundary-story">Before and After: A Boundary Story</h2>
<h3 id="heading-before-just-shove-everything-in-a-monolithic-service">Before: Just Shove Everything in a Monolithic Service</h3>
<pre><code class="lang-mermaid">graph TD
  A[UserService &lt;br /&gt;REST API]:::bad --&gt; B[(Users DB)]
  A --&gt; C[Profile Controller]
  A --&gt; D[Auth Controller]
  A --&gt; E[Payment Controller]
  A --&gt; F[Email Controller]
  E --&gt; G[(Payments DB)]
  F --&gt; H[(Templates)]
  classDef bad fill:#ffe0e0,stroke:#c00,stroke-width:2px;
</code></pre>
<p><strong>What hurts and why</strong></p>
<ul>
<li><p>Every unrelated change rides the same release train. If finance switches payment gateways, you rebuild and redeploy the single artifact that also holds your Profile, Auth, and Email controllers. That means long coordination windows, wide regression risk, and hotfixes that carry extra passengers. Managing the source code becomes increasingly difficult as the cognitive load piles up. If the Email Controller isn’t ready, your Payment Controller isn’t going out either. Deal with it.<br />  Fowler makes it very clear: independent deployability is a defining trait of microservices precisely because the monolith ties unrelated concerns together.</p>
</li>
<li><p>Change amplification. A small tweak to email templates forces a full retest cycle across user flows that had nothing to do with the change. Your cycle time stretches and your teams avoid changes because they know the blast radius is unpredictable. Again, it is all in the same artifact.</p>
</li>
<li><p>Cognitive load. The on call person needs to be fluent in identity, billing, and notifications to ship a fix at 2 a.m. That is not a badge of honor. It is a boundary problem.</p>
</li>
</ul>
<h3 id="heading-after-introducing-boundaries-and-the-srp">After: Introducing Boundaries and the SRP</h3>
<pre><code class="lang-mermaid">graph TD
  subgraph Identity
    P1[Auth Service] --&gt; PDB[(Auth DB)]
  end

  subgraph Users
    U1[User Profile Service]:::good --&gt; UDB[(Profile DB)]
  end

  subgraph Billing
    B1[Payments Service]:::good --&gt; BDB[(Payments DB)]
  end

  subgraph Comms
    N1[Notification Service]:::good --&gt; NQ[(Email Queue)]
  end

  Client[Web or Mobile App]
  Client --&gt; P1
  Client --&gt; U1
  Client --&gt; B1
  B1 --&gt;|Receipt Event| N1
  U1 --&gt;|ProfileUpdated Event| N1

  classDef good fill:#e8f7e8,stroke:#2b8a3e,stroke-width:2px;
</code></pre>
<p><strong>What improves and why</strong></p>
<ul>
<li><p>Assume each service has its own deployment. Teams deploy on their own schedules because the services are independently deployable units with separate pipelines and artifacts. Payments can ship a new gateway adapter and rotate secrets without rebuilding Profile or Auth. This is the concrete benefit of aligning services to business capabilities and keeping contracts stable.</p>
</li>
<li><p>The blast radius is smaller. The notification service can be as good as dead, and users can still update profiles and pay. Teams can recover one capability at a time instead of everything at once. This aligns with Newman’s guidance to design for team autonomy and failure isolation.</p>
</li>
<li><p>Ownership is clean. Each team owns a single primary data model. Profile owns profile data. Payments owns payment state and reconciliation records. That clarity keeps responsibilities from leaking across boundaries and keeps SRP intact at the service level.</p>
</li>
</ul>
<h2 id="heading-anti-pattern-the-distributed-monolith">Anti Pattern: The Distributed Monolith</h2>
<p>You know you have a distributed monolith when services live in different processes but move as a herd. Synchronized releases, shared databases and resources, or failures cascading from one service to the next are some of the usual tells. <a target="_blank" href="https://www.gremlin.com/blog/is-your-microservice-a-distributed-monolith">Andre Newman (not Sam)</a> puts it bluntly: it is deployed like microservices but designed like a monolith.</p>
<p>Here is an extreme example of what it can look like:</p>
<pre><code class="lang-mermaid">graph TD
  subgraph "Looks like services"
    P[Profile Service]:::warn --&gt; S[(Shared DB)]
    B[Payments Service]:::warn --&gt; S
    N[Notification Service]:::warn --&gt; S
  end

  subgraph "Looks like one release package"
    Rel[A Single Pipeline]:::warn --&gt; P
    Rel --&gt; B
    Rel --&gt; N
  end

  P -. "Shared ORM models" .- B
  B -. "Shared ORM models" .- N

  classDef warn fill:#fff1cc,stroke:#c77d00,stroke-width:2px;
</code></pre>
<p>Why this blocks progress:</p>
<ul>
<li><p>A schema change for one service becomes a breaking change for other services that are coupled with it. Independent deployability disappears because contracts are not respected at the boundary and the database is the contract. <a target="_blank" href="https://martinfowler.com/articles/break-monolith-into-microservices.html">Zhamak Dehghani</a> warns that this kind of coupling invites a painful mix of distributed complexity.</p>
</li>
<li><p>Coordinated testing and releases slow everyone down. The calendar becomes the main constraint and teams start avoiding change. At that point you pay the cost of distributed systems and still release like a monolith. You’ve got the worst parts of monoliths and microservices.</p>
</li>
</ul>
<p>Are you even getting the true benefits of a Microservice by doing some of these things?</p>
<p>How do you get out of the above situation?</p>
<ul>
<li><p>Give each service its own data store or its own schema slice with strictly versioned interfaces that are owned by the service. Use events or APIs for the management and retrieval of your data.</p>
</li>
<li><p>Treat changes as contract evolution. Version your payloads and provide a sunset policy. When a consumer needs something different than what is being offered, they ask for a new version rather than reaching across your boundary into your tables.</p>
</li>
<li><p>Pull shared code into real libraries with stable interfaces instead of sharing internal models. You want to keep these interactions stateless, and share capabilities.</p>
</li>
</ul>
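<p>To make the contract-evolution point concrete, here is a minimal sketch in Python (the services in this post would be .NET, but the idea is language-agnostic). The event name, fields, and version scheme are hypothetical, invented for illustration: version 2 only adds a field, and a tolerant consumer keeps working against both versions.</p>

```python
# Hypothetical sketch of contract evolution for a "ProfileUpdated" event.
# Field names and versioning scheme are illustrative, not from the article.

def make_profile_updated_v1(user_id, display_name):
    return {"type": "ProfileUpdated", "version": 1,
            "userId": user_id, "displayName": display_name}

def make_profile_updated_v2(user_id, display_name, avatar_url):
    # v2 only ADDS a field, so existing consumers keep working.
    event = make_profile_updated_v1(user_id, display_name)
    event.update({"version": 2, "avatarUrl": avatar_url})
    return event

def consume(event):
    """A tolerant reader: use what you know, ignore what you don't."""
    name = event["displayName"]                     # required since v1
    avatar = event.get("avatarUrl", "default.png")  # optional, v2 and later
    return f"{name} ({avatar})"

print(consume(make_profile_updated_v1("u1", "Larry")))           # Larry (default.png)
print(consume(make_profile_updated_v2("u1", "Larry", "a.png")))  # Larry (a.png)
```

<p>The consumer never reaches into the producer’s tables; it only depends on the published payload, which is exactly the boundary the bullets above describe.</p>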
<h2 id="heading-keep-it-copacetic-what-is-it-like-with-clean-boundaries">Keep it Copacetic - What Is It Like with Clean Boundaries?</h2>
<p>Product asks for a profile display name rule. Only the profile service changes. It updates validation and schema, publishes a non-breaking version bump on the API, and ships a new container. Payments and notifications feel no change or stress. Services that consume the event get the “ProfileUpdated” record and react to it on their own time. Oh, but the contract that comes back has an additional field! Doesn’t matter; an additive field is not a breaking change.</p>
<p>This is “one service, one reason to change” in action.</p>
<p>A week later finance selects a new payment gateway. Payments swaps an adapter behind a stable interface, flips secrets, and deploys. The deployment takes significantly longer than expected, but the Profile and Notification Services keep running. In fact, Profile and Notification Services don’t care what is being used in your payment gateway because it is hidden away.</p>
<p>You did not need a freeze. You did not need to gather everyone alive to fix it. <a target="_blank" href="https://martinfowler.com/articles/microservice-trade-offs.html#deployment">Independent deployability</a> at its finest.</p>
<h2 id="heading-practical-boundary-heuristics">Practical Boundary Heuristics</h2>
<p><strong>Anchor names in capabilities.</strong> Names like User Profile, Payments, Catalog, and Order Routing cue product owners and engineers to the single audience for change. Names like SharedService or UserUtil are warning signs. Newman’s capability slicing guidance is a good check during reviews.</p>
<p><strong>Keep APIs cohesive.</strong> A profile API should talk about profile concepts. If you expose card capture calls from the profile API “for convenience,” or because there’s a deadline, you just crossed the line and invited tight coupling.</p>
<p><strong>Own one primary data model.</strong> A service should own its data. Read models in other services are copies that can be refreshed or rebuilt. Joining data across services or sharing data tables raises the likelihood of synchronized releases and drift toward a distributed monolith. You shouldn’t care how data is stored by other services. It should be hidden from your service.</p>
<p><strong>Design for independent deployability.</strong> Think about independent versioning, separate repositories, pipelines, and runtime identities. In AKS, map services to namespaces and deploy with separate charts. In Azure Container Apps, use separate container apps and per service revisions. The platform gives you the levers, but the boundary is what makes those levers useful.</p>
<p><strong>Treat cross cutting concerns as platform concerns.</strong> Identity policy, logging, and tracing should be shared through gateways, libraries, or sidecars, not copied into every service as a second job. Keep domain services focused on their capability.</p>
<h2 id="heading-what-this-is-not">What This is Not</h2>
<p>This is not “small services everywhere.” A service can be small and still violate SRP if it changes for unrelated actors. I’m not even opposed to rolling with a monolith in many cases. Fowler’s “<a target="_blank" href="https://martinfowler.com/bliki/MonolithFirst.html">Monolith First</a>” essay is a great read. Start where you are, tighten boundaries, and split only when the benefits are clear.</p>
<p>This is not Continuous Deployment, where every change flows straight to production and you’re deploying multiple times a day. This is closer to Continuous Delivery, where there are gates but frequent deployments.</p>
<h2 id="heading-early-warning-signs">Early Warning Signs</h2>
<ul>
<li><p>Is there a single audience for change for this service, such as a product owner or business role for that capability? If requests routinely arrive from unrelated groups, you probably need to revisit your boundaries. This follows the SRP idea that responsibilities map to actors.</p>
</li>
<li><p>Can this service ship without waiting for other teams? If not, identify the coupling. Shared databases, shared release pipelines, or unstable contracts are the usual causes. Independent deployability is a signature trait of <a target="_blank" href="https://martinfowler.com/articles/microservices.html">microservices</a>.</p>
</li>
<li><p>Does the service own one data model? If it depends on another service’s tables, or you find yourself wanting to hit another service’s database directly, you are drifting toward a distributed monolith.</p>
</li>
<li><p>Are the APIs and events cohesive and versioned? Are you leveraging gateways? Contract evolution supports independence. Publishing events for changes in your capability lets other teams adapt on their own timeline. Having your team and application operate autonomously is why this whole idea works.</p>
</li>
</ul>
<h2 id="heading-hold-that-thought-for-now">Hold That Thought For Now</h2>
<p>SOLID started as design for classes. The Single Responsibility Principle becomes a reliable compass for service boundaries when you connect “reason to change” to business capability and team ownership. Do that, and your services become small units that ship when they should, fail safely, stay observable, and stay understandable. There’s a lot to microservices, but without understanding that one service has a single responsibility, you’re always going to struggle.</p>
<p>And credit to anyone who read this and picked up my Local H references.</p>
]]></content:encoded></item><item><title><![CDATA[Technical Debt? Not really...]]></title><description><![CDATA[One of the lines that hit me the hardest, but also helped me truly understand my role at work, is:

“We are not here to build cool software; we’re here to support the business.”

While I love using the latest technologies and writing clean, modern co...]]></description><link>https://brokenintellisense.com/technical-debt-not-really</link><guid isPermaLink="true">https://brokenintellisense.com/technical-debt-not-really</guid><dc:creator><![CDATA[Larry Gasik]]></dc:creator><pubDate>Mon, 10 Nov 2025 18:10:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/Olki5QpHxts/upload/c8341b2dae243410f3f4ba9173ff8a19.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One of the lines that hit me the hardest, but also helped me truly understand my role at work, is:</p>
<blockquote>
<p>“We are not here to build cool software; we’re here to support the business.”</p>
</blockquote>
<p>While I love using the latest technologies and writing clean, modern code, those things are only tools that help the business succeed.</p>
<p>Quarterly revenues must be met. Features must be released. Promises made by an overly ambitious sales team must be kept. Our purpose is to impact the bottom line, and our solutions are how we do that.</p>
<p>Sometimes, a “solution” may be as simple as having someone manually write on a form. You could digitize it, but that introduces something else to maintain. Is it worth it? If the form is used once a month or changes regularly, maybe not. Maintenance can quickly outweigh the benefit. When it is not kept up, the form becomes useless.</p>
<p>As we create technical solutions, decisions must be made. Some of these decisions do not immediately show up as dollars and cents. Product teams might see them as optional. However, defining the kind of work you are doing, why you are doing it, and how it benefits the business is essential. Here is how I have been organizing that conversation.</p>
<h2 id="heading-not-all-work-is-technical-debt">Not All Work Is Technical Debt</h2>
<p>It is tempting to label all behind-the-scenes technical work as technical debt. After all, it rarely delivers visible features or direct value to users. But just because a task does not increase revenue directly does not mean it is debt.</p>
<p>I group this kind of work into three categories that help make sense of it:</p>
<ul>
<li><p><strong>Technical Debt</strong> – The interest you pay because of delayed or poor decisions.</p>
</li>
<li><p><strong>Technical Maintenance</strong> – The cost of staying in business.</p>
</li>
<li><p><strong>Technical Enablers</strong> – Today’s investments that create tomorrow’s efficiency.</p>
</li>
</ul>
<h3 id="heading-technical-debt">Technical Debt</h3>
<p>Here is an example that happens often. Developers finish a feature that meets the business definition of “done,” but they skip writing unit tests. A concession is made, the work goes out the door, and the missing tests are never written.</p>
<p>Months later, a change is needed. There are no tests, so developers must verify everything manually. It is slow, error-prone, and frustrating. That is technical debt.</p>
<p>Technical debt is not the same as old code. Old code can be perfectly fine. I probably have code running in production that I wrote over ten years ago. Debt happens when changes become harder and more expensive because of earlier shortcuts.</p>
<p>In this case, skipping automation was the shortcut. Every time we revisit that code, we spend extra time remembering edge cases and scenarios. Over time, the cost adds up until the team decides it would be easier to rewrite it from scratch.</p>
<h3 id="heading-technical-maintenance">Technical Maintenance</h3>
<p>Ten years ago, I could not have predicted how common cloud adoption would become. Three years ago, I could not have predicted how AI would reshape our work. Three months ago, I could not have predicted the latest change in business requirements.</p>
<p>The world evolves, and our systems must evolve with it. Maintaining our tools and platforms ensures that we can support the business, use supported technologies, integrate cleanly, and remain secure. It also helps us retain and attract talent. Very few developers want to work on outdated technology.</p>
<p>Technical maintenance does not add direct revenue, but it prevents problems and allows the business to continue operating smoothly.</p>
<h3 id="heading-technical-enablers">Technical Enablers</h3>
<p>As businesses grow, requirements change. We want our systems to be more resilient, maintainable, and secure. Some work might not immediately affect revenue but can reduce errors, improve developer efficiency, and prevent data loss.</p>
<p>These improvements are <strong>technical enablers</strong>. They are investments that make future development easier, faster, and safer.</p>
<h2 id="heading-do-it-right-even-if-youre-doing-it-wrong">Do It Right, Even If You’re Doing It Wrong</h2>
<p>Concessions are made every day in software development. The moment you write a line of code, you have introduced some level of debt into your system. The key is to take shortcuts strategically. Even if you accept technical debt, do it in a way that allows you to retrofit improvements later.</p>
<p>In the example of missing unit tests, if you are aware that debt is being introduced, at least design your code so that tests can be added later. If you are not adhering to sound coding practices, it becomes impossible to repay that debt.</p>
<p>As <a target="_blank" href="https://sites.google.com/site/unclebobconsultingllc/a-mess-is-not-a-technical-debt">Uncle Bob</a> states, a mess is not technical debt. Technical debt involves conscious decision-making. Messy, careless code is simply a mess and requires significant cleanup. Debt can be managed; chaos cannot.</p>
<h2 id="heading-a-simple-analogy-cars">A Simple Analogy: Cars</h2>
<p>To explain this concept to non-technical teams, I often use cars as an example. Everyone understands car maintenance and upgrades.</p>
<h3 id="heading-technical-maintenance-1">Technical Maintenance</h3>
<p>You need to get oil changes, refill gas, and check tire pressure. These tasks do not make the car faster, but they keep it running. If you ignore them, the car will eventually fail.</p>
<h3 id="heading-technical-debt-1">Technical Debt</h3>
<p>You skip oil changes because you are busy. At first, nothing happens. Later, gas mileage drops, and the engine starts making strange sounds. Eventually, the engine seizes, and you are stuck with a huge repair bill. That is the cost of ignoring maintenance and letting debt accumulate.</p>
<h3 id="heading-technical-enablers-1">Technical Enablers</h3>
<p>Think about how car keys have evolved.<br />My first car, built in 1996, used a metal key that I had to insert and turn. My 2005 car had a push-start button that required inserting the key fob. My 2022 car no longer requires inserting the fob at all. My next car will hopefully have remote start because winters in Chicago are cold.</p>
<p>Could I still drive a car with a metal key? Sure. But the newer systems make life easier and more efficient. Each upgrade represents a <strong>technical enabler</strong>.</p>
<h2 id="heading-it-is-not-always-clear-cut">It Is Not Always Clear-Cut</h2>
<p>There is often overlap between these categories. Some technical enablers require maintenance. Some maintenance removes debt. Sometimes an enabler can eliminate debt entirely.</p>
<p>If you know you have to ship with some technical debt, be intentional about it. Identify the debt, and keep it in your backlog with a real plan to come back to it. Designs evolve, and priorities shift, but documenting known debt helps your team stay honest and transparent about trade-offs.</p>
<p>But your team is agile (Right, right?!). Your product team will need to understand when something must be reworked without an immediate business benefit. That is part of continuous improvement.</p>
<p>As you work with your teams, use these terms to clearly explain where effort and resources are going. Words matter. Use them thoughtfully. Be prudent. Keep your backlog up to date, and acknowledge technical debt when you can.</p>
]]></content:encoded></item><item><title><![CDATA[Automated Unit Test Code Coverage Tools for .NET (2025)]]></title><description><![CDATA[In researching some tools, I came across TUnit- a newer .NET test framework that runs on the Microsoft Testing Platform (MTP) and is AOT Compatible. It emphasizes extensibility and performance. TUnit’s NuGet market share is still small compared to xU...]]></description><link>https://brokenintellisense.com/automated-unit-test-code-coverage-tools-for-net-2025</link><guid isPermaLink="true">https://brokenintellisense.com/automated-unit-test-code-coverage-tools-for-net-2025</guid><category><![CDATA[code coverage]]></category><category><![CDATA[unit testing]]></category><category><![CDATA[TUnit]]></category><dc:creator><![CDATA[Larry Gasik]]></dc:creator><pubDate>Thu, 11 Sep 2025 21:36:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/t3ofrCzqtes/upload/a9303b0cff0b62423ad57c0aaa1d92cb.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In researching some tools, I came across <a target="_blank" href="https://tunit.dev/">TUnit</a>- a newer .NET test framework that runs on the Microsoft Testing Platform (MTP) and is AOT Compatible. It emphasizes extensibility and performance. TUnit’s NuGet market share is still small compared to xUnit, NUnit, and MSTest, which makes total sense. But it is important to note that TUnit’s use of MTP affects how you collect code coverage. This lead to me asking, “What is being used for Automated Unit Test Code Coverage today?”</p>
<p>Unit test code coverage measures the proportion of your production code executed by your automated tests. It does not measure test quality, defect risk, security, or complexity. Coverage can be useful for finding areas that have been neglected in your testing, but Goodhart’s Law applies: <strong><em>when a measure becomes a target, it ceases to be a good measure</em></strong>. High coverage can exist with poor assertions, which is why mutation testing with Stryker is a strong companion to coverage.</p>
<h2 id="heading-where-tunit-fits-among-c-test-frameworks">Where TUnit fits among C# test frameworks</h2>
<p>Many posts and threads claim xUnit exceeds NUnit in popularity. A safe, objective signal is <a target="_blank" href="https://www.nuget.org/">NuGet</a> download volume, where xUnit’s totals are higher than NUnit’s. Just remember that NuGet counts restores at the endpoint and can be inflated by CI. It is not a one-to-one proxy for projects.</p>
<h3 id="heading-framework-snapshot">Framework snapshot</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Framework</td><td>NuGet package</td><td>NuGet Counts</td></tr>
</thead>
<tbody>
<tr>
<td>xUnit</td><td><a target="_blank" href="https://www.nuget.org/packages/xunit?utm_source=chatgpt.com">xunit</a></td><td>134.8 K</td></tr>
<tr>
<td>NUnit</td><td><a target="_blank" href="https://www.nuget.org/packages/nunit?utm_source=chatgpt.com">NUnit</a></td><td>95.9 K</td></tr>
<tr>
<td>MSTest</td><td><a target="_blank" href="https://www.nuget.org/packages/MSTest.TestFramework/?utm_source=chatgpt.com">MSTest.TestFramework</a></td><td>108.2 K</td></tr>
<tr>
<td>TUnit</td><td><a target="_blank" href="https://www.nuget.org/packages/TUnit?utm_source=chatgpt.com">TUnit</a></td><td>1.2 K</td></tr>
</tbody>
</table>
</div><h2 id="heading-what-unit-test-code-coverage-is-and-is-not">What “unit test code coverage” is and is not</h2>
<p>Coverage tools mark the lines, branches, or methods executed during your test run and can color those in your editor and in HTML or Cobertura reports. This helps you spot untested branches quickly. Coverage does not score assertion quality or mutation resistance. For that, check out <a target="_blank" href="https://brokenintellisense.com/stop-trusting-code-coverage-mutation-testing-with-stryker-will-change-how-you-write-unit-tests">my thoughts on Stryker.NET</a>.</p>
<h2 id="heading-local-code-coverage-tools-you-can-use-today">Local code coverage tools you can use today</h2>
<p>The focus here is quick developer-local feedback in the IDE.</p>
<h3 id="heading-visual-studio-built-in-code-coverage">Visual Studio built-in code coverage</h3>
<p><a target="_blank" href="https://learn.microsoft.com/en-us/visualstudio/test/using-code-coverage-to-determine-how-much-code-is-being-tested?view=vs-2022&amp;tabs=csharp">Visual Studio 2022</a> now offers integrated code coverage in Community, Professional, and Enterprise, with collection through the Microsoft Code Coverage engine and the <code>dotnet-coverage</code> tooling. It plugs into Test Explorer and works across frameworks supported by VSTest or MTP.</p>
<p>Quick CLI reference for local runs:</p>
<p><code>dotnet test --collect:"XPlat Code Coverage"</code></p>
<p>That works and gives you an XML file, which you then need another tool to read, but it seems like a lot of extra steps. I want what they have built right into the IDE.</p>
<p>With Visual Studio Enterprise, you get a nice UI that plugs right into the test runner most people are already using. It is the most seamless, painless way to run coverage. But Enterprise is expensive, and not everyone has access to it.</p>
<p><img src="https://learn.microsoft.com/en-us/visualstudio/test/media/vs-2022/code-coverage-highlight.png?view=vs-2022" alt="Screenshot showing code coverage highlighted." /></p>
<h3 id="heading-jetbrains-dotcover-resharper-and-rider">JetBrains dotCover (ReSharper and Rider)</h3>
<p>I know that <a target="_blank" href="https://www.jetbrains.com/dotcover/">dotCover</a> isn't free either, and I know that <a target="_blank" href="https://www.jetbrains.com/">JetBrains</a> tools have lost some ground as Visual Studio's IntelliSense has gotten smarter and smarter. But I still use dotCover, and I love it. dotCover is built on top of ReSharper and plugs into the ReSharper test runner, much like Microsoft's coverage is built into Visual Studio. What I love about dotCover is that it can run the dirty tests as you hit save. For example, say you're modifying a few lines of code that are covered by 10 tests. Once you hit save, continuous testing will execute just those 10 "dirty" tests and give you feedback quickly. The earlier you find a broken unit test, the cheaper it is to fix, and what could be faster than feedback the moment you hit save? Maybe that TUnit tool? We'll look at that some other day.</p>
<p><img src="https://www.jetbrains.com/dotcover/img/screenshots/visual-studio-integration.png" alt="Seamless integration with Visual Studio" /></p>
<p>Another great tool that I like is the Hot Spots cloud. JetBrains will generate a word cloud of methods that are high in cyclomatic complexity and low code coverage. This is great when you want to jump into a new project and evaluate what is really going on. It also helps highlight areas that need attention first when you take on a new project that you're trying to improve.</p>
<p><img src="https://www.jetbrains.com/dotcover/img/screenshots/hotspots.png" alt="Hot spots view" /></p>
<p>These toolsets are available in Rider as well. When I last used Rider, it felt like a superior tool to Visual Studio, but it is time for a revisit.</p>
<h3 id="heading-fine-code-coverage-visual-studio-extension-free">Fine Code Coverage (Visual Studio extension, free)</h3>
<p><a target="_blank" href="https://marketplace.visualstudio.com/items?itemName=FortuneNgwenya.FineCodeCoverage">Fine Code Coverage (FCC)</a> runs inside Visual Studio and renders coverage in the editor plus a browsable report. FCC supports Microsoft Code Coverage, Coverlet, and OpenCover through ReportGenerator. Recent versions added a path that uses Microsoft’s coverage so you do not need Visual Studio test adapters in many cases. FCC also has “Risk Hotspots” that highlight complex code with low coverage.</p>
<p><img src="https://raw.githubusercontent.com/FortuneN/FineCodeCoverage/master/Art/Output-Coverage.png" alt="Coverage View" /></p>
<p>This free tool has all of the basics that you would want and is easy to install. You don’t need to run CLI commands, and while it may feel a bit clunkier than others because you need to jump across multiple windows in Visual Studio, I’d still recommend this tool if your budget is a concern.</p>
<h2 id="heading-it-doesnt-matter-much-which-one-you-choose-but-use-something">It doesn't matter much which one you choose, but use something</h2>
<p>All of these tools measure coverage and they do it well. None of them will transform your daily workflow. Pick the one that fits your IDE and CI so you will actually use it. Since there is a free option, there is no excuse not to add coverage and improve your tests. Standardizing across the team helps, but choose what supports the way you want your tests to run.</p>
]]></content:encoded></item><item><title><![CDATA[Stop Trusting Code Coverage: Mutation Testing with Stryker Will Change How You Write Unit Tests]]></title><description><![CDATA[Unit test coverage is like flossing. You say you do it, but deep down, we know you’re not doing it well enough (not me though, I'm perfect). Allow me to introduce you to mutation testing, the slightly evil cousin of code coverage that intentionally b...]]></description><link>https://brokenintellisense.com/stop-trusting-code-coverage-mutation-testing-with-stryker-will-change-how-you-write-unit-tests</link><guid isPermaLink="true">https://brokenintellisense.com/stop-trusting-code-coverage-mutation-testing-with-stryker-will-change-how-you-write-unit-tests</guid><category><![CDATA[Mutations]]></category><category><![CDATA[dotnet]]></category><category><![CDATA[unit testing]]></category><category><![CDATA[code coverage]]></category><category><![CDATA[stryker]]></category><category><![CDATA[automation testing ]]></category><dc:creator><![CDATA[Larry Gasik]]></dc:creator><pubDate>Tue, 24 Jun 2025 00:46:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/mbL91Lg56zc/upload/503d7663da93c5d1107737cded96c8fb.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Unit test coverage is like flossing. You say you do it, but deep down, we know you’re not doing it well enough (not me though, I'm perfect). Allow me to introduce you to mutation testing, the slightly evil cousin of code coverage that intentionally breaks your code to see if your test suite even notices. In this post, I’ll walk through my first experience using <a target="_blank" href="https://stryker-mutator.io/docs/stryker-net/introduction/">Stryker Mutator</a>… the tool that gleefully mutates your production code and then judges your test suite for sport.</p>
<p>Stryker Mutation Testing works by making small changes (called <em>mutants</em>) to your production source code and then rerunning your test suite to see if the unit tests catch the changes. If a test fails, the mutant is “killed”, which indicates that the test suite is effectively validating the behavior of your production code. If the mutant survives, it means the tests did not detect the change, telling you there are gaps in your test coverage and overall test quality.</p>
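<p>The kill-or-survive loop can be illustrated with a hand-rolled sketch in Python (Stryker does this automatically against your .NET code; the function, the mutant, and the test suites here are all made up for illustration):</p>

```python
# A hand-rolled sketch of what a mutation tester does. Here the "mutant" is
# simply a second, hand-edited function, and the suites are plain functions.

def is_adult(age):           # production code
    return age >= 18

def mutant_is_adult(age):    # mutant: '>=' replaced with '>'
    return age > 18

def weak_suite(fn):
    # Only checks an obvious case, where the mutant behaves identically.
    return fn(30) is True

def strong_suite(fn):
    # Also checks the boundary value, where the mutant differs.
    return fn(30) is True and fn(18) is True

def mutant_status(suite):
    # If the suite fails against the mutant, the mutant is "killed".
    return "killed" if not suite(mutant_is_adult) else "survived"

assert weak_suite(is_adult) and strong_suite(is_adult)  # real code passes both
print(mutant_status(weak_suite))    # survived: the tests have a gap
print(mutant_status(strong_suite))  # killed: the boundary assertion caught it
```

<p>A surviving mutant is not a bug in production; it is proof that a plausible bug could land there without any test noticing.</p>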
<h3 id="heading-the-first-report">The First Report</h3>
<p>I wrote a project with some basic unit tests that were a mixture of good and not useful tests. I have an entire repository around this, but <a target="_blank" href="https://github.com/LarryGasik/MutationTesting/commit/c16a1b2678fe1645819dfa0a4e987324b75fc96a">here’s the commit</a> that I used for the first test. Using the <a target="_blank" href="https://stryker-mutator.io/docs/stryker-net/getting-started/">Getting Started</a> Instructions provided by Stryker, I <a target="_blank" href="https://www.nuget.org/packages/dotnet-stryker">installed Stryker globally</a>, and that allowed me to execute the commands via CLI. I did <code>dotnet new tool-manifest</code> because that’s what the instructions said to do, then executed Stryker via CLI on the testing project using the command <code>dotnet stryker</code> while having the unit test project as the working directory, and it generated a new directory at the root of the repository called <code>StrykerOutput</code>. Opening the HTML report gave me this summary page:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750697218260/40286593-0748-4592-b6f0-2424d51baed7.png" alt class="image--center mx-auto" /></p>
<p>That was easy. Right away, I liked what I saw. This is a clear summary broken down by file. Since my project only had one class, all the statistics naturally bubbled up to the top level. That said, I wasn't immediately sure how it came up with certain numbers in the "Killed" and "Survived" columns.</p>
<p>Digging into <code>SomeBusinessClass.cs</code>, I was greeted with a more detailed view:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750697057836/4e0725a1-3a09-4daf-b46a-ec16f2cd20d9.png" alt class="image--center mx-auto" /></p>
<p>This view showed some insightful metrics. I knew there were bad unit tests in the mix, and even though I had around 75% code coverage, it was clear that metric wasn’t telling the whole story. Stryker breaks things down into several categories:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong><em>Category</em></strong></td><td><strong><em>Definition</em></strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>Killed</strong></td><td>At least one test failed when the code was mutated (this is what we strive for)</td></tr>
<tr>
<td><strong>Survived</strong></td><td>The mutation passed all unit tests, which likely means a missing or incomplete test.</td></tr>
<tr>
<td><strong>Timeout</strong></td><td>Tests took too long, possibly due to an infinite loop.</td></tr>
<tr>
<td><strong>No Coverage</strong></td><td>The mutated code wasn’t hit by any test at all.</td></tr>
<tr>
<td><strong>Ignored</strong></td><td>Mutants that were explicitly ignored.</td></tr>
<tr>
<td><strong>Runtime Errors</strong></td><td>Mutants that caused exceptions (e.g., out-of-memory). Interesting to compare with Timeout.</td></tr>
<tr>
<td><strong>Compile Errors</strong></td><td>Mutants that didn’t even compile.</td></tr>
<tr>
<td><strong>Detected</strong></td><td>Any mutant that was caught (i.e., Killed, Timeout, etc.).</td></tr>
<tr>
<td><strong>Undetected</strong></td><td>Mutants that snuck through because they weren’t covered or weren’t asserted correctly.</td></tr>
<tr>
<td><strong>Total</strong></td><td>All mutants, minus runtime and compile errors.</td></tr>
</tbody>
</table>
</div><p>These definitions come directly from the Stryker report. At first, I wasn’t sure how to locate each mutant or understand exactly what led it to survive — an early frustration was that the report didn’t clearly explain why a mutant survived or what the change was. Still, it clearly reported back that my tests needed work.</p>
<p>Looking further into the test coverage, I found lines that weren’t hit, and sure enough, Stryker highlighted them too. Stryker also highlighted vulnerabilities in your logic based on the mutations:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750704821407/9220d3d5-493d-4cc4-8deb-01c05472c9b7.png" alt class="image--center mx-auto" /></p>
<p>So I wrote a test to cover that line, but just that line and not much else. This actually <strong>increased</strong> the number of surviving mutants. This is Stryker’s way of telling you that you wrote a low-quality test. While the unit test technically passed, it wasn’t comprehensive: I didn’t verify that <code>SomeCallAsync("no")</code> was invoked, only the result. Once I added an assertion to verify the method call, my “Killed” numbers jumped up and the mutation score improved dramatically.</p>
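<p>For illustration, the fix looked something like this. This is a sketch, not the exact code from my repo: <code>ISomeDependency</code>, <code>DoTheThingAsync</code>, and the variable names are stand-ins, and it assumes xUnit plus Moq.</p>
<pre><code class="lang-csharp">// Arrange: mock the dependency with an exact argument rather than It.IsAny&lt;string&gt;()
var dependency = new Mock&lt;ISomeDependency&gt;();
dependency.Setup(d =&gt; d.SomeCallAsync("no")).ReturnsAsync(expected);

var sut = new SomeBusinessClass(dependency.Object);

// Act
var result = await sut.DoTheThingAsync(input);

// Assert: check the result AND that the dependency was actually invoked
Assert.Equal(expected, result);
dependency.Verify(d =&gt; d.SomeCallAsync("no"), Times.Once);
</code></pre>
<p>The <code>Verify</code> call is what kills the extra mutants: a mutation that skips or alters the dependency call now fails the test instead of slipping through.</p>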
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750705530683/55ae8aa2-fbf0-4af8-9c78-fd662291d7df.png" alt class="image--center mx-auto" /></p>
<p>One thing I did notice is that even with this small project, the run took about 15 seconds. That made me wonder how long it would take on a larger, enterprise-level codebase. I also started to wonder whether Stryker could distinguish between different types of tests, like focusing only on unit tests and skipping integration ones. Then I remembered that Stryker is scoped to the test project you run it against, so as long as your integration tests live in a separate project, you’re good to go.</p>
<p>As I spent more time in the report view, things started to click. Stryker generates a report of your code base and adds small color-coded dots to your source code showing which lines were tested, mutated, or missed. At the top, toggles let you filter different mutation types, like Killed or Survived, which made the insights easier to parse:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750706756279/d9aa9458-c033-442b-9263-0ed2112221e6.png" alt class="image--center mx-auto" /></p>
<p>Looking at the surviving mutants (your problem children), I found clues that were especially helpful. For example, it flagged that I didn’t test the right side of a null-coalescing operator (a common one to miss). It also showed a mutation related to an arithmetic operation that slipped past my tests. Another survivor was a call to a dependency where I had mocked the input too generically.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750707304899/a2b73747-bb54-4069-b03f-e8c90b2fad21.png" alt class="image--center mx-auto" /></p>
<p>This showed me that Stryker doesn’t just mutate logic, it also checks whether calls to dependencies are verified. In my original tests, I used <code>Moq</code> with <code>It.IsAny&lt;string&gt;()</code>, and while the test passed, it wasn’t precise. Stryker picked up on that, which pushed me to write more explicit tests.</p>
<p>Interestingly, I even wrote a failing unit test on purpose, and Stryker didn’t flag it. Stryker isn’t about test <strong>results</strong>; it’s about test <strong>effectiveness</strong>. You can have passing tests that don’t catch regressions, and Stryker helps uncover exactly that.</p>
<p>Overall, I’m really impressed with Stryker so far. It was easy to set up (just make sure you have nuget.org as a package source; for some reason it had been removed from one of my machines), the CLI experience was smooth, and the reports are packed with actionable insights. It helps shine a light on the weaker parts of your test suite, especially when paired with something like <a target="_blank" href="https://www.jetbrains.com/dotcover/">dotCover</a>. Sure, it takes a little getting used to (killed = good, survived = bad), and the feedback isn’t always crystal clear, but it does its job well.</p>
<p>There’s also functionality to ignore specific files or methods, which is handy when you’ve inherited code or need to set team-wide mutation coverage standards. Stryker intentionally skips <a target="_blank" href="https://stryker-mutator.io/docs/stryker-net/technical-reference/mutant-schemata/#constant-values">mutating constants</a>, which I’m totally fine with.</p>
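<p>As a sketch, those exclusions live in a <code>stryker-config.json</code> at the test project root. The glob and method patterns below are illustrative, and the Stryker.NET configuration docs are the authoritative source for the option names:</p>
<pre><code class="lang-json">{
  "stryker-config": {
    "mutate": [
      "!**/LegacyCode/**/*.cs"
    ],
    "ignore-methods": [
      "ToString",
      "*Log*"
    ]
  }
}
</code></pre>
<p>The leading <code>!</code> excludes matching files from mutation, which is handy for that inherited code you don’t want dragging down the team’s mutation score.</p>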
<p>As I write this post and continue exploring the Stryker documentation, I find myself asking more and more questions about how to get the most out of mutation testing. There’s a lot to unpack, and I’m still deep in the weeds of experimenting, including kicking off a run against one of my enterprise-level solutions to see how it performs at scale. Here’s what’s on my mind:</p>
<h2 id="heading-goodharts-law-business-business-business-numbers">Goodhart’s Law - Business, business, business. Numbers.</h2>
<p>Every time I hear someone talk about unit test coverage percentages, I instinctively cringe a little. Yes, coverage matters but almost no one talks about test quality.</p>
<p>This is where Goodhart’s Law comes into play:</p>
<blockquote>
<p><em>"When a measure becomes a target, it ceases to be a good measure."</em><br />- Marilyn Strathern</p>
</blockquote>
<p>If you tell engineers they need to hit 80% code coverage, they’ll hit it. But will they actually test meaningful logic? Probably not. You’ll get quick assertions like “make sure the return value isn’t null,” but that doesn’t truly verify behavior. Mutation testing helps raise the bar by holding our tests accountable to catch real issues.</p>
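<p>As a contrived example, here’s the kind of test that satisfies a coverage target without verifying behavior (<code>CalculateDiscount</code> and the surrounding names are made up for illustration):</p>
<pre><code class="lang-csharp">[Fact]
public void CalculateDiscount_ReturnsSomething()
{
    // Executes the method, so every line it touches counts as "covered"...
    var result = _pricingService.CalculateDiscount(order);

    // ...but this assertion lets almost any mutation of the logic survive.
    Assert.NotNull(result);
}
</code></pre>
<p>Coverage tools score this as progress; a mutation tester flags it immediately, because flipping an operator inside <code>CalculateDiscount</code> still passes the test.</p>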
<h2 id="heading-execution-time-a-reality-check">Execution Time: A Reality Check</h2>
<p>That enterprise-level test run I mentioned? I started it over two hours ago and it’s still going. Stryker analyzes the project, generates mutations, compiles each mutant, and runs the full test suite for every one. It then bundles all that into a neat little report.</p>
<p>If each test takes 10 seconds and you’ve got hundreds of mutants and thousands of tests, the total runtime starts to feel... exponential. It’s clear this isn’t something I’d want in the CI pipeline, at least not without some serious optimization. That’s an area I still need to explore.</p>
<h2 id="heading-where-and-when-do-i-do-this">Where and When Do I Do This?</h2>
<p>I’m a huge believer in automating what I can and delivering feedback to developers early and often. But now that I’ve seen the time and resources this takes, I’m asking: <strong>When should mutation testing actually run?</strong></p>
<p>It might make sense as part of a regular quality audit, or in pre-release cycles, rather than per-commit. Still figuring that out.</p>
<h2 id="heading-mutant-schemata">Mutant Schemata</h2>
<p>Earlier, I mentioned the different mutant categories, but it left me wondering… how can a mutant cause a runtime exception?</p>
<p>The answer hit me once I read up on <a target="_blank" href="https://stryker-mutator.io/docs/stryker-net/technical-reference/mutant-schemata">Mutant Schemata.</a> Stryker actually injects modified logic into your production code during test runs. That means something like a string concatenation mutation could cause a runtime error:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">if</span> (Environment.GetEnvironmentVariable(<span class="hljs-string">"ActiveMutation"</span>) == <span class="hljs-string">"1"</span>) {
  <span class="hljs-keyword">return</span> <span class="hljs-string">"hello "</span> - <span class="hljs-string">"world"</span>; <span class="hljs-comment">// mutated code</span>
} <span class="hljs-keyword">else</span> {
  <span class="hljs-keyword">return</span> <span class="hljs-string">"hello "</span> + <span class="hljs-string">"world"</span>; <span class="hljs-comment">// original code</span>
}
</code></pre>
<p>It’s a great example, and it shows how deep the tool goes. There are ways to tune this behavior, but I haven’t gotten that far yet.</p>
<p>As I dig deeper into mutation testing, I’m excited to learn more. What shows up in the reports? How are other teams using it, and where are the real pain points? If you’ve used Stryker, another mutation tool, or are thinking about it, I’d love to hear your thoughts.</p>
<p>Ask me questions. Challenge assumptions. Give me info!</p>
]]></content:encoded></item><item><title><![CDATA[Understanding HTTP Status Codes: Importance and Usage in RESTful Microservices]]></title><description><![CDATA[Microservice architectures live and die by clear communication. When dozens of services (and external vendors) interact via REST APIs, HTTP status codes become the silent contract that every request and response abides by. Using them correctly is mor...]]></description><link>https://brokenintellisense.com/understanding-http-status-codes-importance-and-usage-in-restful-microservices</link><guid isPermaLink="true">https://brokenintellisense.com/understanding-http-status-codes-importance-and-usage-in-restful-microservices</guid><category><![CDATA[status code]]></category><category><![CDATA[REST API]]></category><category><![CDATA[http]]></category><category><![CDATA[http requests]]></category><category><![CDATA[ASP.NET]]></category><dc:creator><![CDATA[Larry Gasik]]></dc:creator><pubDate>Wed, 07 May 2025 03:05:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/VZNYO4suM5o/upload/61bd089b7780f82a3d886153584c2bda.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Microservice architectures live and die by clear communication. When dozens of services (and external vendors) interact via REST APIs, HTTP status codes become the silent contract that every request and response abides by. Using them correctly is more than a nicety – it’s essential for clarity, observability, and maintainability in distributed systems. In this post, we’ll explore why status codes matter, focus on commonly used codes and give practical advice.</p>
<h1 id="heading-why-http-status-codes-matter-in-microservice-apis">Why HTTP Status Codes Matter in Microservice APIs</h1>
<p>They’re part of your API contract. In RESTful communication, status codes are a primary way the server communicates what happened to a request. <a target="_blank" href="https://martinfowler.com/articles/richardsonMaturityModel.html">Martin Fowler notes that a truly RESTful service makes full use of HTTP verbs <em>and</em> response codes</a>. In other words, your API should use HTTP status codes meaningfully, not just always return 200 with a magic string that the consumer is expected to interpret. As microservices expert <a target="_blank" href="https://samnewman.io/books/building_microservices_2nd_edition/">Sam Newman</a> advises, every service should leverage the standard HTTP codes to clearly indicate outcomes. This makes your API self-explanatory to clients and developers.</p>
<p><strong>Clarity for clients and developers.</strong> A well-chosen status code instantly tells the client how to handle the response. For example, a <code>404 Not Found</code> tells a client it used a bad URL or ID, whereas a <code>400 Bad Request</code> indicates something wrong with the request format or data. If your API returns a 404 for a missing record ID, the client knows it’s a wrong ID; if it returns 400, the client knows it sent an invalid request (maybe malformed JSON). This clarity improves the developer experience and reduces misunderstandings. No one likes guessing whether a request failed due to their bug or a server issue – the status code should make it obvious.</p>
<p><strong>Observability and monitoring.</strong> In a distributed system, proper status codes are crucial for tracking the health of services. Monitoring tools and logs typically treat <code>5xx</code> errors as indicators of server or system problems, while <code>4xx</code> errors indicate client-side issues. As one microservices guide notes, services are generally expected to emit 2xx, 3xx, or 4xx codes, whereas any 5xx or timeout suggests an unhealthy service that may trigger alerts. If you misuse codes (for example, returning 200 OK even when an error occurred, with the failure noted only in the response body), your observability is compromised. Your dashboards won’t show the spike in errors because you never signaled an error to begin with. Using the correct codes helps your team quickly pinpoint issues (e.g. a surge in 504 Gateway Timeouts could flag upstream vendor problems immediately).</p>
<p><strong>Maintainability and consistency.</strong> In a microservice ecosystem, dozens of services might be developed by different teams. Consistent use of HTTP status codes across all services makes it easier to maintain and integrate these components. If every team follows the same protocols, developers moving between services (or writing client code for multiple services) don’t have to relearn the error semantics each time. Consistency is something thought leaders like David Farley emphasize – without it, you incur complexity and technical debt without any benefit. In practice, this means defining clear guidelines: e.g., “use 400 for validation errors, 404 for not found, 500 for unhandled exceptions, etc.” and sticking to them. Microsoft also has made it easy to return the proper HTTP status using <a target="_blank" href="https://learn.microsoft.com/en-us/aspnet/web-api/overview/error-handling/exception-handling#httpresponserexception">Action Results</a>, nudging teams toward consistent and correct usage.</p>
<p>Can you imagine if you had been onboarded to a new organization, and had to learn all of the magic strings that are returned from a service? What if those strings changed? How do you maintain that? Oh, this is making my head hurt!</p>
<p>Finally, <strong>robustness in the face of failures.</strong> In any distributed system, failures are going to happen – networks partition, services go down, timeouts occur – a dog could eat all of your packets. Your services should use status codes to communicate these failures gracefully. As Sam Newman puts it, you must design for failure by handling timeouts and errors actively rather than ignoring them. A client of your service should receive a <code>504 Gateway Timeout</code> if your service couldn’t get a response from a downstream dependency, not a generic <code>500 Internal Server Error</code> or a <code>400 Bad Request</code>. Clear status codes allow clients to implement retries or fallbacks when appropriate. They also encourage you, as the service author, to think about error cases explicitly (e.g., “What should I return if the service doesn’t respond in 5 seconds?”). This kind of defensive design is key to resilient microservices.</p>
<h1 id="heading-what-are-some-common-http-status-codes">What are some common HTTP Status Codes?</h1>
<p>In nearly every engineering interview I go into, I propose a number of scenarios around HTTP, and I always throw in some of the common status codes. There are more than I can remember, and you can always look them up, but when time is of the essence, you have to know the basics. It’s just part of owning your craft.</p>
<p><a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status">HTTP Status codes</a> are broken down into five classes, where the first digit of the code identifies the class and the following two digits add specificity. The first family is the <code>1xx</code> informational response codes, which carry provisional information between client and server before the final response arrives. These are not going to be the focus for today.</p>
<h2 id="heading-2xx-success-codes-communicating-all-good">2xx Success Codes: Communicating “All Good”</h2>
<p>When a request is handled successfully by a service, a <code>2xx</code> status code is returned. The 2xx range tells the client <em>“Everything worked as expected.”</em> Even within success codes, choosing the <em>right</em> one adds clarity about <em>how</em> the request was processed. Here are the most common ones:</p>
<h3 id="heading-200-ok">200 OK</h3>
<p><strong>What it means:</strong> A <code>200 OK</code> response means the request was successful and the server is returning the result in the response body (if any). This is the most common status code – essentially “OK, here is the data you asked for” or “OK, the operation succeeded.”</p>
<p><strong>When to use:</strong> Use 200 for successful GET requests (retrieving a resource or a collection), for PUT/PATCH requests that updated a resource, or for any POST request that doesn’t create a new resource (for example, a search operation or a login which just returns a token). In our Dog API, a GET request to <code>GET /api/dogs/123</code> that finds the dog will return 200 along with the dog’s profile JSON in the body. Similarly, if you updated a dog’s profile with a PUT request, a 200 might indicate the update succeeded and perhaps return the updated resource in the body.</p>
<p><strong>Why it matters:</strong> 200 is the default success code that clients will assume for a normal outcome. It’s important to return 200 (and not 204) when there is a response body. Conversely, don’t return 200 if something actually went wrong – that would mislead the client. As <a target="_blank" href="https://www.vinaysahni.com/best-practices-for-a-pragmatic-restful-api#http-status">Vinay Sahni notes in his REST API guidelines</a>, 200 OK is appropriate for a successful GET, PUT, PATCH or DELETE, or even a POST that doesn’t result in a new resource.</p>
<p><strong>.NET example:</strong> In an ASP.NET controller, you typically return 200 by using the <code>Ok(...)</code> helper with the response data. For example:</p>
<pre><code class="lang-csharp">[<span class="hljs-meta">HttpGet(<span class="hljs-meta-string">"api/dogs/{id}"</span>)</span>]
<span class="hljs-function"><span class="hljs-keyword">public</span> ActionResult&lt;Dog&gt; <span class="hljs-title">GetDog</span>(<span class="hljs-params"><span class="hljs-keyword">int</span> id</span>)</span> {
    <span class="hljs-keyword">var</span> dog = _dogService.FindById(id);
    <span class="hljs-keyword">if</span> (dog == <span class="hljs-literal">null</span>) {
        <span class="hljs-keyword">return</span> NotFound(); <span class="hljs-comment">// 404 if no such dog</span>
    }
    <span class="hljs-keyword">return</span> Ok(dog);       <span class="hljs-comment">// 200 OK with the dog data in the body</span>
}
</code></pre>
<p>In the above code, if the dog exists we return <code>Ok(dog)</code>, which the framework translates to an HTTP 200 status with the dog object serialized in the response body. (If the dog isn’t found, we return a <code>404 Not Found</code> – we’ll talk more about 404 in the 4xx section.)</p>
<h3 id="heading-201-created">201 Created</h3>
<p><strong>What it means:</strong> A <code>201 Created</code> status means that the request was successful and <strong>a new resource was created</strong> as a result. It’s typically accompanied by a <code>Location</code> header pointing to the URL of the newly created resource. This is the proper response for <strong>POST requests that create new objects</strong>.</p>
<p><strong>When to use:</strong> Use 201 when processing a POST that adds a new resource to the system. For instance, <code>POST /api/dogs</code> to create a new dog profile should return 201 on success. The body usually contains the newly created resource (or some representation of it), and the <code>Location</code> header should contain the URL where that resource can be fetched (e.g. <code>/api/dogs/12345</code> if 12345 is the new ID). This makes it easier for clients to, for example, immediately navigate to or GET the new resource. According to REST best practices, <em>“Response to a POST that results in a creation should be 201 Created and include a Location header”</em>.</p>
<p><strong>Why it matters:</strong> 201 provides a clear signal that something was created, which differentiates it from a generic 200. Clients (and developers reading logs) will know that a new record was made. This can also be important for user interfaces or follow-up actions (the client now knows the URL of the new resource to perhaps display or further manipulate). Not using 201 in create scenarios might force the client to parse the response body to figure out if a creation happened, or to guess the new resource’s URL – that’s less clean. Using 201 is all about self-descriptiveness of your API.</p>
<p><strong>.NET example:</strong> ASP.NET provides a convenient helper to return 201 with a Location header. You can use <code>CreatedAtAction</code> (or <code>CreatedAtRoute</code>) to both return the created object and set the Location header. For example:</p>
<pre><code class="lang-csharp">[<span class="hljs-meta">HttpPost(<span class="hljs-meta-string">"api/dogs"</span>)</span>]
<span class="hljs-function"><span class="hljs-keyword">public</span> ActionResult&lt;Dog&gt; <span class="hljs-title">CreateDog</span>(<span class="hljs-params">[FromBody] DogDto newDog</span>)</span> {
    <span class="hljs-keyword">if</span> (!ModelState.IsValid) {
        <span class="hljs-comment">// 400 if the input is invalid </span>
        <span class="hljs-keyword">return</span> BadRequest(ModelState);
    }
    <span class="hljs-keyword">if</span> (_dogService.Exists(newDog.Name)) {
        <span class="hljs-comment">// 409 Conflict if a dog with the same name already exists</span>
        <span class="hljs-keyword">return</span> Conflict(<span class="hljs-string">"A dog with that name already exists."</span>);
    }
    <span class="hljs-keyword">var</span> created = _dogService.Add(newDog);
    <span class="hljs-comment">// Return 201 Created with Location header of the new resource</span>
    <span class="hljs-keyword">return</span> CreatedAtAction(<span class="hljs-keyword">nameof</span>(GetDog), <span class="hljs-keyword">new</span> { id = created.Id}, created);
}
</code></pre>
<p>In this snippet, after validating the input and checking for conflicts, we call the service to add the new dog. We then return <code>CreatedAtAction(...)</code>, which produces a 201 status. The <code>nameof(GetDog)</code> references the GET action for a single dog, and the anonymous object <code>{ id = created.Id }</code> fills in that route’s parameters to generate the URL. This results in an HTTP response with status 201 and a header <code>Location: https://&lt;baseurl&gt;/api/dogs/12345</code> (for example), and the body will contain the <code>created</code> Dog object in JSON. This way, the client immediately knows where the new dog resource lives.</p>
<h3 id="heading-204-no-content">204 No Content</h3>
<p><strong>What it means:</strong> <code>204 No Content</code> indicates success <em>but no response body</em>. The server successfully processed the request and is not returning any content. This is typically used when there’s nothing to return (as opposed to 200 where you usually have a response body).</p>
<p><strong>When to use:</strong> Use 204 for operations that successfully perform an action but don’t need to return data. Classic cases are <strong>DELETE requests</strong> (after deleting a resource, what would you return anyway?) and sometimes <strong>PUT/PATCH requests</strong> that update a resource without returning the updated representation. For example, if a client sends <code>DELETE /api/dogs/123</code>, and the dog is successfully removed, your service can return 204 No Content – basically saying “deleted successfully, and there’s no further information.” Another example: <code>POST /api/dogs/123/vaccinations</code> might record a vaccination and not need to return anything – a 204 tells the client “got it, vaccination recorded.”</p>
<p><strong>Why it matters:</strong> 204 is useful to save bandwidth and signal “nothing else to see here.” If you returned 200 in these cases, the client might expect a body (even an empty JSON object). With 204, the client knows to expect no content. It’s a small thing, but it makes the API a bit more precise. Also, if you have a client that automatically deserializes JSON, a 204 avoids the need to handle an empty response body in parsing logic – it’s clearly no content.</p>
<p><strong>.NET example:</strong> To return 204 in ASP.NET, you can use the <code>NoContent()</code> helper. For instance, in an update scenario:</p>
<pre><code class="lang-csharp">[<span class="hljs-meta">HttpPut(<span class="hljs-meta-string">"api/dogs/{id}/vaccinations"</span>)</span>]
<span class="hljs-function"><span class="hljs-keyword">public</span> IActionResult <span class="hljs-title">UpdateVaccination</span>(<span class="hljs-params"><span class="hljs-keyword">int</span> id, [FromBody] VaccinationRecord record</span>)</span> {
    <span class="hljs-keyword">if</span> (!_dogService.Exists(id)) {
        <span class="hljs-keyword">return</span> NotFound(); <span class="hljs-comment">// 404 if no such dog</span>
    }
    _dogService.UpdateVaccination(id, <span class="hljs-keyword">record</span>);
    <span class="hljs-keyword">return</span> NoContent(); <span class="hljs-comment">// 204, indicating the update succeeded, nothing to return</span>
}
</code></pre>
<p>Here we update a dog’s vaccination info. If the dog exists, we perform the update and return <code>NoContent()</code>. The client receives a 204 status with no body, which is their cue that the operation succeeded and there’s no further data. (If the dog didn’t exist, we returned 404; if the input was invalid, we might return 400 or 422 as we’ll see next.)</p>
<p><strong>Summary of 2xx:</strong> In short, use <strong>200</strong> for normal responses with content, <strong>201</strong> for creations, <strong>204</strong> for empty successes. By using these appropriately, your API conveys exactly what happened. As an API design principle: <em>“Use HTTP status codes to be meaningful”</em> – a 200 tells a different story than a 201 or 204, even if all are “successful.” This extra semantic precision helps client developers and logs tremendously.</p>
<h2 id="heading-4xx-client-error-codes-client-error">4xx Client Error Codes: Communicating “Fix Your Request”</h2>
<p>The 4xx class of codes indicates <strong>client errors</strong> – the request was somehow incorrect or cannot be fulfilled <em>as is</em>. This could be due to bad input, missing authentication, forbidden action, nonexistent resource, etc. Using the right 4xx code helps the client quickly understand and fix the issue. Let’s examine the common ones in our context:</p>
<h3 id="heading-400-bad-request">400 Bad Request</h3>
<p><strong>What it means:</strong> <code>400 Bad Request</code> means the server cannot or will not process the request due to something that is perceived to be a <strong>client error</strong>. In other words, the request was malformed or invalid in some way.</p>
<p><strong>When to use:</strong> Return 400 when the request data is syntactically incorrect or doesn’t pass basic validation. Typical scenarios:</p>
<ul>
<li><p>JSON body cannot be parsed</p>
</li>
<li><p>Required fields are missing or of the wrong type</p>
</li>
<li><p>The format of an input (like an email or date) is wrong</p>
</li>
<li><p>In our API, if a client POSTs a new dog with an invalid JSON (say missing a curly brace) or with a required field like <code>name</code> empty, the server should respond with 400. Essentially, “Your request is wrong, fix it and try again.”</p>
</li>
</ul>
<p>It’s worth noting that some teams use 400 for any validation errors (even semantic ones), lumping what others might use 422 for – we’ll discuss <code>422 Unprocessable Entity</code> soon. The key is to use 400 for clear-cut <em>bad requests</em>. As Vinay Sahni describes: <em>“400 Bad Request – The request is malformed, such as if the body does not parse.”</em></p>
<p><strong>Why it matters:</strong> 400 distinguishes client-side mistakes from other errors. If your service returns 400, the <strong>caller</strong> knows the error is on their side. This is very different from a 500, which implies the client did everything right and the <strong>server</strong> needs fixing. By properly returning 400 for bad input, you signal to API consumers (and to monitoring systems) that the error was due to a bad request. This prevents unnecessary alerts on your side and helps client developers quickly find issues in their usage of your API.</p>
<p><strong>.NET example:</strong> In ASP.NET, <a target="_blank" href="https://learn.microsoft.com/en-us/aspnet/core/mvc/models/model-binding">model binding</a> and model validation make it easy to generate 400 responses. If you decorate your DTO with validation attributes (like <code>[Required]</code> or data annotations), and then call <code>ModelState.IsValid</code>, you can return a BadRequest. The framework can also auto-return 400 with error details if you use <code>[ApiController]</code> attribute (it does model validation automatically). In our earlier <code>CreateDog</code> snippet, we had:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">if</span> (!ModelState.IsValid) {
    <span class="hljs-keyword">return</span> BadRequest(ModelState); <span class="hljs-comment">//400 Bad Request</span>
}
</code></pre>
<p>This returns 400 with details about which fields failed validation. You could also do simpler: <code>return BadRequest("Invalid dog data");</code> with a custom message. The client then knows to fix the request (maybe they omitted the name or used an invalid format for a field).</p>
<p>Another example: if someone calls <code>GET /api/dogs?id=abc</code> and your API expected an integer, the framework might automatically treat that as a bad request (since “abc” can’t convert to int) and return a 400 for you. This helps indicate the client used the API incorrectly.</p>
<h3 id="heading-401-unauthorized">401 Unauthorized</h3>
<p><strong>What it means:</strong> <code>401 Unauthorized</code> means the request has not been applied because it lacks valid authentication credentials. Despite the name “Unauthorized,” it really is about authentication (<a target="_blank" href="https://youtu.be/PNbBDrceCy8?si=QjCjq239KyW4WxEL">Who are you?</a>) rather than authorization (what you’re allowed to do).</p>
<p>Use 401 when the request requires user authentication and the client did not provide it or provided invalid credentials (such as a bad token or expired token). For instance:</p>
<ul>
<li><p>If our API requires a valid API key or JWT token on a request, and the client calls <code>GET /api/dogs/123</code> without a token or with a wrong token, the service should return 401.</p>
</li>
<li><p>If a user is not logged in and tries to access a protected endpoint, 401 is appropriate.</p>
</li>
</ul>
<p>In short, 401 says <em>“You are not authenticated. Please authenticate and try again.”</em> The client can attempt to resolve this by providing credentials (logging in, refreshing a token, etc.). It’s not saying “you can never access this”; it’s saying “not in this state (unauthenticated).”</p>
<p><strong>Why it matters:</strong> In a microservices environment, clear auth errors are crucial. A 401 tells any intermediaries (like gateways) and the client that the issue is authentication. Many frameworks and tools (like HTTP client libraries or browsers) know to react to a 401 by, for example, prompting the user to log in or retrying with credentials. Using 401 vs 403 correctly also enhances security: 401 for missing/invalid credentials, 403 for valid credentials but forbidden action. The distinction can prevent information leakage. For example, if a resource requires auth, you don’t want to reveal its existence to an unauthenticated request – a 401 is the correct generic response. As a best practice: 401 when no/invalid credentials, 403 when credentials are valid but lack permissions.</p>
<p><strong>.NET example:</strong> In ASP.NET, you often don’t manually return 401 – the framework’s authentication middleware does it for you when authentication fails. For example, if you have JWT Bearer auth and the token is missing or wrong, the middleware will short-circuit and return 401 automatically. You can also explicitly return <code>Unauthorized()</code> from a controller if needed:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">if</span> (!User.Identity.IsAuthenticated) {
    <span class="hljs-keyword">return</span> Unauthorized(); <span class="hljs-comment">// returns 401</span>
}
</code></pre>
<p>Typically though, <code>[Authorize]</code> attributes on controllers handle this. It’s worth noting that <code>Unauthorized()</code> in .NET corresponds to 401, whereas there is a separate helper <code>Forbid()</code> for 403 Forbidden.</p>
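<p>For completeness, a sketch of the attribute-based approach (assuming JWT Bearer or cookie authentication is already configured at startup; the action bodies are placeholders):</p>
<pre><code class="lang-csharp">[Authorize] // unauthenticated requests to these actions get 401
[ApiController]
[Route("api/dogs")]
public class DogsController : ControllerBase {
    [HttpGet("{id}")]
    public IActionResult GetDog(int id) =&gt; Ok();

    [AllowAnonymous] // opt a specific action out of authentication
    [HttpGet("breeds")]
    public IActionResult GetBreeds() =&gt; Ok(new[] { "Husky", "Beagle" });
}
</code></pre>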
<h3 id="heading-403-forbidden">403 Forbidden</h3>
<p><strong>What it means:</strong> <code>403 Forbidden</code> means the server understood the request and the user is authenticated, but they do not have permission to perform this action. It’s an authorization issue – <em>“You’re logged in, but you’re not allowed to do this.”</em></p>
<p><strong>When to use:</strong> Return 403 when the user’s credentials are recognized but they don’t have the right <strong>privileges or access level</strong> for the resource or operation. Examples:</p>
<ul>
<li><p>The client’s API token is valid but does not include the scope to delete a dog, and they attempted <code>DELETE /api/dogs/123</code>. Your Dog service should return 403 Forbidden in this case.</p>
</li>
<li><p>A user is trying to access a dog profile that they don’t own or shouldn’t see. If authentication succeeded (the user is logged in) but this particular dog is off-limits, 403 is the right response.</p>
</li>
<li><p>Any operation where the request is well-formed and the user is authenticated, but the authorization policy says “nope, not allowed.”</p>
</li>
</ul>
<p><strong>Why it matters:</strong> Using 403 appropriately, in tandem with 401, completes the security story of your API. It tells the client, “You can’t have this even though we know who you are.” If you always returned 401 for both unauthenticated and unauthorized cases, clients would get confused (do I need to re-authenticate, or is the action fundamentally not allowed?).</p>
<p>A spike in 403 errors might indicate attempted access violations or misconfigured permissions, whereas 401 spikes might indicate an authentication problem (like an auth server down or tokens expired). They are different scenarios and should be distinguished. Following the principle from Vinay Sahni’s API guidelines: <em>“403 Forbidden – when authentication succeeded but authenticated user doesn’t have access to the resource”</em>.</p>
<p><strong>.NET example:</strong> Similar to 401, ASP.NET will often handle 403 via the <code>[Authorize]</code> attribute and your authorization configuration. For example, if you use roles or policy-based authorization and a user lacks a required role, the framework will return 403 Forbidden. You can also manually return <code>Forbid()</code> in a controller to send a 403. For example:</p>
<pre><code class="lang-csharp">[<span class="hljs-meta">Authorize</span>] <span class="hljs-comment">// user must be logged in</span>
[<span class="hljs-meta">HttpDelete(<span class="hljs-meta-string">"api/dogs/{id}"</span>)</span>]
<span class="hljs-function"><span class="hljs-keyword">public</span> IActionResult <span class="hljs-title">DeleteDog</span>(<span class="hljs-params"><span class="hljs-keyword">int</span> id</span>)</span> {
    <span class="hljs-keyword">var</span> dog = _dogService.FindById(id);
    <span class="hljs-keyword">if</span> (dog == <span class="hljs-literal">null</span>) <span class="hljs-keyword">return</span> NotFound();
    <span class="hljs-keyword">if</span> (!User.HasClaim(<span class="hljs-string">"CanDeleteDog"</span>, <span class="hljs-string">"true"</span>)) {
        <span class="hljs-keyword">return</span> Forbid(); <span class="hljs-comment">// 403 if user is not allowed to delete</span>
    }
    _dogService.Remove(id);
    <span class="hljs-keyword">return</span> NoContent();
}
</code></pre>
<p>In this snippet, we check an imaginary claim or permission and return <code>Forbid()</code> if the user isn’t allowed to delete the dog. The result is a 403 Forbidden.</p>
<h3 id="heading-404-not-found">404 Not Found</h3>
<p><strong>What it means:</strong> <code>404 Not Found</code> means the server can’t find the requested resource. The client might be requesting an endpoint that doesn’t exist or an entity by ID that isn’t present.</p>
<p><strong>When to use:</strong> Use 404 when:</p>
<ul>
<li><p>The URL is wrong or no longer exists (like <code>/api/dogz/123</code> with a typo, or an outdated endpoint).</p>
</li>
<li><p>The resource ID doesn’t exist. In our Dog API, if a client requests <code>GET /api/dogs/99999</code> but there is no dog with ID 99999, return 404. Similarly, if they try to update or delete a non-existent record, 404 is appropriate.</p>
</li>
<li><p>Essentially, whenever a resource cannot be found on the server.</p>
</li>
</ul>
<p>Note that if the resource exists but the user isn’t allowed to see it, and you want to hide its existence, you might also return 404 to an unauthorized user. But generally, 404 is straightforward: record not found.</p>
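<p>A sketch of that “hide its existence” pattern (the <code>UserCanView</code> check is hypothetical, standing in for whatever ownership rule your service enforces):</p>
<pre><code class="lang-csharp">[HttpGet("api/dogs/{id}")]
public ActionResult&lt;Dog&gt; GetDog(int id) {
    var dog = _dogService.FindById(id);
    // Returning 404 for both "missing" and "not yours" means an
    // unauthorized caller cannot probe which IDs exist.
    if (dog == null || !UserCanView(dog)) {
        return NotFound();
    }
    return Ok(dog);
}
</code></pre>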
<p><strong>Why it matters:</strong> You’ve probably seen 404 errors just navigating the web. It’s important because it immediately tells the client that either they have a mistake in the URI, or the resource has been deleted. In microservices, this can happen for legitimate reasons (an ID was valid but the record was deleted by another service or user). Handling 404 correctly improves the user experience. For example, an app can show “Dog not found” to the user instead of generic failure. From an observability standpoint, 404s are usually not alerts (they often indicate user input error or outdated references), so filtering them out of error alerts is common. As an API guideline: <em>“404 Not Found – when a non-existent resource is requested.”</em></p>
<p><strong>.NET example:</strong> We saw this in the <code>GetDog</code> example earlier. Using <code>return NotFound();</code> will produce a 404. To reinforce:</p>
<pre><code class="lang-csharp">[<span class="hljs-meta">HttpGet(<span class="hljs-meta-string">"api/dogs/{id}"</span>)</span>]
<span class="hljs-function"><span class="hljs-keyword">public</span> ActionResult&lt;Dog&gt; <span class="hljs-title">GetDog</span>(<span class="hljs-params"><span class="hljs-keyword">int</span> id</span>)</span> {
    <span class="hljs-keyword">var</span> dog = _dogService.FindById(id);
    <span class="hljs-keyword">if</span> (dog == <span class="hljs-literal">null</span>) {
        <span class="hljs-keyword">return</span> NotFound(); <span class="hljs-comment">// returns 404 Not Found</span>
    }
    <span class="hljs-keyword">return</span> Ok(dog);      <span class="hljs-comment">// returns 200 OK if found</span>
}
</code></pre>
<p>This pattern of checking for null and returning NotFound is very common in Web API controllers. It clearly separates the “not found” case from the success case. In a list-fetch scenario (say <code>GET /api/dogs?name=Diesel</code>) that returns no results, one might choose to return 200 with an empty list instead of 404 (because the endpoint exists; it just has no data to return for that query). 404 is more for a singular resource that isn’t present.</p>
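<p>That list-fetch convention might look like this (a sketch, assuming a <code>FindByName</code> method on the same hypothetical service):</p>
<pre><code class="lang-csharp">[HttpGet("api/dogs")]
public ActionResult&lt;IEnumerable&lt;Dog&gt;&gt; SearchDogs([FromQuery] string name) {
    var matches = _dogService.FindByName(name);
    // The endpoint itself exists, so an empty result is still a
    // successful query: 200 OK with [] rather than 404.
    return Ok(matches);
}
</code></pre>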
<h3 id="heading-409-conflict">409 Conflict</h3>
<p><strong>What it means:</strong> <code>409 Conflict</code> indicates that the request could not be processed because of a conflict with the current state of the resource. The server is basically saying “there’s a logical conflict, so I can’t do this unless you resolve the conflict.”</p>
<p><strong>When to use:</strong> Common use cases for 409:</p>
<ul>
<li><p><strong>Unique constraint violations</strong>: If the client attempts to create a resource that conflicts with an existing one. For example, if dogs are identified by name and the client tries to create another dog with the same name (assuming uniqueness), you might return 409 to indicate this conflict. (We showed an example check in our CreateDog code: we returned 409 if a dog with the same name exists.)</p>
</li>
<li><p><strong>Edit conflicts / concurrency control</strong>: Suppose the Dog API supports optimistic locking (each dog profile has a version number). If two clients try to update the same dog simultaneously, one update might conflict with the other. The second update could receive a 409 Conflict indicating “the state you tried to update has changed, your update conflicts.” The client might then fetch the latest version and retry. This is a classic use of 409 in REST to handle concurrent updates.</p>
</li>
<li><p>Basically, anytime the request can’t be completed due to some resource state that the client might not be aware of.</p>
</li>
</ul>
<p>As Martin Fowler has pointed out in a discussion on API design, 409 is a useful code for situations like business rule violations too. For instance, one could consider an attempt to perform an operation that violates a business invariant as a conflict. In an example from a banking context, Fowler favored using 409 when a withdrawal request couldn’t be processed due to insufficient funds (rather than 400), treating it as a state conflict with the account’s resource state. In our dog context, that might be like trying to register a dog twice for the same event – one could argue that’s a conflict.</p>
<p><strong>Why it matters:</strong> It’s not a client-format error (400), not an auth issue (401/403), and not a server bug (500) – it’s a logical conflict. 409 informs the client that repeating the exact same request will not succeed unless something changes first. This often prompts either user action or client logic to resolve the conflict. For example, a client gets 409 on creating “Diesel” because there can only be one Diesel – it can inform the user “choose a different name” rather than blindly retrying. Or for an update conflict, the client knows to GET the latest state and merge changes. By using 409, your API is communicating that there’s nothing wrong with the request format and the server is fine, but the requested action can’t be done in the current state. This is very helpful in microservices where concurrent writes or uniqueness constraints across services can happen. It’s also great for observability: you can track 409s to see how often clients hit conflicts.</p>
<p><strong>.NET example:</strong> There is a <code>Conflict()</code> helper in ASP.NET for 409. We used it in the earlier CreateDog example:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">if</span> (_dogService.Exists(newDog.Name)) {
    <span class="hljs-keyword">return</span> Conflict(<span class="hljs-string">"A dog with that name already exists."</span>);
}
</code></pre>
<p>This returns a 409 Conflict with a message in the body. Another scenario:</p>
<pre><code class="lang-csharp">[<span class="hljs-meta">HttpPut(<span class="hljs-meta-string">"api/dogs/{id}"</span>)</span>]
<span class="hljs-function"><span class="hljs-keyword">public</span> IActionResult <span class="hljs-title">UpdateDog</span>(<span class="hljs-params"><span class="hljs-keyword">int</span> id, DogUpdateDto dto</span>)</span> {
    <span class="hljs-keyword">if</span> (!_dogService.Exists(id)) <span class="hljs-keyword">return</span> NotFound();
    <span class="hljs-keyword">try</span> {
        _dogService.UpdateDog(id, dto); <span class="hljs-comment">// say this throws a ConcurrencyException</span>
        <span class="hljs-keyword">return</span> NoContent(); <span class="hljs-comment">//204</span>
    } <span class="hljs-keyword">catch</span> (ConcurrencyException) {
        <span class="hljs-keyword">return</span> Conflict(<span class="hljs-string">"Dog profile was updated by someone else. Please refresh and retry."</span>); <span class="hljs-comment">//409</span>
    }
}
</code></pre>
<p>Here, if our service layer throws a <code>ConcurrencyException</code> because, say, an ETag or version check failed, we catch it and return <code>Conflict()</code> to inform the client of the edit conflict.</p>
<h3 id="heading-422-unprocessable-entity">422 Unprocessable Entity</h3>
<p><strong>What it means:</strong> <code>422 Unprocessable Entity</code> means the server understands the content type of the request and the syntax is correct, but semantic errors in the content prevent processing. In simpler terms: the request is well-formed, but its contents violate rules that make it unprocessable.</p>
<p><strong>When to use:</strong> 422 is often used for validation errors where the request format is correct (hence not a 400), but the content fails business rules or more complex validation:</p>
<ul>
<li><p>For example, in the Dog API, suppose <code>POST /api/dogs</code> requires a valid birth date for the dog. If the client provides a date in the future, that’s semantically invalid (a dog can’t be born in the future). The server could respond with 422 Unprocessable Entity, with a message like “Birth date cannot be in the future.”</p>
</li>
<li><p>Another example: if the client attempts to perform an action that is conceptually correct in format but not allowed: “Update vaccination” where the vaccination data is internally inconsistent or violates a rule (e.g., a vaccination date is before the dog’s birthdate, or trying to add a vaccination that the dog already has). The server might return 422 to indicate “I understood your request, but I can’t process these specifics.”</p>
</li>
</ul>
<p>In practice, some teams choose to use 400 for all kinds of validation errors (treating “missing required field” and “field value out of range” both as 400). Others use 422 to mean “the request payload was syntactically correct JSON and maybe partially valid, but there are <strong>domain-specific</strong> issues with it.” It’s a nuanced distinction. You did set up bounded contexts when determining your services, right? <strong>Right?</strong> Vinay Sahni’s guidelines list <strong>“422 Unprocessable Entity – Used for validation errors”</strong>, which reflects this common usage.</p>
<p><strong>Why it matters:</strong> If you choose to use 422, it gives client developers a clue that <em>“your request was understood and validated, but there are issues you need to correct.”</em> The difference between 400 and 422 can be subtle, but it can help in large systems to separate pure format errors from semantic validation. For example, monitoring a spike in 400s might indicate a bug in how clients are formatting requests (or a change in the API spec), whereas a spike in 422s might indicate lots of users hitting a business rule (maybe an overly strict rule or a UI that allows invalid data to be submitted). It also allows the response body to focus on detailed validation errors, since a 422 is clearly about that. Using 422 is a way of saying “all your syntax was correct, but the request as a whole is unacceptable in its current form.”</p>
<p><strong>.NET example:</strong> ASP.NET Core provides an <code>UnprocessableEntity()</code> helper (available since ASP.NET Core 2.1). We can use it similarly to the other helpers:</p>
<pre><code class="lang-csharp">[<span class="hljs-meta">HttpPost(<span class="hljs-meta-string">"api/dogs/{id}/vaccinations"</span>)</span>]
<span class="hljs-function"><span class="hljs-keyword">public</span> IActionResult <span class="hljs-title">AddVaccination</span>(<span class="hljs-params"><span class="hljs-keyword">int</span> id, [FromBody] VaccinationRecord record</span>)</span> {
    <span class="hljs-keyword">if</span> (!_dogService.Exists(id)) <span class="hljs-keyword">return</span> NotFound();
    <span class="hljs-keyword">try</span> {
        _dogService.AddVaccination(id, <span class="hljs-keyword">record</span>);
        <span class="hljs-keyword">return</span> NoContent();
    } <span class="hljs-keyword">catch</span> (InvalidOperationException ex) {
        <span class="hljs-comment">// the vaccination record is invalid (maybe vaccine is not applicable for the dog's age)</span>
        <span class="hljs-keyword">return</span> UnprocessableEntity(<span class="hljs-keyword">new</span> { error = ex.Message });
    }
}
</code></pre>
<p>In this example, if the service throws an exception because the vaccination data didn’t pass some business rule (maybe the dog is too young for rabies vaccine, etc.), we catch it and return 422 Unprocessable Entity with an error message. The client sees a 422 and knows “my data was understood, but it failed validation; I need to adjust it.”</p>
<p>If using <code>[ApiController]</code>, you could also customize the validation problem details to return 422 instead of 400 for certain cases, but that’s beyond our scope here. The main idea is: use 422 (if you choose to) for <em>semantic validation failures</em>.</p>
<p><strong>A note on 400 vs 422:</strong> There is some debate on using 422 vs 400 for validation. Documentation and consistency are key when you design your API. If you opt not to use 422, it’s fine to return 400 for all invalid input cases. Just ensure your clients know how to differentiate error causes via error messages or error codes in the response body. The advantage of 422 is simply one extra layer of clarity.</p>
<h2 id="heading-5xx-server-error-codes-when-things-go-wrong-on-the-server-or-beyond">5xx Server Error Codes: When Things Go Wrong on the Server (or Beyond)</h2>
<p>The 5xx class indicates the server failed to fulfill a valid request. These are not the client’s fault; something went wrong on the server side or in a downstream service. Clients typically can’t fix these – but they might retry later. For microservices, distinguishing 5xx errors is vital for operations: a spike in 5xx means something needs investigation on the server side. Let’s cover the two big ones in our context: 500 and 504.</p>
<h3 id="heading-500-internal-server-error">500 Internal Server Error</h3>
<p><strong>What it means:</strong> <code>500 Internal Server Error</code> is the generic catch-all for “the server encountered an unexpected condition that prevented it from fulfilling the request.” It’s essentially “something blew up on our end.”</p>
<p><strong>When to use:</strong> Return 500 when no other specific 5xx code fits, and the error is indeed on the server. Common scenarios:</p>
<ul>
<li><p>Unhandled exceptions in code (this never happens to you though). For example, a null reference exception, or an overflow, or any bug that wasn’t caught will typically result in a 500.</p>
</li>
<li><p>Database connection failures. If your service tries to fetch data and the database is down or throws an error that isn’t specifically handled, that might bubble up as a 500.</p>
</li>
<li><p>Essentially, any time your service logic fails unexpectedly. If you anticipated the failure, you might choose a more specific code (<code>503 Service Unavailable</code> for planned downtime, though that’s usually used by load balancers; or <code>504 Gateway Timeout</code> if the downstream service is timing out, etc.). But if it’s a surprise – it’s 500.</p>
</li>
</ul>
<p>In our API, if a GET request triggers an exception (maybe the database query threw), the user would get a 500. If a POST triggers a bug in business logic resulting in an exception not caught, 500 is returned. Ideally, your code catches exceptions and maybe converts them to a nicer error response (possibly a 400 if it was due to bad data, or 503 if a dependency is not available). But anything unanticipated bubbles up as 500.</p>
<p><strong>Why it matters:</strong> 500 is how your service barks for help (I’m trying to keep the dog theme going). Monitoring systems will flag 500s as errors needing attention. As a rule, a well-designed microservice should minimize how often it returns 500 by handling expected error scenarios gracefully (using appropriate 4xx or 5xx codes for specific conditions). So, when a 500 does occur, it’s usually a true bug or outage. For the client, a 500 means “you did everything right, but the server failed – you can’t fix this from your end.” Clients might then either give up or schedule a retry after some delay, depending on the operation. From a maintainability perspective, when you see 500s in logs, you dive into server-side debugging. These errors often correlate with exceptions in your logs. One mantra is that 500 errors should not be part of normal business logic; if you find yourself intentionally returning 500 for expected conditions, consider using a different code. 500 should be reserved for “unexpected” failures – it's literally in the definition.</p>
<p><strong>.NET example:</strong> By default, if an ASP.NET controller throws an uncaught exception, the framework will return a 500 Internal Server Error (and possibly with a generic error payload or none, depending on your settings). You typically don’t manually <code>return StatusCode(500)</code> unless you caught an exception and want to wrap it. For example:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">try</span> {
    <span class="hljs-comment">// ...some operation...</span>
} <span class="hljs-keyword">catch</span> (Exception ex) {
    _logger.LogError(ex, <span class="hljs-string">"Unexpected error in UpdateDog"</span>);
    <span class="hljs-keyword">return</span> StatusCode(StatusCodes.Status500InternalServerError, <span class="hljs-string">"An unexpected error occurred."</span>);
}
</code></pre>
<p>This catches any exception and returns 500 with a message. In many cases, you’d let the global exception handler or middleware handle it. The key is: you as a developer focus on preventing these. But you might use <code>StatusCode(500, ...)</code> if you have custom error handling logic and want to provide a custom error body.</p>
<h3 id="heading-504-gateway-timeout">504 Gateway Timeout</h3>
<p><strong>What it means:</strong> <code>504 Gateway Timeout</code> indicates that a server, acting as a <strong>gateway or proxy</strong>, did not receive a timely response from an upstream server it needed to contact in order to complete the request. In essence, one server was calling another (or waiting on another), and the other didn’t respond in time, so the chain timed out.</p>
<p><strong>When to use:</strong> Use 504 in a microservice when your service is <strong>dependent on an upstream service or external API</strong> and that call times out. Scenarios:</p>
<ul>
<li><p>Your service calls an external <strong>Pedigree API</strong> (let’s say a third-party service that gives detailed lineage info). The client calls your endpoint <code>GET /api/dogs/123/pedigree</code>. Your service in turn calls the Pedigree API to fetch data. If the Pedigree API doesn’t respond within your timeout window, you should <strong>return 504 Gateway Timeout</strong> to the client. This tells the client that the server did not get a response from an upstream dependency.</p>
</li>
<li><p>A more internal example: Service A calls Service B as part of a request. Service B hangs or is offline – Service A’s request to B times out. Service A can return 504 to its caller (which might be a user or perhaps another service) to indicate “I couldn’t complete your request because a downstream service didn’t respond.”</p>
</li>
<li><p>Also, API Gateways or load balancers themselves often return 504 if one of the downstream microservices doesn’t respond in time. For instance, an Nginx proxy might give a 504 if the backend took too long. But here we’re focusing on your microservice actively returning 504 when <em>it</em> waits on something else.</p>
</li>
</ul>
<p><strong>Why it matters (especially in vendor-dependent systems):</strong> In microservice ecosystems that rely on third-party vendors (payment gateways, mapping APIs, etc.), timeouts are a fact of life. Emphasizing the 504 scenario is critical because it’s about <em>graceful degradation</em>. If a vendor API is slow or down, your service should not hang indefinitely, nor should it pretend everything is fine. It should fail fast and inform the client with a 504. This has several benefits:</p>
<ul>
<li><p><strong>Clarity:</strong> The client (or calling service) knows the error is due to an upstream timeout. They might choose to implement a retry strategy with backoff, or present a specific message to the user (“The service is experiencing delays from a downstream provider, please try again later.”). If you simply returned 500, the client wouldn’t know it was a timeout vs a bug in your code.</p>
</li>
<li><p><strong>Resource freeing:</strong> By timing out and returning 504, your service frees up resources (threads, memory) that would otherwise be stuck waiting. It’s better to fail and report than to tie up resources on a lost cause. <a target="_blank" href="https://qconlondon.com/presentation/apr2025/timeouts-retries-and-idempotency-distributed-systems">Sam Newman states that setting timeouts is key to building resilient services</a> – without them, calls could hang forever and cascade issues.</p>
</li>
<li><p><strong>Observability &amp; Monitoring:</strong> A rise in 504 errors specifically can alert you (and the vendor) that something is wrong with the upstream service’s performance. You might have dashboards showing 504s separate from 500s. This is gold for quickly diagnosing issues in a complex chain. For example, if Service A returns a bunch of 504s, you immediately check Service B or the third-party system it depends on. It narrows down the problem domain.</p>
</li>
<li><p><strong>Maintainability:</strong> Designing with 504 in mind forces you to think about timeout strategies and fallback plans. This leads to more robust code. Perhaps you implement a <strong>circuit breaker</strong> pattern: after several 504s, you stop calling the vendor for a while and immediately fail (or degrade functionality) to avoid cascading latency. <a target="_blank" href="https://martinfowler.com/bliki/CircuitBreaker.html">Martin Fowler describes that circuit breakers help “avoid waiting on timeouts for the client” and prevent overloading a struggling upstream by short-circuiting calls</a>. In practice, a circuit breaker might internally treat repeated timeouts as errors and for a period, return an error (maybe a 503 or 504 immediately) without attempting the upstream call, until the upstream seems healthy again. This spares your system extra load and gives the upstream time to recover.</p>
</li>
<li><p><strong>User Experience:</strong> If you return 504 quickly, the user isn’t left staring at a spinning loader for a minute only to get an error anyway. Failing fast can allow the client to maybe call a fallback service or at least inform the user promptly.</p>
</li>
</ul>
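<p>One possible shape for that circuit breaker (a sketch using the Polly library, a common resilience package in .NET; in a real app the policy would be registered once and injected rather than declared inline, and <code>_pedigreeService</code> is the same hypothetical dependency as before):</p>
<pre><code class="lang-csharp">using Polly;
using Polly.CircuitBreaker;

// Open the circuit after 3 consecutive timeouts; stay open for 30s.
private static readonly AsyncCircuitBreakerPolicy _breaker =
    Policy.Handle&lt;TimeoutException&gt;()
          .CircuitBreakerAsync(exceptionsAllowedBeforeBreaking: 3,
                               durationOfBreak: TimeSpan.FromSeconds(30));

public async Task&lt;IActionResult&gt; GetPedigreeWithBreaker(int id) {
    try {
        var pedigree = await _breaker.ExecuteAsync(
            () =&gt; _pedigreeService.GetPedigreeAsync(id));
        return Ok(pedigree);
    } catch (BrokenCircuitException) {
        // Circuit is open: fail fast without even calling upstream.
        return StatusCode(StatusCodes.Status504GatewayTimeout,
                          "Pedigree service is currently unavailable");
    } catch (TimeoutException) {
        return StatusCode(StatusCodes.Status504GatewayTimeout,
                          "Pedigree service did not respond in time");
    }
}
</code></pre>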
<p>In summary, a 504 is the correct way to propagate an upstream timeout condition. It says: <em>“I, the gateway, timed out waiting for a response from another server.”</em> Contrast this with a <code>503 Service Unavailable</code>, which typically means “the server itself is temporarily overloaded or down.” A 504 pinpoints it to an upstream dependency issue.</p>
<p><strong>.NET example:</strong> Suppose our Dog API has an endpoint to get a dog’s pedigree from an external service:</p>
<pre><code class="lang-csharp"><span class="hljs-comment">//This represents the API Gateway</span>

[<span class="hljs-meta">HttpGet(<span class="hljs-meta-string">"api/dogs/{id}/pedigree"</span>)</span>]
<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;IActionResult&gt; <span class="hljs-title">GetPedigree</span>(<span class="hljs-params"><span class="hljs-keyword">int</span> id</span>)</span> {
    <span class="hljs-keyword">if</span> (!_dogService.Exists(id)) {
        <span class="hljs-keyword">return</span> NotFound(); <span class="hljs-comment">// 404 if dog not found</span>
    }
    <span class="hljs-keyword">try</span> {
        <span class="hljs-keyword">var</span> pedigree = <span class="hljs-keyword">await</span> _pedigreeService.GetPedigreeAsync(id);
        <span class="hljs-keyword">return</span> Ok(pedigree); <span class="hljs-comment">// 200 OK with data if successful</span>
    } <span class="hljs-keyword">catch</span> (TimeoutException) {
        <span class="hljs-comment">// The call to the 3rd party pedigree service timed out</span>
        <span class="hljs-keyword">return</span> StatusCode(StatusCodes.Status504GatewayTimeout, 
                          <span class="hljs-string">"Pedigree service did not respond in time"</span>); <span class="hljs-comment">//504</span>
    } <span class="hljs-keyword">catch</span> (Exception ex) {
        <span class="hljs-comment">// Some other error in calling external service or processing</span>
        _logger.LogError(ex, <span class="hljs-string">"Unexpected error getting pedigree"</span>);
        <span class="hljs-keyword">return</span> StatusCode(StatusCodes.Status500InternalServerError, <span class="hljs-string">"Internal error"</span>); <span class="hljs-comment">//500</span>
    }
}
</code></pre>
<p>In this snippet, <code>_pedigreeService.GetPedigreeAsync(id)</code> represents a call to the external vendor (perhaps using <code>HttpClient</code> under the hood). We wrap it in a try/catch. If it throws a <code>TimeoutException</code> (meaning we hit our timeout without a response), we return a 504 Gateway Timeout with a message. Any other exception we treat as a generic 500. Notice we check for the dog existence first to handle 404 separately – a missing dog is not an upstream timeout issue.</p>
<p>It’s important that we <em>set a timeout</em> on the external call. If you never set one, you might never throw that TimeoutException and your thread could hang. Best practice is to use a cancellation token or timeout mechanism on HttpClient (like the <code>HttpClient.Timeout</code> property or a <code>CancellationTokenSource</code>, but we’ll talk about CancellationTokens some other time). By doing so, you ensure that after X seconds of no response, you abandon the call and return 504. This is implementing the <strong>fail fast</strong> principle. As Sam Newman suggests, timeouts are your first line of defense – they prevent your system from waiting indefinitely.</p>
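<p>A minimal sketch of that timeout (the pedigree URL is made up; note that <code>HttpClient</code> surfaces its own timeout as a <code>TaskCanceledException</code>, which we translate into the <code>TimeoutException</code> the controller above expects):</p>
<pre><code class="lang-csharp">using System.Net.Http.Json; // for ReadFromJsonAsync

private static readonly HttpClient _http = new HttpClient {
    Timeout = TimeSpan.FromSeconds(5) // abandon the call after 5 seconds
};

public async Task&lt;Pedigree&gt; GetPedigreeAsync(int id) {
    try {
        var response = await _http.GetAsync($"https://pedigree.example.com/dogs/{id}");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadFromJsonAsync&lt;Pedigree&gt;();
    } catch (TaskCanceledException ex) {
        // HttpClient reports an elapsed Timeout as a cancellation.
        throw new TimeoutException("Pedigree API timed out", ex);
    }
}
</code></pre>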
<p><strong>Handling 504 in microservices:</strong> Beyond just returning 504, a robust service might also:</p>
<ul>
<li><p>Retry the upstream call a couple of times before giving up (especially if the operation is read-only and idempotent). If a transient slowdown caused the timeout, a quick retry might succeed. If all retries fail, then return 504.</p>
</li>
<li><p>Implement a circuit breaker as mentioned, so that if the upstream is consistently timing out, you stop hammering it for a while. The circuit breaker could trigger a fallback – for example, return cached data or a default response if available, instead of an outright error. If no fallback is possible, 504 is still returned, but the circuit breaker ensures you recover faster when the upstream is back.</p>
</li>
<li><p>Log the timeout with context (which upstream, how long we waited) and possibly trigger alerts if it crosses a threshold.</p>
</li>
<li><p>Communicate with the vendor: if this is a third party, your DevOps team might contact the vendor when seeing sustained 504s, while your service keeps returning 504 to clients to be transparent about the issue.</p>
</li>
</ul>
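<p>The retry bullet above can be sketched as a plain loop (the attempt count and backoff here are illustrative assumptions; in production, a library like Polly handles this, and the circuit breaker, more robustly):</p>

```csharp
using System;
using System.Threading.Tasks;

public static class RetryingClient
{
    // Retries an idempotent upstream call on timeout.
    // Returns 200 on success, 504 once every attempt has timed out.
    public static async Task<int> GetWithRetryAsync(
        Func<Task<string>> upstreamCall,
        int maxAttempts = 3,
        int baseDelayMs = 200)
    {
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                await upstreamCall();
                return 200; // a retry after a transient slowdown succeeded
            }
            catch (OperationCanceledException) when (attempt < maxAttempts)
            {
                // Transient timeout: back off briefly, then try again.
                await Task.Delay(baseDelayMs * attempt);
            }
            catch (OperationCanceledException)
            {
                return 504; // all attempts exhausted: report Gateway Timeout
            }
        }
        return 504; // unreachable; satisfies the compiler
    }
}
```

<p>Only do this for reads (or otherwise idempotent operations); retrying a non-idempotent write after a timeout risks doing the work twice.</p>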
<p>From the client’s perspective, a 504 might mean they should try again later. If it’s a user-facing scenario, you might show a friendly error like “We’re experiencing delays from our data provider. Please try again in a few minutes.” If it’s service-to-service, the calling service might catch 504 and decide to either propagate it further up or implement its own fallback.</p>
<p>To put it in the words of an error reference: <em>“A 504 error indicates that the web server (acting as a gateway) was waiting too long for a response from another server and timed out”</em>. This is precisely why we emphasize it when depending on third-party services.</p>
<h2 id="heading-status-codes-matter">Status Codes Matter</h2>
<p>HTTP status codes might seem like small numeric signals, but as I’ve shown, they carry a lot of weight in microservice-based systems. When you build distributed systems, thinking deliberately about which code to return in each scenario is part of designing a clear and maintainable API. And remember: nearly every modern system is a distributed system.</p>
<p>By using the correct codes:</p>
<ul>
<li><p>You make your APIs self-explanatory (a new developer can read the code or API docs and immediately understand what 401 vs 403 or 409 means in your context).</p>
</li>
<li><p>You enhance observability, since tools can rely on the status codes to measure the health and behavior of your services (e.g., tracking 5xx rates for instability, 4xx for client misuse, etc.).</p>
</li>
<li><p>You improve client handling of errors – well-behaved clients will read a 409 and not retry immediately (instead, maybe prompt the user), but might retry on 503 or 504 after a delay. They’ll follow the <code>Location</code> header on a 201 if needed, or prompt for auth on a 401. In essence, you play into the HTTP ecosystem’s established patterns.</p>
</li>
<li><p>You ensure maintainability and consistency across services. If every vendor and team follows these practices, services can work together more easily. Following these protocols ensures that new engineers can be onboarded quickly by following industry standards.</p>
</li>
</ul>
<p>Martin Fowler and Sam Newman often remind us that the first rule of distributed system design is to acknowledge it <em>is</em> distributed – things will fail. Using status codes, contracts, and HTTP verbs properly is part of your API Design. David Farley cautions against the pitfalls of misapplying microservices – one such pitfall would be neglecting fundamentals like clear API communication. On the flip side, embracing these fundamentals (like clear status codes) helps unlock the benefits of microservices by making services loosely coupled but strongly coherent in protocol.</p>
<p>Look - I’m not that creative - many smarter people have already written these protocols. Follow the industry-standard definitions for status codes as outlined (there’s a reason these codes exist!). Document your API’s error responses. For critical integrations (like with vendors), establish timeouts and use the appropriate codes. Leverage your framework – as we saw with ASP.NET, many helpers exist (<code>Ok()</code>, <code>NotFound()</code>, <code>BadRequest()</code>, etc.) to make doing the right thing easy.</p>
<p>By treating HTTP status codes not as an afterthought but as a core part of your API design, you’ll create services that are easier to debug, scale, and integrate. The result is a more resilient microservice ecosystem – one where, if something goes wrong, everyone knows <em>exactly</em> what’s going on just by looking at the HTTP responses. And as a bonus, the next developer to maintain your service will thank you for those clear 4xx/5xx signals instead of a mysterious <code>"error": "something went wrong"</code> with a <code>200 OK</code>.</p>
<p><strong>Let your microservices speak the language of HTTP clearly.</strong> A well-placed status code is worth a thousand words (or at least saves a trip to the logs). So whether you’re fetching a dog profile or updating a vaccination record, make sure your service barks the right code!</p>
<p><strong>People and Sources that are Smarter than Me:</strong></p>
<ul>
<li><p><a target="_blank" href="https://samnewman.io/books/building_microservices_2nd_edition/">Newman, Sam. <em>Building Microservices</em></a> – Emphasizes designing services with clear contracts and handling failures gracefully (timeouts, etc.).</p>
</li>
<li><p><a target="_blank" href="https://martinfowler.com/articles/richardsonMaturityModel.html">Fowler, Martin. <em>Richardson Maturity Model</em></a> – Discusses the importance of using HTTP verbs and codes in REST APIs.</p>
</li>
<li><p><a target="_blank" href="https://www.vinaysahni.com/best-practices-for-a-pragmatic-restful-api">Sahni, Vinay</a>. <em>Best Practices for REST API</em> – Provides practical guidelines on status codes (e.g., 201 for create with Location header, 422 for validation).</p>
</li>
<li><p><a target="_blank" href="https://www.davefarley.net/?p=305">Farley, Dave. <em>Continuous Delivery &amp; Microservices</em></a> – Stresses getting the fundamentals right and avoiding complexity when it’s not adding value.</p>
</li>
<li><p>Microsoft ASP.NET Team (led by Scott Guthrie) – Built frameworks for proper use of HTTP status codes, highlighting their importance in API design.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[What's going on? 2025-04-30]]></title><description><![CDATA[A holiday, seasonably warm weather, and the NHL playoffs have taken my attention a little bit. Sometimes things are predictable, and sometimes they’re not. Finding that balance in life and maintaining your personal life and professional life can cause...]]></description><link>https://brokenintellisense.com/whats-going-on-2025-04-30</link><guid isPermaLink="true">https://brokenintellisense.com/whats-going-on-2025-04-30</guid><category><![CDATA[BitStuffing]]></category><category><![CDATA[c4]]></category><category><![CDATA[Azure]]></category><category><![CDATA[scalar]]></category><category><![CDATA[.NET]]></category><dc:creator><![CDATA[Larry Gasik]]></dc:creator><pubDate>Wed, 30 Apr 2025 05:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/vcF5y2Edm6A/upload/f88339158871d039b003d7696974460c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A holiday, seasonably warm weather, and the NHL playoffs have taken my attention a little bit. Sometimes things are predictable, and sometimes they’re not. Finding that balance in life and maintaining your personal life and professional life can cause you to prioritize one more than the other.</p>
<p>I spent some time talking to a buddy about my MacBook Air Project, and certifications as well. He talked to me about his boot camps. I am revisiting a lot of <a target="_blank" href="https://www.davefarley.net/?p=352">David Farley’s book on Modern Software Engineering</a>. I suspect that I’ll be writing an entire review on this book.</p>
<h3 id="heading-c4-diagrams">C4 Diagrams</h3>
<p><a target="_blank" href="https://c4model.com/">C4 Model</a> - I’ve been looking a lot at diagramming different systems and creating diagrams at the right level of detail for the audience. I’m looking at C4 diagrams to help express the different levels of abstraction and the relationships between them.</p>
<h3 id="heading-bit-stuffing">Bit Stuffing</h3>
<p><a target="_blank" href="https://www.techtarget.com/searchnetworking/definition/bit-stuffing">TechTarget</a> - I was reminding myself of bit stuffing based on a recent project that leverages it, and then someone asked an unrelated question about it. It isn’t something I do that often, so it’s nice to have a refresher.</p>
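<p>For anyone else who wants the refresher: HDLC-style bit stuffing inserts a 0 after any run of five consecutive 1 bits, so the payload can never be mistaken for the 01111110 frame flag. A minimal sketch, with illustrative names (real implementations work on raw bits, not strings):</p>

```csharp
using System.Text;

public static class BitStuffer
{
    // Inserts a 0 after every run of five consecutive 1s.
    public static string Stuff(string bits)
    {
        var result = new StringBuilder();
        int onesRun = 0;
        foreach (char b in bits)
        {
            result.Append(b);
            onesRun = (b == '1') ? onesRun + 1 : 0;
            if (onesRun == 5)
            {
                result.Append('0'); // the stuffed bit
                onesRun = 0;
            }
        }
        return result.ToString();
    }
}
```

<p>So <code>Stuff("0111111")</code> yields <code>"01111101"</code>; the receiver reverses the process by dropping the 0 that follows any five consecutive 1s.</p>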
<h3 id="heading-azure-app-configuration">Azure App Configuration</h3>
<p><a target="_blank" href="https://azure.microsoft.com/en-us/products/app-configuration">Microsoft Azure</a> - Feels like everything I’m working on right now has a need for Feature Flags, so here comes Azure App Configuration. I’ve already started writing articles about it.</p>
<h3 id="heading-aspnet-core-openapi-with-scalar">ASP.NET Core OpenAPI with Scalar</h3>
<p><a target="_blank" href="https://medium.com/@FitoMAD/asp-net-core-openapi-with-scalar-c430051bbabf">Medium</a> - I started looking at Scalar since .NET 9.0 no longer includes Swagger in the default templates. I haven’t spent a ton of time with .NET 9 because it isn’t an LTS version, but I should!</p>
<h3 id="heading-azure-ai-vision">Azure AI Vision</h3>
<p><a target="_blank" href="https://azure.microsoft.com/en-us/products/ai-services/ai-vision">Microsoft Azure</a> - I spent some time looking at this service from Azure. I like the idea of building my own image analysis tool. I wonder if I can build something to help me build some sort of analysis of my personal image library.</p>
]]></content:encoded></item><item><title><![CDATA[Revisiting the Four Pillars of Object-Oriented Programming]]></title><description><![CDATA[Sometimes I dive so deep into software development details that I forget the basics. Fundamentals become second nature, slipping beneath my conscious awareness. Recently, a memory from one of my first software development interviews popped into my he...]]></description><link>https://brokenintellisense.com/revisiting-the-four-pillars-of-object-oriented-programming</link><guid isPermaLink="true">https://brokenintellisense.com/revisiting-the-four-pillars-of-object-oriented-programming</guid><category><![CDATA[Pillars of Object Oriented Programming]]></category><category><![CDATA[oop]]></category><category><![CDATA[abstraction]]></category><category><![CDATA[encapsulation]]></category><category><![CDATA[inheritance]]></category><category><![CDATA[polymorphism]]></category><dc:creator><![CDATA[Larry Gasik]]></dc:creator><pubDate>Fri, 18 Apr 2025 01:09:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/I2hs7w9ClF8/upload/6de3ea797304676bb523cb6a9e425433.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Sometimes I dive so deep into software development details that I forget the basics. Fundamentals become second nature, slipping beneath my conscious awareness. Recently, a memory from one of my first software development interviews popped into my head, reminding me of a seemingly simple question about the <strong>Four Pillars of Object-Oriented Programming</strong>. Back then, I nailed it. Today? I'd hesitate a bit. I asked a couple of other experienced engineers as well and they struggled too! So, let's refresh our memory together using some animal examples.</p>
<h2 id="heading-abstraction">Abstraction</h2>
<p><strong>Abstraction</strong> simplifies complexity by modeling only what's necessary, hiding the underlying details. Think about animals: you don't need to know everything about an animal, just the behaviors important to your application.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">abstract</span> <span class="hljs-keyword">class</span> <span class="hljs-title">Animal</span>
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">abstract</span> <span class="hljs-keyword">void</span> <span class="hljs-title">MakeSound</span>(<span class="hljs-params"></span>)</span>;
}
<span class="hljs-keyword">class</span> <span class="hljs-title">Dog</span> : <span class="hljs-title">Animal</span>
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">override</span> <span class="hljs-keyword">void</span> <span class="hljs-title">MakeSound</span>(<span class="hljs-params"></span>)</span>
    {
        Console.WriteLine(<span class="hljs-string">"Woof!"</span>);
    }
}
<span class="hljs-keyword">class</span> <span class="hljs-title">Cat</span> : <span class="hljs-title">Animal</span>
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">override</span> <span class="hljs-keyword">void</span> <span class="hljs-title">MakeSound</span>(<span class="hljs-params"></span>)</span>
    {
        Console.WriteLine(<span class="hljs-string">"Meow!"</span>);
    }
}
</code></pre>
<p>Here, the <code>Animal</code> class is abstract, meaning you can't create an "Animal" directly, only specific animals like dogs or cats. It keeps things simple: you're just interested in animals making sounds.</p>
<h2 id="heading-encapsulation">Encapsulation</h2>
<p><strong>Encapsulation</strong> bundles data and the methods operating on that data within one unit, keeping things safe and secure. Consider your checking account: you don't let anyone directly change your balance; you control it through safe methods.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">class</span> <span class="hljs-title">CheckingAccount</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">decimal</span> balance;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">Deposit</span>(<span class="hljs-params"><span class="hljs-keyword">decimal</span> amount</span>)</span>
    {
        <span class="hljs-keyword">if</span> (amount &gt; <span class="hljs-number">0</span>) balance += amount;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">Withdraw</span>(<span class="hljs-params"><span class="hljs-keyword">decimal</span> amount</span>)</span>
    {
        <span class="hljs-keyword">if</span> (amount &gt; <span class="hljs-number">0</span> &amp;&amp; amount &lt;= balance) balance -= amount;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">decimal</span> <span class="hljs-title">GetBalance</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-keyword">return</span> balance;
    }
}
</code></pre>
<p>Your balance is encapsulated, ensuring only valid transactions modify it. This stops someone from just setting any balance they like. And yes, I know this isn't animal-related; I couldn't think of a clearer animal analogy! Leave me alone.</p>
<h2 id="heading-inheritance">Inheritance</h2>
<p><strong>Inheritance</strong> lets classes reuse properties and behaviors from other classes. Think of how all birds can fly, but each might add unique features. Ignore penguins.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">class</span> <span class="hljs-title">Bird</span>
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">Fly</span>(<span class="hljs-params"></span>)</span>
    {
        Console.WriteLine(<span class="hljs-string">"Bird is flying."</span>);
    }
}
<span class="hljs-keyword">class</span> <span class="hljs-title">Sparrow</span> : <span class="hljs-title">Bird</span>
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">Chirp</span>(<span class="hljs-params"></span>)</span>
    {
        Console.WriteLine(<span class="hljs-string">"Sparrow chirps."</span>);
    }
}
</code></pre>
<p>Here, <code>Sparrow</code> inherits the <code>Fly()</code> method from <code>Bird</code> and adds a unique <code>Chirp()</code> method. Inheritance makes your classes reusable and easy to extend.</p>
<h2 id="heading-polymorphism">Polymorphism</h2>
<p><strong>Polymorphism</strong> allows objects of different classes to behave differently even though they're accessed through a common interface. Think about how each animal has its way of eating, but your code treats them uniformly.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">class</span> <span class="hljs-title">Animal</span>
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">virtual</span> <span class="hljs-keyword">void</span> <span class="hljs-title">Eat</span>(<span class="hljs-params"></span>)</span>
    {
        Console.WriteLine(<span class="hljs-string">"Animal eats."</span>);
    }
}
<span class="hljs-keyword">class</span> <span class="hljs-title">Lion</span> : <span class="hljs-title">Animal</span>
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">override</span> <span class="hljs-keyword">void</span> <span class="hljs-title">Eat</span>(<span class="hljs-params"></span>)</span>
    {
        Console.WriteLine(<span class="hljs-string">"Lion eats meat."</span>);
    }
}
<span class="hljs-keyword">class</span> <span class="hljs-title">DieselTheDog</span> : <span class="hljs-title">Animal</span>
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">override</span> <span class="hljs-keyword">void</span> <span class="hljs-title">Eat</span>(<span class="hljs-params"></span>)</span>
    {
        Console.WriteLine(<span class="hljs-string">"Diesel my dog eats first and asks questions later."</span>);
    }
}
<span class="hljs-keyword">class</span> <span class="hljs-title">Program</span>
{
    <span class="hljs-function"><span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">Main</span>(<span class="hljs-params"><span class="hljs-keyword">string</span>[] args</span>)</span>
    {
        Animal lion = <span class="hljs-keyword">new</span> Lion();
        Animal dog = <span class="hljs-keyword">new</span> DieselTheDog();

        lion.Eat(); <span class="hljs-comment">// Lion eats meat.</span>
        dog.Eat(); <span class="hljs-comment">// Diesel my dog eats first and asks questions later.</span>
    }
}
</code></pre>
<p>The <code>Eat()</code> method demonstrates polymorphism. Your application knows animals eat, but each animal type does it differently. This lets you handle many animals uniformly.</p>
<h2 id="heading-why-did-i-forget-these-pillars">Why Did I Forget These Pillars?</h2>
<p>Over years of practice, these principles became second nature rather than explicit practice. It's easy to overlook foundational ideas when you're busy handling complex problems every day. Revisiting basics regularly helps keep your understanding sharp and clear.</p>
<h2 id="heading-why-remembering-oop-pillars-matters">Why Remembering OOP Pillars Matters</h2>
<p>While the examples here are intentionally simple, revisiting fundamental OOP concepts helps clarify your thought processes, improving your class designs and overall code quality. Clear class design also makes your communication better with teammates and junior developers, simplifying debugging and enhancing maintainability.</p>
<p>These pillars are practical tools, not just theoretical concepts. <strong>Abstraction</strong> reduces complexity, <strong>encapsulation</strong> protects data, <strong>inheritance</strong> promotes code reuse, and <strong>polymorphism</strong> boosts flexibility. Keeping these ideas fresh ensures your software stays clean, scalable, and maintainable.</p>
]]></content:encoded></item><item><title><![CDATA[What's going on? 2025-04-12]]></title><description><![CDATA[It was a busy week for me, and I did get to work on my Apple Side project. I spent some time on my AZ-204 certification, and that will ramp up more and more now that the Chicago Blackhawks are almost done with the regular season.
I revisited some old...]]></description><link>https://brokenintellisense.com/whats-going-on-2025-04-12</link><guid isPermaLink="true">https://brokenintellisense.com/whats-going-on-2025-04-12</guid><category><![CDATA[MediatR]]></category><category><![CDATA[MassTransit]]></category><category><![CDATA[vibe coding]]></category><category><![CDATA[Microsoft Build]]></category><category><![CDATA[monolithic architecture]]></category><category><![CDATA[leadership]]></category><dc:creator><![CDATA[Larry Gasik]]></dc:creator><pubDate>Sat, 12 Apr 2025 13:44:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/xOj6_Ha1_R8/upload/29cdccfa5dc5ffe157c41c639a54dace.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It was a busy week for me, and I did get to work on <a target="_blank" href="https://brokenintellisense.com/exploring-ubuntu-linux-on-a-macbook-a-learning-journey">my Apple Side project</a>. I spent some time on my AZ-204 certification, and that will ramp up more and more now that the Chicago Blackhawks are almost done with the regular season.</p>
<p>I revisited some older books on my shelf and reminded myself of things that are second nature these days. Don’t be afraid to reach out and have a chat with me about anything you find interesting here or want me to check out.</p>
<h3 id="heading-mediatr-and-masstransit-going-commercial">MediatR and MassTransit Going Commercial</h3>
<p><a target="_blank" href="https://www.milanjovanovic.tech/blog/mediatr-and-masstransit-going-commercial-what-this-means-for-you">Milan Jovanović</a> - Milan talks about the shift of popular .NET libraries like MediatR and MassTransit to commercial licensing, and how this transition highlights the importance of understanding the core principles these tools encapsulate. I can’t say it enough: patterns and abstractions will make your life easier.</p>
<h3 id="heading-engineers-are-using-ai-to-code-based-on-vibes">Engineers Are Using AI to Code Based on Vibes</h3>
<p>IEEE Spectrum - AI is everywhere, and software development is one area highlighted as an industry it will change. We now have the buzzword “vibe coding” for having AI generate the majority of your code. I will talk about this in depth in the future.</p>
<h3 id="heading-microsoft-build-session-catalog">Microsoft Build - Session Catalog</h3>
<p><a target="_blank" href="https://build.microsoft.com/en-US/sessions">Microsoft</a> - The Microsoft Build session catalog is now open. Microsoft Build is May 19-22. It isn’t a surprise that there’s a lot of Co-Pilot and AI sessions. I’ll share my schedule a little closer to the day of Build.</p>
<h3 id="heading-modular-monolith-communication-patterns">Modular Monolith Communication Patterns</h3>
<p><a target="_blank" href="https://www.milanjovanovic.tech/blog/modular-monolith-communication-patterns?">Milan Jovanović</a> - Milan talks about different ways of communicating between services in your monolith system, and how there is no silver bullet for describing how different systems talk to each other. In his example, asynchronous communication using messaging can reduce coupling, but does introduce complexity.</p>
<h3 id="heading-oktas-ceo-tells-us-why-he-thinks-software-engineers-will-be-more-in-demand-in-5-years-not-less">Okta's CEO tells us why he thinks software engineers will be more in demand in 5 years — not less</h3>
<p><a target="_blank" href="https://www.businessinsider.com/okta-ceo-software-engineer-job-market-future-2025-4">Business Insider</a> - Okta CEO Todd McKinnon believes that the adoption of AI will shift the role of software development, much like the creation of compilers did. He believes that AI will help with more grunt work rather than eliminate jobs.</p>
<h3 id="heading-agile-provides-both-cash-and-control">Agile provides both cash and control</h3>
<p><a target="_blank" href="https://www.edyouragilecoach.com/agile-provides-both-cash-and-control/?ref=dirty-fingers-newsletter">Dirty Fingers</a> - Ed dives into the world of agile methodologies and how iterations create measurable value delivery, which is key in today’s economy where investor demands may be shifting. It’s a great thought about how agile practices enable organizations to demonstrate both control and cash flow, proving resilience.</p>
<h3 id="heading-what-sets-inspirational-leaders-apart">What Sets Inspirational Leaders Apart</h3>
<p><a target="_blank" href="https://hbr.org/2025/03/what-sets-inspirational-leaders-apart">Harvard Business Review</a> - A short reminder of how our actions as leaders cascade to anyone else in the organization. How we present ourselves, our attitude, and how we relay information is infectious. The author breaks it down into three key areas - vision, mentoring, and exemplary behaviors.</p>
]]></content:encoded></item><item><title><![CDATA[Exploring Ubuntu Linux on a MacBook: A Learning Journey]]></title><description><![CDATA[It’s been about 10 years since I last touched anything by Apple. I’m a Windows guy through and through, developing in a Microsoft-centric stack that’s worked just fine for me. Honestly, I had zero interest in diving into the Apple universe—until now....]]></description><link>https://brokenintellisense.com/exploring-ubuntu-linux-on-a-macbook-a-learning-journey</link><guid isPermaLink="true">https://brokenintellisense.com/exploring-ubuntu-linux-on-a-macbook-a-learning-journey</guid><category><![CDATA[Apple]]></category><category><![CDATA[macOS]]></category><category><![CDATA[Linux]]></category><category><![CDATA[MacBook Air]]></category><category><![CDATA[macbook]]></category><dc:creator><![CDATA[Larry Gasik]]></dc:creator><pubDate>Tue, 08 Apr 2025 23:09:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/Hin-rzhOdWs/upload/d8ea17ca5bf1c80e6ddece8916920845.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It’s been about 10 years since I last touched anything by Apple. I’m a Windows guy through and through, developing in a Microsoft-centric stack that’s worked just fine for me. Honestly, I had zero interest in diving into the Apple universe—until now.</p>
<p>Here’s the deal man - I was recently gifted a hand-me-down MacBook Air. It came with zero fanfare: an incomplete power cable, a layer of dust, and no clue about what it’s been through. The previous owner just wanted it out of their house, and now it’s mine to figure out. I haven’t even opened it yet.</p>
<p>Instead of trying to wrestle with macOS, I’ve decided to wipe it and install Ubuntu Linux. Why? Well, I want to document a little learning experiment of setting up .NET development on unfamiliar hardware, all while playing around with Linux. This won’t replace my Windows setup any time soon - it's just a sandbox to explore cross-platform .NET on Linux.</p>
<h3 id="heading-why-net-on-ubuntu">Why .NET on Ubuntu?</h3>
<p>Here’s the thing: .NET is now cross-platform, and guess what? Ubuntu is one of the primary Linux targets. I’ve run .NET applications in Ubuntu containers, so I know it’s possible. Microsoft’s got official packages, and the community is super active. Running .NET on Ubuntu will let me:</p>
<ol>
<li><p><strong>See if my Microsoft workflow plays nice with Linux</strong>: I want to find out if my tools and habits can make the jump to Linux. Does it work? Or does it break? Let’s find out.</p>
</li>
<li><p><strong>Learn about Apple hardware quirks</strong>: Running Linux on Apple devices isn’t always smooth sailing. I’m hoping to uncover the hidden gems (or landmines) involved in making Linux work on Apple hardware.</p>
</li>
<li><p><strong>Experience something new</strong>: I’ve been stuck in my Windows bubble for ages. It’ll be interesting to see how Ubuntu changes the game and what works (and what doesn’t) for me in day-to-day development.</p>
</li>
</ol>
<h3 id="heading-getting-started-installing-ubuntu-on-a-macbook">Getting Started: Installing Ubuntu on a MacBook</h3>
<p>Now, let’s get this show on the road. But first, a few things need to be figured out. Here’s how I’ll proceed.</p>
<h4 id="heading-step-1-identify-the-macbook-model">Step 1: Identify the MacBook Model</h4>
<p>Before I do anything, I need to figure out what kind of MacBook I’m dealing with. Is this one of the sleek Apple silicon models, or does it have an Intel chip? This thing could easily be 10 years old, and I haven’t even opened it yet. My first task is to figure out exactly what I’ve got in front of me.</p>
<h4 id="heading-step-2-secure-power-for-the-macbook">Step 2: Secure Power for the MacBook</h4>
<p>I don’t even know if this MacBook turns on. The power cable I’ve got isn’t even complete, so step two is to figure out how to get this thing juiced up. I can’t do much without power, right?</p>
<h4 id="heading-step-3-try-it-out">Step 3: Try it Out</h4>
<p>Once I get it powered up, the next step is making sure it actually works. Do the USB ports work? Is the screen a shattered mess? If anything major’s broken, this thing might get an immediate ticket to the recycling center.</p>
<h4 id="heading-step-4-the-hello-world-test">Step 4: The Hello World Test</h4>
<p>Once everything’s working, I’ll do what every developer loves to do: “Hello World”. Nothing fancy, just a little <em>“Hey, look, I’m alive!”</em> moment from the system. We’ll see what happens and where things stand.</p>
<h3 id="heading-winging-it">Winging It</h3>
<p>To be honest, I have absolutely no idea what’s coming next. I’m just winging it and seeing what happens. If this turns out to be on Apple’s silicon, I might be stuck with macOS (not a huge loss). But hey, if I can’t go straight to Linux, I’ll figure out another way around it. The idea is to learn something new and see if Linux—or even macOS—could eventually become my daily driver for development.</p>
<p>So, no pressure. It can’t be that hard, right? Right?</p>
]]></content:encoded></item></channel></rss>