<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Effective software engineering]]></title><description><![CDATA[The central theme connecting all articles on this site is the enormous efficiency — and long-term stability — that rich domain models bring to application development. These pieces aim to show, from multiple perspectives, that domain-centric modeling is not a trend, not an old-school technique, and certainly not optional. It is the only approach that puts understanding before doing.

Procedural programming, functional pipelines, vibe-coding, and framework-driven architectures all default to doing before understanding. They focus on producing behavior rather than capturing meaning. And while they can deliver short-term progress, they accumulate structural debt at alarming speed — because nothing in the code explains why anything exists.

Rich domain models counter that completely. They create systems where the code mirrors the mental model of the business — where reasoning, adapting, and evolving are not chores but natural consequences of clarity.

But this does not happen automatically. Domain modeling is not a pattern, nor a checklist, nor a technique you sprinkle on top. It is a skill — one that demands conceptual thinking, experience, curiosity, and the willingness to understand the problem deeply before encoding it.

This site exists to bring attention back to that essential foundation.
In most engineering fields, insufficient understanding leads to visible and immediate consequences. In software, the absence of reference implementations creates the illusion that misunderstanding is cheap — as explored in the “boat” article.
Yet the opposite is true: placing the domain model before the code yields extraordinary leverage. It turns software into an adaptable, comprehensible system rather than a growing liability.]]></description><link>https://blog.leonpennings.com</link><image><url>https://cdn.hashnode.com/uploads/logos/6909c071175a29281d26fa0e/900e1a53-b031-403c-8c84-5b0eef11011c.jpg</url><title>Effective software engineering</title><link>https://blog.leonpennings.com</link></image><generator>RSS for Node</generator><lastBuildDate>Tue, 07 Apr 2026 12:05:38 GMT</lastBuildDate><atom:link href="https://blog.leonpennings.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[When Distribution Becomes a Substitute for Design — and Fails]]></title><description><![CDATA[A lot of modern software architecture—microservices, event-driven systems, CQRS—is not born from deeply understanding the domain. It is what teams reach for when the existing application has become a ]]></description><link>https://blog.leonpennings.com/when-distribution-becomes-a-substitute-for-design-and-fails</link><guid isPermaLink="true">https://blog.leonpennings.com/when-distribution-becomes-a-substitute-for-design-and-fails</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[software development]]></category><category><![CDATA[Microservices]]></category><category><![CDATA[Rich Domain Model]]></category><category><![CDATA[Java]]></category><dc:creator><![CDATA[Leon Pennings]]></dc:creator><pubDate>Tue, 07 Apr 2026 08:40:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6909c071175a29281d26fa0e/da8c1580-a7b0-4f61-9d9e-c5dc821dc83c.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A lot of modern software architecture—microservices, event-driven systems, CQRS—is not born from deeply understanding the domain. 
It is what teams reach for when the existing application has become a mess: nobody really knows what’s happening where anymore, behavior is unpredictable, and making changes feels risky and expensive. Instead of asking “What does this concept actually mean and where does it truly belong?”, they ask “How do we split this?”</p>
<p>That is where a lot of modern architecture begins.<br />Not in necessity.<br />Not in insight.<br />But in the growing discomfort of trying to manage software that was never modeled well in the first place.</p>
<p>And because the resulting system still runs in production, the cost of that move often remains invisible for years.</p>
<p>That is one of the most expensive traps in software.</p>
<hr />
<h2>Framework Fluency Is Not Software Design</h2>
<p>A lot of developers today are highly fluent in frameworks.<br />They know how to build controllers, services, repositories, DTOs, entities, integrations, and configuration.</p>
<p>From the outside, that often looks like competence.</p>
<p>But that kind of fluency can be deeply misleading.</p>
<p>Because building software out of familiar framework-shaped parts is not the same thing as designing software well.</p>
<p>The real questions are different:</p>
<ul>
<li><p>What is the actual business concept here?</p>
</li>
<li><p>What belongs together?</p>
</li>
<li><p>What behavior is intrinsic to the domain?</p>
</li>
<li><p>What is a real boundary, and what is just an implementation detail?</p>
</li>
<li><p>What rules should be explicit in the model rather than implied by orchestration?</p>
</li>
</ul>
<p>Real domain modeling is not about applying a catalog of patterns. It is the disciplined, often uncomfortable work of discovering what belongs together, what behavior is intrinsic, and expressing those concepts as clearly and cohesively as possible—whether that lives in modules, functions, or simple objects. The goal is conceptual integrity, not architectural ceremony.</p>
<p>Without those questions, software tends to take on a very predictable shape: fat service classes, anemic entities, persistence-first design, procedural workflows, business logic smeared across layers.</p>
<p>The code works. The endpoints return data. The database persists state.</p>
<p>But the system has not really been designed.<br />It has been assembled.</p>
<p>And that difference matters far more than most teams realize.</p>
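<p>The contrast can be made concrete with a minimal, hypothetical Java sketch (the <code>Order</code> and its cancellation rule are illustrative, not taken from any particular codebase). In the assembled shape, the entity is a passive data bag and the rule lives in some service; in the designed shape, the concept owns its own rule:</p>

```java
// Assembled: a passive data bag. Some service elsewhere decides when
// cancellation is allowed, so the rule has no single home.
class OrderRecord {
    public String status = "OPEN";
    public boolean shipped = false;
}

// Designed: the Order concept owns its own rule, so the code states
// what an Order is and what it may decide.
class Order {
    private boolean shipped = false;
    private String status = "OPEN";

    void markShipped() { this.shipped = true; }

    void cancel() {
        // Business rule, explicit and local: a shipped order cannot be cancelled.
        if (shipped) {
            throw new IllegalStateException("a shipped order cannot be cancelled");
        }
        status = "CANCELLED";
    }

    String status() { return status; }
}
```

<p>The second version is shaped by the business, not by the framework; the rule is discoverable by reading the concept it belongs to.</p>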
<hr />
<h2>Weak Models Create Cognitive Overload</h2>
<p>The cost of poor design does not usually show up immediately. At first, the system still feels manageable. A few controllers. A few services. A few repositories. Everything is still “clean.”</p>
<p>But over time, something starts to happen. Business rules accumulate. Exceptions pile up. New requirements interact with old assumptions. Concepts that looked simple turn out to be related in ways the software never captured.</p>
<p>And because there is no strong domain model holding those concepts together, the complexity has nowhere coherent to go. So it leaks—into service methods, orchestration flows, integration glue, persistence logic, special-case conditionals, “helper” abstractions, and coordination code.</p>
<p>At that point, the team starts feeling something very real:</p>
<blockquote>
<p><em>Nobody understands the whole thing anymore.</em></p>
</blockquote>
<p>And that is the crucial moment.</p>
<p>Because once a system becomes cognitively overwhelming, the team has two options:</p>
<h3>Option A</h3>
<p>Reduce the complexity by improving the model.</p>
<h3>Option B</h3>
<p>Reduce the <em>scope</em> of the confusion by splitting it apart.</p>
<p>A lot of teams choose Option B.</p>
<hr />
<h2>Distribution Becomes Compensation</h2>
<p>This is where architecture often stops being a design choice and starts becoming a coping mechanism.</p>
<p>When the internal model is weak, teams still need some way to create order. And distribution gives them one.</p>
<p>So they introduce microservices, event-driven architecture, CQRS, separate read models, ownership boundaries, queues, and asynchronous coordination.</p>
<p>Distribution, CQRS, and event-driven architecture can have legitimate uses in rare cases of extreme scale or unavoidable organizational boundaries. But in the vast majority of systems, they are not introduced because the domain demands them. They are introduced because the internal model is too weak to provide clarity. What looks like sophisticated architecture is often just confusion hiding behind cleaner service boundaries.</p>
<p>What they are really doing is this:</p>
<blockquote>
<p><strong>They are trying to create externally, through distribution, the boundaries they failed to create internally, through design.</strong></p>
</blockquote>
<p>And that can work. At least for a while.</p>
<p>A smaller service <em>does</em> feel easier to understand than a large monolith. A separate read model <em>does</em> reduce some friction. A queue <em>does</em> create some local decoupling.</p>
<p>But none of that means the software has become conceptually better. It often just means the confusion has been sliced into smaller containers.</p>
<hr />
<h2>Local Clarity Comes at a Global Cost</h2>
<p>That trade is where the real damage happens.</p>
<p>Because distribution absolutely can create local context. A team can say, “This service owns billing.” And that does help.</p>
<p>But it is a much weaker form of clarity than a real domain model. A service boundary can tell you <strong>where code lives</strong>. A good model can tell you what something <em>is</em>, what it <em>means</em>, what rules govern it, what its lifecycle is, and what relationships are essential.</p>
<p>Those are very different levels of understanding.</p>
<p>And when teams use distribution to manufacture context, they often gain short-term manageability at the cost of long-term agility. Because now the system starts paying the distribution tax: network failure, eventual consistency, contract drift, duplicated concepts, duplicated logic, coordination overhead, deployment complexity, operational burden, and fractured causality.</p>
<p>And perhaps most importantly: <strong>lost refactorability</strong>.</p>
<p>When the model is strong and cohesive, changing your mind usually means a local refactor—sometimes even a delightful collapse of concepts. When boundaries have been hardened into services, the same insight triggers contracts, versioning, migration scripts, and cross-team coordination. The cost of learning is no longer paid in thought, but in infrastructure and politics.</p>
<p>And in software, changing your mind is not a failure. It is the job.</p>
<hr />
<h2>The Real Cost Is Paid When the Business Learns Something New</h2>
<p>This is where badly structured software reveals itself. Not when it is first deployed. Not when the first endpoints work. Not when the dashboards are green. But when the business itself becomes better understood.</p>
<p>Because that is what always happens. Sooner or later, the business learns: these two concepts are actually one thing, this workflow was modeled incorrectly, this rule has important exceptions, this distinction is more important than we thought, or this process should not exist at all.</p>
<p>That is normal. That is what software is supposed to accommodate.</p>
<p>A coherent domain model makes that kind of change survivable. A fragmented, distributed, weakly modeled system makes it expensive.</p>
<p>Note that “coherent domain model” here does not mean the tactical patterns that became associated with DDD—entities, repositories, aggregates, and the rest. Those often added their own accidental complexity. Real modeling is simpler and deeper: it is the ongoing work of refining ubiquitous language and discovering natural conceptual boundaries so that new business insight can be absorbed with minimal violence to the existing code.</p>
<p>Because now the insight has to travel through APIs, queues, read models, event contracts, deployment boundaries, ownership lines, duplicated rules, and partial consistency guarantees. What should have been a conceptual refactor becomes a cross-system negotiation.</p>
<p>And that is where the bill arrives. Not because the domain was inherently impossible. But because the architecture froze yesterday’s misunderstandings into today’s structure.</p>
<p>That is one of the worst things software can do.</p>
<hr />
<h2>Why This So Often Goes Unnoticed</h2>
<p>The most dangerous part is that this kind of architecture often looks successful. The system runs. Users use it. The company makes money. So the architecture gets treated as validated.</p>
<p>But “it works” is one of the weakest standards in software. A system running in production proves only that it is viable enough to survive. It does <strong>not</strong> prove that it is cheap to change, conceptually sound, structurally coherent, or good at absorbing new understanding.</p>
<p>Most teams never get to experience how different software feels when:</p>
<ul>
<li><p>Concepts have a single, obvious home instead of being smeared across services</p>
</li>
<li><p>Rules are explicit and enforceable rather than scattered in orchestration and glue code</p>
</li>
<li><p>New business understanding leads to a clean refactor instead of distributed coordination</p>
</li>
<li><p>The system invites insight instead of resisting change</p>
</li>
</ul>
<p>Without that contrast, the pain of weak modeling hidden behind distribution gets normalized as “just how complex software is.”</p>
<p>Often, it is not. Often, it is just the cost of weak design hidden behind architecture.</p>
<hr />
<h2>Final Thought</h2>
<p>Much of today’s distributed architecture is not the result of domain insight. It is compensation for the conceptual clarity that was never built into the model. By reaching for separation instead of deeper understanding, teams gain local manageability at the expense of long-term coherence and cheap evolution.</p>
<p>The problem is that the original lack of clarity doesn’t disappear — it just gets distributed. In the end, the same confusion that made the monolith unmaintainable will make the distributed system fail just as hard, only now it’s far more expensive and painful to fix.</p>
<p>This is why so much “sophisticated” architecture is, in truth, just sophisticated coping.</p>
]]></content:encoded></item><item><title><![CDATA[Rich Domain Models: Start with What Is, Not What Happens]]></title><description><![CDATA[A lot of software is more difficult to build and maintain than it needs to be.
Not because the business itself is inherently complex.
Not because the requirements keep changing.
But because the softwa]]></description><link>https://blog.leonpennings.com/rich-domain-models-start-with-what-is-not-what-happens</link><guid isPermaLink="true">https://blog.leonpennings.com/rich-domain-models-start-with-what-is-not-what-happens</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[software development]]></category><category><![CDATA[Rich Domain Model]]></category><category><![CDATA[Java]]></category><dc:creator><![CDATA[Leon Pennings]]></dc:creator><pubDate>Sat, 04 Apr 2026 10:15:52 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6909c071175a29281d26fa0e/8cdeeead-552b-41c2-aa6e-ce5385765cbb.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A lot of software is more difficult to build and maintain than it needs to be.</p>
<p>Not because the business itself is inherently complex.</p>
<p>Not because the requirements keep changing.</p>
<p>But because the software is usually structured around the wrong things: workflows, events, commands, technical layers, frameworks, or current implementation details.</p>
<p>When that happens, the business logic becomes scattered, hard to reason about, and expensive to evolve. The fix is not more patterns, more ceremonies, or more events. The fix is proper domain modelling.</p>
<p>A rich domain model is built by first identifying the core concepts of the business and giving each one clear responsibilities and boundaries. Once that foundation is in place, everything else—events, workflows, persistence, integrations—becomes simpler and more stable.</p>
<p>This is not a new technique or a branded method. It is basic systems engineering done in the right order.</p>
<hr />
<h3>The purpose of domain modelling</h3>
<p>Domain modelling is about discovering <em>what exists</em> in the business, independent of how we happen to implement it today.</p>
<p>It means answering a small set of fundamental questions for every important concept:</p>
<ul>
<li><p>What <em>is</em> this thing?</p>
</li>
<li><p>What is it responsible for?</p>
</li>
<li><p>What may it know?</p>
</li>
<li><p>What may it decide?</p>
</li>
<li><p>What belongs inside its boundary, and what does not?</p>
</li>
</ul>
<p>These questions come before any talk of events, commands, database tables, API payloads, or user flows. If those questions have not been asked and answered, domain modelling has not actually started. At best, we are only mapping interactions.</p>
<hr />
<h3>Start with responsibilities, not representation</h3>
<p>The most common mistake is beginning with <em>representation</em> instead of responsibility.</p>
<p>Teams start listing fields, DTOs, JSON shapes, database columns, or REST endpoints. Those are not the model; they are merely one possible way to represent the model. When you start there, you almost always end up with passive data structures and procedural logic spread across services, handlers, and utility classes.</p>
<p>A rich domain model begins the other way around. The first questions are never “What properties does this object have?” or “What does the request body look like?” They are:</p>
<ul>
<li><p>What is this thing?</p>
</li>
<li><p>What does it <em>do</em>?</p>
</li>
<li><p>What is it responsible for?</p>
</li>
<li><p>What should it <em>never</em> be responsible for?</p>
</li>
</ul>
<p>Structure and representation emerge naturally once responsibilities are clear.</p>
<hr />
<h3>A simple way to begin</h3>
<p>You do not need extensive workshops, coloured sticky notes, or elaborate frameworks.</p>
<p>The most effective technique is almost embarrassingly simple: put people in a circle of chairs. Tell one person, “You are the Order. What are you? What do you know? What are you responsible for? What should you never do?” Then add the next concept—Client, Invoice, Payment—and let them talk to each other. Let them negotiate boundaries. When something feels wrong, revise the definitions or pull up a new chair for a missing concept.</p>
<p>The medium does not matter—cards, people, puppets, or just conversation. What matters is that you can point at a concept and force it to declare its own identity and responsibilities. When two concepts constantly need to know each other’s internals, the boundaries are probably wrong. When no one knows who should decide something, the responsibility has not been assigned yet. When a concept only exists because a UI flow needed it, it may not be a real domain concept at all.</p>
<p>This is domain discovery. It starts with “What are we <em>about</em>?” and then “Who does what?”—not in the sense of users or actors, but in the sense of the actual participants in the business reality: Client, Order, Invoice, Payment, Subscription, Shipment, Notification.</p>
<hr />
<h3>Why starting with events or workflows feels backwards</h3>
<p>Many popular modelling techniques (Event Storming being the most visible) begin with domain events, commands, actors, and processes. They are excellent at mapping <em>what happens</em> and at surfacing integration points. But they are weak at discovering <em>what is</em>.</p>
<p>They describe motion around the business rather than the business itself. A process map can tell you that a payment failed. It cannot tell you what a Payment <em>is</em>, what responsibilities it owns, or whether Invoice resolution belongs to the Invoice, the Order, or a separate Payment concept. It cannot distinguish a Client (the legal/commercial entity) from a User (merely a web-access mechanism for that Client).</p>
<p>Those are modelling questions, and they must come first. Events and workflows are valuable <em>after</em> the core model exists; they should not be the starting point. Otherwise the domain becomes limited to today’s usage patterns instead of reflecting the stable underlying reality.</p>
<hr />
<h3>Aggregates and the danger of procedural models</h3>
<p>The concept of “Aggregate” is often presented as a necessary consistency boundary. In practice it frequently becomes a procedural container: a cluster of data that a command mutates and from which an event is emitted. When responsibilities have not been properly assigned, those aggregates turn into little more than transaction scripts with a fancy name.</p>
<p>In a rich model the question is simpler: does this concept have a coherent responsibility? If it does, it owns its invariants and decisions. If it does not, no artificial boundary will save it. Objects can collaborate, but they do not need to be artificially clustered just to satisfy technical consistency rules.</p>
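<p>As a sketch of that simpler question, assume a hypothetical <code>Invoice</code> in Java. It has one coherent responsibility, knowing whether it is resolved, so it owns that invariant itself, with no aggregate machinery around it:</p>

```java
// Hypothetical Invoice concept; names and amounts are illustrative.
class Invoice {
    private final long amountCents;
    private long paidCents = 0;

    Invoice(long amountCents) {
        // The Invoice guards its own invariant; no external validator needed.
        if (amountCents <= 0) throw new IllegalArgumentException("amount must be positive");
        this.amountCents = amountCents;
    }

    // Registering a payment is part of this concept's responsibility.
    void registerPayment(long cents) {
        if (cents <= 0) throw new IllegalArgumentException("payment must be positive");
        paidCents += cents;
    }

    // The Invoice decides, by itself, whether it is resolved.
    boolean isResolved() { return paidCents >= amountCents; }
}
```

<p>No command handler checks the invariant from outside, and no artificial boundary is needed to protect it: the responsibility is coherent, so the concept can simply own it.</p>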
<hr />
<h3>Rich domain models make the core simple</h3>
<p>A well-defined domain model does not add complexity; it removes accidental complexity.</p>
<p>Consider a typical payment flow. An Order contains Items. An Invoice points to an Order. An Invoice can be resolved by a Payment. A Payment has a type (Online, BankTransfer, etc.). That type determines how execution actually happens.</p>
<p>In a responsibility-driven model this is straightforward:</p>
<ul>
<li><p>The Invoice knows it needs to be resolved.</p>
</li>
<li><p>The Payment knows it must execute according to its type.</p>
</li>
<li><p>The type itself (implemented as an enum with a strategy or small implementing classes) encapsulates the variability.</p>
</li>
</ul>
<p>Adding a new payment mechanism tomorrow is a local change inside the Payment concept. No new workshop, no new event storm, no ripple through services. The core model stays stable; only the variable part grows.</p>
<p>Complexity lives exactly where the variability is—not scattered across workflows, services, or “process managers.”</p>
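<p>The flow above can be sketched in a few lines of Java. The names (<code>PaymentType</code>, <code>execute</code>) and the two mechanisms are assumptions for illustration; the point is that the enum constant, not a workflow, carries the variability:</p>

```java
enum PaymentType {
    ONLINE {
        @Override
        String execute(long cents) { return "charged " + cents + " via PSP"; }
    },
    BANK_TRANSFER {
        @Override
        String execute(long cents) { return "awaiting bank transfer of " + cents; }
    };

    // Each constant encapsulates its own execution behaviour.
    abstract String execute(long cents);
}

class Payment {
    private final PaymentType type;
    private final long cents;

    Payment(PaymentType type, long cents) {
        this.type = type;
        this.cents = cents;
    }

    // The Payment knows it must execute according to its type.
    String execute() { return type.execute(cents); }
}
```

<p>Adding a new mechanism tomorrow means adding one enum constant; nothing outside the Payment concept changes.</p>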
<hr />
<h3>Keep the domain central; push technology to the border</h3>
<p>The real architectural decision is not whether a domain object may call a database or invoke an external service. The question is: does this action belong to the responsibility of this concept?</p>
<p>If the answer is yes, the call can live inside the domain object. Technology is not the organising principle. The business meaning is.</p>
<p>When you organise around technology layers instead (controllers, services, repositories, adapters), the business becomes invisible. Every change requires archaeological digging. When you organise around the domain, the business stays transparent and technology becomes replaceable.</p>
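<p>One hedged sketch of what that can look like in Java: the interface below is a hypothetical port named from the domain's point of view, and the concrete technology (an HTTP client, a vendor SDK) only appears in whatever implements it at the border:</p>

```java
// Hypothetical domain-defined port; the name speaks the business language,
// not the technology's.
interface PaymentGateway {
    boolean charge(long cents);
}

class Payment {
    private final long cents;
    private boolean settled = false;

    Payment(long cents) { this.cents = cents; }

    // Executing belongs to this concept's responsibility, so the call lives
    // here, but only through the domain-owned interface.
    void execute(PaymentGateway gateway) {
        if (settled) return; // executing twice must not charge twice
        settled = gateway.charge(cents);
    }

    boolean isSettled() { return settled; }
}
```

<p>Swapping the payment provider means writing a new <code>PaymentGateway</code> implementation; the Payment concept, and the business rule it carries, does not move.</p>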
<hr />
<h3>Outcomes — short term and long term</h3>
<p>A domain model built this way delivers measurable improvements from the very first delivery and compounds dramatically over time.</p>
<p><strong>Short term:</strong> Time to first production is usually <em>shorter</em>, not longer. With a rich domain model you know the destination clearly from the start, so you can take the direct route. It is the difference between driving from The Hague to Utrecht on the A12 motorway versus taking the long detour via Amsterdam and the Afsluitdijk. Both paths eventually get you there, but the workflow-first approach feels like continuously driving “somewhat in the right direction” while you figure things out on the fly. By modelling what the business is, you learn faster, decide faster, write less boilerplate, and avoid the lengthy refactoring cycles that come from discovering the business domain later in the project.</p>
<p><strong>Long term:</strong> The difference becomes stark — especially in non-CRUD domains such as complex ETL pipelines, logistics orchestration, risk engines, or any system with real business rules and variability.</p>
<p>The “framework-first” or “workflow-first” approach can appear to work for a while. You can wire together services, handlers, and event processors and ship something functional. But as soon as the business evolves — new payment types, new regulatory rules, new integration partners, or changed data flows — the system turns into a web of scattered logic. Maintenance becomes slow, error-prone, and expensive. Changes ripple unpredictably because the business is no longer visible in one coherent place.</p>
<p>In contrast, a rich domain model keeps the stable business reality in the centre. Change stays local. Payment providers, ETL transformations, or logistics carriers can be swapped without touching the core model. Fewer classes, fewer hand-offs, and far less rediscovery work are required. The result is software that is significantly cheaper to keep alive over its lifetime — often by a large margin.</p>
<p>The economic benefit is real, but it is not the goal. It is the natural outcome of doing the engineering work correctly: modelling the domain first, responsibilities first, structure first.</p>
<hr />
<h3>On AI and domain modelling</h3>
<p>Modern AI tools are already excellent at helping with the <em>implementation</em> phase. They can generate clean code snippets, suggest conventions, enforce patterns, and accelerate boilerplate work once the model is clear.</p>
<p>But they have no meaningful role in the actual domain modelling itself.</p>
<p>AI cannot sit in the circle of chairs. It cannot negotiate what a concept <em>is</em>, what it should know, or what it should never be responsible for. It can mimic patterns it has seen in other codebases, but it lacks the lived understanding of business reality and the ability to discover stable invariants through dialogue.</p>
<p>Writing the code remains the best mirror for your design. As soon as you start implementing, flaws in the model become visible immediately — that feedback loop is irreplaceable and deeply human. AI can polish and speed up the coding, but it should not be the one discovering or deciding the model. That work still belongs to the people who understand the domain.</p>
<hr />
<h3>Final thought</h3>
<p>Basic domain modelling is not complicated. It is simply insisting on answering the most fundamental questions first:</p>
<ul>
<li><p>What is this thing?</p>
</li>
<li><p>What is it responsible for?</p>
</li>
<li><p>What belongs inside it?</p>
</li>
<li><p>What should remain outside it?</p>
</li>
</ul>
<p>When those questions are answered clearly, the business becomes visible in the code. Once the business is visible, the system becomes maintainable — from day one and for years to come.</p>
<p>That is not a luxury. For any software expected to live longer than its current tech stack, it is the foundation.</p>
]]></content:encoded></item><item><title><![CDATA[Your software development approach is too expensive and too brittle]]></title><description><![CDATA[Most software teams are not struggling because software is inherently chaotic.
They are struggling because they are paying enormous amounts of money to keep the wrong machine barely usable.
That sound]]></description><link>https://blog.leonpennings.com/your-software-development-approach-is-too-expensive-and-too-brittle</link><guid isPermaLink="true">https://blog.leonpennings.com/your-software-development-approach-is-too-expensive-and-too-brittle</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[software development]]></category><category><![CDATA[Rich Domain Model]]></category><category><![CDATA[event-driven-architecture]]></category><category><![CDATA[Microservices]]></category><dc:creator><![CDATA[Leon Pennings]]></dc:creator><pubDate>Thu, 02 Apr 2026 08:12:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6909c071175a29281d26fa0e/a4842122-099f-42ab-8032-89abf2aa92e4.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most software teams are not struggling because software is inherently chaotic.</p>
<p>They are struggling because they are paying enormous amounts of money to keep the wrong machine barely usable.</p>
<p>That sounds dramatic.</p>
<p>It is not.</p>
<p>In fact, it is one of the most normal things in modern software development.</p>
<p>A lot of systems are built in ways that are:</p>
<ul>
<li><p>more expensive than they need to be,</p>
</li>
<li><p>more fragile than they need to be,</p>
</li>
<li><p>harder to change than they need to be,</p>
</li>
<li><p>and harder to reason about than they need to be.</p>
</li>
</ul>
<p>And yet they still get called “well architected.”</p>
<p>Why?</p>
<p>Because in software, there is usually no comparison case.</p>
<p>No control group.</p>
<p>No alternate implementation.</p>
<p>No tractor parked next to the Ferrari.</p>
<p>So if the thing eventually works, the architecture often gets promoted from merely functional to supposedly good.</p>
<p>That is one of the deepest blind spots in software engineering.</p>
<p>And it is how teams end up trying to plow fields with a Ferrari F40.</p>
<hr />
<h2><strong>The Ferrari and the tractor</strong></h2>
<p>Imagine you need to plow a field.</p>
<p>You can choose between:</p>
<ul>
<li><p>a Ferrari F40, or</p>
</li>
<li><p>a tractor.</p>
</li>
</ul>
<p>This should not be a difficult decision.</p>
<p>The tractor is not glamorous, but it is aligned to the work.</p>
<p>It has:</p>
<ul>
<li><p>the right ground clearance,</p>
</li>
<li><p>the right tires,</p>
</li>
<li><p>the right torque profile,</p>
</li>
<li><p>the right durability characteristics,</p>
</li>
<li><p>the right maintenance expectations,</p>
</li>
<li><p>and the right operational shape.</p>
</li>
</ul>
<p>The Ferrari has none of that.</p>
<p>It is a remarkable machine.</p>
<p>It is just the wrong one.</p>
<p>And the mismatch does not merely show up once the work starts.</p>
<p>It shows up immediately.</p>
<p>Because before the Ferrari can even begin to perform badly in the field, someone first has to solve a completely absurd problem:</p>
<blockquote>
<p><strong>How do we even make this thing usable for field work?</strong></p>
</blockquote>
<p>That is where the real cost begins.</p>
<p>Because now you need compensations.</p>
<p>You need:</p>
<ul>
<li><p>custom adaptations,</p>
</li>
<li><p>support structures,</p>
</li>
<li><p>protective workarounds,</p>
</li>
<li><p>non-native operational handling,</p>
</li>
<li><p>specialist maintenance,</p>
</li>
<li><p>and constant care to keep the machine functioning in an environment it was never shaped for.</p>
</li>
</ul>
<p>That is the real problem with a mismatch.</p>
<p>Not just that it performs badly.</p>
<p>But that you now have to build an entire support ecosystem around the fact that it is wrong.</p>
<hr />
<h2><strong>And even that is a cheap mismatch compared to software</strong></h2>
<p>In the physical world, the mismatch would at least be visible.</p>
<p>A Ferrari F40 is obviously a terrible agricultural investment.</p>
<p>Even with rough but realistic assumptions, the economics are absurd on the balance sheet alone: a collector Ferrari F40 trades for millions, while a capable farm tractor costs a fraction of that, with maintenance profiles to match. And using the supercar for field work would not just perform poorly; it would demand absurd custom adaptations before it could even start.</p>
<p>So yes: in the real world, using a Ferrari to plow a field would already be economically insane.</p>
<p>But in software, the mismatch is often much worse.</p>
<p>Because in software:</p>
<ul>
<li><p>the cost is less visible,</p>
</li>
<li><p>the pain is spread over time,</p>
</li>
<li><p>the friction is normalized,</p>
</li>
<li><p>and the organization often has no simpler implementation to compare it to.</p>
</li>
</ul>
<p>That means software teams can spend years operating the equivalent of a Ferrari in a muddy field and still call it “engineering maturity.”</p>
<p>That is the danger.</p>
<hr />
<h2><strong>The uniqueness trap</strong></h2>
<p>This is one of the hardest structural problems in software development:</p>
<blockquote>
<p><strong>most applications are built only once.</strong></p>
</blockquote>
<p>Not once in terms of business purpose, perhaps.</p>
<p>But once in terms of implementation.</p>
<p>A team typically does not build:</p>
<ul>
<li><p>one version with a cohesive domain model,</p>
</li>
<li><p>another with CQRS and event choreography,</p>
</li>
<li><p>another with five microservices,</p>
</li>
</ul>
<p>and then compare cost, reliability, comprehensibility, and adaptability over five years.</p>
<p>That almost never happens.</p>
<p>So architecture is rarely judged comparatively.</p>
<p>It is judged internally.</p>
<p>And that means if a system eventually “works,” people often conclude that the architecture must have been reasonable.</p>
<p>But that conclusion is deeply unreliable.</p>
<p>Because there may have been a far cheaper, simpler, more robust, and more truthful way to build the same thing.</p>
<p>No one knows.</p>
<p>Because the tractor version was never built.</p>
<p>That is the uniqueness trap.</p>
<p>And it is one of the main reasons accidental complexity survives so easily in software.</p>
<hr />
<h2><strong>Most software architecture is expensive support structure around a mismatch</strong></h2>
<p>This is where the Ferrari metaphor becomes useful.</p>
<p>If someone insisted on plowing a field with an F40, they would not simply “start plowing.”</p>
<p>They would first need to invent a whole support system around the mismatch.</p>
<p>They would need to answer questions like:</p>
<ul>
<li><p>How do we prevent the chassis from bottoming out?</p>
</li>
<li><p>How do we maintain traction in mud?</p>
</li>
<li><p>How do we protect components from wear profiles they were never designed for?</p>
</li>
<li><p>How do we attach the wrong machine to the wrong task?</p>
</li>
<li><p>How do we keep it alive under repeated misuse?</p>
</li>
</ul>
<p>In other words:</p>
<blockquote>
<p><strong>they would need to build a compensating architecture around the fact that the machine is wrong.</strong></p>
</blockquote>
<p>That is exactly what many software teams do.</p>
<p>They choose an architectural shape before they understand the domain, and then spend years building support mechanisms around the mismatch.</p>
<p>That support structure often looks like:</p>
<ul>
<li><p>CQRS,</p>
</li>
<li><p>EDA,</p>
</li>
<li><p>orchestration layers,</p>
</li>
<li><p>distributed workflows,</p>
</li>
<li><p>microservices,</p>
</li>
<li><p>command buses,</p>
</li>
<li><p>event buses,</p>
</li>
<li><p>retries,</p>
</li>
<li><p>compensations,</p>
</li>
<li><p>synchronization logic,</p>
</li>
<li><p>observability scaffolding,</p>
</li>
<li><p>deployment choreography,</p>
</li>
<li><p>and framework conventions.</p>
</li>
</ul>
<p>And because all of this is technical work, it often feels sophisticated.</p>
<p>But much of it exists only because the software was shaped incorrectly to begin with.</p>
<p>That is the setup tax of accidental complexity.</p>
<hr />
<h2><strong>Back to Brooks: essential versus accidental complexity</strong></h2>
<p>Fred Brooks gave us the cleanest possible vocabulary for this problem decades ago.</p>
<h3><strong>Essential complexity</strong></h3>
<p>Essential complexity is the irreducible complexity of the business domain itself.</p>
<p>This is the complexity that actually belongs.</p>
<p>Examples:</p>
<ul>
<li><p>pricing rules,</p>
</li>
<li><p>eligibility constraints,</p>
</li>
<li><p>shipment state transitions,</p>
</li>
<li><p>reconciliation logic,</p>
</li>
<li><p>metadata rules,</p>
</li>
<li><p>legal behavior,</p>
</li>
<li><p>catalog semantics,</p>
</li>
<li><p>scheduling constraints.</p>
</li>
</ul>
<p>This complexity exists because reality is complex.</p>
<p>You cannot remove it.</p>
<p>You can only understand it, model it, and localize it properly.</p>
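<p>To make this concrete, here is a minimal sketch of essential complexity living in the model. The <code>Shipment</code> type, its states, and its cancellation rule are invented for illustration, not taken from any particular system:</p>

```java
// Hypothetical example: shipment state transitions as essential complexity,
// modeled and enforced inside the domain concept itself.
enum ShipmentState { CREATED, PACKED, DISPATCHED, DELIVERED, CANCELLED }

class Shipment {
    private ShipmentState state = ShipmentState.CREATED;

    ShipmentState state() { return state; }

    void pack() { transition(ShipmentState.CREATED, ShipmentState.PACKED); }
    void dispatch() { transition(ShipmentState.PACKED, ShipmentState.DISPATCHED); }
    void deliver() { transition(ShipmentState.DISPATCHED, ShipmentState.DELIVERED); }

    void cancel() {
        // Business rule (assumed for this sketch): a shipment can be
        // cancelled at any point until it has been dispatched.
        if (state == ShipmentState.DISPATCHED || state == ShipmentState.DELIVERED) {
            throw new IllegalStateException("Cannot cancel after dispatch");
        }
        state = ShipmentState.CANCELLED;
    }

    private void transition(ShipmentState from, ShipmentState to) {
        if (state != from) {
            throw new IllegalStateException(state + " -> " + to + " is not a valid transition");
        }
        state = to;
    }
}
```

<p>The rules are small, but they are explicit, localized, and testable without any surrounding infrastructure.</p>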
<h3><strong>Accidental complexity</strong></h3>
<p>Accidental complexity is everything introduced by the solution that the problem itself did not require.</p>
<p>Examples:</p>
<ul>
<li><p>framework conventions,</p>
</li>
<li><p>architectural ceremony,</p>
</li>
<li><p>messaging choreography,</p>
</li>
<li><p>unnecessary distribution,</p>
</li>
<li><p>layered indirection,</p>
</li>
<li><p>technical orchestration,</p>
</li>
<li><p>compensating workflows,</p>
</li>
<li><p>integration-driven domain shape,</p>
</li>
<li><p>“enterprise” abstraction stacks.</p>
</li>
</ul>
<p>This complexity is not business truth.</p>
<p>It is construction overhead.</p>
<p>And much of modern software architecture is simply accidental complexity with better branding.</p>
<hr />
<h2><strong>The first job of software design is not to choose an architecture</strong></h2>
<p>It is to understand the domain.</p>
<p>That should not be controversial.</p>
<p>And yet much of modern software development behaves as if the opposite were true.</p>
<p>Teams routinely begin with questions like:</p>
<ul>
<li><p>Should we use CQRS?</p>
</li>
<li><p>Should we use EDA?</p>
</li>
<li><p>Should we split this into microservices?</p>
</li>
<li><p>Should this be event-driven?</p>
</li>
<li><p>Should we separate reads and writes?</p>
</li>
<li><p>Should this be asynchronous?</p>
</li>
<li><p>Should we introduce orchestration?</p>
</li>
</ul>
<p>Those are not first questions.</p>
<p>Those are late questions.</p>
<p>The first question is:</p>
<blockquote>
<p><strong>What is the business, really?</strong></p>
</blockquote>
<p>Until that question is answered properly, every major architectural choice is at risk of being premature.</p>
<p>And premature architecture is usually just accidental complexity entering the system early enough to become permanent.</p>
<hr />
<h2><strong>The real problem is Pattern-Driven Design</strong></h2>
<p>The issue is not that CQRS, EDA, or messaging can never appear in a system.</p>
<p>The issue is that many teams no longer design from the domain outward.</p>
<p>They design from patterns inward.</p>
<p>That is how software ends up shaped by:</p>
<ul>
<li><p>command handlers,</p>
</li>
<li><p>event buses,</p>
</li>
<li><p>orchestration layers,</p>
</li>
<li><p>service templates,</p>
</li>
<li><p>and framework conventions</p>
</li>
</ul>
<p>before anyone has actually understood what the business is.</p>
<p>That is not architecture.</p>
<p>That is <strong>Pattern-Driven Design</strong>.</p>
<p>And Pattern-Driven Design is one of the fastest ways to bury essential complexity under accidental complexity.</p>
<p>Because once the pattern becomes the starting point, the business no longer gets modeled on its own terms.</p>
<p>It gets forced to fit the machinery.</p>
<p>That is not simplification.</p>
<p>That is distortion.</p>
<hr />
<h2><strong>Always start with the domain model</strong></h2>
<p>If the goal is to avoid expensive, brittle, overcompensated systems, then the starting point is straightforward:</p>
<blockquote>
<p><strong>Always start with the domain model.</strong></p>
</blockquote>
<p>Not because every system needs an elaborate object hierarchy.</p>
<p>Not because “DDD” is fashionable.</p>
<p>Not because object orientation is sacred.</p>
<p>But because if you do not start there, something else will define the shape of the software instead.</p>
<p>And that “something else” is usually accidental.</p>
<p>If you do not begin with:</p>
<ul>
<li><p>what the business concepts are,</p>
</li>
<li><p>what they mean,</p>
</li>
<li><p>what they are responsible for,</p>
</li>
<li><p>what must always be true,</p>
</li>
<li><p>how they are allowed to change,</p>
</li>
<li><p>and how they interact,</p>
</li>
</ul>
<p>then the system will instead be shaped by:</p>
<ul>
<li><p>endpoints,</p>
</li>
<li><p>persistence structure,</p>
</li>
<li><p>framework constraints,</p>
</li>
<li><p>service boundaries,</p>
</li>
<li><p>message flows,</p>
</li>
<li><p>handler conventions,</p>
</li>
<li><p>or transport semantics.</p>
</li>
</ul>
<p>And once that happens, the business is no longer being modeled.</p>
<p>It is being adapted to the machinery.</p>
<p>That is where software becomes expensive and brittle.</p>
<hr />
<h2><strong>A user story is not a model</strong></h2>
<p>This is one of the most common and costly confusions in software teams.</p>
<p>A user story is not a model.</p>
<p>A ticket is not a model.</p>
<p>A process diagram is not a model.</p>
<p>A request from the business is not yet the business.</p>
<p>These things describe surface behavior.</p>
<p>They do not necessarily describe the actual structure or semantics of the domain.</p>
<p>That means implementation should never start by merely wiring the request into the chosen architecture.</p>
<p>It should start by asking:</p>
<ul>
<li><p>What actually exists here?</p>
</li>
<li><p>What is this concept responsible for?</p>
</li>
<li><p>Which rules belong together?</p>
</li>
<li><p>Which state transitions are valid?</p>
</li>
<li><p>Which interactions are intrinsic?</p>
</li>
<li><p>Which behaviors are essential and which are incidental?</p>
</li>
</ul>
<p>That is the real work of software design.</p>
<p>And the clearest place to do that work is the domain model.</p>
<hr />
<h2><strong>A rich domain model is not overengineering</strong></h2>
<p>This is where a lot of modern teams have become confused.</p>
<p>There is a recurring assumption that a rich domain model is somehow “too much.”</p>
<p>But in practice, what often happens is not that the logic disappears.</p>
<p>It simply moves elsewhere.</p>
<p>If the business logic is not in the model, it will end up in:</p>
<ul>
<li><p>services,</p>
</li>
<li><p>handlers,</p>
</li>
<li><p>orchestrators,</p>
</li>
<li><p>subscribers,</p>
</li>
<li><p>validators,</p>
</li>
<li><p>workflows,</p>
</li>
<li><p>pipelines,</p>
</li>
<li><p>process managers,</p>
</li>
<li><p>or framework glue.</p>
</li>
</ul>
<p>That is not simplification.</p>
<p>That is displacement.</p>
<p>A rich domain model is not about making software “academic.”</p>
<p>It is about ensuring that the unavoidable business complexity lives where it is:</p>
<ul>
<li><p>explicit,</p>
</li>
<li><p>cohesive,</p>
</li>
<li><p>inspectable,</p>
</li>
<li><p>and semantically meaningful.</p>
</li>
</ul>
<p>In other words:</p>
<blockquote>
<p><strong>the model should contain the business.</strong></p>
</blockquote>
<p>Not the framework.</p>
<p>Not the message bus.</p>
<p>Not the choreography.</p>
<p>Not the deployment topology.</p>
<p>The business.</p>
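<p>As a hedged illustration (all names here are invented), compare the displaced version, where an external service mutates bare data, with a model that carries its own rule:</p>

```java
import java.math.BigDecimal;

// Displaced version: the rule lives in a service, the "model" is bare data.
class OrderData {
    BigDecimal total = BigDecimal.ZERO;
    boolean approved;
}

class OrderApprovalService {
    void approve(OrderData order) {
        // The invariant sits here, far from the concept it protects.
        if (order.total.compareTo(new BigDecimal("10000")) > 0) {
            throw new IllegalArgumentException("Needs manual review");
        }
        order.approved = true;
    }
}

// Rich version: the same rule lives where the concept is defined.
class Order {
    private final BigDecimal total;
    private boolean approved;

    Order(BigDecimal total) { this.total = total; }

    void approve() {
        if (total.compareTo(new BigDecimal("10000")) > 0) {
            throw new IllegalStateException("Needs manual review");
        }
        approved = true;
    }

    boolean isApproved() { return approved; }
}
```

<p>Both versions contain the same business rule; only the second makes it discoverable from the concept it protects.</p>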
<hr />
<h2><strong>If the domain is simple, the model will be simple</strong></h2>
<p>This is where the usual objection appears:</p>
<blockquote>
<p>“But not every system needs a rich domain model.”</p>
</blockquote>
<p>Correct.</p>
<p>But that does not weaken the argument at all.</p>
<p>Because the real point is not that every system needs a complex model.</p>
<p>The point is:</p>
<blockquote>
<p><strong>every system should begin by discovering whether the domain is simple or complex.</strong></p>
</blockquote>
<p>And the correct place to do that is still the model.</p>
<p>If the domain turns out to be simple, then good.</p>
<p>The model will simply remain small and quiet.</p>
<p>That is not failure.</p>
<p>That is successful discovery of simplicity.</p>
<p>But deciding not to start there is a mistake.</p>
<p>Because then simplicity is not being discovered.</p>
<p>It is being assumed.</p>
<p>And assumed simplicity is one of the easiest ways accidental complexity gets invited in.</p>
<hr />
<h2><strong>CQRS and EDA are often compensations for unclear modeling</strong></h2>
<p>Here is the part many people will resist.</p>
<p>That is fine.</p>
<p><strong>CQRS and EDA are very often workarounds for bad design or not knowing how to model.</strong></p>
<p>That does not mean they can never appear.</p>
<p>It means they should almost never appear as <strong>up-front architectural choices</strong>.</p>
<p>That distinction matters enormously.</p>
<p>They can absolutely emerge later as observations in retrospect.</p>
<p>But they should not be adopted as predefined frameworks before the domain has been understood.</p>
<p>Because once that happens, the architecture is no longer responding to the domain.</p>
<p>The domain is being forced into the architecture.</p>
<p>That is backwards.</p>
<hr />
<h2><strong>CQRS is usually an observation, not a design starting point</strong></h2>
<p>Properly understood, CQRS is not something you “do.”</p>
<p>It is simply the recognition that:</p>
<blockquote>
<p><strong>the model used to change business state is not always the same model best suited for retrieving and navigating information.</strong></p>
</blockquote>
<p>That is all.</p>
<p>And sometimes that is perfectly valid.</p>
<p>A search engine like Lucene is a very good example.</p>
<p>The write side may simply persist documents or structured domain state.</p>
<p>The read side may support:</p>
<ul>
<li><p>indexing,</p>
</li>
<li><p>tokenization,</p>
</li>
<li><p>ranking,</p>
</li>
<li><p>full-text search,</p>
</li>
<li><p>query optimization.</p>
</li>
</ul>
<p>Those are not the same concern.</p>
<p>That is a natural asymmetry.</p>
<p>That is CQRS as an observation.</p>
<p>But that is very different from deciding on day one that the architecture will have:</p>
<ul>
<li><p>command handlers,</p>
</li>
<li><p>query handlers,</p>
</li>
<li><p>buses,</p>
</li>
<li><p>mediators,</p>
</li>
<li><p>folders,</p>
</li>
<li><p>pipelines,</p>
</li>
<li><p>and all the associated ceremony.</p>
</li>
</ul>
<p>That is not domain modeling.</p>
<p>That is accidental complexity pretending to be rigor.</p>
<p>Most CQRS implementations are just <strong>CRUD with bureaucracy</strong>.</p>
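<p>A sketch of that asymmetry, with a toy in-memory inverted index standing in for a real engine like Lucene (the classes are invented for illustration): the write side stores the document as domain state, while the read side maintains a structure shaped purely for lookup:</p>

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Write side: documents persisted as-is, keyed by id.
class DocumentStore {
    private final Map<String, String> documents = new HashMap<>();

    void save(String id, String text) { documents.put(id, text); }
    String load(String id) { return documents.get(id); }
}

// Read side: a toy inverted index, shaped for querying rather than for change.
class SearchIndex {
    private final Map<String, Set<String>> tokenToIds = new HashMap<>();

    void index(String id, String text) {
        for (String token : text.toLowerCase(Locale.ROOT).split("\\W+")) {
            if (!token.isEmpty()) {
                tokenToIds.computeIfAbsent(token, t -> new TreeSet<>()).add(id);
            }
        }
    }

    List<String> search(String token) {
        return new ArrayList<>(tokenToIds.getOrDefault(token.toLowerCase(Locale.ROOT), Set.of()));
    }
}
```

<p>Nothing here needs command buses or mediators; the read model is simply a different shape derived from the same facts.</p>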
<hr />
<h2><strong>EDA is often the same mistake, but with more latency</strong></h2>
<p>Event-driven architecture is often sold as if it were inherently sophisticated.</p>
<p>It is not.</p>
<p>Very often, it is simply a sign that direct responsibility was not modeled clearly enough.</p>
<p>There is a major difference between:</p>
<ul>
<li><p>recognizing a domain fact,<br />and</p>
</li>
<li><p>externalizing causality into a distributed system.</p>
</li>
</ul>
<p>Those are not the same thing.</p>
<p>A domain event can be a useful modeling concept.</p>
<p>But when every business consequence gets turned into:</p>
<ul>
<li><p>a message,</p>
</li>
<li><p>a subscriber,</p>
</li>
<li><p>a consumer,</p>
</li>
<li><p>a queue,</p>
</li>
<li><p>a retry policy,</p>
</li>
<li><p>a dead-letter topic,</p>
</li>
<li><p>a compensating process,</p>
</li>
</ul>
<p>then what often happened is not decoupling.</p>
<p>What happened is that one coherent business act was split into multiple technical acts — and the system now needs operational rituals to pretend they are still one thing.</p>
<p>That is not elegance.</p>
<p>That is fragmentation.</p>
<hr />
<h2><strong>If an event is required for correctness, it belongs in the same transaction</strong></h2>
<p>This is where a lot of “event-driven” thinking falls apart.</p>
<p>If an event represents something the business considers part of the same completed action, then it should not be externalized into eventual consistency theater.</p>
<p>It should be processed within the same transactional consistency boundary.</p>
<p>Often that means:</p>
<ul>
<li><p>same model,</p>
</li>
<li><p>same process,</p>
</li>
<li><p>same database transaction (in Java terms, the same JDBC transaction).</p>
</li>
</ul>
<p>Because if correctness depends on:</p>
<ul>
<li><p>retries,</p>
</li>
<li><p>cleanup,</p>
</li>
<li><p>compensating actions,</p>
</li>
<li><p>dead-letter queues,</p>
</li>
<li><p>reconciliation jobs,</p>
</li>
<li><p>or support scripts,</p>
</li>
</ul>
<p>then the architecture has usually split apart something the business still considers one coherent act.</p>
<p>That is not decoupling.</p>
<p>That is a modeling failure disguised as scalability.</p>
<p>The simple rule is this:</p>
<blockquote>
<p><strong>If the business says these things are one thing, the software should not split them into many things.</strong></p>
</blockquote>
<p>Only effects that are genuinely:</p>
<ul>
<li><p>external,</p>
</li>
<li><p>observational,</p>
</li>
<li><p>optional,</p>
</li>
<li><p>or secondary</p>
</li>
</ul>
<p>should be allowed to escape the core transactional boundary asynchronously.</p>
<p>Everything else belongs together.</p>
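<p>Running real JDBC would require a driver, so the sketch below (all names invented) uses a small in-memory stand-in to show the shape of the rule: the business change and its required consequence succeed or fail together, exactly as they would inside one <code>Connection</code> with <code>setAutoCommit(false)</code> followed by <code>commit()</code> or <code>rollback()</code>:</p>

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: a tiny in-memory "transaction" standing in for a real
// JDBC transaction. The order state change and the required stock adjustment
// form one unit: both apply, or neither does.
class Ledger {
    final Map<String, Integer> stock = new HashMap<>();
    final Map<String, String> orders = new HashMap<>();

    void placeOrder(String orderId, String sku, int quantity) {
        // Snapshot both maps: the poor man's "begin transaction".
        Map<String, Integer> stockSnapshot = new HashMap<>(stock);
        Map<String, String> orderSnapshot = new HashMap<>(orders);
        try {
            orders.put(orderId, "PLACED");
            int remaining = stock.getOrDefault(sku, 0) - quantity;
            if (remaining < 0) {
                throw new IllegalStateException("Insufficient stock for " + sku);
            }
            stock.put(sku, remaining); // required consequence, same boundary
        } catch (RuntimeException e) {
            // "Rollback": restore both maps so no partial state survives.
            stock.clear();
            stock.putAll(stockSnapshot);
            orders.clear();
            orders.putAll(orderSnapshot);
            throw e;
        }
    }
}
```

<p>The point is not the mechanics; it is that no retry, queue, or compensation is needed to keep one coherent business act coherent.</p>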
<hr />
<h2><strong>Microservices are often bad design with Kubernetes</strong></h2>
<p>And yes, the same critique applies to microservices.</p>
<p>Microservices are one of the most overprescribed and underjustified architectural choices in modern software.</p>
<p>They are usually discussed in terms of:</p>
<ul>
<li><p>scaling,</p>
</li>
<li><p>team autonomy,</p>
</li>
<li><p>resilience,</p>
</li>
<li><p>independent deployment,</p>
</li>
<li><p>ownership.</p>
</li>
</ul>
<p>But that framing hides the actual cost.</p>
<p>Because microservices are not just a deployment decision.</p>
<p>They are a fragmentation decision.</p>
<p>They force teams to commit to distributed boundaries early — often before anyone has proven those boundaries are semantically real.</p>
<p>And once the split is made, the business has to pretend those boundaries are natural.</p>
<p>That is how teams end up with:</p>
<ul>
<li><p>cross-service workflows,</p>
</li>
<li><p>distributed invariants,</p>
</li>
<li><p>duplicated concepts,</p>
</li>
<li><p>compensating logic,</p>
</li>
<li><p>service orchestration,</p>
</li>
<li><p>and “eventual consistency” as a lifestyle.</p>
</li>
</ul>
<p>That is not architecture.</p>
<p>That is often just what happens when one cohesive domain gets cut into pieces because “small services” sounded modern.</p>
<hr />
<h2><strong>Logical cohesion comes before physical scale</strong></h2>
<p>This is where the usual counterargument appears:</p>
<blockquote>
<p>“Yes, but what about scale?”</p>
</blockquote>
<p>Fair question.</p>
<p>But scale does not rescue bad boundaries.</p>
<p>It amplifies them.</p>
<p>If you cannot model a business capability coherently in one process, you are very unlikely to improve it by scattering it across twenty.</p>
<p>That is because <strong>logical cohesion is a prerequisite for physical distribution</strong>.</p>
<p>A coherent system can sometimes be split later if reality genuinely demands it.</p>
<p>An incoherent system does not become better by being distributed.</p>
<p>It just becomes harder to debug, harder to reason about, and more expensive to keep alive.</p>
<p>So yes, scale matters.</p>
<p>But scale is not an excuse to abandon cohesion before you have even found it.</p>
<hr />
<h2><strong>Small is not the goal. Cohesion is.</strong></h2>
<p>The phrase “microservice” already biases the conversation in the wrong direction.</p>
<p>Because it encourages optimization for smallness.</p>
<p>But smallness is not the goal.</p>
<blockquote>
<p><strong>Cohesion is the goal.</strong></p>
</blockquote>
<p>The real objective is:</p>
<ul>
<li><p>semantically meaningful boundaries,</p>
</li>
<li><p>high internal density of behavior,</p>
</li>
<li><p>low cross-boundary coordination.</p>
</li>
</ul>
<p>That is very different.</p>
<p>If one business action routinely requires orchestration across multiple internal services, the split is probably wrong.</p>
<p>That is one of the best architectural tests there is.</p>
<p>Because if the business still experiences something as one coherent operation, but the software requires:</p>
<ul>
<li><p>service A,</p>
</li>
<li><p>then service B,</p>
</li>
<li><p>then service C,</p>
</li>
<li><p>then retries and compensations if one fails,</p>
</li>
</ul>
<p>then the architecture has not discovered a boundary.</p>
<p>It has manufactured one.</p>
<p>And now it has to manage the damage.</p>
<hr />
<h2><strong>The real cost of framework-first architecture is not implementation. It is drag.</strong></h2>
<p>This is where the economics become severe.</p>
<p>Bad architecture is not expensive merely because it takes slightly longer to build.</p>
<p>It is expensive because it creates organizational drag for years.</p>
<p>That drag shows up everywhere.</p>
<h3><strong>Slower feature development</strong></h3>
<p>Every change now has to move through machinery that was introduced before the business was properly understood.</p>
<p>So even small changes require:</p>
<ul>
<li><p>coordination,</p>
</li>
<li><p>contract changes,</p>
</li>
<li><p>handler updates,</p>
</li>
<li><p>event flow changes,</p>
</li>
<li><p>service touchpoints,</p>
</li>
<li><p>deployment sequencing,</p>
</li>
<li><p>orchestration review.</p>
</li>
</ul>
<p>That is not domain complexity.</p>
<p>That is architecture tax.</p>
<h3><strong>More defects and harder recovery</strong></h3>
<p>When one coherent business action has been fragmented across:</p>
<ul>
<li><p>services,</p>
</li>
<li><p>queues,</p>
</li>
<li><p>projections,</p>
</li>
<li><p>retries,</p>
</li>
<li><p>and compensations,</p>
</li>
</ul>
<p>then failure handling becomes vastly more expensive.</p>
<p>The question is no longer:</p>
<blockquote>
<p>“Did the business rule execute correctly?”</p>
</blockquote>
<p>It becomes:</p>
<blockquote>
<p>“Which part of the distributed choreography failed, and what state is the system now in?”</p>
</blockquote>
<p>That is a much more expensive problem to solve.</p>
<h3><strong>Permanent cognitive overhead</strong></h3>
<p>This is one of the biggest hidden costs in software.</p>
<p>A misaligned architecture forces every engineer to carry extra mental load just to understand the system.</p>
<p>Instead of reasoning directly about the business, they must first reason about:</p>
<ul>
<li><p>the framework,</p>
</li>
<li><p>the orchestration model,</p>
</li>
<li><p>the service topology,</p>
</li>
<li><p>the event timing,</p>
</li>
<li><p>the deployment shape,</p>
</li>
<li><p>the technical conventions.</p>
</li>
</ul>
<p>That means every change is more mentally expensive than it should be.</p>
<p>And because salaries are the dominant cost in software, <strong>cognitive inefficiency is financial inefficiency</strong>.</p>
<h3><strong>The architecture becomes a second problem</strong></h3>
<p>At some point, the software is no longer difficult because the business is difficult.</p>
<p>It is difficult because the architecture has become a second problem layered on top of the first.</p>
<p>The system is now solving:</p>
<ol>
<li><p>the business domain, and</p>
</li>
<li><p>the consequences of its own design choices.</p>
</li>
</ol>
<p>That is pure waste.</p>
<p>And because most teams never built the tractor version, they often do not even realize how much of their effort is going into supporting the machine rather than solving the problem.</p>
<p>That is the uniqueness trap again.</p>
<hr />
<h2><strong>The most expensive architecture is not the one that fails immediately</strong></h2>
<p>It is the one that:</p>
<ul>
<li><p>works just enough,</p>
</li>
<li><p>survives just long enough,</p>
</li>
<li><p>and obscures its own cost just well enough</p>
</li>
</ul>
<p>that nobody ever questions whether the machine was appropriate in the first place.</p>
<p>That is what makes framework-first architecture so dangerous.</p>
<p>It often does not fail loudly.</p>
<p>It succeeds <strong>expensively</strong>.</p>
<p>And that is much worse.</p>
<p>Because visible failure can trigger redesign.</p>
<p>But expensive success gets institutionalized.</p>
<p>It becomes:</p>
<ul>
<li><p>“our platform,”</p>
</li>
<li><p>“our standard architecture,”</p>
</li>
<li><p>“our scalable foundation,”</p>
</li>
<li><p>“our engineering maturity.”</p>
</li>
</ul>
<p>When in reality, it may just be a Ferrari that the organization has spent five years trying to teach to plow a field.</p>
<hr />
<h2><strong>The first responsibility of software architecture is not scalability</strong></h2>
<p>It is not flexibility.<br />It is not “future-proofing.”<br />It is not pattern compliance.<br />It is not cloud nativeness.<br />It is not distributed elegance.</p>
<p>It is this:</p>
<blockquote>
<p><strong>to make the essential complexity of the business explicit, cohesive, and understandable.</strong></p>
</blockquote>
<p>That is the job.</p>
<p>Everything else comes later.</p>
<p>And if the software cannot explain the business clearly through its model, then it is not well architected — no matter how many services, handlers, events, buses, frameworks, or diagrams surround it.</p>
<p>Because at that point, the architecture is no longer serving the business.</p>
<p>The business is serving the architecture.</p>
<p>And that is why so much modern software is too expensive and too brittle.</p>
<hr />
<h2><strong>A much better default</strong></h2>
<p>A better architectural instinct is this:</p>
<blockquote>
<p><strong>Do not ask what architecture you can build.</strong></p>
<p><strong>Ask what architecture the domain actually justifies.</strong></p>
</blockquote>
<p>And if the answer is:</p>
<ul>
<li><p>smaller,</p>
</li>
<li><p>more cohesive,</p>
</li>
<li><p>more local,</p>
</li>
<li><p>less distributed,</p>
</li>
<li><p>less framework-driven,</p>
</li>
<li><p>and more explicit in its model</p>
</li>
</ul>
<p>than current fashion prefers, that is not a sign of immaturity.</p>
<p>It is often a sign that the problem is finally being understood.</p>
<p>The next time a team is asked to “choose an architecture,” the first question should not be:</p>
<ul>
<li><p>Which framework?</p>
</li>
<li><p>Which pattern?</p>
</li>
<li><p>Which cloud primitive?</p>
</li>
<li><p>Which service template?</p>
</li>
</ul>
<p>It should be:</p>
<blockquote>
<p><strong>What is the business, and what is the cheapest, most coherent way to represent it truthfully?</strong></p>
</blockquote>
<p>Because software does not become expensive and brittle by accident.</p>
<p>It becomes expensive and brittle when teams choose machinery before they understand the work.</p>
<p>And from that point on, they do not just have a domain to solve.</p>
<p>They also have an architecture to survive.</p>
<p>That is not engineering maturity.</p>
<p>That is paying interest on a design mistake.</p>
]]></content:encoded></item><item><title><![CDATA[When CI/CD Becomes the Goal: The Quiet Erosion of Engineering Ownership]]></title><description><![CDATA[Software delivery has become one of the most ritualized practices in modern development.
Pipelines are longer. Checks are stricter. Deployments are more automated. Dashboards are greener than ever.
Yet i]]></description><link>https://blog.leonpennings.com/when-ci-cd-becomes-the-goal-the-quiet-erosion-of-engineering-ownership</link><guid isPermaLink="true">https://blog.leonpennings.com/when-ci-cd-becomes-the-goal-the-quiet-erosion-of-engineering-ownership</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[Java]]></category><category><![CDATA[software development]]></category><dc:creator><![CDATA[Leon Pennings]]></dc:creator><pubDate>Mon, 30 Mar 2026 06:01:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6909c071175a29281d26fa0e/30977c9a-49ee-401a-8e67-5c296b08d9da.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Software delivery has become one of the most ritualized practices in modern development.</p>
<p>Pipelines are longer.<br />Checks are stricter.<br />Deployments are more automated.<br />Dashboards are greener than ever.</p>
<p>Yet in many teams, software has not become more engineered.</p>
<p>It has become more processed.</p>
<p>That distinction matters.</p>
<p>CI/CD was never intended as an excuse to pile machinery on top of weak engineering. It started as a practical response to real problems. But somewhere along the way, much of the industry stopped using it to support strong engineering and began using it to compensate for its absence.</p>
<p>That is where things quietly went wrong.</p>
<hr />
<h2>What CI Originally Solved</h2>
<p>The original idea behind Continuous Integration was straightforward.</p>
<p>It was never primarily about pipelines, YAML, or branch policies. It was about forcing reality into the room early.</p>
<p>Developers were expected to integrate frequently — often daily — into a shared codebase. The goal was simple: prevent teams from drifting into parallel worlds and discovering too late that their work didn’t fit together.</p>
<p>That solved a real problem.</p>
<p>Frequent integration forced teams to confront overlap, collisions, ambiguity, and unintended coupling while the cost of correction was still low. But CI did something subtler and arguably more important: it reinforced the team while development was still happening.</p>
<p>Developers didn’t merely discover each other’s work after the fact. They had to continuously adapt to one another’s choices, assumptions, and interpretations of the system in the moment. That pressure was not a flaw. It was the point.</p>
<p>This is how engineering sharpens itself — not by letting everyone disappear into isolated implementation tunnels and comparing answers at the end, but by shaping and correcting each other during the act of construction. Real engineering teams do not just divide work. They reinforce shared understanding.</p>
<p>Original CI made integration a living team concern rather than a delayed administrative event.</p>
<p>That was healthy engineering.</p>
<hr />
<h2>What Continuous Delivery Originally Solved</h2>
<p>Continuous Delivery was aimed at a different concern than CI.</p>
<p>Not integration itself, but the path from integrated code to running software.</p>
<p>And to be fair, that was not a fake concern.</p>
<p>But it also was not universally the disaster modern delivery culture sometimes pretends it was.</p>
<p>In many Java systems, deployment was already fairly boring. An application server was stopped, a WAR or EAR was replaced, the instance was restarted, and the system was verified. That was not always elegant, but neither was it some fundamental engineering crisis.</p>
<p>So the real value of CD was not that it magically solved an impossible deployment problem.</p>
<p>Its promise was narrower and more practical: to make the release path more repeatable, more standardized, less person-dependent, and easier to execute consistently across teams and environments.</p>
<p>That is a reasonable goal.</p>
<p>And in some environments, it becomes more than reasonable — it becomes necessary.</p>
<p>Once deployments span multiple machines, rolling restarts, clustered services, or orchestrated server fleets, manual deployment stops being merely inconvenient and starts becoming operationally impractical. At that point, automation is not theater. It is simply the sane way to move software safely and consistently.</p>
<p>That is where CD has real value.</p>
<p>But not all release friction was technical.</p>
<p>In many organizations, a significant part of the “deployment problem” came from the surrounding structure itself: separate infrastructure departments, ticket-driven handoffs, release scheduling rituals, and operational processes that turned even simple deployments into expensive coordination exercises.</p>
<p>That pain was real — but it is important to name it accurately.</p>
<p>Often, the difficulty was not in replacing the software.</p>
<p>It was in navigating the organization around it.</p>
<p>Modern delivery automation did remove a great deal of that friction.</p>
<p>But in many cases, the underlying pattern did not disappear. It simply moved.</p>
<p>Where infrastructure teams once controlled servers and release windows, platform and pipeline teams now increasingly control the mechanics of delivery itself. The form changed. The separation often did not.</p>
<p>And that matters more than it first appears.</p>
<p>Because once the release path is defined by people who do not carry the semantic or business consequences of the software, the pipeline can quietly become a surrogate for ownership.</p>
<p>That is where the trade-offs began.</p>
<hr />
<h2>Where It Started to Go Wrong</h2>
<p>The issue is not that CI/CD solved fake problems. The issue is that much of the industry adopted the tooling and rituals while quietly abandoning the engineering assumptions that gave those practices their value.</p>
<p>Once that happened, CI/CD stopped reinforcing good engineering and started compensating for weak engineering instead.</p>
<p>A lot of what now passes for “CI” is no longer continuous integration.</p>
<p>It is deferred reconciliation.</p>
<p>Developers work in isolation on long-lived branches, treating the merge as the first serious moment of contact with the rest of the system. The pain that CI was designed to expose early is now allowed to accumulate until the branch is “ready.” The pipeline creates the illusion of discipline, but the underlying practice has shifted.</p>
<p>The old model forced developers to adapt to each other continuously.</p>
<p>The modern branch-heavy model lets them adapt only at the end.</p>
<p>What makes this regression more serious is that it did not happen accidentally. In many teams, CI was gradually reshaped to serve a different goal: continuous deployment of independently developed changes.</p>
<p>That sounds efficient, but it came with a structural trade-off.</p>
<p>In order to deploy “each feature” continuously, work first had to become isolatable. That pushed development toward branch-based workflows, delayed integration, and feature-level thinking. The unit of progress stopped being the continuously evolving shared system and became the individually shippable change.</p>
<p>And once that shift happened, CI changed with it.</p>
<p>What used to be immediate feedback on a real check-in against the shared codebase became a staged validation process around isolated work. The branch is tested. The pull request is reviewed. The pipeline is green. But the fully integrated system — in motion, under changing conditions, with multiple real changes meeting each other — is often encountered meaningfully much later.</p>
<p>That is not a small process adjustment.</p>
<p>It is a relocation of feedback.</p>
<p>And when feedback moves later, risk moves with it.</p>
<hr />
<h2>The Social Cost Nobody Mentions</h2>
<p>This changes far more than code flow.</p>
<p>It changes the social structure of development itself.</p>
<p>Instead of reinforcing each other during construction, developers increasingly become delayed reviewers, test-runners, or approval gates. The shared act of building gives way to a serialized process of isolated work followed by late validation.</p>
<p>That may still produce working software, but it does not produce the same quality of team thinking.</p>
<p>The old model created friction early, while people were still shaping the solution together. The newer model often postpones that friction until after mental commitment has set in. At that point, integration becomes negotiation rather than collaboration.</p>
<p>That is a significant regression.</p>
<p>A team stops behaving like a team.</p>
<p>It starts behaving like a collection of individuals working in parallel and negotiating reality afterward.</p>
<p>And once that happens, the pipeline begins to replace the team as the thing that “validates” software.</p>
<p>That is a dangerous substitution.</p>
<p>Because a team can challenge assumptions, surface ambiguity, and expose misunderstandings while the system is still being shaped.</p>
<p>A pipeline cannot.</p>
<p>It can only tell you whether a predefined process passed.</p>
<p>It cannot tell you whether the software still makes sense.</p>
<hr />
<h2>The Illusion of Delivery Maturity</h2>
<p>Continuous Delivery has suffered a parallel fate.</p>
<p>In theory, CD makes deployments safe by making them repeatable. In practice, many teams achieve “safety” by surrounding brittle systems with ever-growing layers of process, abstraction, and automation. The application becomes harder to understand. The deployment model grows more complex. And the pipeline swells to absorb complexity that should never have existed in the software itself.</p>
<p>Eventually, the release system becomes more elaborate than the software it delivers.</p>
<p>This raises an uncomfortable question:</p>
<p><strong>Are we automating a healthy system, or are we automating around an unhealthy one?</strong></p>
<p>If deployment is difficult, brittle, or mysterious, there are usually only two explanations:</p>
<ul>
<li><p>The system genuinely operates in a complex environment.</p>
</li>
<li><p>The software was never designed with operability in mind.</p>
</li>
</ul>
<p>The first is sometimes unavoidable.</p>
<p>The second is too often ignored.</p>
<hr />
<h2>Good Deployment Begins in Design</h2>
<p>Much deployment pain is treated as the inevitable cost of “modern systems.” In many business applications, that pain is not inevitable — it is designed in.</p>
<p>A well-engineered application should be deployable because it was built to be deployable: operational state kept where it belongs, environment-specific behavior minimized, startup made deterministic, migrations treated as part of the lifecycle, and only what truly needs to vary externalized.</p>
<p>When deployment is simple by design, the need for pipeline heroics drops dramatically.</p>
<p>Automation then becomes what it was meant to be: a way to remove repetition and error from a sound process — not a bandage for an unsound one.</p>
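<p>As a minimal Java sketch of that posture (hypothetical names: <code>AppSettings</code>, <code>DATABASE_URL</code>), only the value that genuinely varies per environment is externalized, and startup fails fast and deterministically when it is missing:</p>

```java
// Configuration sketch: hypothetical AppSettings, with an assumed
// DATABASE_URL environment variable. Only what genuinely varies per
// environment is externalized; everything else is a fixed default, and
// a missing value fails at startup instead of surprising at runtime.

import java.util.Map;

final class AppSettings {
    final String databaseUrl; // varies per environment
    final int httpPort;       // deliberately fixed: identical everywhere

    private AppSettings(String databaseUrl) {
        this.databaseUrl = databaseUrl;
        this.httpPort = 8080;
    }

    static AppSettings load(Map<String, String> env) {
        String url = env.get("DATABASE_URL");
        if (url == null || url.isBlank()) {
            // Deterministic startup: refuse to boot half-configured.
            throw new IllegalStateException("DATABASE_URL must be set");
        }
        return new AppSettings(url);
    }
}
```

<p>The point is not this specific class, but the discipline: a configuration surface kept deliberately small, and verified at startup rather than discovered in production.</p>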
<hr />
<h2>The Dangerous Slide into “Production as Test Environment”</h2>
<p>This is where the earlier shift becomes dangerous.</p>
<p>When integration is no longer happening continuously during development, reality does not disappear.</p>
<p>It simply waits.</p>
<p>And increasingly, that reality is encountered much later — often in environments close to or inside production.</p>
<p>This is why so many modern delivery models quietly drift toward using production as their final validation environment. Not because teams explicitly decide to “test in production,” but because that is where the integrated system as a whole first meets real, changing conditions in any meaningful way.</p>
<p>That is a very different feedback model from original CI.</p>
<p>Original CI gave teams rapid feedback on check-ins against a shared and continuously evolving codebase. Modern branch-heavy CI/CD often gives rapid feedback on isolated changes, then relies on deployment frequency to surface what only the integrated whole can reveal.</p>
<p>That is not the same kind of safety.</p>
<p>It is simply a different place to discover reality.</p>
<p>Smaller and faster deployments are often presented as inherently safer.</p>
<p>But that is only true if one quietly assumes that the meaning and impact of a change are already well understood.</p>
<p>In practice, that is often exactly what is not true.</p>
<p>A smaller deployment unit may reduce rollback scope or make blame attribution easier, but that is not the same as reducing actual engineering risk. If anything, the opposite can happen: the change is seen by fewer people, discussed less deeply, and integrated less continuously before it reaches production.</p>
<p>That does not reduce uncertainty.</p>
<p>It merely packages uncertainty into smaller increments.</p>
<p>And when production changes multiple times per day, stability itself begins to shrink. The system is only as stable as the scenarios already captured in the automated tests — tests which are themselves usually adapted to the most recent expected path into production.</p>
<p>That creates a dangerous illusion of control.</p>
<p>The software appears validated, but only within the shrinking boundary of what was recently anticipated.</p>
<hr />
<h2>The Semantic Risk Pipelines Cannot See</h2>
<p>More importantly, the true impact of a change is often not visible from the code itself.</p>
<p>A seemingly trivial modification for a developer can carry major domain consequences. And a technically substantial change can sometimes be domain-trivial. That asymmetry matters.</p>
<p>Because developers are not domain experts.</p>
<p>They can understand the implementation, but they cannot reliably infer the full business meaning of a change from code alone — not without sustained discussion and feedback from people who actually understand the domain.</p>
<p>And the most dangerous part is that this is not predictable.</p>
<p>It is not true that every change requires deep domain validation.</p>
<p>But it is also not reliably obvious which changes do.</p>
<p>That is exactly why semantic risk cannot be reduced to diff size, deployment frequency, or pipeline confidence.</p>
<p>Many of the hardest failures are not technical crashes or exceptions. They are semantic failures: the system behaves exactly as the code and tests dictate, yet wrongly according to the business.</p>
<p>That is where domain experts matter.</p>
<p>And no amount of deployment frequency changes that fact.</p>
<hr />
<h2>Human Validation Is Not the Enemy of Engineering</h2>
<p>One of the stranger modern assumptions is that removing human judgment from the release path is always progress.</p>
<p>It is not.</p>
<p>There is a crucial difference between automating repeatable mechanics and eliminating deliberate validation. Those should never be conflated.</p>
<p>A strong delivery process should automate the mechanical parts — build, package, verify, deploy to controlled environments, reproduce release steps consistently.</p>
<p>That is sensible.</p>
<p>But whether a business-critical change should be exposed to real users is not always a purely technical question. In many systems, it is also a domain question.</p>
<p>Human validation is not a sign of immaturity. Sometimes it is the last remaining sign that someone still understands the difference between technical correctness and business correctness.</p>
<p>That distinction is too often lost.</p>
<hr />
<h2>Application Quality Is Not Generated By Tooling</h2>
<p>Part of the problem is that “quality” itself has increasingly been redefined through the lens of tooling.</p>
<p>In many organizations, delivery practices are no longer primarily shaped by engineers with deep ownership of the software and its domain. They are shaped by process-specialized roles, platform teams, and tooling consultants whose authority often comes from familiarity with delivery systems rather than from responsibility for the software’s behavior, design, or business consequence.</p>
<p>That changes what gets optimized.</p>
<p>Quality slowly stops meaning clarity, simplicity, robustness, and domain correctness.</p>
<p>It starts meaning compliance: green pipelines, approved stages, scan completion, branch policy adherence, and process conformance.</p>
<p>Those may be useful signals.</p>
<p>But useful signals can become dangerous substitutes.</p>
<p>And that is how problem analysis gets replaced by cargo cults.</p>
<hr />
<h2>The Real Regression: Loss of Ownership</h2>
<p>Underneath all of this lies a deeper problem than pipelines or deployment buttons.</p>
<p>The quiet regression is the loss of engineering ownership.</p>
<p>Modern delivery culture has made it increasingly possible for developers to produce deployable software without truly understanding:</p>
<ul>
<li><p>how the system runs</p>
</li>
<li><p>how it is released</p>
</li>
<li><p>how it evolves</p>
</li>
<li><p>how it fails</p>
</li>
<li><p>how it behaves in production</p>
</li>
<li><p>how it fits the business domain as a whole</p>
</li>
</ul>
<p>That is not progress.</p>
<p>That is separation from consequence.</p>
<p>Once that separation occurs, the pipeline stops being a tool.</p>
<p>It becomes a substitute for engineering responsibility.</p>
<p>Pipelines can tell you whether something passed the process.</p>
<p>They cannot tell you whether the software is truly understood.</p>
<hr />
<h2>What Healthy CI/CD Should Actually Look Like</h2>
<p>Good CI/CD is not about maximum automation.</p>
<p>It is about preserving engineering discipline while reducing mechanical waste.</p>
<p>That usually looks far less glamorous than modern tooling culture suggests:</p>
<ul>
<li><p>Developers integrate continuously into a shared mainline</p>
</li>
<li><p>Incomplete work is handled through discipline and design, not default branch isolation</p>
</li>
<li><p>Build and verification are automated and fast</p>
</li>
<li><p>Deployment to lower environments is repeatable and low-friction</p>
</li>
<li><p>Acceptance happens in a controlled way</p>
</li>
<li><p>Production deployment is simple enough to trust</p>
</li>
<li><p>Human validation exists where domain risk justifies it</p>
</li>
<li><p>The release path is designed to support ownership, not replace it</p>
</li>
</ul>
<p>That is not anti-automation.</p>
<p>It is anti-theater.</p>
<p>And that distinction matters.</p>
<hr />
<h2>The Real Question</h2>
<p>CI/CD is not really a tooling question.</p>
<p>It is a quality question.</p>
<p>The real issue is not whether a team has pipelines, feature flags, deployment jobs, or environment promotion stages.</p>
<p>The real issue is this:</p>
<p><strong>Does the delivery process reflect a well-engineered system and a team that understands it — or is it compensating for the absence of both?</strong></p>
<p>That is the question most teams avoid.</p>
<p>Because if the honest answer is the second one, then the pipeline is not a sign of maturity.</p>
<p>It is camouflage.</p>
<p>And that may be the most uncomfortable truth in modern software delivery:</p>
<p>sometimes what looks like engineering progress is really just process growth around declining engineering depth.</p>
<p>CI/CD used as a substitute for the very discipline it was supposed to support.</p>
<p>And once that happens, delivery stops being an expression of engineering quality.<br />It becomes a process for moving misunderstood software into production more efficiently.</p>
]]></content:encoded></item><item><title><![CDATA[Software Testing: You’re Probably Doing It Wrong]]></title><description><![CDATA[Software testing has become one of the most ritualized practices in modern development.
That is not because testing is unimportant. Quite the opposite.
Testing matters.
But in many teams, testing has ]]></description><link>https://blog.leonpennings.com/software-testing-you-re-probably-doing-it-wrong</link><guid isPermaLink="true">https://blog.leonpennings.com/software-testing-you-re-probably-doing-it-wrong</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[Software Testing]]></category><category><![CDATA[Java]]></category><category><![CDATA[software development]]></category><dc:creator><![CDATA[Leon Pennings]]></dc:creator><pubDate>Thu, 26 Mar 2026 08:23:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6909c071175a29281d26fa0e/d4ca5394-46fa-4d43-926e-3e19805386f7.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Software testing has become one of the most ritualized practices in modern development.</p>
<p>That is not because testing is unimportant. Quite the opposite.</p>
<p>Testing matters.</p>
<p>But in many teams, testing has quietly expanded beyond its actual role. It is no longer treated as a tool for verifying software behavior. It is increasingly treated as a proxy for understanding, a proxy for design, and even a proxy for quality itself.</p>
<p>And that is where the problem begins.</p>
<p>Because testing can verify behavior.<br />But it cannot replace engineering.</p>
<hr />
<h2><strong>Testing in Software Is a Verification Discipline</strong></h2>
<p>At its core, testing in software has a very specific role:</p>
<blockquote>
<p><strong>to verify whether a system behaves acceptably under certain conditions.</strong></p>
</blockquote>
<p>That is valuable. Necessary, even.</p>
<p>A good test can help answer questions like:</p>
<ul>
<li><p>Does this behavior still work?</p>
</li>
<li><p>Does this input still lead to the expected output?</p>
</li>
<li><p>Did this change introduce a regression?</p>
</li>
</ul>
<p>That is where testing is strong.</p>
<p>But notice what testing does <strong>not</strong> answer:</p>
<ul>
<li><p>Is the design coherent?</p>
</li>
<li><p>Is the architecture proportional to the problem?</p>
</li>
<li><p>Is the model a good representation of the domain?</p>
</li>
<li><p>Is this implementation economical to evolve?</p>
</li>
</ul>
<p>Those are engineering questions.</p>
<p>And when teams start treating test suites as if they answer them, behavioral verification gets confused with software quality itself.</p>
<p>That is a costly mistake.</p>
<hr />
<h2><strong>The Math Test Problem</strong></h2>
<p>A great deal of modern team testing resembles cheating on a math exam.</p>
<p>Imagine students defining the exam questions during class, together with the teacher, while learning the material. By the time the exam arrives, the goal is no longer to understand the mathematics. The goal is to reproduce the answers that were already agreed upon.</p>
<p>Something very similar happens in software teams.</p>
<p>During refinement, development, or collaborative scenario-writing sessions, expected behavior is often defined in detail in advance. Tests are written, scenarios are formalized, and the team aligns around them.</p>
<p>In theory, this sounds excellent.</p>
<p>In practice, it introduces a subtle distortion:</p>
<blockquote>
<p><strong>the implementation target shifts from understanding the business domain to passing the agreed test scenarios.</strong></p>
</blockquote>
<p>That is a very different goal.</p>
<p>The result is not necessarily a bad system. But it is often a system optimized for compliance rather than understanding.</p>
<p>And the danger is obvious:</p>
<blockquote>
<p><strong>how often is the first interpretation of a business need fully correct?</strong></p>
</blockquote>
<p>If the test scenarios are based on incomplete understanding, then all the rigor in the world only helps build the wrong thing more reliably.</p>
<hr />
<h2><strong>Verification Is Not Validation</strong></h2>
<p>This is the distinction many teams lose.</p>
<p>Testing is very good at <strong>verification</strong>:</p>
<ul>
<li><p>Did the implementation behave as intended?</p>
</li>
<li><p>Does the system still behave as expected?</p>
</li>
</ul>
<p>But verification is not the same as <strong>validation</strong>:</p>
<ul>
<li><p>Was the right thing built?</p>
</li>
<li><p>Is this actually a fitting solution for the domain?</p>
</li>
</ul>
<p>A system can satisfy every agreed scenario and still be fundamentally wrong.</p>
<p>It can behave correctly while being poorly modeled.<br />It can produce the expected output while being overcomplicated.<br />It can pass every acceptance test while solving the wrong problem in the wrong way.</p>
<p>In other words:</p>
<blockquote>
<p><strong>A passing test suite proves behavioral agreement—not solution fitness.</strong></p>
</blockquote>
<p>And that distinction matters far more than many teams admit.</p>
<hr />
<h2><strong>The Ferrari in the Field</strong></h2>
<p>A Ferrari F40 can absolutely move across a field.</p>
<p>It can produce motion. It can get from one side to the other. It can, in the most literal sense, “do the job.”</p>
<p>That does not make it a tractor.</p>
<p>The same is true in software.</p>
<p>A system can satisfy all functional expectations and still be the wrong machine for the domain. It can be too expensive to change, too fragile to extend, too over-engineered for the actual need, or too structurally rigid to survive evolving business requirements.</p>
<p>Testing does not expose that.</p>
<p>Because testing can tell whether the machine moves.</p>
<p>It cannot tell whether it is the right machine.</p>
<p>And that is not a trivial distinction.<br />That is the distinction between <strong>working software</strong> and <strong>good engineering</strong>.</p>
<hr />
<h2><strong>When Tests Stop Following Behavior and Start Following Structure</strong></h2>
<p>This is where testing often becomes actively harmful.</p>
<p>If testing is a behavioral verification discipline, then it should limit itself to verifying behavior.</p>
<p>But many modern testing practices go deeper than that.</p>
<p>They start testing:</p>
<ul>
<li><p>local call structures</p>
</li>
<li><p>internal collaborations</p>
</li>
<li><p>class-level decomposition</p>
</li>
<li><p>implementation fragments in isolation</p>
</li>
</ul>
<p>At that point, the tests are no longer verifying the system in any meaningful way.</p>
<p>They are verifying the current shape of the code.</p>
<p>That is not the same thing.</p>
<p>And once that happens, the test suite stops protecting change and starts resisting it.</p>
<blockquote>
<p><strong>The moment a test depends on how the behavior is achieved instead of what behavior is observed, it becomes a brake on refactoring.</strong></p>
</blockquote>
<p>That is one of the most under-discussed quality problems in software teams.</p>
<p>Because now every structural improvement becomes expensive:</p>
<ul>
<li><p>rename a collaborator → tests break</p>
</li>
<li><p>merge responsibilities → tests break</p>
</li>
<li><p>simplify orchestration → tests break</p>
</li>
<li><p>move logic to a better abstraction → tests break</p>
</li>
</ul>
<p>Not because behavior changed.<br />But because the test suite was never really about behavior to begin with.</p>
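<p>To make the contrast concrete, here is a minimal Java sketch (a hypothetical <code>Order</code> example, not tied to any real system): the test pins down only observable behavior, so renaming, merging, or restructuring collaborators cannot break it.</p>

```java
// Behavior-level test sketch: a hypothetical Order with a volume discount.
// The test asserts only what is observable from the outside, so the
// internal decomposition can be refactored freely without breaking it.

import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

class Order {
    private final List<BigDecimal> lines = new ArrayList<>();

    void addLine(BigDecimal amount) {
        lines.add(amount);
    }

    // Observable behavior: orders above 100 get a 10% discount.
    // Whether this is computed inline, delegated to a pricing strategy,
    // or split across collaborators is invisible to the test below.
    BigDecimal total() {
        BigDecimal sum = lines.stream().reduce(BigDecimal.ZERO, BigDecimal::add);
        return sum.compareTo(new BigDecimal("100")) > 0
                ? sum.multiply(new BigDecimal("0.9"))
                : sum;
    }
}

class OrderBehaviorTest {
    // No mocks, no knowledge of internal call structure.
    static void discountAppliesAboveThreshold() {
        Order order = new Order();
        order.addLine(new BigDecimal("80"));
        order.addLine(new BigDecimal("40"));
        if (order.total().compareTo(new BigDecimal("108")) != 0) {
            throw new AssertionError("expected discounted total of 108");
        }
    }
}
```

<p>A test written this way survives any refactoring that preserves the discount behavior, which is exactly the property a structure-coupled mock test gives up.</p>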
<hr />
<h2><strong>Why Isolated Class Testing Often Misses the Point</strong></h2>
<p>One of the clearest examples of this problem is isolated class testing.</p>
<p>A class exists in code. Therefore, many teams assume it should be testable independently.</p>
<p>But a technical unit is not automatically a meaningful behavioral unit.</p>
<p>That assumption is rarely challenged.</p>
<p>Take something like a PDF information extractor.</p>
<p>That behavior does not meaningfully exist in a vacuum. It depends on:</p>
<ul>
<li><p>parsing logic</p>
</li>
<li><p>normalization logic</p>
</li>
<li><p>extraction rules</p>
</li>
<li><p>object interpretation</p>
</li>
<li><p>domain-level decisions</p>
</li>
</ul>
<p>Yet what often happens?</p>
<p>A single class gets tested in isolation.<br />Its collaborators are mocked.<br />Its environment is simulated.<br />Its context is stripped away.</p>
<p>Now the test no longer asks:</p>
<blockquote>
<p>“Can the system reliably extract useful information from PDFs?”</p>
</blockquote>
<p>Instead, it asks something far weaker:</p>
<blockquote>
<p>“Does this one implementation fragment behave under synthetic scaffolding?”</p>
</blockquote>
<p>That is not meaningful verification.</p>
<p>That is structural rehearsal.</p>
<p>And the cost is not just conceptual—it is practical.</p>
<p>Because now the test suite is coupled to a local decomposition that may not even survive the next decent refactor.</p>
<p>We end up with a test suite that passes perfectly even if the integration between those fragments is fundamentally broken—because we’ve tested the components, but ignored the composition.</p>
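<p>A minimal Java sketch of the alternative (a hypothetical <code>InvoiceExtractor</code>, with plain text standing in for already-parsed PDF content): the composed behavior is exercised through one public entry point, so normalization and extraction are verified together rather than as mocked fragments.</p>

```java
// Composition-level test sketch: a hypothetical InvoiceExtractor, with
// plain text standing in for parsed PDF content. Normalization and
// extraction are verified together through one public method, instead
// of each fragment being tested behind mocks.

import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class InvoiceExtractor {
    private static final Pattern INVOICE_NUMBER = Pattern.compile(
            "invoice\\s*(?:no|number)[:.]?\\s*(\\S+)", Pattern.CASE_INSENSITIVE);

    // The observable question: given raw document text, what is the
    // invoice number? Internally this normalizes whitespace and then
    // applies an extraction rule; the test never needs to know that
    // split exists.
    Optional<String> invoiceNumber(String rawText) {
        String normalized = rawText.replaceAll("\\s+", " ").trim();
        Matcher m = INVOICE_NUMBER.matcher(normalized);
        return m.find() ? Optional.of(m.group(1)) : Optional.empty();
    }
}
```

<p>If the normalization step and the extraction rule stop fitting together, this test fails. A suite of mocked fragment tests would stay green.</p>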
<hr />
<h2><strong>Coverage Is Not Confidence</strong></h2>
<p>Test coverage is another example of verification ritual turning into proxy engineering.</p>
<p>Coverage has become a metric in its own right.</p>
<p>Teams report it. Managers ask for it. Pipelines display it as if it were a signal of quality.</p>
<p>But coverage says only one thing:</p>
<blockquote>
<p><strong>this code was executed while a test ran.</strong></p>
</blockquote>
<p>That’s it.</p>
<p>It does <strong>not</strong> tell:</p>
<ul>
<li><p>whether the test is meaningful</p>
</li>
<li><p>whether important behavior is protected</p>
</li>
<li><p>whether the assertions matter</p>
</li>
<li><p>whether the design is safe to evolve</p>
</li>
</ul>
<p>And yet teams optimize for it anyway.</p>
<p>That leads to the usual absurdities:</p>
<ul>
<li><p>getter/setter tests</p>
</li>
<li><p>trivial constructor tests</p>
</li>
<li><p>one-line branch inflation</p>
</li>
<li><p>synthetic assertions written only to satisfy the metric</p>
</li>
</ul>
<p>This is not quality.<br />It is administrative theater.</p>
<blockquote>
<p><strong>Coverage is a measure of execution, not a measure of insight.</strong></p>
</blockquote>
<p>And once a team starts chasing the number instead of the confidence, the metric has already failed.</p>
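<p>A small Java illustration (a hypothetical <code>Slugifier</code> example): both tests below execute every line of the method and therefore produce identical coverage, but only one of them can ever fail.</p>

```java
// Coverage-theater sketch: a hypothetical Slugifier. Both tests below
// execute every line of slugify(), so a coverage tool reports them as
// equivalent. Only the second one actually protects anything.

class Slugifier {
    static String slugify(String title) {
        return title.trim()
                .toLowerCase()
                .replaceAll("[^a-z0-9]+", "-")
                .replaceAll("(^-)|(-$)", "");
    }
}

class SlugifierTests {
    // 100% line coverage, zero protection: nothing is asserted.
    static void executesButProvesNothing() {
        Slugifier.slugify(" Hello, World! ");
    }

    // Identical coverage, but this one encodes actual expectations.
    static void assertsTheBehavior() {
        if (!Slugifier.slugify(" Hello, World! ").equals("hello-world")) {
            throw new AssertionError("punctuation and casing not normalized");
        }
        if (!Slugifier.slugify("already-clean").equals("already-clean")) {
            throw new AssertionError("clean input should pass through");
        }
    }
}
```

<p>A coverage report cannot distinguish between these two tests. A reader can, instantly. That gap is the whole problem with the metric.</p>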
<hr />
<h2><strong>Testing Is Not a Design Discipline</strong></h2>
<p>This may be the most important point of all.</p>
<p>Testing can verify whether software behaves as expected.</p>
<p>It cannot tell whether the software is well-designed.</p>
<p>It cannot tell whether:</p>
<ul>
<li><p>the abstraction boundaries are good</p>
</li>
<li><p>the model is coherent</p>
</li>
<li><p>the architecture is sustainable</p>
</li>
<li><p>the implementation cost is proportional to the value</p>
</li>
<li><p>future stories will remain easy to add</p>
</li>
</ul>
<p>Those are not test outcomes.</p>
<p>Those are design and engineering concerns.</p>
<p>And if a team replaces those concerns with:</p>
<ul>
<li><p>framework templates</p>
</li>
<li><p>scenario scripts</p>
</li>
<li><p>coverage thresholds</p>
</li>
<li><p>pipeline greenness</p>
</li>
</ul>
<p>…then better engineering is not happening.</p>
<p>Judgment is simply being outsourced to artifacts.</p>
<p>That may feel safer.<br />It may even look more rigorous.</p>
<p>But it is still a substitute for actual thought.</p>
<hr />
<h2><strong>What Testing Is Actually For</strong></h2>
<p>Testing does have a real and valuable place.</p>
<p>Used well, testing is for:</p>
<ul>
<li><p>verifying externally observable behavior</p>
</li>
<li><p>protecting against meaningful regressions</p>
</li>
<li><p>increasing confidence during change</p>
</li>
<li><p>supporting safe evolution of a system</p>
</li>
</ul>
<p>That is already enough.</p>
<p>Testing does <strong>not</strong> need to become:</p>
<ul>
<li><p>a replacement for design</p>
</li>
<li><p>a replacement for domain understanding</p>
</li>
<li><p>a replacement for architecture</p>
</li>
<li><p>a replacement for engineering judgment</p>
</li>
</ul>
<p>The moment testing is asked to do those things, it becomes overloaded.</p>
<p>And overloaded tools do not become more powerful.</p>
<p>They become more misleading.</p>
<hr />
<h2><strong>The Cost of a Misaligned System</strong></h2>
<p>A system does not need to be broken to be expensive.</p>
<p>It only needs to be misaligned.</p>
<p>That is one of the most dangerous illusions in software development: if the system behaves correctly, it is easy to assume the engineering must also be sound.</p>
<p>But a system can pass tests, satisfy stories, and still be fundamentally costly in all the places that matter over time.</p>
<p>It can be:</p>
<ul>
<li><p>too expensive to extend</p>
</li>
<li><p>too brittle to refactor</p>
</li>
<li><p>too complex to reason about</p>
</li>
<li><p>too rigid to absorb new requirements cleanly</p>
</li>
</ul>
<p>This is the software equivalent of using a Ferrari F40 to plow a field.</p>
<p>The machine moves.<br />The task gets completed.<br />But every future change becomes more expensive than it should be.</p>
<p>That cost rarely appears in the first implementation. It appears later:</p>
<ul>
<li><p>in slower feature development</p>
</li>
<li><p>in rising maintenance effort</p>
</li>
<li><p>in increasingly fragile changes</p>
</li>
<li><p>in the growing difficulty of correcting earlier assumptions</p>
</li>
</ul>
<p>And this is precisely where testing, on its own, offers very little protection.</p>
<p>Because testing can confirm that a system still behaves the same.</p>
<p>It cannot tell whether that behavior is now trapped inside the wrong machine.</p>
<p>That is an engineering problem.</p>
<p>And when that distinction is missed, software quality gets reduced to present-day correctness while long-term adaptability quietly deteriorates.</p>
<hr />
<h2><strong>Conclusion</strong></h2>
<p>Software engineering has become increasingly comfortable with proxies.</p>
<p>Metrics are used as substitutes for judgment.<br />Artifacts are used as substitutes for understanding.<br />Test suites are used as substitutes for design confidence.</p>
<p>And in doing so, many teams create the appearance of rigor while quietly undermining the adaptability of the system itself.</p>
<p>Testing is valuable.<br />But only when it stays in its lane.</p>
<blockquote>
<p><strong>Testing should verify software behavior. It should not define the software, freeze its structure, or pretend to certify its design.</strong></p>
</blockquote>
<p>Because the moment verification starts replacing engineering, pipelines may still signal green — but better systems do not follow.</p>
<p>Ferraris get built where tractors would have been enough.</p>
]]></content:encoded></item><item><title><![CDATA[The Mirror and the Machine: Reclaiming Scrum Refinement in the Age of AI.]]></title><description><![CDATA[Agile was never meant to be a delivery machine. It was meant to be a learning system.
At its core, Agile shortens the feedback loop between business intent and working software—to expose ideas early, ]]></description><link>https://blog.leonpennings.com/the-mirror-and-the-machine-reclaiming-scrum-refinement-in-the-age-of-ai</link><guid isPermaLink="true">https://blog.leonpennings.com/the-mirror-and-the-machine-reclaiming-scrum-refinement-in-the-age-of-ai</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[Scrum]]></category><category><![CDATA[AI]]></category><category><![CDATA[Java]]></category><category><![CDATA[software development]]></category><dc:creator><![CDATA[Leon Pennings]]></dc:creator><pubDate>Tue, 24 Mar 2026 07:49:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6909c071175a29281d26fa0e/87217837-83ca-4568-953c-cf5bd8ca0be9.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Agile was never meant to be a delivery machine. It was meant to be a learning system.</p>
<p>At its core, Agile shortens the feedback loop between business intent and working software—to expose ideas early, validate them quickly, and adapt continuously. The goal was never just to build software, but to <em>discover what the business actually needs</em> by building it.</p>
<p>Somewhere along the way, many teams drifted. User stories became work orders instead of expressions of intent. Refinement became premature implementation design instead of shared understanding. And the feedback loop quietly stretched back to the end of the sprint.</p>
<hr />
<h2>The Problem with User Stories as Work Orders</h2>
<p>A good user story expresses intent: What is the user trying to achieve, and why does it matter?</p>
<p>In practice, stories too often look like predefined solutions:</p>
<ul>
<li><p>“I want a button in the top right corner to search.”</p>
</li>
<li><p>“Add a ‘costs’ field to each order.”</p>
</li>
</ul>
<p>These constrain the solution space from the start. Better alternatives go unexplored. The system quietly accumulates unnecessary complexity.</p>
<p>What’s missing is the actual problem: Is this about searching, or about finding something quickly? Is this about storing costs, or about understanding profitability?</p>
<p>Without that clarity, we aren’t building solutions—we’re implementing assumptions.</p>
<hr />
<h2>Refinement as Understanding, Not Design</h2>
<p>Refinement is where the misunderstanding should be corrected. Yet too many sessions devolve into:</p>
<ul>
<li><p>“Where should the button go?”</p>
</li>
<li><p>“What fields do we need?”</p>
</li>
<li><p>“How do we implement this?”</p>
</li>
</ul>
<p>That is early design on incomplete information.</p>
<p>Real refinement focuses first on:</p>
<ul>
<li><p><strong>Intent</strong>: What is the user truly trying to achieve?</p>
</li>
<li><p><strong>Context</strong>: When and why does this happen?</p>
</li>
<li><p><strong>Available information</strong>: What does the user already know?</p>
</li>
<li><p><strong>Problem type</strong>: Is this a lookup, exploration, or navigation task?</p>
</li>
</ul>
<p>Only after the problem is clearly understood should solution ideas emerge.</p>
<hr />
<h2>A Practical Example: When “Search” Isn’t Search</h2>
<p>In an ETL context, a functional manager requests a search feature. On the surface it sounds reasonable. Dig deeper, though, and the real need surfaces:</p>
<p>The manager is often asked by a colleague (or for their own reference) to pull up a <em>specific case</em>. They have only a vague description of the object type and when it occurred. The goal isn’t broad exploration—it’s <strong>identification and direct navigation</strong>.</p>
<p>Instead of a generic search system, a far simpler solution appears:</p>
<ul>
<li><p>Use meaningful, relatable identifiers (a combination of object type and unique ID).</p>
</li>
<li><p>Enable direct navigation via those identifiers.</p>
</li>
<li><p>Add contextual links to related cases.</p>
</li>
</ul>
<p>The result is simpler, faster, and far better aligned with actual usage.</p>
<p>This is refinement at its best: turning vague requests into precise problem definitions.</p>
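<p>A minimal sketch of that identifier-based approach might look like the following. All names here (<code>CaseReference</code>, the <code>"DELIVERY"</code> type) are illustrative, not taken from a real system:</p>

```java
// Illustrative sketch: a meaningful, relatable identifier combining
// object type and unique ID, with parsing for direct navigation.
final class CaseReference {
    private final String objectType;
    private final long id;

    CaseReference(String objectType, long id) {
        if (objectType == null || objectType.isBlank()) {
            throw new IllegalArgumentException("objectType is required");
        }
        this.objectType = objectType.toUpperCase();
        this.id = id;
    }

    /** Human-relatable form, e.g. "DELIVERY-42", usable in a URL or a phone call. */
    String display() {
        return objectType + "-" + id;
    }

    /** Parses "DELIVERY-42" back into a reference for direct navigation. */
    static CaseReference parse(String text) {
        int dash = text.lastIndexOf('-');
        if (dash < 1) {
            throw new IllegalArgumentException("Expected <TYPE>-<ID>: " + text);
        }
        return new CaseReference(text.substring(0, dash),
                                 Long.parseLong(text.substring(dash + 1)));
    }

    String objectType() { return objectType; }
    long id() { return id; }
}
```

<p>With something like this, a colleague can quote “DELIVERY-42” over the phone and land directly on the case — no generic search engine required.</p>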
<hr />
<h2>Mirror Pieces: The First Implementation Is Not the Product</h2>
<p>Even strong refinement leaves understanding theoretical until it meets reality. That’s where the first implementation enters—not as a finished deliverable, but as a <strong>mirror piece</strong>.</p>
<p>A mirror piece is:</p>
<ul>
<li><p>A minimal, functional slice of the system.</p>
</li>
<li><p>Built specifically to reflect business intent back to stakeholders.</p>
</li>
<li><p>Deliberately incomplete and open to rapid change.</p>
</li>
</ul>
<p>Its real purpose isn’t immediate value. It answers a more important question: “Is this what you meant?”</p>
<p>By creating mirror pieces early, teams shift from end-of-sprint validation to <strong>continuous feedback during development</strong>.</p>
<hr />
<h2>Why a UI Is Crucial—Even for Technical Systems</h2>
<p>A mirror piece without a UI is often invisible to the business. Raw data, logs, or backend flows require interpretation, reopening the very gap we’re trying to close.</p>
<p>A simple, even rough UI changes everything. It provides:</p>
<ul>
<li><p><strong>Observability</strong>: What is actually happening in the system?</p>
</li>
<li><p><strong>Navigability</strong>: How do entities relate (e.g., DeliveryUnit → PreservationUnit)?</p>
</li>
<li><p><strong>Clarity</strong>: Does this match how the business understands the domain?</p>
</li>
</ul>
<p>The UI becomes the <strong>event horizon</strong>—the meeting point of business intent and technical execution. Without it, the system stays abstract. With it, the system becomes a shared language.</p>
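<p>A mirror-piece UI can be almost embarrassingly small and still do its job. As a sketch — using the DeliveryUnit → PreservationUnit relation mentioned above, with an HTML shape that is purely illustrative — just enough rendering to make the relation observable and clickable:</p>

```java
import java.util.List;

// Minimal "mirror piece" UI sketch: rough HTML that makes an entity
// relation (DeliveryUnit -> PreservationUnit) observable and navigable.
record PreservationUnit(String id) {}
record DeliveryUnit(String id, List<PreservationUnit> preservationUnits) {}

class MirrorUi {
    /** Renders a delivery unit as a rough, navigable HTML fragment. */
    static String render(DeliveryUnit du) {
        StringBuilder html = new StringBuilder();
        html.append("<h3>DeliveryUnit ").append(du.id()).append("</h3><ul>");
        for (PreservationUnit pu : du.preservationUnits()) {
            // Contextual link: lets a stakeholder click through the relation.
            html.append("<li><a href=\"/preservation-units/").append(pu.id())
                .append("\">PreservationUnit ").append(pu.id()).append("</a></li>");
        }
        return html.append("</ul>").toString();
    }
}
```

<p>Nothing here is production-grade, and that is the point: it exists to ask the business “is this what you meant?”, not to ship.</p>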
<hr />
<h2>Refinement in a Living System</h2>
<p>No story arrives in a vacuum. Every new request lands inside an existing system—complete with implemented logic, established flows, and embedded assumptions about how the business works.</p>
<p>Refinement must therefore do double duty: deeply understand the new intent <em>and</em> re-evaluate what already exists.</p>
<p>The first question shifts from “How do we build this?” to “How does this relate to what we already have?”</p>
<p>New stories often reveal deeper truths:</p>
<ul>
<li><p>An earlier assumption was incomplete.</p>
</li>
<li><p>A rule was too simplistic.</p>
</li>
<li><p>A flow was designed for a narrower case than reality demands.</p>
</li>
</ul>
<p>This is not failure—it is the system doing its job: <strong>exposing gaps in understanding</strong>.</p>
<p>When a story interacts with existing behavior, there are typically three paths:</p>
<ol>
<li><p><strong>It fits the current model</strong> → Simply extend what is already there.</p>
</li>
<li><p><strong>It introduces a variation within the same flow</strong> → Isolate the difference cleanly (e.g., using strategy-like patterns) without fracturing the stable core.</p>
</li>
<li><p><strong>It challenges earlier assumptions</strong> → Revisit and evolve the underlying model itself.</p>
</li>
</ol>
<p>Treating all three the same—by just adding patches or conditionals—breeds accumulating complexity, duplicated logic, and a system that grows harder to reason about.</p>
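<p>The second path — isolating a variation behind a strategy-like seam — can be sketched as follows. The domain (shipping costs) and all names are invented for illustration; the point is that the new variation gets its own class while the core flow stays untouched:</p>

```java
import java.math.BigDecimal;

// Sketch: a new story introduces a variation within an existing flow,
// so the difference is isolated behind a small strategy interface
// instead of a conditional inside the stable core.
interface ShippingCostPolicy {
    BigDecimal costFor(int weightKg);
}

class StandardShipping implements ShippingCostPolicy {
    public BigDecimal costFor(int weightKg) {
        return BigDecimal.valueOf(weightKg).multiply(new BigDecimal("1.50"));
    }
}

// The new variation lives in its own class; Order is not modified for it.
class ExpressShipping implements ShippingCostPolicy {
    public BigDecimal costFor(int weightKg) {
        return BigDecimal.valueOf(weightKg).multiply(new BigDecimal("3.00"))
                         .add(new BigDecimal("5.00"));
    }
}

class Order {
    private final int weightKg;
    private final ShippingCostPolicy shipping; // variation injected, core stable

    Order(int weightKg, ShippingCostPolicy shipping) {
        this.weightKg = weightKg;
        this.shipping = shipping;
    }

    BigDecimal shippingCost() {
        return shipping.costFor(weightKg);
    }
}
```

<p>The stable core (<code>Order</code>) never learns about the difference; each variation is a complete, testable unit of its own.</p>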
<p>In this light, refinement becomes far more than story clarification. It is a <strong>checkpoint for system integrity</strong>: a deliberate moment to ask, “Does our current system still reflect how the business actually operates?”</p>
<p>Software is not a static machine. It is an evolving mirror of the domain. Every new story offers a chance to confirm, refine, or correct what we thought we knew. Refinement is where that evolution should happen consciously—not accidentally through technical debt.</p>
<hr />
<h2>The Real Cost of Skipping Intent-Focused Refinement</h2>
<p>When refinement stays shallow and we build on assumptions instead of understanding, the consequences are predictable:</p>
<ul>
<li><p>Misaligned solutions.</p>
</li>
<li><p>Duplicated or conflicting functionality.</p>
</li>
<li><p>Growing technical debt.</p>
</li>
<li><p>Late and expensive rework.</p>
</li>
<li><p>Systems that pass internal checks but fail in real use.</p>
</li>
</ul>
<p>Most importantly, the system stops being a tool for learning and becomes a machine for executing yesterday’s assumptions.</p>
<hr />
<h2>Reclaiming the Feedback Loop</h2>
<p>The original promise of Agile was fast, continuous feedback. To reclaim it, we need a mindset shift:</p>
<ul>
<li><p>From <strong>stories as work orders</strong> → to <strong>stories as intent</strong>.</p>
</li>
<li><p>From <strong>refinement as design</strong> → to <strong>refinement as understanding</strong>.</p>
</li>
<li><p>From <strong>implementation as delivery</strong> → to <strong>implementation as a mirror</strong>.</p>
</li>
<li><p>From <strong>end-of-sprint feedback</strong> → to <strong>continuous feedback through mirror pieces and early UI</strong>.</p>
</li>
</ul>
<hr />
<h2>Conclusion</h2>
<p>Software development is often treated as a delivery process. In reality, it is a <strong>learning process</strong>.</p>
<p>The goal is not merely to build what was asked. The goal is to discover what is actually needed.</p>
<p>Refinement, mirror pieces, early UI, and deliberate validation are not overhead. They are the mechanisms that make genuine learning possible.</p>
<blockquote>
<p>Software is not just a tool to serve the business.<br />It is a mirror that helps the business—and the team—understand itself.</p>
</blockquote>
<p>The sooner we look into that mirror, the better what we build will become.</p>
<hr />
<h2>In the Age of AI: Where Does AI Fit?</h2>
<p>Looking at refinement as understanding, mirror pieces as feedback, and software as a learning tool, the natural question arises: Where does AI fit?</p>
<p>AI excels at implementation:</p>
<ul>
<li><p>Translating well-understood requirements into code.</p>
</li>
<li><p>Generating boilerplate and structure.</p>
</li>
<li><p>Accelerating familiar patterns.</p>
</li>
</ul>
<p>In short: <strong>AI operates most effectively in the solution space.</strong></p>
<p>The central challenge in this article lies elsewhere—in understanding intent, interpreting context, challenging assumptions, and discovering what the problem actually is. That remains fundamentally human work.</p>
<p>AI introduces a subtle risk. Because it can generate working code so quickly, it creates an illusion of progress even when understanding is incomplete. If refinement is weak:</p>
<ul>
<li><p>AI will still produce code.</p>
</li>
<li><p>The system will still behave as specified.</p>
</li>
<li><p>Tests will still pass.</p>
</li>
</ul>
<p>But the result is simply a faster realization of the same flawed assumptions.</p>
<blockquote>
<p>AI doesn’t correct misunderstanding—it accelerates it.</p>
</blockquote>
<p>What AI makes unmistakably clear is something that was always true: writing code is often the easiest part of building software. The real difficulty lies in knowing <em>what</em> to build, understanding why it matters, and recognizing when our assumptions are wrong.</p>
<p>That is exactly where strong refinement, mirror pieces, and early feedback matter most.</p>
<p>Within the model described here, AI fits naturally:</p>
<ul>
<li><p><strong>Refinement</strong> → human-driven discovery.</p>
</li>
<li><p><strong>Mirror pieces + UI</strong> → shared validation.</p>
</li>
<li><p><strong>AI</strong> → accelerated implementation of what has been learned.</p>
</li>
</ul>
<p>AI lets teams build mirror pieces faster, iterate more quickly, and validate ideas sooner. But it does not replace the need for discovery—it makes that discovery loop <em>more</em> critical, not less.</p>
<hr />
<h2><strong>Final Note</strong></h2>
<p>If software development becomes a process of executing predefined solutions, AI will do that exceptionally well.</p>
<p>But if we treat it as a process of learning and deeply understanding a domain, then AI becomes a powerful tool—without ever being the one that asks the important questions.</p>
<p>And those questions are still where the real work begins.</p>
]]></content:encoded></item><item><title><![CDATA[Less Code, Lost Meaning: Why Boilerplate Reduction Misses the Point]]></title><description><![CDATA[In modern software development, one theme keeps returning:

reduce boilerplate

write less code

increase conciseness


Frameworks, annotations, and code generators promise cleaner classes and faster ]]></description><link>https://blog.leonpennings.com/less-code-lost-meaning-why-boilerplate-reduction-misses-the-point</link><guid isPermaLink="true">https://blog.leonpennings.com/less-code-lost-meaning-why-boilerplate-reduction-misses-the-point</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[AI]]></category><category><![CDATA[boilerplate]]></category><category><![CDATA[Java]]></category><dc:creator><![CDATA[Leon Pennings]]></dc:creator><pubDate>Fri, 20 Mar 2026 07:43:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6909c071175a29281d26fa0e/5623a2ae-5ce2-40e0-96f9-2d80f8be21f9.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In modern software development, one theme keeps returning:</p>
<ul>
<li><p>reduce boilerplate</p>
</li>
<li><p>write less code</p>
</li>
<li><p>increase conciseness</p>
</li>
</ul>
<p>Frameworks, annotations, and code generators promise cleaner classes and faster development. Tools in ecosystems like Spring Boot emphasize exactly that: less code, less friction, more output.</p>
<p>At first glance, this seems like obvious progress.</p>
<p>But it raises a fundamental question:</p>
<blockquote>
<p>Does writing less code actually lead to better software?</p>
</blockquote>
<hr />
<h2>The Appeal of Code Reduction</h2>
<p>Code reduction is attractive because it delivers immediate, visible results:</p>
<ul>
<li><p>fewer lines of code</p>
</li>
<li><p>less repetition</p>
</li>
<li><p>faster initial development</p>
</li>
</ul>
<p>A class with 200 lines becomes 50. Configuration disappears behind annotations. Common patterns are abstracted away.</p>
<p>From a distance, this looks like improvement.</p>
<p>And at the level of syntax, it is.</p>
<hr />
<h2>The Problem: Optimizing the Wrong Layer</h2>
<p>Reducing boilerplate optimizes <em>how</em> we write code.</p>
<p>It does not address:</p>
<ul>
<li><p>what the code represents</p>
</li>
<li><p>how responsibilities are defined</p>
</li>
<li><p>whether the model is correct</p>
</li>
</ul>
<p>In other words:</p>
<blockquote>
<p>It improves expression without improving meaning.</p>
</blockquote>
<p>You can have:</p>
<ul>
<li><p>perfectly concise code</p>
</li>
<li><p>minimal syntax</p>
</li>
<li><p>elegant constructs</p>
</li>
</ul>
<p>…that still represent a poor understanding of the domain.</p>
<p>And when that happens, the system remains:</p>
<ul>
<li><p>hard to understand</p>
</li>
<li><p>fragile under change</p>
</li>
<li><p>difficult to extend</p>
</li>
</ul>
<p>No amount of syntactic improvement fixes that.</p>
<hr />
<h2>The Missing Dimension: The Story</h2>
<p>A well-designed system tells a story.</p>
<p>Not in comments or documentation, but in its structure:</p>
<ul>
<li><p>objects represent real concepts</p>
</li>
<li><p>behavior lives where it belongs</p>
</li>
<li><p>interactions reflect actual processes</p>
</li>
</ul>
<p>You can read the code and understand:</p>
<blockquote>
<p><em>what the system does and why</em></p>
</blockquote>
<p>This is the “story” of the system.</p>
<p>And it is where most of the value lies.</p>
<hr />
<h2>Why Less Code Doesn’t Mean a Better Story</h2>
<p>Reducing code does not automatically improve that story.</p>
<p>In many cases, it does the opposite.</p>
<p>Consider what often happens:</p>
<ul>
<li><p>explicit logic is replaced with annotations</p>
</li>
<li><p>behavior is hidden behind framework conventions</p>
</li>
<li><p>configuration replaces clear structure</p>
</li>
</ul>
<p>The result:</p>
<ul>
<li><p>less visible code</p>
</li>
<li><p>but more implicit behavior</p>
</li>
</ul>
<p>And implicit behavior is harder to reason about.</p>
<p>You didn’t remove complexity.</p>
<blockquote>
<p>You made it harder to see.</p>
</blockquote>
<hr />
<h2>The Illusion of Simplicity</h2>
<p>Code reduction creates a powerful illusion:</p>
<blockquote>
<p>If there is less code, the system must be simpler.</p>
</blockquote>
<p>But simplicity in software is not about size.</p>
<p>It is about:</p>
<ul>
<li><p>clarity of responsibilities</p>
</li>
<li><p>correctness of the model</p>
</li>
<li><p>predictability of behavior</p>
</li>
</ul>
<p>A small, unclear system is more complex than a larger, well-structured one.</p>
<p>And a concise system with hidden behavior is more dangerous than an explicit one.</p>
<hr />
<h2>When Code Reduction Helps</h2>
<p>This is not an argument against reducing boilerplate.</p>
<p>There are clear benefits:</p>
<ul>
<li><p>eliminating repetition</p>
</li>
<li><p>removing mechanical code</p>
</li>
<li><p>standardizing common patterns</p>
</li>
</ul>
<p>When applied carefully, code reduction can:</p>
<ul>
<li><p>improve readability</p>
</li>
<li><p>reduce noise</p>
</li>
<li><p>allow focus on important parts</p>
</li>
</ul>
<p>But only under one condition:</p>
<blockquote>
<p>The underlying model must already be sound.</p>
</blockquote>
<hr />
<h2>When It Becomes Harmful</h2>
<p>Code reduction becomes problematic when it is used as a substitute for thinking.</p>
<p>When teams focus on:</p>
<ul>
<li><p>making code shorter</p>
</li>
<li><p>following framework conventions</p>
</li>
<li><p>reducing visible complexity</p>
</li>
</ul>
<p>Instead of:</p>
<ul>
<li><p>modeling the domain</p>
</li>
<li><p>defining responsibilities</p>
</li>
<li><p>understanding behavior</p>
</li>
</ul>
<p>At that point, development becomes:</p>
<blockquote>
<p>an exercise in fitting problems into existing constructs</p>
</blockquote>
<p>Rather than solving them.</p>
<hr />
<h2>When the Story Disappears</h2>
<p>If software engineering increasingly focuses on syntax optimization—on writing less code, faster—then an important question emerges:</p>
<blockquote>
<p>Who is responsible for the quality of the story?</p>
</blockquote>
<p>Because if we optimize for:</p>
<ul>
<li><p>fewer lines of code</p>
</li>
<li><p>more generation</p>
</li>
<li><p>less manual effort</p>
</li>
</ul>
<p>We also reduce something else:</p>
<blockquote>
<p>the amount of direct engagement with the model itself</p>
</blockquote>
<p>Traditionally, writing code served a dual purpose:</p>
<ul>
<li><p>implementing behavior</p>
</li>
<li><p>validating understanding</p>
</li>
</ul>
<p>The act of writing forced decisions:</p>
<ul>
<li><p>where does this responsibility belong?</p>
</li>
<li><p>does this concept make sense?</p>
</li>
<li><p>do these rules contradict each other?</p>
</li>
</ul>
<p>Code was not just output.</p>
<p>It was a <strong>mirror</strong>.</p>
<hr />
<h2>The Role of Friction</h2>
<p>Some level of friction in development is valuable.</p>
<p>Not accidental friction—like fighting a framework—but <strong>conceptual friction</strong>:</p>
<ul>
<li><p>needing to define boundaries</p>
</li>
<li><p>needing to resolve ambiguity</p>
</li>
<li><p>needing to make trade-offs explicit</p>
</li>
</ul>
<p>This friction forces clarity.</p>
<p>It exposes:</p>
<ul>
<li><p>inconsistencies in requirements</p>
</li>
<li><p>gaps in understanding</p>
</li>
<li><p>misplaced responsibilities</p>
</li>
</ul>
<p>When you remove too much of that friction, you don’t just gain speed.</p>
<blockquote>
<p>You lose feedback.</p>
</blockquote>
<hr />
<h2>Code Generation as the Endgame</h2>
<p>Tools like Claude and similar code generation systems represent the logical extreme of this trend.</p>
<p>They can:</p>
<ul>
<li><p>generate large amounts of code instantly</p>
</li>
<li><p>remove almost all boilerplate</p>
</li>
<li><p>translate intent into implementation</p>
</li>
</ul>
<p>From a productivity standpoint, this is remarkable.</p>
<p>But it introduces a new risk:</p>
<blockquote>
<p>If code is no longer written, it is no longer <em>used to think</em>.</p>
</blockquote>
<hr />
<h2>When “Working” Is No Longer Proof</h2>
<p>Traditionally, writing code forced validation.</p>
<p>Each decision had to be made explicitly:</p>
<ul>
<li><p>where does this behavior belong?</p>
</li>
<li><p>do these concepts align?</p>
</li>
<li><p>are these rules consistent?</p>
</li>
</ul>
<p>In that process, contradictions surface.</p>
<p>With code generation, that feedback loop weakens.</p>
<p>You describe intent.<br />The system produces implementation.</p>
<p>And because the result runs, it creates a powerful signal:</p>
<blockquote>
<p>It works.</p>
</blockquote>
<p>But that signal is misleading.</p>
<p>What you get is not necessarily a system that is <em>correct</em>.</p>
<blockquote>
<p>It is a system that <strong>appears to work under current conditions</strong>.</p>
</blockquote>
<hr />
<h2>The Silent Failure Mode</h2>
<p>Without active engagement in shaping the model:</p>
<ul>
<li><p>contradictions in the domain are not surfaced</p>
</li>
<li><p>responsibilities are not fully resolved</p>
</li>
<li><p>assumptions are not challenged</p>
</li>
</ul>
<p>They don’t disappear.</p>
<p>They remain latent.</p>
<p>And instead of being caught during construction, they emerge later as:</p>
<ul>
<li><p>inconsistent behavior</p>
</li>
<li><p>edge-case failures</p>
</li>
<li><p>unpredictable interactions</p>
</li>
</ul>
<p>At that point, the problem is no longer local.</p>
<p>It is systemic.</p>
<hr />
<h2>The Loss of Pressure on the Model</h2>
<p>A well-designed system is not just built—it is <strong>continuously refined</strong>.</p>
<p>Each line of code adds pressure:</p>
<ul>
<li><p>on the model</p>
</li>
<li><p>on the boundaries</p>
</li>
<li><p>on the assumptions</p>
</li>
</ul>
<p>Code generation removes much of that pressure.</p>
<p>It allows systems to grow without forcing the same level of scrutiny.</p>
<p>So the model is no longer:</p>
<ul>
<li><p>shaped</p>
</li>
<li><p>challenged</p>
</li>
<li><p>corrected</p>
</li>
</ul>
<p>It is merely <em>extended</em>.</p>
<hr />
<h2>From Engineering to Assembly</h2>
<p>The risk is not that code generation produces bad code.</p>
<p>The risk is that it enables a different mode of development:</p>
<blockquote>
<p>assembling systems without fully understanding them</p>
</blockquote>
<p>At small scale, this works.</p>
<p>At larger scale, it leads to:</p>
<ul>
<li><p>hidden inconsistencies</p>
</li>
<li><p>fragile structures</p>
</li>
<li><p>systems that behave correctly—until they don’t</p>
</li>
</ul>
<p>And when they fail, they fail in ways that are:</p>
<ul>
<li><p>hard to trace</p>
</li>
<li><p>hard to reason about</p>
</li>
<li><p>hard to fix</p>
</li>
</ul>
<hr />
<h2>The Real Risk</h2>
<p>The danger is subtle.</p>
<p>The system does not immediately break.</p>
<p>It delivers output.<br />It passes tests.<br />It supports current use cases.</p>
<p>But underneath:</p>
<blockquote>
<p>the model has never been fully validated.</p>
</blockquote>
<p>And over time, that leads to a system that is not truly stable, but:</p>
<blockquote>
<p><strong>conditionally correct and fundamentally unpredictable</strong></p>
</blockquote>
<hr />
<h2>Closing Thought</h2>
<p>Code generation removes effort.</p>
<p>But it also removes something essential:</p>
<blockquote>
<p>the act of forcing clarity through construction</p>
</blockquote>
<p>And without that:</p>
<blockquote>
<p>we risk building systems that don’t fail fast—<br />but fail late, and fail hard.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[The Illusion of Progress: Why Tooling Can’t Replace Engineering]]></title><description><![CDATA[Walk into almost any modern enterprise Java codebase and you’ll see the same pattern: controllers, services, repositories, configuration, and a dense web of injected dependencies—often built on framew]]></description><link>https://blog.leonpennings.com/the-illusion-of-progress-why-tooling-can-t-replace-engineering</link><guid isPermaLink="true">https://blog.leonpennings.com/the-illusion-of-progress-why-tooling-can-t-replace-engineering</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[Java]]></category><category><![CDATA[Rich Domain Model]]></category><dc:creator><![CDATA[Leon Pennings]]></dc:creator><pubDate>Wed, 18 Mar 2026 14:07:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6909c071175a29281d26fa0e/eaf24312-703a-4d91-8791-242609511d43.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Walk into almost any modern enterprise Java codebase and you’ll see the same pattern: controllers, services, repositories, configuration, and a dense web of injected dependencies—often built on frameworks like Spring Boot.</p>
<p>It works. Requests flow through the system. Data is persisted. Features get delivered.</p>
<p>By most organizational standards, this is considered a success.</p>
<p>But there’s a fundamental question almost never asked:</p>
<blockquote>
<p><em>Is this system well engineered—or does it merely appear to work?</em></p>
</blockquote>
<hr />
<h2>The Industry’s Blind Spot</h2>
<p>Software development suffers from a unique problem: we almost never get to compare two fundamentally different approaches to the <em>same</em> system.</p>
<ul>
<li><p>One system is built by Team A, using framework templates</p>
</li>
<li><p>Another is built by Team B, using a strong conceptual model</p>
</li>
</ul>
<p>Different teams, different timelines, different constraints.</p>
<p>So when a system “works,” we assume:</p>
<blockquote>
<p>the approach must be valid</p>
</blockquote>
<p>But we never see:</p>
<blockquote>
<p>what that same system could have looked like with a better model</p>
</blockquote>
<p>That absence of comparison creates a blind spot—one where <strong>“working software” is mistaken for “well-designed software.”</strong></p>
<hr />
<h2>The Rise of Template-Driven Development</h2>
<p>Frameworks like Spring Boot didn’t become dominant by accident.</p>
<p>They offer:</p>
<ul>
<li><p>immediate productivity</p>
</li>
<li><p>standardized structure</p>
</li>
<li><p>fast onboarding</p>
</li>
</ul>
<p>They allow teams to produce output quickly—often without deeply understanding the domain.</p>
<p>And that’s where the shift happens.</p>
<p>Instead of asking:</p>
<blockquote>
<p><em>What is the correct model of this domain?</em></p>
</blockquote>
<p>Teams start asking:</p>
<blockquote>
<p><em>Where does this go in the template?</em></p>
</blockquote>
<p>At that point, development turns into something else entirely:</p>
<blockquote>
<p><strong>Stenography.</strong> Translating user stories into predefined technical slots.</p>
</blockquote>
<p>The result:</p>
<ul>
<li><p>large amounts of integration code</p>
</li>
<li><p>thin or absent domain logic</p>
</li>
<li><p>systems that function—but are difficult to evolve</p>
</li>
</ul>
<hr />
<h2>The Cost You Don’t See</h2>
<p>Systems built on heavy frameworks don’t usually fail.</p>
<p>They degrade.</p>
<p>Not in obvious ways, but in how time and effort are spent.</p>
<p>At first, development feels fast:</p>
<ul>
<li><p>scaffolding is generated</p>
</li>
<li><p>endpoints are wired</p>
</li>
<li><p>persistence is handled</p>
</li>
</ul>
<p>Features appear quickly.</p>
<p>But over time, a shift happens.</p>
<p>The system is no longer primarily about automating the business domain.</p>
<p>It becomes increasingly about maintaining the technical environment around it.</p>
<h3>The Hidden Shift in Effort</h3>
<p>In many enterprise systems, a large portion of engineering effort is spent on:</p>
<ul>
<li><p>upgrading frameworks (e.g. annual cycles in Spring Boot)</p>
</li>
<li><p>adapting to breaking changes</p>
</li>
<li><p>resolving dependency conflicts</p>
</li>
<li><p>aligning with new conventions and best practices</p>
</li>
<li><p>re-testing behavior that should not have changed</p>
</li>
</ul>
<p>This is not business value.</p>
<p>It is <strong>tooling maintenance</strong>.</p>
<p>Over time, the ratio shifts:</p>
<blockquote>
<p>Less effort goes into improving the domain.<br />More effort goes into keeping the system compatible with its own foundation.</p>
</blockquote>
<p>In extreme cases, the majority of work is no longer about <em>what the system does</em>, but about <em>what it runs on</em>.</p>
<h3>Integration Becomes the System</h3>
<p>As frameworks evolve, systems accumulate:</p>
<ul>
<li><p>layers of adapters</p>
</li>
<li><p>configuration overrides</p>
</li>
<li><p>compatibility fixes</p>
</li>
</ul>
<p>The result is a codebase where:</p>
<ul>
<li><p>most code connects things</p>
</li>
<li><p>very little code expresses the domain</p>
</li>
</ul>
<p>At that point:</p>
<blockquote>
<p>The system is no longer a model of the business.<br />It is a network of integrations.</p>
</blockquote>
<h3>The Upgrade Trap</h3>
<p>Modern frameworks evolve continuously.</p>
<p>Each upgrade promises:</p>
<ul>
<li><p>improvements</p>
</li>
<li><p>performance gains</p>
</li>
<li><p>new capabilities</p>
</li>
</ul>
<p>But each upgrade also introduces:</p>
<ul>
<li><p>migration effort</p>
</li>
<li><p>subtle behavioral changes</p>
</li>
<li><p>renewed testing cycles</p>
</li>
</ul>
<p>Individually, these seem manageable.</p>
<p>Collectively, they create a constant background load.</p>
<p>A system that was supposed to simplify development now requires <strong>continuous adaptation just to remain operational</strong>.</p>
<h3>Loss of Focus</h3>
<p>The most damaging effect is not technical—it’s directional.</p>
<p>When most effort is spent on:</p>
<ul>
<li><p>frameworks</p>
</li>
<li><p>infrastructure</p>
</li>
<li><p>compatibility</p>
</li>
</ul>
<p>Then the business domain becomes secondary.</p>
<p>Teams stop asking:</p>
<blockquote>
<p><em>How do we model this problem better?</em></p>
</blockquote>
<p>And start asking:</p>
<blockquote>
<p><em>How do we make this work within the framework?</em></p>
</blockquote>
<p>At that point, the system is no longer driven by the domain.</p>
<p>It is driven by the tooling.</p>
<hr />
<h3>The Real Cost</h3>
<p>This cost rarely appears in metrics.</p>
<p>It shows up as:</p>
<ul>
<li><p>slower feature delivery over time</p>
</li>
<li><p>increasing effort for simple changes</p>
</li>
<li><p>growing system fragility</p>
</li>
<li><p>loss of clarity about what the system actually does</p>
</li>
</ul>
<p>And most critically:</p>
<blockquote>
<p>A large portion of engineering capacity is spent on work that does not move the business forward.</p>
</blockquote>
<hr />
<h2>The Alternative: Start With the Model</h2>
<p>There is another way to build systems—one that doesn’t start with frameworks or templates.</p>
<p>It starts with a different premise:</p>
<blockquote>
<p><strong>Software development is primarily a modeling activity.</strong></p>
</blockquote>
<p>Before writing code, you ask:</p>
<ul>
<li><p>What are the core responsibilities?</p>
</li>
<li><p>Where does behavior belong?</p>
</li>
<li><p>What are the invariants of the system?</p>
</li>
</ul>
<p>From there, you build:</p>
<ul>
<li><p><strong>rich domain objects</strong> that own behavior</p>
</li>
<li><p><strong>clear boundaries</strong> that prevent concern leakage</p>
</li>
<li><p><strong>explicit lifecycles</strong> that reflect real interactions</p>
</li>
</ul>
<p>In such a system:</p>
<ul>
<li><p>objects are <em>used</em>, not orchestrated</p>
</li>
<li><p>behavior is <em>invoked</em>, not assembled</p>
</li>
<li><p>structure reflects <em>meaning</em>, not framework conventions</p>
</li>
</ul>
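<p>As a small sketch of what “rich domain objects that own behavior” means in practice — the example and its names are illustrative, not from any particular system — behavior and invariants live on the object that holds the state, rather than in a surrounding service:</p>

```java
// Sketch of a rich domain object: the invariant (no overdraft) is
// enforced where the state lives, not in an external service layer.
final class BankAccount {
    private long balanceCents;

    BankAccount(long openingBalanceCents) {
        if (openingBalanceCents < 0) {
            throw new IllegalArgumentException("negative opening balance");
        }
        this.balanceCents = openingBalanceCents;
    }

    void withdraw(long amountCents) {
        if (amountCents <= 0) throw new IllegalArgumentException("amount must be positive");
        if (amountCents > balanceCents) throw new IllegalStateException("insufficient funds");
        balanceCents -= amountCents;
    }

    void deposit(long amountCents) {
        if (amountCents <= 0) throw new IllegalArgumentException("amount must be positive");
        balanceCents += amountCents;
    }

    long balanceCents() { return balanceCents; }
}
```

<p>No caller can put the account into an invalid state, because there is no path to the state that bypasses the rules.</p>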
<hr />
<h2>“But What About Wiring?”</h2>
<p>A common assumption in enterprise development is that systems require extensive wiring:</p>
<ul>
<li><p>dependency injection</p>
</li>
<li><p>service composition</p>
</li>
<li><p>configuration graphs</p>
</li>
</ul>
<p>But this is often a symptom, not a necessity.</p>
<p>When responsibilities are well-defined and localized:</p>
<ul>
<li><p>objects don’t need to be assembled dynamically</p>
</li>
<li><p>behavior doesn’t need external orchestration</p>
</li>
<li><p>lifecycle can be handled at clear boundaries</p>
</li>
</ul>
<p>Instead of wiring a system together, you:</p>
<blockquote>
<p><strong>define objects that already make sense together</strong></p>
</blockquote>
<p>Infrastructure concerns—like persistence or messaging—can be handled through:</p>
<ul>
<li><p>decorators</p>
</li>
<li><p>well-defined interaction boundaries</p>
</li>
</ul>
<p>Not scattered across the system.</p>
<p>The result is not “no composition,” but:</p>
<blockquote>
<p><strong>composition that is internal, stable, and invisible</strong></p>
</blockquote>
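<p>A decorator handling an infrastructure concern at a boundary can be sketched like this. The example (a task list with auditing) and all names are invented for illustration; the concern could equally be persistence or messaging:</p>

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: an infrastructure concern (recording changes) handled by a
// decorator at a clear boundary; callers still see only the domain interface.
interface TaskList {
    void add(String task);
    List<String> tasks();
}

class InMemoryTaskList implements TaskList {
    private final List<String> tasks = new ArrayList<>();
    public void add(String task) { tasks.add(task); }
    public List<String> tasks() { return List.copyOf(tasks); }
}

// The decorator wraps the domain object without changing its contract.
class AuditingTaskList implements TaskList {
    private final TaskList inner;
    private final List<String> auditLog = new ArrayList<>();

    AuditingTaskList(TaskList inner) { this.inner = inner; }

    public void add(String task) {
        auditLog.add("add: " + task);  // infrastructure concern, kept at the boundary
        inner.add(task);
    }

    public List<String> tasks() { return inner.tasks(); }
    List<String> auditLog() { return List.copyOf(auditLog); }
}
```

<p>The domain object never learns that auditing exists; the concern is composed on at one boundary instead of leaking into every method.</p>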
<hr />
<h2>Why This Feels Slower (But Isn’t)</h2>
<p>Taking time to understand the domain can feel like a delay.</p>
<p>But consider the alternative:</p>
<ul>
<li><p>building quickly in the wrong direction</p>
</li>
<li><p>discovering mismatches later</p>
</li>
<li><p>restructuring under pressure</p>
</li>
</ul>
<p>It’s the difference between:</p>
<ul>
<li><p>planning a route before driving</p>
</li>
<li><p>or heading “roughly east” and hoping to arrive</p>
</li>
</ul>
<p>The first appears slower. The second <em>is</em> slower—just not immediately.</p>
<hr />
<h2>The Real Constraint</h2>
<p>If this approach is so effective, why isn’t it the norm?</p>
<p>Because it depends on something rare:</p>
<blockquote>
<p><strong>Strong conceptual thinking</strong></p>
</blockquote>
<p>Framework-driven development scales because it:</p>
<ul>
<li><p>reduces decision-making</p>
</li>
<li><p>standardizes structure</p>
</li>
<li><p>works with uneven skill levels</p>
</li>
</ul>
<p>Conceptual modeling does not:</p>
<ul>
<li><p>it requires alignment</p>
</li>
<li><p>it requires discipline</p>
</li>
<li><p>it requires engineers who can think in systems</p>
</li>
</ul>
<p>So organizations optimize for:</p>
<blockquote>
<p>predictable output</p>
</blockquote>
<p>Instead of:</p>
<blockquote>
<p>optimal design</p>
</blockquote>
<hr />
<h2>The Resulting Illusion</h2>
<p>This leads to a persistent illusion in the industry:</p>
<ul>
<li><p>Systems built with heavy tooling are seen as “modern”</p>
</li>
<li><p>Systems built with strong models are seen as “overthinking”</p>
</li>
</ul>
<p>Because one produces:</p>
<ul>
<li>immediate, visible progress</li>
</ul>
<p>And the other produces:</p>
<ul>
<li>long-term structural integrity</li>
</ul>
<p>But without a direct comparison, the difference remains invisible.</p>
<hr />
<h2>A Different Standard of Success</h2>
<p>If we want to build sustainable systems, we need to change the definition of success.</p>
<p>Not:</p>
<blockquote>
<p>“Does it work?”</p>
</blockquote>
<p>But:</p>
<ul>
<li><p>How easy is it to understand?</p>
</li>
<li><p>How localized is change?</p>
</li>
<li><p>How much of the code reflects the domain versus integration concerns?</p>
</li>
</ul>
<p>Because ultimately:</p>
<blockquote>
<p>A system that merely works today can become a liability tomorrow. A system that is well modeled continues to work—even as it evolves.</p>
</blockquote>
<hr />
<h2>Closing Thought</h2>
<p>Tooling is not the enemy.</p>
<p>But it becomes a problem when it replaces the very thing it was meant to support:</p>
<blockquote>
<p><strong>Engineering.</strong></p>
</blockquote>
<p>Frameworks can accelerate implementation. They cannot replace understanding.</p>
<p>And without understanding, we’re not engineering systems.</p>
<p>We’re assembling them—and hoping they hold.</p>
]]></content:encoded></item><item><title><![CDATA[The Ghostwriter, the House Builder, and the Missing Domain Model Walk Into a Bar]]></title><description><![CDATA[Software development is often described as “building systems”.But there are two professions that might describe the job much better: writing a book and designing a house.
Both involve creating somethi]]></description><link>https://blog.leonpennings.com/the-ghostwriter-the-house-builder-and-the-missing-domain-model-walk-into-a-bar</link><guid isPermaLink="true">https://blog.leonpennings.com/the-ghostwriter-the-house-builder-and-the-missing-domain-model-walk-into-a-bar</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[software development]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[Java]]></category><dc:creator><![CDATA[Leon Pennings]]></dc:creator><pubDate>Mon, 16 Mar 2026 06:54:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6909c071175a29281d26fa0e/1011c477-3bc9-4747-a9cb-60008ed7fbb7.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Software development is often described as “building systems”.<br />But there are two professions that might describe the job much better: <strong>writing a book</strong> and <strong>designing a house</strong>.</p>
<p>Both involve creating something coherent out of many parts.<br />Both require understanding purpose before execution.<br />And both reveal a common mistake that appears surprisingly often in modern software development.</p>
<p>To see why, imagine two professionals: a ghostwriter and a house builder.</p>
<hr />
<h2>The Ghostwriter Version of Software Development</h2>
<p>Imagine you hire a ghostwriter to write a book based on your ideas.</p>
<p>You send them fragments:</p>
<ul>
<li><p>a story about a childhood memory</p>
</li>
<li><p>a chapter about leadership</p>
</li>
<li><p>a few anecdotes about business</p>
</li>
<li><p>a paragraph about innovation</p>
</li>
<li><p>a repeated explanation of a concept you already mentioned earlier</p>
</li>
</ul>
<p>A bad ghostwriter simply writes everything down exactly as provided.</p>
<p>The result is technically correct. The grammar is fine. The sentences are clear.</p>
<p>But the book becomes a mess:</p>
<ul>
<li><p>topics overlap</p>
</li>
<li><p>concepts repeat</p>
</li>
<li><p>arguments contradict each other</p>
</li>
<li><p>the narrative jumps randomly between ideas</p>
</li>
</ul>
<p>Nothing ties the material together into a coherent story.</p>
<p>The ghostwriter has focused on <strong>transcription</strong>, not <strong>authorship</strong>.</p>
<p>Unfortunately, software development is sometimes practiced in a very similar way.</p>
<p>A team receives a series of user stories:</p>
<ul>
<li><p>“Customers should be able to create orders.”</p>
</li>
<li><p>“Orders can have discounts.”</p>
</li>
<li><p>“Admins can modify customer data.”</p>
</li>
<li><p>“Orders must be validated before submission.”</p>
</li>
</ul>
<p>Each story is implemented somewhere in the codebase. A controller here, a service there, a validation rule somewhere else.</p>
<p>Every story is technically implemented.</p>
<p>But over time the system starts to show symptoms:</p>
<ul>
<li><p>business rules appear in multiple places</p>
</li>
<li><p>behavior becomes inconsistent</p>
</li>
<li><p>changes require touching many unrelated components</p>
</li>
<li><p>nobody is entirely sure where certain logic belongs anymore</p>
</li>
</ul>
<p>The system becomes the equivalent of the badly written book: <strong>a collection of fragments without a coherent narrative</strong>.</p>
<p>The problem is not coding skill. The problem is the absence of <strong>structure</strong>.</p>
<hr />
<h2>The House Builder Version of Software Development</h2>
<p>Now imagine designing a wooden house.</p>
<p>But instead of starting with how people will live in it, the builders start with their tools.</p>
<p>The plumber places the bathroom where the pipes are easiest to install.</p>
<p>The electrician places the kitchen where wiring is convenient.</p>
<p>The carpenter builds bedrooms wherever the structure is simplest.</p>
<p>Each professional does excellent work.</p>
<p>The plumbing is perfect.<br />The wiring is flawless.<br />The construction is solid.</p>
<p>But when the house is finished, something feels very wrong.</p>
<p>The dining room ends up at the opposite end of the house from the kitchen.<br />The shower is installed in the kitchen because the pipes were already there.<br />The bedrooms are nowhere near the bathroom.</p>
<p>Every part of the house is technically well built.</p>
<p>But the house <strong>does not work as a house</strong>.</p>
<p>No one began by asking the most important question:</p>
<p><strong>How will people live here?</strong></p>
<p>The same thing happens in software development when architecture is driven primarily by tools and technologies.</p>
<p>Discussions revolve around:</p>
<ul>
<li><p>frameworks</p>
</li>
<li><p>infrastructure</p>
</li>
<li><p>microservices</p>
</li>
<li><p>deployment pipelines</p>
</li>
<li><p>cloud platforms</p>
</li>
</ul>
<p>All important tools.</p>
<p>But they are <strong>construction techniques</strong>, not <strong>design principles</strong>.</p>
<p>Without understanding how the system is supposed to behave as a whole, even the best tools can produce a system that is technically impressive but conceptually broken.</p>
<hr />
<h2>What’s Missing: Coherent Design</h2>
<p>Both examples expose the same underlying problem.</p>
<p>The ghostwriter fails because they never discovered the <strong>story</strong>.</p>
<p>The house builders fail because they never understood the <strong>purpose of the house</strong>.</p>
<p>In software engineering, the equivalent is failing to understand the <strong>domain</strong>.</p>
<p>User stories describe fragments of behavior.</p>
<p>But if every story is simply implemented as-is, the system slowly loses coherence.</p>
<p>Engineering cannot start with implementation. It must start with <strong>understanding</strong>.</p>
<p>What activities exist in the domain?<br />What concepts matter?<br />What rules must always hold?<br />Where should responsibilities live?</p>
<p>Only when those questions are answered does the structure of the system begin to emerge.</p>
<hr />
<h2>Enter the Domain Model</h2>
<p>This is where the <strong>domain model</strong> becomes essential.</p>
<p>A domain model acts as a <strong>responsibility localizer</strong> and <strong>logic contextualizer</strong>.</p>
<p>Instead of scattering behavior across the codebase, the model provides structure:</p>
<ul>
<li><p>concepts are represented explicitly</p>
</li>
<li><p>rules live with the concepts they govern</p>
</li>
<li><p>responsibilities have clear homes</p>
</li>
</ul>
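<p>To make this concrete, here is a small sketch of what “rules living with their concept” can look like in Java. The names (<code>Order</code>, <code>applyDiscount</code>) and the 50% cap are illustrative assumptions for this example, not rules from any real system:</p>
<pre><code class="language-java">import java.math.BigDecimal;

// Illustrative: the user story "Orders can have discounts" becomes
// behavior on the Order concept itself, not logic in a separate service.
class Order {
    private static final BigDecimal MAX_DISCOUNT = new BigDecimal("0.50");

    private final BigDecimal subtotal;
    private BigDecimal discountRate = BigDecimal.ZERO;

    Order(BigDecimal subtotal) {
        this.subtotal = subtotal;
    }

    // The discount rule has exactly one home: this method.
    void applyDiscount(BigDecimal rate) {
        if (rate.compareTo(MAX_DISCOUNT) > 0) {
            throw new IllegalArgumentException("discount may not exceed 50%");
        }
        this.discountRate = rate;
    }

    BigDecimal total() {
        return subtotal.subtract(subtotal.multiply(discountRate));
    }
}
</code></pre>
<p>Any caller that wants to discount an order has to go through this one method, so the rule cannot quietly fork into slightly different versions scattered across the codebase.</p>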
<p>When a new user story arrives, the question is no longer:</p>
<blockquote>
<p>“Where should we add this piece of code?”</p>
</blockquote>
<p>Instead the question becomes:</p>
<blockquote>
<p>“What does this story mean for our understanding of the domain?”</p>
</blockquote>
<p>Sometimes the answer is simple.</p>
<p>Sometimes it requires adjusting the model itself.</p>
<p>But the goal remains the same: <strong>preserve conceptual integrity</strong>.</p>
<p>Without that integrity, software inevitably turns into the badly written book or the badly designed house.</p>
<hr />
<h2>The Risk of AI</h2>
<p>AI-assisted coding is incredibly powerful.</p>
<p>It can generate code, implement features, suggest refactorings, and remove enormous amounts of repetitive work. Used well, it is an enormous productivity accelerator.</p>
<p>But AI is strongest at <strong>local implementation</strong>.</p>
<p>It excels at doing exactly what it is asked: implementing a function, adding a feature, modifying an existing piece of code. In that sense it behaves very much like the literal ghostwriter who writes the paragraph that was requested, or the contractor who builds a perfectly constructed room.</p>
<p>What AI does not replace is <strong>modeling the domain</strong>.</p>
<p>It does not determine:</p>
<ul>
<li><p>what the core concepts of the system should be</p>
</li>
<li><p>where responsibilities belong</p>
</li>
<li><p>how rules should be structured</p>
</li>
<li><p>how the system should reflect the purpose of the business</p>
</li>
</ul>
<p>Those decisions require understanding intent and discovering structure. They are design activities.</p>
<p>AI can dramatically accelerate the <strong>technical execution</strong> of a system. But it cannot replace the need for <strong>coherent design</strong>.</p>
<p>Otherwise we risk the same outcomes as before: a technically correct book without a story, or a well-built house that no one can live in.</p>
<hr />
<h2>Conclusion</h2>
<p>In an interesting way, the rise of AI coding tools highlights something that has always been true in software development.</p>
<p>Many teams have already been operating primarily at the level of <strong>feature implementation</strong>, while the deeper design work was often implicit, inconsistent, or missing entirely.</p>
<p>Custom software is essentially a <strong>one-off prototype</strong>. There is no reference design to compare it to. There is no second version of the same system built by another team. There is only one implementation: the one that ends up running in production.</p>
<p>That makes design mistakes difficult to spot early.</p>
<p>A book with a broken narrative may only reveal its problems once the entire manuscript is finished.<br />A badly designed house may only reveal its flaws once people try to live in it.</p>
<p>Software is no different.</p>
<p>Which is why the design phase — understanding the purpose of the system and shaping a coherent domain model — cannot be skipped.</p>
<p>The ghostwriter must understand the story.<br />The architect must understand how the house will be lived in.</p>
<p>And the software engineer must understand the <strong>domain</strong> before writing the code that brings it to life.</p>
]]></content:encoded></item><item><title><![CDATA[The Two Levels of Software Development — And Why Most Enterprise Applications Fail Over Time]]></title><description><![CDATA[There are two fundamentally different levels in software development.
Level 1 — Getting the system to run
At this level the goal is straightforward:

the application compiles

the system deploys

feat]]></description><link>https://blog.leonpennings.com/the-two-levels-of-software-development-and-why-most-enterprise-applications-fail-over-time</link><guid isPermaLink="true">https://blog.leonpennings.com/the-two-levels-of-software-development-and-why-most-enterprise-applications-fail-over-time</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[Java]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[Rich Domain Model]]></category><dc:creator><![CDATA[Leon Pennings]]></dc:creator><pubDate>Tue, 10 Mar 2026 07:38:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6909c071175a29281d26fa0e/c33adbc4-3f98-4a3a-a7d2-79546d597cae.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There are two fundamentally different levels in software development.</p>
<p><strong>Level 1 — Getting the system to run</strong></p>
<p>At this level the goal is straightforward:</p>
<ul>
<li><p>the application compiles</p>
</li>
<li><p>the system deploys</p>
</li>
<li><p>features behave as expected</p>
</li>
<li><p>users can operate the software</p>
</li>
</ul>
<p>If those conditions are met, the project is considered successful.</p>
<p>Most modern frameworks are extremely good at helping teams achieve this level. A stack such as Spring Framework provides a ready-made structure for building applications quickly: web infrastructure, dependency injection, persistence integration, configuration management, and more. With the right templates and tooling, teams can produce a working system in relatively little time.</p>
<p><strong>Level 2 — Keeping the system economically evolvable</strong></p>
<p>The second level is far harder.</p>
<p>Once the system has been running for several years, the real question becomes:</p>
<ul>
<li><p>Can developers still reason about the business rules?</p>
</li>
<li><p>Can features be added without breaking existing behavior?</p>
</li>
<li><p>Does the cost of change remain predictable?</p>
</li>
</ul>
<p>This is the level where software must remain <strong>economically viable</strong>. The system must evolve along with the business without collapsing under its own complexity.</p>
<p>Most of the industry focuses almost entirely on Level 1, because Level 2 is much harder to see.</p>
<hr />
<h2>The Observability Problem</h2>
<p>In many engineering disciplines, different designs can be compared directly.</p>
<p>Two airplane designs can be tested against each other. Two bridge designs can be analyzed under the same load conditions. Engineers can evaluate alternatives objectively.</p>
<p>Enterprise software is different.</p>
<p>Most systems are <strong>unique implementations of a specific business domain</strong>. A logistics system for one company will not be rebuilt several times with different architectures just to compare which approach works best.</p>
<p>Because of that, organizations rarely observe multiple competing implementations of the same domain.</p>
<p>There is no side-by-side comparison like:</p>
<pre><code class="language-plaintext">System A — rich domain model
System B — procedural service architecture
</code></pre>
<p>running the same business processes for several years.</p>
<p>Instead there is only one system: the one that was built.</p>
<p>As a result, success is evaluated using the only clearly visible metric:</p>
<pre><code class="language-plaintext">Does the application run?
</code></pre>
<p>If the answer is yes, the architecture is usually considered successful.</p>
<hr />
<h2>The Rise of the Software Factory</h2>
<p>Over time, the industry optimized for what was easiest to measure: producing working applications quickly.</p>
<p>This led to the emergence of <strong>standardized software production lines</strong>.</p>
<p>Typical enterprise stacks often follow a very familiar structure:</p>
<pre><code class="language-plaintext">controller
service
repository
database
</code></pre>
<p>Add REST APIs, containerization, messaging infrastructure, and often microservices, and a predictable pattern emerges.</p>
<p>Framework ecosystems reinforce this pattern. They provide conventions, templates, and project generators that make it easy to spin up new services quickly.</p>
<p>From an organizational perspective, this approach <strong>promises several advantages</strong>:</p>
<ul>
<li><p>developers can move between projects easily</p>
</li>
<li><p>teams can scale rapidly</p>
</li>
<li><p>systems follow familiar patterns</p>
</li>
<li><p>onboarding new engineers becomes simpler</p>
</li>
</ul>
<p>These promises are appealing, especially to organizations managing large engineering teams.</p>
<p>However, these advantages are rarely tested against alternative architectural approaches. Because most enterprise systems are built only once, there is no direct comparison showing whether a different design would actually have been more efficient or easier to evolve.</p>
<p>As a result, the perceived success of factory-style development often rests on a simple observation:</p>
<ul>
<li><p>The application runs</p>
</li>
<li><p>Features are delivered</p>
</li>
</ul>
<p>Without a comparable implementation built around a different architectural philosophy, it is difficult to see whether the chosen approach truly delivered its promised benefits.</p>
<p>The result is a development process that resembles a <strong>software factory</strong>: a standardized production line designed to produce working applications quickly.</p>
<p>Whether it produces the <strong>right kind of system for the domain</strong> is a different question entirely.</p>
<hr />
<h2>The Model T Problem</h2>
<p>The logic behind this standardization is similar to the philosophy associated with Henry Ford and the Ford Model T.</p>
<p>The Model T revolutionized manufacturing through standardization. One of the famous ideas attributed to Ford was that customers could choose any color, <strong>as long as it was black</strong>.</p>
<p>This approach worked because the product itself was standardized.</p>
<p>Cars were produced for a broad market with relatively similar requirements.</p>
<p>Enterprise software is fundamentally different.</p>
<p>Each system represents a <strong>specific business domain</strong>:</p>
<ul>
<li><p>logistics operations</p>
</li>
<li><p>insurance policies</p>
</li>
<li><p>trading platforms</p>
</li>
<li><p>healthcare workflows</p>
</li>
</ul>
<p>These domains have very different requirements and behaviors.</p>
<p>In effect, each enterprise application needs a different type of vehicle.</p>
<p>Some domains resemble <strong>heavy trucks</strong> carrying complex transactional logic. Others behave more like <strong>high-performance machines</strong>, where performance and precision matter enormously. Some systems are small and lightweight.</p>
<p>Yet many development ecosystems attempt to solve all of them with the same architectural pattern — the equivalent of producing a <strong>Model T for every possible use case</strong>.</p>
<hr />
<h2>Why This Appears to Work</h2>
<p>Despite the mismatch, the Model T architecture still appears successful.</p>
<p>After all, a Model T can still move forward. It can transport people and even carry small loads.</p>
<p>Similarly, standardized enterprise architectures can deliver features:</p>
<ul>
<li><p>endpoints respond to requests</p>
</li>
<li><p>data is stored and retrieved</p>
</li>
<li><p>workflows execute</p>
</li>
</ul>
<p>From the outside, the application works.</p>
<p>Because organizations rarely build the same system twice with different architectures, they never see a direct comparison. There is no competing design demonstrating that the system could have been far simpler or easier to evolve.</p>
<p>As long as the application runs and delivers features, the architecture appears to perform as expected.</p>
<hr />
<h2>The Hidden Cost of Factory-Style Development</h2>
<p>The real cost of factory-style architectures emerges gradually and usually in two dimensions: <strong>effort</strong> and <strong>functional quality</strong>.</p>
<h3>Effort: Why development gets slower over time</h3>
<p>In factory-style systems, implementing new functionality tends to require roughly the same effort every time.</p>
<p>Every feature follows the same pattern:</p>
<pre><code class="language-plaintext">controller
service
repository
integration logic
</code></pre>
<p>Because the architecture is primarily procedural, the system rarely accumulates reusable domain behavior. Each feature often introduces new service logic rather than building upon existing concepts.</p>
<p>As the system grows, the situation frequently worsens:</p>
<ul>
<li><p>similar logic appears in multiple services</p>
</li>
<li><p>developers must read many parts of the system to understand behavior</p>
</li>
<li><p>debugging requires tracing through multiple layers and integrations</p>
</li>
<li><p>knowledge transfer becomes difficult</p>
</li>
</ul>
<p>The effort required to implement new functionality often <strong>remains constant or even increases over time</strong>.</p>
<p>Function-driven systems behave very differently.</p>
<p>When a system evolves around a coherent domain model, the model itself becomes a <strong>growing knowledge base of the business</strong>. Domain objects accumulate responsibilities and reusable behavior.</p>
<p>As the model matures:</p>
<ul>
<li><p>new features often extend existing objects</p>
</li>
<li><p>behavior is reused rather than reimplemented</p>
</li>
<li><p>developers can understand the system by understanding the model</p>
</li>
</ul>
<p>Knowledge transfer becomes easier because the model tells the story of the application.</p>
<p>Debugging is also simpler. When rules live in their responsible objects, it becomes immediately clear where behavior originates. Developers do not need to search across multiple services implementing slightly different versions of the same logic.</p>
<p>Over time, the effort required to add functionality <strong>tends to decrease</strong>, because the model provides increasing leverage.</p>
<h3>Quality: One version of the truth</h3>
<p>Factory-style architectures often distribute business rules across multiple services.</p>
<p>It is common to find logic that is similar but not identical in different places:</p>
<ul>
<li><p>slightly different validation rules</p>
</li>
<li><p>small variations in calculations</p>
</li>
<li><p>edge cases handled in one service but not another</p>
</li>
</ul>
<p>These inconsistencies are rarely intentional. They appear gradually as new features are implemented independently.</p>
<p>The result is a system with <strong>multiple interpretations of the same business rule</strong>.</p>
<p>Function-driven systems address this differently.</p>
<p>Each business rule belongs to the object responsible for that concept. The rule has <strong>one canonical implementation</strong>.</p>
<p>If a system contains an <code>Order</code> concept, the logic related to orders lives with the <code>Order</code> object. If there is pricing logic, it belongs to the pricing model.</p>
<p>This creates a <strong>single version of the truth</strong>.</p>
<p>Rules are not scattered across services or hidden inside orchestration layers. They are located where the business concept itself lives.</p>
<p>This greatly reduces contradictions and makes the system far easier to reason about.</p>
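<p>A minimal sketch of what a single canonical implementation can look like. All names here (<code>Order</code>, <code>canSubmit</code>) and the rule itself are chosen purely for illustration:</p>
<pre><code class="language-java">import java.math.BigDecimal;

// Illustrative: the submission rule for an order lives on the Order
// object, so there is one version of the truth for "may this be submitted?".
class Order {
    private int lineCount = 0;
    private BigDecimal total = BigDecimal.ZERO;

    void addLine(int quantity, BigDecimal unitPrice) {
        lineCount += 1;
        total = total.add(unitPrice.multiply(BigDecimal.valueOf(quantity)));
    }

    // The business rule has exactly one home:
    // an order may only be submitted when it has at least one line.
    boolean canSubmit() {
        return lineCount > 0;
    }

    BigDecimal total() {
        return total;
    }
}
</code></pre>
<p>Whether an order may be submitted is answered in exactly one place. No controller, service, or sibling microservice carries a second, slightly different copy of the rule that can drift out of sync.</p>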
<hr />
<h2>The Engineering Principle</h2>
<p>Enterprise systems should be <strong>function-driven rather than tool-driven</strong>.</p>
<p>Architecture should begin with understanding the domain:</p>
<ul>
<li><p>the concepts involved</p>
</li>
<li><p>the relationships between them</p>
</li>
<li><p>the rules that must remain consistent</p>
</li>
</ul>
<p>Only after that understanding emerges should tools and frameworks be introduced to support the system.</p>
<p>Tools are valuable when they solve real problems in the running application:</p>
<ul>
<li><p>persistence</p>
</li>
<li><p>messaging</p>
</li>
<li><p>scaling</p>
</li>
<li><p>reliability</p>
</li>
</ul>
<p>But they should not dictate the structure of the domain model.</p>
<hr />
<h2>Why Rich Domain Models Help</h2>
<p>A function-driven system begins with the <strong>domain model</strong>, not with architectural patterns or infrastructure.</p>
<p>Instead of starting with decisions like:</p>
<ul>
<li><p>microservices</p>
</li>
<li><p>event-driven architecture</p>
</li>
<li><p>CQRS</p>
</li>
<li><p>messaging platforms</p>
</li>
</ul>
<p>development begins with understanding the domain itself.</p>
<p>The first goal is to model the core concepts and their responsibilities.</p>
<p>Typical objects might represent concepts such as:</p>
<ul>
<li><p>Order</p>
</li>
<li><p>Customer</p>
</li>
<li><p>Shipment</p>
</li>
<li><p>Invoice</p>
</li>
</ul>
<p>These objects contain the behavior that defines the business logic.</p>
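<p>As a rough illustration of behavior living inside a domain object (the states and rules below are assumptions made up for this example, not requirements from any real domain):</p>
<pre><code class="language-java">// Illustrative: an Invoice expresses its own lifecycle rules,
// with no persistence, messaging, or framework attached.
class Invoice {
    enum Status { DRAFT, ISSUED, PAID }

    private Status status = Status.DRAFT;

    void issue() {
        if (status != Status.DRAFT) {
            throw new IllegalStateException("invoice already issued");
        }
        status = Status.ISSUED;
    }

    void markPaid() {
        if (status != Status.ISSUED) {
            throw new IllegalStateException("invoice must be issued before payment");
        }
        status = Status.PAID;
    }

    Status status() {
        return status;
    }
}
</code></pre>
<p>Nothing here depends on infrastructure. The object captures what an invoice is and what may happen to it, and that knowledge remains valid regardless of which tools eventually store or transport it.</p>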
<p>At this stage, the focus is entirely on implementing the <strong>core functionality of the application</strong>.</p>
<p>Infrastructure concerns are introduced only when they become necessary.</p>
<p>For example:</p>
<ul>
<li><p>persistence is added when data must be stored</p>
</li>
<li><p>messaging appears when asynchronous coordination is required</p>
</li>
<li><p>scaling mechanisms appear when load actually demands them</p>
</li>
</ul>
<p>In practice, many enterprise systems never reach the scale that requires complex distributed architectures.</p>
<p>For the majority of applications, a well-designed domain model within a cohesive system is entirely sufficient.</p>
<p>Only when real operational constraints appear should the architecture evolve technically.</p>
<p>This approach keeps the system aligned with the domain while avoiding premature technical complexity.</p>
<p>The result is software that grows <strong>organically around the business model</strong>, rather than being constrained by predefined architectural templates.</p>
<hr />
<h2>Closing Thought</h2>
<p>The software industry has become extremely good at producing applications that run.</p>
<p>Frameworks, templates, and standardized stacks make it possible to build complex systems faster than ever before.</p>
<p>But enterprise software is not a short-term product. It is a long-lived system that must evolve together with the business.</p>
<p>Designing such systems requires something different from a software factory. It requires starting with the domain, building a coherent model, and letting the architecture grow from the problem instead of from the tools.</p>
<p>Otherwise we keep producing the same solution for every problem:</p>
<p>another black Model T.</p>
<p>The problem with that approach is not aesthetic. It is economic.</p>
<p>When the architecture does not match the domain, the mismatch shows up in three places:</p>
<ul>
<li><p><strong>more engineering effort</strong> to implement and understand functionality</p>
</li>
<li><p><strong>higher long-term costs</strong> as development slows and operational complexity increases</p>
</li>
<li><p><strong>more fragile systems</strong> where business rules are scattered and difficult to reason about</p>
</li>
</ul>
<p>In other words, the wrong architectural vehicle does not merely look inelegant — it makes the system harder and more expensive to operate for the rest of its lifetime.</p>
<p><strong>Long-lived enterprise software requires something better than a one-size-fits-all production line.</strong></p>
<p><strong>It requires architectures that are designed around the domain they serve.</strong></p>
]]></content:encoded></item><item><title><![CDATA[How to Recognize an Effective Software Engineer]]></title><description><![CDATA[In software development, confidence is cheap.
You can memorize frameworks, quote architecture patterns, and repeat industry “best practices.” After a few years, it becomes easy to sound like an expert]]></description><link>https://blog.leonpennings.com/how-to-recognize-an-effective-software-engineer</link><guid isPermaLink="true">https://blog.leonpennings.com/how-to-recognize-an-effective-software-engineer</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[Java]]></category><category><![CDATA[software development]]></category><category><![CDATA[software quality]]></category><category><![CDATA[#Software Engineering Basics]]></category><dc:creator><![CDATA[Leon Pennings]]></dc:creator><pubDate>Thu, 05 Mar 2026 08:53:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6909c071175a29281d26fa0e/8a6c7293-335d-4017-b137-67eef8401b02.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In software development, confidence is cheap.</p>
<p>You can memorize frameworks, quote architecture patterns, and repeat industry “best practices.” After a few years, it becomes easy to sound like an expert.</p>
<p>But none of that proves someone is thinking like an engineer.</p>
<p>The real signal appears the moment their ideas are questioned.</p>
<p><strong>Real software engineers welcome scrutiny.</strong><br /><strong>Weak engineers resist it.</strong></p>
<p>That difference sounds subtle — until you see its consequences on the systems they build.</p>
<p>Engineering is not about defending ideas.<br />It’s about <strong>breaking them before reality does</strong>.</p>
<hr />
<h2>Engineering Is Adversarial — Against Your Own Ideas</h2>
<p>Every engineering discipline assumes designs are wrong until proven otherwise.</p>
<p>Bridges are stress-tested.<br />Aircraft parts are pushed to failure.<br />Mechanical designs go through brutal review cycles.</p>
<p>The goal is simple: <strong>find weaknesses early, when they’re cheap to fix.</strong></p>
<p>Software should work the same way.</p>
<p>Design discussions and architecture reviews exist to expose flaws before they reach production.</p>
<p>Strong engineers instinctively understand this.</p>
<p>When someone challenges a design, they respond with curiosity:</p>
<blockquote>
<p>“Good point. Let’s walk through that scenario.”</p>
</blockquote>
<p>Every challenge is free stress-testing of the idea.</p>
<hr />
<h2>The Moment Engineering Turns Into Dogma</h2>
<p>The opposite mindset is easy to spot. Ask a simple “why” question about a design, and the response sounds like:</p>
<ul>
<li><p>“Everyone does it this way.”</p>
</li>
<li><p>“That’s the best practice.”</p>
</li>
<li><p>“That’s the standard architecture.”</p>
</li>
<li><p>“That’s how the framework expects it.”</p>
</li>
</ul>
<p>Notice what’s missing: reasoning, trade-offs, context — replaced entirely by authority.</p>
<p>Dogma is attractive because it removes responsibility. If a rule exists, you can apply it everywhere without thinking.</p>
<p>But real systems are messy. Constraints differ. Scale differs. Failure modes differ. Engineering requires reasoning. Dogma replaces thinking with imitation.</p>
<hr />
<h2>Why This Matters</h2>
<p>This difference shapes the systems teams build.</p>
<p>Teams that welcome scrutiny produce software that is:</p>
<ul>
<li><p>simpler</p>
</li>
<li><p>easier to modify</p>
</li>
<li><p>resilient to edge cases</p>
</li>
<li><p>understandable by new engineers</p>
</li>
</ul>
<p>Teams that avoid scrutiny accumulate:</p>
<ul>
<li><p>accidental architecture</p>
</li>
<li><p>unnecessary abstraction layers</p>
</li>
<li><p>rigid patterns nobody understands</p>
</li>
<li><p>rules that exist only because “that’s how we do it here”</p>
</li>
</ul>
<p>Over time, these systems become fragile. Ironically, teams most confident in their “best practices” often maintain the most brittle codebases.</p>
<hr />
<h2>The Quiet Signal</h2>
<p>Strong engineers share one surprising trait: <strong>they are comfortable being wrong.</strong></p>
<p>Not because they lack confidence, but because they understand: <strong>ideas improve under pressure.</strong></p>
<p>If a flaw is pointed out in a discussion, it’s removed before it reaches production. That is not a loss. That is engineering at work.</p>
<hr />
<h2>The Real Difference</h2>
<p>Weak engineers defend solutions.<br />Strong engineers investigate problems.</p>
<p>The difference shows up in small moments:</p>
<ul>
<li><p>One asks, “Why are we doing this?”</p>
</li>
<li><p>The other says, “Because that’s the standard.”</p>
</li>
<li><p>One explores trade-offs.</p>
</li>
<li><p>The other quotes rules.</p>
</li>
<li><p>One treats ideas as hypotheses.</p>
</li>
<li><p>The other treats them as territory.</p>
</li>
</ul>
<p>Over time, the systems these two mindsets produce look very different.</p>
<hr />
<h2>The Most Dangerous Engineer on a Team</h2>
<p>The biggest risk is <strong>not the junior developer who makes mistakes, nor the architect who experiments too much</strong>.</p>
<p>It’s the engineer who <strong>cannot be questioned</strong>.</p>
<p>They will:</p>
<ul>
<li><p>defend ideas with authority, not reasoning</p>
</li>
<li><p>tell you, “This is just how framework X works”</p>
</li>
<li><p>repeat dogma instead of analyzing trade-offs</p>
</li>
<li><p>stop discussions before flaws are exposed</p>
</li>
</ul>
<p>The systems they leave behind look solid at first. Layers of rules, patterns, and abstractions give the illusion of control. But underneath, every flaw they refused to discuss becomes <strong>technical debt, fragile modules, and unexpected outages</strong>.</p>
<p>The “dangerous engineer” doesn’t intend to fail. They simply protect their ego at the expense of the system. Every skipped discussion, every dismissed critique, every “because we’ve always done it this way” is a seed of future problems.</p>
<p>By contrast, engineers who welcome scrutiny quietly build resilience. They treat questions as free stress tests, fix flaws before production, and teach teams to <strong>reason, not obey</strong>.</p>
<hr />
<h2>One Final Signal</h2>
<p>Ask yourself this next time you discuss a design or talk about implementation:</p>
<ul>
<li><p>Does this engineer <strong>invite questions</strong>, or do they shut them down?</p>
</li>
<li><p>Do they <strong>explain why a decision was made</strong>, or do they quote rules?</p>
</li>
<li><p>Are they <strong>curious about alternatives</strong>, or are they protecting territory?</p>
</li>
</ul>
<p>The answers tell you everything you need to know.</p>
<p>Weak engineers <strong>defend solutions</strong>.<br />Strong engineers <strong>investigate problems</strong>.</p>
<p>The strongest engineers don’t just write code — they <strong>engineer thinking itself</strong>.</p>
]]></content:encoded></item><item><title><![CDATA[Why Do You Need a Rich Domain Model? That's old school and not modern!]]></title><description><![CDATA[Most modern software is built in iterations.
We select a set of user stories, implement them in a sprint, verify the new functionality, and move on to the next set. Each sprint adds something. The sys]]></description><link>https://blog.leonpennings.com/why-do-you-need-a-rich-domain-model-that-s-old-school-and-not-modern</link><guid isPermaLink="true">https://blog.leonpennings.com/why-do-you-need-a-rich-domain-model-that-s-old-school-and-not-modern</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[enterprise software]]></category><category><![CDATA[Java]]></category><category><![CDATA[software development]]></category><dc:creator><![CDATA[Leon Pennings]]></dc:creator><pubDate>Mon, 02 Mar 2026 09:23:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6909c071175a29281d26fa0e/3c63cab2-70c2-4aa2-9ba4-8633699ddde4.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most modern software is built in iterations.</p>
<p>We select a set of user stories, implement them in a sprint, verify the new functionality, and move on to the next set. Each sprint adds something. The system grows feature by feature.</p>
<p>This works well — especially in the beginning.</p>
<p>The first stories are straightforward. The logic is easy to follow. The team still remembers where things live. Tests pass. Velocity is stable.</p>
<p>Nothing appears wrong.</p>
<h2>The Nature of Additive Development</h2>
<p>This way of working leads to what we can call <em>additive development</em>.</p>
<p>Each story adds behavior somewhere in the system:</p>
<ul>
<li><p>A new condition in a service.</p>
</li>
<li><p>An extra validation in a controller.</p>
</li>
<li><p>A new branch in an existing method.</p>
</li>
<li><p>A special case layered on top of an old one.</p>
</li>
</ul>
<p>Individually, these changes are reasonable. They solve the story at hand. They pass the tests written for that story.</p>
<p>But over time, something subtle happens.</p>
<p>Behavior becomes distributed.<br />Rules are implemented in different places.<br />Overlapping requirements are handled locally.<br />Old logic is adjusted without revisiting the bigger picture.</p>
<p>The system keeps working — but each sprint demands more effort to ensure nothing breaks. What was easy at sprint three can take hours of tracing, testing, and guesswork at sprint thirty. This is not about elegance; it is about operational risk.</p>
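<p>A minimal, hypothetical Java sketch (class names and thresholds are invented for illustration) of how the same business rule ends up duplicated across layers:</p>

```java
// Hypothetical sketch: the same "minimum order amount" rule implemented
// twice in different sprints -- once in the service, once in the controller.
class OrderService {
    String place(int amountInCents) {
        if (amountInCents < 1000) {   // sprint 19: rule "fixed" here
            return "rejected";
        }
        return "placed";
    }
}

class OrderController {
    private final OrderService service = new OrderService();

    String placeOrder(int amountInCents) {
        if (amountInCents < 500) {    // sprint 7: rule first added here
            return "rejected";
        }
        return service.place(amountInCents);
    }
}
```

<p>An order of 700 cents passes the controller’s check but is rejected by the service. Neither threshold is authoritative; the “real” rule exists only in someone’s memory.</p>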
<h2>Documentation Does Not Solve This</h2>
<p>You can document rules.</p>
<p>You can describe processes.</p>
<p>You can maintain diagrams.</p>
<p>But documentation does not enforce behavior.</p>
<p>Only code does.</p>
<p>If the code does not have a single authoritative place where business rules live, then documentation becomes commentary on a system that may already behave differently in different execution paths.</p>
<p>That is not a communication issue.<br />That is a structural issue.</p>
<h2>What Is a Rich Domain Model?</h2>
<p>A rich domain model takes a different approach.</p>
<p>In a rich domain model:</p>
<ul>
<li><p>Business behavior lives together with the data it governs.</p>
</li>
<li><p>Rules are owned by the model, not by surrounding services.</p>
</li>
<li><p>Objects expose meaningful operations instead of raw setters.</p>
</li>
<li><p>Invariants are centralized and protected.</p>
</li>
</ul>
<p>Instead of treating domain objects as passive data structures, they become active representations of the business.</p>
<p>A rich domain model is a single, coherent expression of how the business behaves.</p>
<p>There is one authoritative place where rules live. One place where state changes are defined. One place that describes what is allowed and what is not.</p>
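<p>A minimal Java sketch of what this can look like (a hypothetical <code>Order</code>, invented for illustration): behavior lives with the data it governs, state changes go through meaningful operations, and the invariants are enforced in exactly one place:</p>

```java
// Hypothetical sketch of a rich domain object (names invented for
// illustration): data and the rules governing it live together.
import java.util.ArrayList;
import java.util.List;

class Order {
    enum Status { OPEN, PLACED, CANCELLED }

    private Status status = Status.OPEN;
    private final List<String> lines = new ArrayList<>();

    // A meaningful operation instead of a raw setter.
    void addLine(String sku) {
        if (status != Status.OPEN) {
            throw new IllegalStateException("lines can only be added while the order is open");
        }
        lines.add(sku);
    }

    void place() {
        if (lines.isEmpty()) {   // invariant: an order is never empty
            throw new IllegalStateException("an order needs at least one line");
        }
        status = Status.PLACED;
    }

    void cancel() {
        if (status != Status.PLACED) {
            throw new IllegalStateException("only placed orders can be cancelled");
        }
        status = Status.CANCELLED;
    }

    Status status() {
        return status;
    }
}
```

<p>Every execution path that wants to change an order’s state has to go through these methods, so the rules cannot silently fork.</p>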
<h2>Additive vs Analytical Development</h2>
<p>The real difference appears when new stories arrive.</p>
<p>In additive development, the implicit question is:</p>
<blockquote>
<p>Where can I implement this story?</p>
</blockquote>
<p>In a rich domain model, the question becomes:</p>
<blockquote>
<p>What does this story mean for our understanding of the domain?</p>
</blockquote>
<p>That shift changes everything.</p>
<p>New requirements are not simply attached somewhere convenient. They are analyzed in relation to the existing model. Do they fit the current rules? Do they contradict an invariant? Do they reveal that our understanding was incomplete?</p>
<p>Sometimes the model absorbs the change easily.<br />Sometimes the model itself must evolve.</p>
<p>But the evolution happens in one central place.</p>
<p>Rich domain models are not difficult because they are complex. They are difficult because they require the team to take responsibility for understanding the domain instead of just implementing requirements.</p>
<p>That responsibility forces clarity.</p>
<p>Contradictions surface early.<br />Ambiguities become visible.<br />Hidden assumptions are challenged.</p>
<h2>Why Development Becomes Easier Over Time</h2>
<p>A common misconception is that rich domain models slow development down.</p>
<p>In practice, the opposite tends to happen.</p>
<p>When the model is coherent and well-shaped:</p>
<ul>
<li><p>The logic for new stories is often already partially present.</p>
</li>
<li><p>Behavior has a clear home.</p>
</li>
<li><p>You do not search the entire codebase to find where rules might live.</p>
</li>
<li><p>You do not duplicate validations in multiple layers.</p>
</li>
<li><p>You are not afraid to change existing code.</p>
</li>
</ul>
<p>Each sprint deepens the same model.</p>
<p>In additive systems, every sprint adds more places where logic might live. In rich systems, every sprint strengthens the same conceptual center.</p>
<p>Over time, additive systems become harder to extend.<br />Rich systems become easier to reason about and faster to extend and maintain.</p>
<p>The difference is not visible in the first few sprints. It becomes visible after months and years.</p>
<p><strong>But rich modeling is not automatic.</strong> It is hard because it requires engineers to think in terms of behavior, not just features. Teams without this skill may fail to capture the benefits, or worse, create pseudo-rich models that combine the worst of both worlds: scattered behavior and extra complexity.</p>
<h2>The Real Benefit</h2>
<p>The greatest advantage of a rich domain model is not technical elegance.</p>
<p>It is clarity.</p>
<p>When business behavior is centralized and explicit, the software becomes a mirror of the domain. If two requirements contradict each other, the model makes that tension visible. The team can address it before the contradiction becomes embedded in production logic.</p>
<p>The system does not merely accumulate features.<br />It accumulates understanding.</p>
<p>And in long-lived software — the kind that represents real business processes and institutional knowledge — that difference determines whether the system remains an asset or slowly turns into a burden.</p>
]]></content:encoded></item><item><title><![CDATA[A Practical Guide to AI Code Generation]]></title><description><![CDATA[The Seductive Promise: Build in Days, Not Months
There is something undeniably compelling about modern AI code generation.
You describe a feature.It scaffolds the controllers.It writes the repository ]]></description><link>https://blog.leonpennings.com/a-practical-guide-to-ai-code-generation</link><guid isPermaLink="true">https://blog.leonpennings.com/a-practical-guide-to-ai-code-generation</guid><category><![CDATA[AI]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[Java]]></category><category><![CDATA[Application Development]]></category><category><![CDATA[software architecture]]></category><dc:creator><![CDATA[Leon Pennings]]></dc:creator><pubDate>Thu, 26 Feb 2026 07:15:05 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6909c071175a29281d26fa0e/f515d3ba-cc51-4cfd-8e86-6a57135409aa.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>The Seductive Promise: Build in Days, Not Months</h2>
<p>There is something undeniably compelling about modern AI code generation.</p>
<p>You describe a feature.<br />It scaffolds the controllers.<br />It writes the repository layer.<br />It generates DTOs, tests, migrations, and configuration.</p>
<p>What used to take weeks can now appear in minutes.</p>
<p>For greenfield projects especially, the acceleration feels almost unfair. A single developer can prototype at a pace that previously required a small team. Refactoring feels assisted rather than manual. Exploration becomes interactive.</p>
<p>If you measure success by <em>initial velocity</em>, AI looks like a revolution.</p>
<p>And in many contexts, it is.</p>
<p>But velocity is only one axis of software quality.</p>
<p>The real question is not:</p>
<blockquote>
<p>“Can AI generate my application?”</p>
</blockquote>
<p>It clearly can.</p>
<p>The real question is:</p>
<blockquote>
<p>“What happens after generation?”</p>
</blockquote>
<p>That is where engineering discipline becomes decisive.</p>
<hr />
<h2>1. The Licensing Risk</h2>
<p>This topic tends to be exaggerated and underestimated at the same time.</p>
<h3>What is the concern?</h3>
<p>AI models are trained on large corpora of public and licensed code. That creates two potential risks:</p>
<ol>
<li><p><strong>Inbound copyright contamination</strong><br />The model might generate code substantially similar to licensed material.</p>
</li>
<li><p><strong>Outbound trade secret exposure</strong><br />You might paste proprietary logic into a cloud system without adequate contractual protection.</p>
</li>
</ol>
<p>The first risk is statistically low but non-zero.<br />The second risk depends entirely on how and where you use the tool.</p>
<h3>Practical mitigation</h3>
<p>Treat AI output like third-party external code:</p>
<ul>
<li><p>Review it rigorously.</p>
</li>
<li><p>Refactor it into your own architectural style.</p>
</li>
<li><p>Avoid accepting large structured blocks verbatim.</p>
</li>
<li><p>Run automated license scanning in CI.</p>
</li>
<li><p>Avoid pasting core proprietary algorithms into consumer-tier tools.</p>
</li>
</ul>
<p>In most commercial environments, this is manageable with policy and review discipline. It is not a reason to avoid AI entirely — but it is a reason to use it intentionally.</p>
<hr />
<h2>2. The Loss of Engineering Control</h2>
<p>This is the more serious risk.</p>
<p>AI optimizes locally.</p>
<p>It does not:</p>
<ul>
<li><p>Maintain architectural invariants across months of development.</p>
</li>
<li><p>Enforce consistency of abstraction boundaries.</p>
</li>
<li><p>Protect aggregate integrity.</p>
</li>
<li><p>Guard against semantic drift in business rules.</p>
</li>
</ul>
<p>If a developer merely accepts generated code, they are no longer designing the system. They are curating output.</p>
<p>Over time this leads to:</p>
<ul>
<li><p>Inconsistent abstractions</p>
</li>
<li><p>Leaky layers</p>
</li>
<li><p>Duplication of logic</p>
</li>
<li><p>Hidden coupling</p>
</li>
<li><p>Architectural entropy</p>
</li>
</ul>
<p>The system “works,” but its internal coherence degrades.</p>
<p>And once that coherence is gone, refactoring cost increases non-linearly.</p>
<p>AI accelerates code production.<br />It does not automatically accelerate architectural reasoning.</p>
<p>If that reasoning is outsourced, engineering control erodes.</p>
<hr />
<h2>3. Additive Development Without a Domain Model</h2>
<p>AI makes additive development dangerously easy.</p>
<p>A user story arrives:</p>
<blockquote>
<p>“As a user, I want X.”</p>
</blockquote>
<p>The developer prompts:</p>
<blockquote>
<p>“Implement X in Spring Boot.”</p>
</blockquote>
<p>The feature appears. It compiles. It passes basic tests.</p>
<p>Next sprint, another story.<br />Another prompt.<br />Another isolated feature.</p>
<p>What is missing?</p>
<ul>
<li><p>No shared domain language.</p>
</li>
<li><p>No explicit invariants.</p>
</li>
<li><p>No aggregate boundaries.</p>
</li>
<li><p>No semantic model anchoring decisions.</p>
</li>
<li><p>No systemic review of how new logic interacts with existing rules.</p>
</li>
</ul>
<p>The application grows by accumulation, not by modeling.</p>
<p>Each user story is implemented in isolation.</p>
<p>This works surprisingly well in simple domains.</p>
<p>It fails gradually — and then suddenly — in complex domains.</p>
<p>Conflicting business logic emerges.<br />Edge cases multiply.<br />Validation rules scatter.<br />Performance assumptions break.</p>
<p>The problem is not that AI wrote the code.</p>
<p>The problem is that no one owned the conceptual integrity of the system.</p>
<p>AI amplifies additive development because it reduces the friction that previously forced engineers to think before coding.</p>
<p>When implementation becomes trivial, modeling must become deliberate.</p>
<hr />
<h2>4. Where AI Truly Excels</h2>
<p>It would be intellectually dishonest to ignore where AI is extraordinarily effective.</p>
<p>AI code generation is highly valuable for:</p>
<ul>
<li><p>Refactoring suggestions</p>
</li>
<li><p>Syntax transformations</p>
</li>
<li><p>Exploratory prototyping</p>
</li>
<li><p>Learning unfamiliar APIs</p>
</li>
<li><p>Discussing domain trade-offs</p>
</li>
<li><p>UX best practices and semantic HTML scaffolding</p>
</li>
</ul>
<p>Notice what is absent here:</p>
<p>Not “boilerplate.”<br />Not DTO factories.<br />Not mindless CRUD scaffolding.</p>
<p>Those patterns often signal the absence of a meaningful domain model in the first place.</p>
<p>AI shines when it augments cognition — not when it replaces modeling.</p>
<p>In low-complexity systems, the development approach often matters less. If logic is shallow and bounded, AI-generated solutions may be entirely sufficient.</p>
<p>The danger increases as:</p>
<ul>
<li><p>Domain complexity increases</p>
</li>
<li><p>Invariants become subtle</p>
</li>
<li><p>Concurrency becomes relevant</p>
</li>
<li><p>Transactions carry business meaning</p>
</li>
<li><p>Regulatory constraints appear</p>
</li>
<li><p>Performance characteristics matter</p>
</li>
</ul>
<p>The more your system embodies meaning rather than mechanics, the more indispensable human modeling becomes.</p>
<hr />
<h2>5. AI as an Amplifier, Not a Replacement</h2>
<p>AI is a force multiplier.</p>
<p>It accelerates:</p>
<ul>
<li><p>Exploration</p>
</li>
<li><p>Refactoring</p>
</li>
<li><p>Learning</p>
</li>
<li><p>Iteration speed</p>
</li>
</ul>
<p>It does not replace:</p>
<ul>
<li><p>Judgment</p>
</li>
<li><p>Modeling</p>
</li>
<li><p>Trade-off analysis</p>
</li>
<li><p>Responsibility</p>
</li>
<li><p>Architectural ownership</p>
</li>
</ul>
<p>A healthy pattern looks like this:</p>
<ol>
<li><p>Model the domain deliberately.</p>
</li>
<li><p>Define invariants explicitly.</p>
</li>
<li><p>Decide architectural boundaries consciously.</p>
</li>
<li><p>Use AI to accelerate implementation within that framework.</p>
</li>
<li><p>Review and reshape generated output.</p>
</li>
<li><p>Maintain systemic coherence.</p>
</li>
</ol>
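<p>Steps 2 and 4 can be sketched in a few lines of Java (a hypothetical discount rule, invented for illustration): the engineer owns the invariant as an explicit guard, while the calculation behind it is the part a tool may generate and regenerate:</p>

```java
// Hypothetical sketch: a team-owned invariant wrapped around an
// AI-assisted implementation detail.
class DiscountPolicy {
    // Invariant, decided by the team: a discount never exceeds 30%.
    private static final double MAX_DISCOUNT = 0.30;

    double discountFor(int loyaltyYears) {
        double proposed = generatedHeuristic(loyaltyYears);
        if (proposed < 0.0 || proposed > MAX_DISCOUNT) {
            // Regenerated code cannot silently violate the rule.
            throw new IllegalStateException("discount out of range: " + proposed);
        }
        return proposed;
    }

    // The part a tool might produce -- free to change, but fenced in above.
    private double generatedHeuristic(int loyaltyYears) {
        return Math.min(0.05 * loyaltyYears, MAX_DISCOUNT);
    }
}
```

<p>The generated heuristic can be rewritten as often as prompting allows; the guard it must satisfy stays under human ownership.</p>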
<p>An unhealthy pattern looks like this:</p>
<ol>
<li><p>Prompt.</p>
</li>
<li><p>Accept.</p>
</li>
<li><p>Ship.</p>
</li>
<li><p>Adapt only where necessary.</p>
</li>
<li><p>Repeat.</p>
</li>
</ol>
<p>The difference is not in the tool.<br />It is in who remains accountable for the system’s integrity.</p>
<hr />
<h2>6. AI Is Not a Substitute for Engineering Expertise</h2>
<p>There is one final misconception worth addressing.</p>
<p>AI is not a gap-filler for missing engineering expertise.</p>
<p>It can accelerate someone who already understands:</p>
<ul>
<li><p>Abstraction boundaries</p>
</li>
<li><p>Invariants</p>
</li>
<li><p>Transactional semantics</p>
</li>
<li><p>Coupling and cohesion</p>
</li>
<li><p>Long-term maintainability</p>
</li>
</ul>
<p>But it cannot compensate for the absence of that understanding.</p>
<p>Framework knowledge is not engineering expertise.<br />Pattern memorization is not engineering expertise.<br />Syntax fluency is not engineering expertise.</p>
<p>Engineering expertise is, among other things:</p>
<ul>
<li><p>The ability to model a domain precisely.</p>
</li>
<li><p>The discipline to protect invariants.</p>
</li>
<li><p>The skill to keep abstractions aligned with business meaning.</p>
</li>
<li><p>The judgment to decide what <em>not</em> to build.</p>
</li>
<li><p>The restraint to prevent accidental complexity.</p>
</li>
<li><p>The awareness to detect when the model is drifting.</p>
</li>
</ul>
<p>AI can generate structures.<br />It cannot guarantee conceptual integrity.</p>
<p>If a developer lacks domain modeling skills, AI will not fix that gap.<br />It will often amplify it — by making it easier to produce more code faster.</p>
<p>And more code without a coherent model does not create better systems.<br />It creates faster entropy.</p>
<hr />
<h2>Conclusion</h2>
<p>AI is a remarkable accelerator.</p>
<p>But acceleration without direction is drift.</p>
<p>AI amplifies the strengths of a disciplined engineer.<br />It also amplifies the weaknesses of an undisciplined one.</p>
<p>The real competitive advantage in the age of AI is not typing speed.</p>
<p>It is the ability to design systems whose internal logic remains coherent as they evolve.</p>
<p>That remains a deeply human responsibility.</p>
]]></content:encoded></item><item><title><![CDATA[The Risk of Ivory Tower Software Development — and Why AI Amplifies It]]></title><description><![CDATA[Software design does not fail because people lack ideas.
It fails because ideas are not tested against reality early enough.
One of the most powerful feedback mechanisms in software development is not]]></description><link>https://blog.leonpennings.com/the-risk-of-ivory-tower-software-development-and-why-ai-amplifies-it</link><guid isPermaLink="true">https://blog.leonpennings.com/the-risk-of-ivory-tower-software-development-and-why-ai-amplifies-it</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[software development]]></category><category><![CDATA[Java]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Leon Pennings]]></dc:creator><pubDate>Mon, 23 Feb 2026 07:51:27 GMT</pubDate><enclosure url="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6909c071175a29281d26fa0e/7d4e41be-649e-4ab2-91c1-82319abdbb51.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Software design does not fail because people lack ideas.</p>
<p>It fails because ideas are not tested against reality early enough.</p>
<p>One of the most powerful feedback mechanisms in software development is not architecture review, not UML diagrams, and not planning sessions.</p>
<p>It is writing code.</p>
<p>Actual coding is not merely implementation.<br />It is a continuous validation of the design against the domain.</p>
<p>When that feedback loop is weakened, design drifts away from reality. When it is removed entirely, software starts to “run in PowerPoint.”</p>
<p>AI-assisted development increases this risk dramatically.</p>
<hr />
<h2>Coding Is a Feedback Mechanism</h2>
<p>There is a common misconception that coding is a mechanical activity — translating design into syntax.</p>
<p>In reality, coding is where design meets constraint.</p>
<p>When implementing a rich domain model:</p>
<ul>
<li><p>Edge cases surface.</p>
</li>
<li><p>Invariants become harder than expected.</p>
</li>
<li><p>Responsibilities feel misplaced.</p>
</li>
<li><p>Names no longer fit.</p>
</li>
<li><p>Assumptions break.</p>
</li>
</ul>
<p>Every friction point during implementation is feedback:</p>
<ul>
<li><p>The model might be incomplete.</p>
</li>
<li><p>The model might be wrong.</p>
</li>
<li><p>The abstraction might be too shallow.</p>
</li>
<li><p>The business concept might be misunderstood.</p>
</li>
</ul>
<p>Writing code forces the developer to confront the domain at a granular level.</p>
<p>This confrontation produces learning.</p>
<p>Coding builds understanding as much as it builds functionality.</p>
<hr />
<h2>The Domain Enables Functionality — It Is Not the Same</h2>
<p>A common failure mode in story-driven development is conflating the domain with required behavior.</p>
<p>But the domain is not the list of user stories.</p>
<p>The domain is the conceptual structure that makes those stories possible.</p>
<p>Describing a business domain purely by its features is like describing an animal only by its behavior:</p>
<ul>
<li><p>“It runs.”</p>
</li>
<li><p>“It eats.”</p>
</li>
<li><p>“It sleeps.”</p>
</li>
</ul>
<p>Those are outputs.</p>
<p>They do not explain structure.</p>
<p>A domain model is the anatomy.<br />Stories are movements.</p>
<p>If the anatomy is wrong, movements will conflict.</p>
<hr />
<h2>Modeling Is Iterative Learning</h2>
<p>In simple domains, the gap between initial model and actual domain may be small.</p>
<p>In complex domains, it rarely is.</p>
<p>Each new functional story reveals something:</p>
<ul>
<li><p>A missing state.</p>
</li>
<li><p>An implicit rule.</p>
</li>
<li><p>A conflicting assumption.</p>
</li>
<li><p>An incorrect boundary.</p>
</li>
</ul>
<p>A well-structured domain model absorbs that feedback.<br />It evolves deliberately.</p>
<p>If coding is happening, this evolution is continuous.</p>
<p>The model is constantly tested against real constraints.</p>
<p>Without coding involvement, that feedback loop weakens.</p>
<hr />
<h2>The Ivory Tower Effect</h2>
<p>Ivory tower software development occurs when:</p>
<ul>
<li><p>Strategic or senior decisions are made about detailed design,</p>
</li>
<li><p>Without direct involvement in implementation,</p>
</li>
<li><p>And without experiencing the friction of coding.</p>
</li>
</ul>
<p>At that point, design decisions are evaluated abstractly rather than empirically.</p>
<p>Ideas look clean in diagrams.<br />They look coherent in slide decks.</p>
<p>But code exposes reality:</p>
<ul>
<li><p>What is awkward?</p>
</li>
<li><p>What is redundant?</p>
</li>
<li><p>What does not compose?</p>
</li>
<li><p>What contradicts existing invariants?</p>
</li>
</ul>
<p>Without that confrontation, design becomes speculative.</p>
<p>When feedback from implementers is ignored — especially when authority is hierarchical — the disconnect grows.</p>
<p>This is the real-world version of the “runs in PowerPoint” meme:</p>
<p>The system appears coherent conceptually, but its operational behavior tells a different story.</p>
<hr />
<h2>AI-Assisted Development: Removing the Friction</h2>
<p>AI-assisted development changes the dynamic in a subtle but profound way.</p>
<p>Traditionally, flawed design reveals itself through implementation pain.</p>
<p>With AI:</p>
<ul>
<li><p>Complex logic can be generated quickly.</p>
</li>
<li><p>Edge-case handling can be synthesized.</p>
</li>
<li><p>Boilerplate friction disappears.</p>
</li>
<li><p>Implementation obstacles shrink.</p>
</li>
</ul>
<p>The system “works.”</p>
<p>But friction is not just an inconvenience.<br />It is feedback.</p>
<p>If AI absorbs that friction:</p>
<ul>
<li><p>The developer experiences less resistance.</p>
</li>
<li><p>The design is challenged less directly.</p>
</li>
<li><p>Structural weaknesses can be papered over with generated complexity.</p>
</li>
</ul>
<p>AI is extremely good at making something function.</p>
<p>It is not inherently responsible for questioning whether the underlying domain model is coherent.</p>
<p>If the model is weak, AI will happily implement increasingly elaborate logic around it.</p>
<p>This is where divergence accelerates.</p>
<hr />
<h2>Where Do the Lessons Go?</h2>
<p>In traditional coding:</p>
<ul>
<li><p>A developer struggles with an awkward method.</p>
</li>
<li><p>Realizes responsibility is misplaced.</p>
</li>
<li><p>Refactors the model.</p>
</li>
<li><p>Simplifies behavior.</p>
</li>
<li><p>Strengthens invariants.</p>
</li>
</ul>
<p>That learning is embodied in the design.</p>
<p>With prompt-driven development:</p>
<ul>
<li><p>The developer asks for feature X.</p>
</li>
<li><p>The AI produces working logic.</p>
</li>
<li><p>Tests are generated.</p>
</li>
<li><p>The story is complete.</p>
</li>
</ul>
<p>But where did the design feedback go?</p>
<p>If the developer does not deliberately step back and re-evaluate the model, no automatic mechanism ensures that lessons learned during implementation are incorporated into domain structure.</p>
<p>AI can generate working code faster than teams can reflect on model integrity.</p>
<p>Velocity increases. Reflection often does not.</p>
<hr />
<h2>Preventing Divergence Across Stories</h2>
<p>This becomes especially critical for core concepts like <code>Order</code>.</p>
<p>Across multiple user stories:</p>
<ul>
<li><p>Orders gain new states.</p>
</li>
<li><p>Exceptions accumulate.</p>
</li>
<li><p>Special-case transitions appear.</p>
</li>
<li><p>Cross-cutting flags are introduced.</p>
</li>
</ul>
<p>If each story is implemented additively — especially via prompts — divergence becomes likely:</p>
<ul>
<li><p>Different stories interpret the same concept differently.</p>
</li>
<li><p>Invariants become conditional.</p>
</li>
<li><p>The model becomes a collection of patches.</p>
</li>
</ul>
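<p>In Java, the end state of that drift often looks something like this hypothetical sketch (flags and story numbers invented for illustration):</p>

```java
// Hypothetical sketch of a model that has become a collection of patches:
// each flag was added by a different story, and the invariant is now conditional.
class Order {
    String status = "PLACED";
    boolean underFraudReview = false; // added for story 41
    boolean adminOverride = false;    // added for story 57
    boolean legacyImport = false;     // added for story 63

    boolean canShip() {
        // Which combinations are actually legal? The model no longer says.
        if (adminOverride) {
            return true;
        }
        if (underFraudReview && !legacyImport) {
            return false;
        }
        return status.equals("PLACED");
    }
}
```

<p>Each condition was locally correct when it was added; together they define a lifecycle nobody designed.</p>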
<p>To prevent this:</p>
<ol>
<li><p>Every new story must begin with model validation:</p>
<ul>
<li><p>Does the current <code>Order</code> model still represent reality?</p>
</li>
<li><p>Are its invariants still coherent?</p>
</li>
<li><p>Does the lifecycle still make sense?</p>
</li>
</ul>
</li>
<li><p>Implementation friction must be examined, not bypassed.</p>
<ul>
<li>If something feels awkward, the model may be wrong.</li>
</ul>
</li>
<li><p>AI must be used to refine models — not only extend behavior.</p>
</li>
</ol>
<p>If stories accumulate faster than the model evolves, divergence is inevitable.</p>
<hr />
<h2>The Core Risk</h2>
<p>Ivory tower development disconnects design from implementation reality.</p>
<p>AI-assisted development risks disconnecting implementation from conceptual learning.</p>
<p>Both weaken the feedback loop between:</p>
<ul>
<li><p>Model</p>
</li>
<li><p>Code</p>
</li>
<li><p>Domain understanding</p>
</li>
</ul>
<p>When that loop weakens, conceptual drift accelerates.</p>
<p>And when core concepts drift, conflicting business logic is only a matter of time.</p>
<hr />
<h2>AI as Amplifier — Not Villain</h2>
<p>AI is not the enemy here.</p>
<p>It is an amplifier.</p>
<p>And amplification works in both directions.</p>
<p>In environments where teams already treat coding as a feedback mechanism — where friction triggers reflection and models evolve deliberately — AI becomes a powerful collaborator. It accelerates exploration, surfaces alternatives, fills knowledge gaps, and acts as an always-available discussion partner.</p>
<p>But in environments where development is already procedural — where stories are implemented sequentially without structural re-evaluation — AI does not change the underlying dynamic.</p>
<p>It scales it.</p>
<p>If the dominant habit is:</p>
<ul>
<li><p>Add a story</p>
</li>
<li><p>Make it pass</p>
</li>
<li><p>Move on</p>
</li>
</ul>
<p>Then AI simply increases throughput.</p>
<p>The structural weakness was already there.<br />AI makes it faster, cheaper, and less visibly painful.</p>
<p>The real danger is not AI.</p>
<p>The danger is bypassing reflection at scale.</p>
<p>AI removes mechanical friction.<br />That is progress.</p>
<p>But if the organization never had a practice of responding to conceptual friction, AI ensures that absence compounds silently.</p>
<p>It does not introduce drift.<br />It accelerates whatever discipline — or lack of discipline — already exists.</p>
<hr />
<h2>The Real Question</h2>
<p>A reasonable counter-argument is:</p>
<p>“If the stories work, why does model divergence matter?”</p>
<p>Individually, they do work.</p>
<p>Each story passes its tests.<br />Each feature behaves correctly within its own context.</p>
<p>The problem is that stories are not independent.</p>
<p>They operate on shared concepts, shared data, and shared invariants.</p>
<p>In a real system — whether a distributed microservice landscape or a monolith — every story modifies the same underlying domain.</p>
<p>And when each story subtly reshapes that domain without structural re-evaluation, coherence erodes.</p>
<p>The result is not immediate failure.</p>
<p>It is unpredictability.</p>
<p>Cancellation rules interact strangely with refund logic.<br />Fraud review states conflict with shipment transitions.<br />Administrative overrides bypass invariants assumed elsewhere.<br />Flags introduced for one use case leak into others.</p>
<p>Nothing is obviously broken.</p>
<p>But behavior becomes conditional, context-dependent, and difficult to reason about.</p>
<p><strong>The system no longer behaves as a unified model. It behaves as accumulated exceptions.</strong></p>
<p>A rich domain model exists precisely to prevent this.</p>
<p>It ensures that integration is intentional — not accidental.</p>
<p>It provides a single, coherent definition of:</p>
<ul>
<li><p>What an Order is</p>
</li>
<li><p>What states are valid</p>
</li>
<li><p>What transitions are legal</p>
</li>
<li><p>What invariants must always hold</p>
</li>
</ul>
<p>Without that coherence, stories remain locally correct but globally inconsistent.</p>
<p>And that inconsistency compounds.</p>
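<p>Making the legal transitions explicit does not require much machinery. A hypothetical Java sketch (states invented for illustration):</p>

```java
// Hypothetical sketch: one authoritative definition of which order-state
// transitions are legal, so every story confronts the same lifecycle.
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

enum OrderStatus {
    PLACED, PAID, SHIPPED, CANCELLED;

    private static final Map<OrderStatus, Set<OrderStatus>> LEGAL =
            new EnumMap<>(OrderStatus.class);

    static {
        LEGAL.put(PLACED, EnumSet.of(PAID, CANCELLED));
        LEGAL.put(PAID, EnumSet.of(SHIPPED, CANCELLED));
        LEGAL.put(SHIPPED, EnumSet.noneOf(OrderStatus.class));
        LEGAL.put(CANCELLED, EnumSet.noneOf(OrderStatus.class));
    }

    boolean canTransitionTo(OrderStatus next) {
        return LEGAL.get(this).contains(next);
    }
}
```

<p>A story that needs a new state or transition must change this one definition, which forces exactly the deliberate model validation described above.</p>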
<hr />
<h2>Final Thought</h2>
<p>Software rarely collapses because individual features fail.</p>
<p>It becomes fragile because the relationships between features stop making sense.</p>
<p>Local correctness is easy to achieve.</p>
<p>Global coherence is not.</p>
<p>A system can pass every story-level test and still drift into unpredictability — not because the code is broken, but because the underlying model has fragmented.</p>
<p>When shared concepts evolve without deliberate validation, integration becomes accidental.</p>
<p>And accidental integration always produces conditional behavior, hidden coupling, and contradictory assumptions.</p>
<p>Coding is the mechanism that keeps this from happening.</p>
<p>It forces ideas to confront constraints.<br />It exposes where invariants strain.<br />It reveals when abstractions no longer hold.</p>
<p>That confrontation is not an inconvenience.</p>
<p>It is the stabilizing force of design.</p>
<p>AI does not eliminate that force — but it can soften it.</p>
<p>When friction is absorbed automatically and stories ship without structural pause, learning decouples from implementation.</p>
<p>Velocity rises.<br />Reflection does not necessarily follow.</p>
<p>The result is not immediate failure.</p>
<p>It is gradual loss of predictability.</p>
<p>And predictability is what makes software reliable, extensible, and economically sustainable.</p>
<p>AI will continue to increase throughput.</p>
<p>The question is whether teams will increase structural discipline at the same pace.</p>
<p>Because in the end, software does not degrade when ideas are ambitious.</p>
<p>It degrades when feedback is ignored.</p>
<p>And the most important feedback loop in software development is still the one between:</p>
<p>Model.<br />Code.<br />Reality.</p>
<p>Preserve that loop — and AI becomes a multiplier of clarity.</p>
<p>Neglect it — and AI becomes a multiplier of drift.</p>
]]></content:encoded></item><item><title><![CDATA[Preventing Conflicting Business Logic in Growing Software Systems]]></title><description><![CDATA[Most large systems do not fail because individual requirements were implemented incorrectly. They fail because many requirements were implemented correctly — but inconsistently.
When software evolves s]]></description><link>https://blog.leonpennings.com/preventing-conflicting-business-logic-in-growing-software-systems</link><guid isPermaLink="true">https://blog.leonpennings.com/preventing-conflicting-business-logic-in-growing-software-systems</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[Java]]></category><category><![CDATA[software development]]></category><category><![CDATA[software architecture]]></category><dc:creator><![CDATA[Leon Pennings]]></dc:creator><pubDate>Sat, 21 Feb 2026 09:02:00 GMT</pubDate><enclosure url="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6909c071175a29281d26fa0e/9e4eeb60-748c-4d8c-b4e5-ccfe846d24af.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most large systems do not fail because individual requirements were implemented incorrectly.<br />They fail because many requirements were implemented correctly — but inconsistently.</p>
<p>When software evolves story by story, business rules accumulate. Each new feature works in isolation. Tests pass. Code reviews succeed. Deployment looks stable.</p>
<p>And yet, over time, the system’s behavior becomes contradictory.</p>
<p>The root cause is structural: business rules are implemented without a mechanism that forces them to reconcile.</p>
<hr />
<h2>The Structural Risk of Story-Driven Development</h2>
<p>Story-driven development optimizes for delivering isolated behavior:</p>
<ul>
<li><p>A story defines a scenario.</p>
</li>
<li><p>A service implements that scenario.</p>
</li>
<li><p>Tests verify the scenario.</p>
</li>
</ul>
<p>This process is efficient. It is also structurally dangerous in larger systems.</p>
<p>User stories describe <em>cases</em>.<br />They do not define <em>invariants</em>.</p>
<p>Consider an <code>Order</code> concept:</p>
<ul>
<li><p>Closed orders cannot be modified.</p>
</li>
<li><p>VIP customers may modify any order.</p>
</li>
<li><p>Orders older than 30 days cannot be changed.</p>
</li>
</ul>
<p>Each rule is valid. Each can be implemented cleanly inside a dedicated service.</p>
<p>But when rules are implemented procedurally — distributed across services — nothing forces them to converge. Each addition extends the system laterally.</p>
<p>The result is an accumulation of overlapping constraints without reconciliation.</p>
<hr />
<h2>Fat Services and Distributed Authority</h2>
<p>In procedural, service-centric systems:</p>
<ul>
<li><p>Data structures are passive.</p>
</li>
<li><p>Services orchestrate and mutate state.</p>
</li>
<li><p>Business rules are embedded in workflows.</p>
</li>
<li><p>Multiple services modify the same conceptual entity.</p>
</li>
</ul>
<p>There is no single authority over state transitions.</p>
<p>When a new rule is introduced, developers must manually discover where related logic already exists. Conflict detection depends on:</p>
<ul>
<li><p>Personal experience</p>
</li>
<li><p>Code search</p>
</li>
<li><p>Team memory</p>
</li>
<li><p>Review discipline</p>
</li>
</ul>
<p>As the system grows, this becomes untenable.</p>
<p>The number of overlapping scenarios increases. The surface area of potential rule interaction expands. But the architecture provides no corresponding increase in structural protection against contradiction.</p>
<p>Conflicts are not prevented. They are postponed.</p>
<hr />
<h2>Fragmentation Does Not Remove Conflict</h2>
<p>A common reaction to growing complexity is to split logic into smaller, independent services.</p>
<p>Locally, this feels like improvement:</p>
<ul>
<li><p>Each service becomes focused.</p>
</li>
<li><p>Code appears cleaner.</p>
</li>
<li><p>Responsibilities seem separated.</p>
</li>
</ul>
<p>But business rules do not stop interacting simply because code is divided.</p>
<p>Instead of colliding inside a single module, contradictory rules now manifest across services. The conflict moves from design time to runtime.</p>
<p>The consequences appear in persisted data:</p>
<ul>
<li><p>Invalid state combinations</p>
</li>
<li><p>Inconsistent status transitions</p>
</li>
<li><p>Manual reconciliation processes</p>
</li>
<li><p>Compensating logic layered on top of earlier logic</p>
</li>
</ul>
<p>Once inconsistent production data exists, repair is no longer a refactoring problem. It becomes an operational problem. Historical correctness may be lost. Migration may be partial. Business trust may be affected.</p>
<p>Avoiding design friction early often produces operational friction later — at a much higher cost.</p>
<hr />
<h2>The Testing Illusion</h2>
<p>Automated tests do not automatically protect against this failure mode.</p>
<p>Story-driven development naturally produces story-scoped tests:</p>
<ul>
<li><p>Given a scenario</p>
</li>
<li><p>When an action occurs</p>
</li>
<li><p>Then the expected outcome is produced</p>
</li>
</ul>
<p>These tests confirm that each individual requirement behaves as intended.</p>
<p>They do not confirm that all rules concerning a concept remain mutually consistent.</p>
<p>As stories accumulate:</p>
<ul>
<li><p>The test suite grows.</p>
</li>
<li><p>Coverage metrics improve.</p>
</li>
<li><p>Confidence increases.</p>
</li>
</ul>
<p>But what increases is scenario coverage — not invariant coverage.</p>
<p>All tests can pass while the system violates a broader constraint that was never explicitly modeled.</p>
<p>Testing mirrors structure.<br />If invariants are not structurally centralized, they are rarely centrally tested.</p>
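<p>A small, hypothetical Java sketch makes the illusion concrete: two story-scoped checks both pass, yet together they violate an invariant that was never modeled. All names here are invented for illustration:</p>

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the testing illusion: every scenario check is green,
// yet an unmodeled invariant ("closed orders never change") no longer holds.
public class TestingIllusion {

    // Passive data carrier, as in service-centric designs.
    static class Order {
        boolean closed;
        List<String> lines = new ArrayList<>();
    }

    // Story 1: "Closed orders cannot be modified" -- enforced in one service.
    static boolean tryAddLine(Order o, String line) {
        if (o.closed) return false;
        o.lines.add(line);
        return true;
    }

    // Story 2: "VIP customers may modify any order" -- enforced in another
    // service, which bypasses the first service's check entirely.
    static void vipAddLine(Order o, String line) {
        o.lines.add(line);
    }

    public static void main(String[] args) {
        // Story-scoped test 1 passes:
        Order a = new Order();
        a.closed = true;
        System.out.println("story 1 holds: " + !tryAddLine(a, "x"));

        // Story-scoped test 2 passes:
        Order b = new Order();
        b.closed = true;
        vipAddLine(b, "y");
        System.out.println("story 2 holds: " + (b.lines.size() == 1));

        // The invariant was never written down, so no test guards it:
        System.out.println("invariant violated: " + (b.closed && !b.lines.isEmpty()));
    }
}
```

<p>Each scenario test verifies its own story faithfully. The contradiction only becomes visible when the invariant itself is stated and tested.</p>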
<hr />
<h2>What Is Missing: Invariant Boundaries and Model Validation</h2>
<p>Business rules are constraints over allowed state transitions.</p>
<p>To prevent contradiction, those constraints must converge at a single structural boundary.</p>
<p>But something even more fundamental is often missing:</p>
<p><strong>Before implementing a new story, the domain model itself must be validated.</strong></p>
<p>In procedural, story-driven development, implementation typically begins like this:</p>
<ul>
<li><p>Identify the service to extend.</p>
</li>
<li><p>Add logic for the new scenario.</p>
</li>
<li><p>Add tests for the new behavior.</p>
</li>
</ul>
<p>The current model is assumed to be sufficient.</p>
<p>In a domain-centered approach, implementation begins differently:</p>
<ol>
<li><p>Does the current model accurately represent the business concept?</p>
</li>
<li><p>Does the new requirement fit within the existing invariants?</p>
</li>
<li><p>If not, what must change in the model itself?</p>
</li>
</ol>
<p>The model is not an implementation detail.<br />It is the core definition of the business domain.</p>
<p>If the business expands, the model must be re-evaluated.</p>
<p>If a new rule cannot be added without bypassing existing constraints, that is not an inconvenience — it is a signal:</p>
<ul>
<li><p>Either the new requirement conflicts with established invariants,</p>
</li>
<li><p>Or the model is incomplete or incorrectly structured.</p>
</li>
</ul>
<p>In both cases, the model must change before behavior is extended.</p>
<hr />
<h2>Behavior Must Live with Responsibility</h2>
<p>When behavior is implemented in services, it is possible to extend functionality without confronting the underlying concept.</p>
<p>When behavior lives with the domain model, that is no longer possible.</p>
<p>Instead of multiple services independently mutating an <code>Order</code>, the concept itself owns its transitions:</p>
<ul>
<li><p><code>Order.modify()</code></p>
</li>
<li><p><code>Order.close()</code></p>
</li>
<li><p><code>Order.reopen()</code></p>
</li>
</ul>
<p>Every rule affecting order state must pass through the same boundary.</p>
<p>Contradictory requirements cannot be implemented in isolation. They must reconcile inside the model.</p>
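<p>A minimal Java sketch of that reconciliation, using the three rules from earlier (closed orders, VIP customers, the 30-day limit). The class and method bodies are illustrative, not taken from a real codebase:</p>

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Hypothetical sketch: all three modification rules meet in one method,
// so a new rule cannot be added without confronting the existing ones.
public class Order {

    private final LocalDate createdOn;
    private final boolean vipCustomer;
    private boolean closed;

    public Order(LocalDate createdOn, boolean vipCustomer) {
        this.createdOn = createdOn;
        this.vipCustomer = vipCustomer;
    }

    public void close() { this.closed = true; }

    // The single boundary: every rule about modification is reconciled here.
    public void modify() {
        // "VIP customers may modify any order" is interpreted here as overriding
        // the closed-order rule but NOT the 30-day rule. Whether that is correct
        // is a business decision -- the model forces it to be made explicitly.
        if (ChronoUnit.DAYS.between(createdOn, LocalDate.now()) > 30) {
            throw new IllegalStateException("orders older than 30 days cannot be changed");
        }
        if (closed && !vipCustomer) {
            throw new IllegalStateException("closed orders cannot be modified");
        }
        // ... apply the modification
    }

    public static void main(String[] args) {
        Order regular = new Order(LocalDate.now().minusDays(5), false);
        regular.close();
        try { regular.modify(); }
        catch (IllegalStateException e) { System.out.println("regular: " + e.getMessage()); }

        Order vip = new Order(LocalDate.now().minusDays(5), true);
        vip.close();
        vip.modify();
        System.out.println("vip: modification allowed");

        Order stale = new Order(LocalDate.now().minusDays(45), true);
        try { stale.modify(); }
        catch (IllegalStateException e) { System.out.println("stale: " + e.getMessage()); }
    }
}
```

<p>Notice that the ambiguity between the VIP rule and the 30-day rule cannot be implemented without resolving it: the single boundary turns a silent contradiction into an explicit decision.</p>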
<p>This creates deliberate friction.</p>
<p>That friction is the warning mechanism.</p>
<p>Testing also changes:</p>
<ul>
<li><p>Tests target allowed and forbidden state transitions.</p>
</li>
<li><p>Invariants become explicit and enforceable.</p>
</li>
<li><p>The concept, not the service flow, becomes the primary test subject.</p>
</li>
</ul>
<p>The model becomes the consistency mechanism.</p>
<hr />
<h2>Why This Becomes Critical in Larger Systems</h2>
<p>In small systems, informal coordination may suffice.</p>
<p>In larger systems:</p>
<ul>
<li><p>Teams change.</p>
</li>
<li><p>Context is lost.</p>
</li>
<li><p>Features overlap.</p>
</li>
<li><p>Edge cases multiply.</p>
</li>
<li><p>Historical decisions fade.</p>
</li>
</ul>
<p>Relying on human memory to prevent contradictory business rules does not scale.</p>
<p>If the architecture does not enforce reconciliation structurally — and if every new story does not begin by validating the domain model — contradiction will accumulate silently until it appears in production data.</p>
<hr />
<h2>AI-Assisted Story Development: Acceleration Without Reconciliation</h2>
<p>AI-assisted development amplifies the structural risks described above.</p>
<p>Large language models are particularly effective at:</p>
<ul>
<li><p>Implementing a described scenario.</p>
</li>
<li><p>Generating service-layer logic.</p>
</li>
<li><p>Producing matching tests for that scenario.</p>
</li>
<li><p>Extending existing workflows with minimal friction.</p>
</li>
</ul>
<p>Given a prompt such as:</p>
<blockquote>
<p>“Add support for VIP overrides.”</p>
</blockquote>
<p>An AI system will generate correct, locally consistent code.</p>
<p>It will also generate tests that confirm the behavior works as described.</p>
<p>But AI operates at the level of the prompt.</p>
<p>It does not maintain an internal, evolving model of the business domain unless explicitly guided to do so. It does not spontaneously validate global invariants across the entire system. It optimizes for satisfying the current instruction.</p>
<p>As a result:</p>
<ul>
<li><p>Feature velocity increases.</p>
</li>
<li><p>Additive logic accumulates faster.</p>
</li>
<li><p>Story-scoped correctness improves.</p>
</li>
<li><p>Cross-story reconciliation weakens.</p>
</li>
</ul>
<p>In human-only development, conceptual drift happens gradually.</p>
<p>With AI-assisted story implementation, drift can occur at machine speed.</p>
<p>The danger is subtle:</p>
<p>Because each individual change appears correct and well-tested, the system gives a strong illusion of health — even as its conceptual coherence erodes.</p>
<p>Without explicit invariant boundaries and model validation, AI becomes a powerful accelerator of divergence.</p>
<p>The technology is not the problem.<br />The absence of structural reconciliation is.</p>
<p>When the model is the primary artifact, AI can assist in refining it.<br />When stories are the primary artifact, AI amplifies fragmentation.</p>
<hr />
<h2>Conclusion</h2>
<p>Procedural, service-centric, story-driven development makes it easy to implement new requirements quickly. It does not make it easy to preserve conceptual integrity.</p>
<p>By distributing business rules across workflows instead of consolidating them around responsibility boundaries, it removes the structural warning signals that reveal contradiction early.</p>
<p>Preventing conflicting business logic requires:</p>
<ul>
<li><p>Explicit invariant boundaries</p>
</li>
<li><p>Central ownership of state transitions</p>
</li>
<li><p>Model validation before implementation</p>
</li>
<li><p>Tests that verify constraints, not just scenarios</p>
</li>
</ul>
<p>Without these, correctness depends on memory and discipline.</p>
<p>With them, consistency becomes structural rather than accidental.</p>
]]></content:encoded></item><item><title><![CDATA[Using ACID Properties as a Simplicity Litmus Test]]></title><description><![CDATA[ACID is often discussed as a database feature. Sometimes as an outdated one.
That framing misses something fundamental.
ACID was never primarily about storage engines. It was about correctness: how a system preserves meaning while work is performed. E...]]></description><link>https://blog.leonpennings.com/using-acid-properties-as-a-simplicity-litmus-test</link><guid isPermaLink="true">https://blog.leonpennings.com/using-acid-properties-as-a-simplicity-litmus-test</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[Java]]></category><category><![CDATA[software development]]></category><dc:creator><![CDATA[Leon Pennings]]></dc:creator><pubDate>Sun, 15 Feb 2026 19:01:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/y_spQMQTFjs/upload/9e058418c5280e267b130d9230348017.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>ACID is often discussed as a database feature.<br />Sometimes as an outdated one.</p>
<p>That framing misses something fundamental.</p>
<p>ACID was never primarily about storage engines. It was about <strong>correctness</strong>: how a system preserves meaning while work is performed. Early transactional theory, notably articulated by Jim Gray, treated transactions as units of state transformation that preserve invariants. In other words: something happens, and the system remains valid.</p>
<p>Stripped of implementation detail, ACID says something almost trivial:</p>
<p>A piece of business logic either <strong>succeeds</strong> or <strong>fails</strong>.</p>
<p>Nothing more.</p>
<p>This article proposes a simple idea:</p>
<blockquote>
<p>The ease with which you can express ACID semantics is a useful litmus test for system simplicity.</p>
</blockquote>
<p>Not simplicity as “few services” or “small codebase”, but simplicity as <strong>clear, local, and comprehensible business behavior</strong>.</p>
<hr />
<h2 id="heading-acid-without-mysticism">ACID Without Mysticism</h2>
<p>Before discussing architecture, it helps to de-dramatize ACID.</p>
<p>At a semantic level:</p>
<ul>
<li><p><strong>Atomicity</strong> → The operation completes or it does not.</p>
</li>
<li><p><strong>Consistency</strong> → Business rules remain true.</p>
</li>
<li><p><strong>Isolation</strong> → Concurrent operations do not observe nonsense.</p>
</li>
<li><p><strong>Durability</strong> → Completed decisions are not forgotten.</p>
</li>
</ul>
<p>These are not database tricks.</p>
<p>They are descriptions of how real-world business actions are expected to behave.</p>
<p>When someone places an order, the business does not conceptually accept “half an order”.<br />When money is transferred, the organization does not consider “eventually maybe transferred” a meaningful state.</p>
<p>ACID is simply the smallest vocabulary that describes this expectation.</p>
<hr />
<h2 id="heading-cheap-acid-boundaries">Cheap ACID Boundaries</h2>
<p>If a piece of business logic:</p>
<ul>
<li><p>Executes in one place</p>
</li>
<li><p>Operates on a coherent model</p>
</li>
<li><p>Owns its own invariants</p>
</li>
</ul>
<p>then drawing an ACID boundary is almost boring.</p>
<p>Begin work.<br />Perform logic.<br />Commit or rollback.</p>
<p>This is what we can call a <strong>cheap ACID boundary</strong>.</p>
<p>Cheap not in licensing cost.<br />Cheap in <strong>reasoning cost</strong>.</p>
<p>There is one execution context.<br />One authority over state.<br />One moment where success or failure is decided.</p>
<p>In such a setup, a relational database is a natural storage fit—not because it “creates correctness”, but because it can mechanically record an already clear decision.</p>
<p>The database is not the source of rigor.</p>
<p>The model is.</p>
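<p>A deliberately naive, single-threaded Java sketch of such a boundary. There is no database and no framework here; the copy-on-write map merely stands in for transactional storage, and the names are invented for this example:</p>

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a cheap ACID boundary: one place where a business
// operation either commits whole or leaves no trace at all.
public class CheapBoundary {

    // A trivial store standing in for the database.
    static final Map<String, Integer> accounts = new HashMap<>();

    // The boundary: copy-on-write gives atomicity and rollback; the invariant
    // check gives consistency. Success or failure is decided in one place.
    static boolean transfer(String from, String to, int amount) {
        Map<String, Integer> work = new HashMap<>(accounts); // begin work
        work.merge(from, -amount, Integer::sum);
        work.merge(to, amount, Integer::sum);
        if (work.get(from) < 0) {
            return false;                                    // rollback: nothing changed
        }
        accounts.clear();
        accounts.putAll(work);                               // commit
        return true;
    }

    public static void main(String[] args) {
        accounts.put("alice", 100);
        accounts.put("bob", 0);
        System.out.println("ok: " + transfer("alice", "bob", 60));
        System.out.println("overdraft: " + transfer("alice", "bob", 60));
        System.out.println("alice=" + accounts.get("alice") + " bob=" + accounts.get("bob"));
    }
}
```

<p>This sketch ignores isolation and durability entirely; a real system would delegate those to a transactional store. The point is the shape: one execution context, one authority over state, one moment where success or failure is decided.</p>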
<hr />
<h2 id="heading-essential-complexity-and-visibility">Essential Complexity and Visibility</h2>
<p>Fred Brooks famously distinguished between:</p>
<ul>
<li><p><strong>Essential complexity</strong> – inherent to the problem.</p>
</li>
<li><p><strong>Accidental complexity</strong> – introduced by the solution.</p>
</li>
</ul>
<p>Business rules, invariants, and transactional boundaries are essential complexity. They exist whether we like it or not.</p>
<p>When essential complexity is:</p>
<ul>
<li><p>Explicit</p>
</li>
<li><p>Local</p>
</li>
<li><p>Visible</p>
</li>
</ul>
<p>a business operation can be understood as a single unit of intent.</p>
<p>Once that unit exists, ACID semantics follow naturally.</p>
<p>Not as an architectural goal.<br />Not as a framework feature.<br />But as a consequence.</p>
<hr />
<h2 id="heading-when-acid-becomes-hard">When ACID Becomes “Hard”</h2>
<p>ACID usually becomes “hard” only after something else happened first.</p>
<p>Typically:</p>
<ul>
<li><p>Business rules are split across services</p>
</li>
<li><p>State ownership is divided</p>
</li>
<li><p>Invariants are enforced in multiple places</p>
</li>
</ul>
<p>Now a single business operation no longer has a single center.</p>
<p>At that point:</p>
<ul>
<li><p>Atomicity becomes choreography</p>
</li>
<li><p>Consistency becomes convention</p>
</li>
<li><p>Isolation becomes probabilistic</p>
</li>
<li><p>Durability becomes replicated folklore</p>
</li>
</ul>
<p>ACID did not become complex on its own.</p>
<p>The <strong>business concept was fragmented</strong>.</p>
<p>The cost increase is secondary.</p>
<hr />
<h2 id="heading-the-litmus-test">The Litmus Test</h2>
<p>For any business operation, ask:</p>
<ul>
<li><p>Where is the moment that decides success or failure?</p>
</li>
<li><p>Where are the invariants enforced?</p>
</li>
<li><p>Who owns the truth of this operation?</p>
</li>
</ul>
<p>If these questions have clear, local answers, ACID will be easy.</p>
<p>If the answers involve:</p>
<ul>
<li><p>Multiple services</p>
</li>
<li><p>Multiple databases</p>
</li>
<li><p>Multiple asynchronous hops</p>
</li>
</ul>
<p>then ACID will be expensive.</p>
<p>Not because ACID is outdated.</p>
<p>But because the system no longer has a simple place where “this either worked or it didn’t” can be stated.</p>
<p>That is the litmus test.</p>
<hr />
<h2 id="heading-relational-databases-and-the-boring-path">Relational Databases and the Boring Path</h2>
<p>Relational databases often appear in this discussion, and that is not accidental.</p>
<p>They provide:</p>
<ul>
<li><p>Transactions</p>
</li>
<li><p>Constraints</p>
</li>
<li><p>Rollback</p>
</li>
</ul>
<p>Which map cleanly onto cohesive business operations.</p>
<p>This does not make them universally superior.</p>
<p>It makes them <strong>unsurprising</strong>.</p>
<p>When your model is simple, simple tools fit.</p>
<p>When your model is fragmented, no tool will feel simple.</p>
<hr />
<h2 id="heading-acid-as-a-complexity-indicator">ACID as a Complexity Indicator</h2>
<p>This perspective does not attempt to accommodate every architectural fashion.</p>
<p>It does not try to explain how ACID can be stretched, simulated, or approximated in increasingly fragmented systems.</p>
<p>Instead, it treats ACID as a <strong>simple indicator</strong>:</p>
<blockquote>
<p>When a business operation can no longer be expressed as a straightforward success-or-failure unit, the system has already become more complex than it needs to be.</p>
</blockquote>
<p>That complexity may be intentional.<br />It may be justified by extreme scaling constraints or unusual operational environments.</p>
<p>But it is still complexity.</p>
<p>ACID is useful precisely because it makes this visible.</p>
<p>Not as a rule to obey.<br />Not as a feature to retrofit.</p>
<p>But as a signal:</p>
<p>If drawing a cheap ACID boundary feels unnatural, forced, or impossible, that is not a limitation of ACID.</p>
<p>It is a sign that the application’s structure has drifted away from the simplest expression of its business intent.</p>
<hr />
<h2 id="heading-acid-as-a-mirror">ACID as a Mirror</h2>
<p>ACID does not impose discipline.</p>
<p>It reflects discipline.</p>
<p>When ACID feels heavy, awkward, or impossible, that is a signal worth listening to.</p>
<p>Not about databases.</p>
<p>Not about frameworks.</p>
<p>But about whether the essential complexity of the domain is still visible, coherent, and whole.</p>
<p>In that sense, ACID is less a technology choice than a mirror.</p>
<p>And mirrors are useful precisely because they do not lie.</p>
]]></content:encoded></item><item><title><![CDATA[The Singleton Reality: Why “It Works” Is Not Evidence of Good Engineering]]></title><description><![CDATA[If you plow a field with a Ferrari F40, the field will be plowed.
The outcome is correct. The task is completed.
Yet the Ferrari is:

Excessively expensive for the job

Fragile under the wrong conditions

Costly to maintain

Poorly suited for rain, mu...]]></description><link>https://blog.leonpennings.com/the-singleton-reality-why-it-works-is-not-evidence-of-good-engineering</link><guid isPermaLink="true">https://blog.leonpennings.com/the-singleton-reality-why-it-works-is-not-evidence-of-good-engineering</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[clean code]]></category><category><![CDATA[software design]]></category><category><![CDATA[Rich Domain Model]]></category><dc:creator><![CDATA[Leon Pennings]]></dc:creator><pubDate>Mon, 02 Feb 2026 13:21:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/I5CxwTxE38k/upload/8d798e77ac21fc4613473b9643de929b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you plow a field with a Ferrari F40, the field will be plowed.</p>
<p>The outcome is correct.<br />The task is completed.</p>
<p>Yet the Ferrari is:</p>
<ul>
<li><p>Excessively expensive for the job</p>
</li>
<li><p>Fragile under the wrong conditions</p>
</li>
<li><p>Costly to maintain</p>
</li>
<li><p>Poorly suited for rain, mud, or sustained use</p>
</li>
</ul>
<p>In the real world, everyone knows a Ferrari is not the right tool for plowing. We have tractors. We have benchmarks. We understand what works and why.</p>
<p>Now imagine the only field in the world. There are no other machines to compare it to. No tractors, no benchmarks, no historical experience — just the Ferrari. It works. The field is plowed. But we have <strong>no way to judge its suitability, efficiency, or long-term cost</strong>.</p>
<p>This is the world of software engineering. Every company building custom applications faces this reality: there is no reference frame, no control group, no benchmark beyond whether the system “works.” Success becomes the <strong>only visible measure</strong>, and quality in any deeper sense is unknowable.</p>
<p>What does this mean for how we design, implement, and maintain software? That’s what the singleton reality is all about.</p>
<hr />
<h2 id="heading-the-singleton-reality">The Singleton Reality</h2>
<p>There is no true A/B testing in software architecture.</p>
<p>You cannot take the same system and implement it twice — once with Framework X, once with Framework Y.<br />You cannot run the same organization through CQRS and a layered monolith under identical conditions.<br />You cannot replay the same business evolution using synchronous calls instead of event-driven architecture.</p>
<p>Once a system is built, it proceeds along a single, irreversible path.<br />Architecture, framework, and tooling collapse into one history.</p>
<p>And like the Ferrari plowing a field, the system produces results.</p>
<p>Features ship.<br />Users are served.<br />The business functions.</p>
<p>But without knowledge of the <em>tractor</em> — without a grounded understanding of what a <strong>fit-for-purpose architecture</strong> looks like for this kind of problem — there is no way to judge:</p>
<ul>
<li><p>Whether the design is economically rational</p>
</li>
<li><p>How much accidental complexity was introduced</p>
</li>
<li><p>What long-term maintenance will cost</p>
</li>
<li><p>Or whether a simpler, more robust approach would have been better</p>
</li>
</ul>
<p>The system works — and that success silences the question of suitability.</p>
<hr />
<h2 id="heading-success-silences-better-questions">Success Silences Better Questions</h2>
<p>In singleton systems, success suppresses counterfactuals.</p>
<p>If the system works:</p>
<ul>
<li><p>The tools get credit</p>
</li>
<li><p>The architecture is justified retroactively</p>
</li>
<li><p>The design choices are treated as “proven”</p>
</li>
</ul>
<p>Inefficiencies are explained away as:</p>
<ul>
<li><p>“The domain was hard”</p>
</li>
<li><p>“The requirements were unclear”</p>
</li>
<li><p>“The team didn’t execute well enough”</p>
</li>
</ul>
<p>Rarely do we ask whether the <strong>approach itself</strong> was ill-suited.</p>
<p>This creates a dangerous asymmetry:</p>
<blockquote>
<p>If the system doesn’t work, it invites analysis.<br />If the system works, analysis is unnecessary.</p>
</blockquote>
<p>So the system is never evaluated for appropriateness, economy, or durability.</p>
<hr />
<h2 id="heading-working-software-is-not-the-same-as-good-engineering">Working Software Is Not the Same as Good Engineering</h2>
<p>In software, we conflate <em>correct output</em> with <em>engineering quality</em>.</p>
<p>But “it works” tells us nothing about:</p>
<ul>
<li><p>How difficult the system is to change</p>
</li>
<li><p>How much knowledge it takes to maintain</p>
</li>
<li><p>Whether complexity reflects the domain or the tooling</p>
</li>
<li><p>Whether the system can survive years of learning and correction</p>
</li>
</ul>
<p>A Ferrari plowing a field will:</p>
<ul>
<li><p>Break more often</p>
</li>
<li><p>Cost more to maintain</p>
</li>
<li><p>Fail catastrophically under the wrong conditions</p>
</li>
</ul>
<p>None of this is visible if the only metric is “the field got plowed.”</p>
<p>This is the core problem of singleton systems:</p>
<p><strong>They hide misfit behind functionality.</strong></p>
<hr />
<h2 id="heading-the-drift-from-engineering-to-assembly">The Drift From Engineering to Assembly</h2>
<p>Because systems are singletons, pressure accumulates in one direction:</p>
<ul>
<li><p>Deliver features</p>
</li>
<li><p>Meet deadlines</p>
</li>
<li><p>Make it work</p>
</li>
</ul>
<p>The dominant question becomes:</p>
<blockquote>
<p>“Does this solve today’s problem?”</p>
</blockquote>
<p>Not:</p>
<blockquote>
<p>“Is this the right <em>kind</em> of solution for the kind of problem this is?”</p>
</blockquote>
<p>This is where engineering quietly gives way to assembly.</p>
<p>As long as behavior is correct, no one examines:</p>
<ul>
<li><p>Whether internal structure is legible</p>
</li>
<li><p>Whether essential complexity is visible</p>
</li>
<li><p>Whether tomorrow’s changes have somewhere to go</p>
</li>
</ul>
<p>The Ferrari plows the field.<br />The conversation ends.</p>
<hr />
<h2 id="heading-when-it-works-becomes-the-only-optimization-target">When “It Works” Becomes the Only Optimization Target</h2>
<p>Once success is defined purely by outcome, optimization silently shifts.</p>
<p>If the only question is:</p>
<blockquote>
<p>“Does it work?”</p>
</blockquote>
<p>Then the system is no longer optimized to be <strong>understood</strong> or <strong>changed</strong>.</p>
<p>It is optimized for <strong>least resistance to implementation</strong>.</p>
<p>This shift is not incompetence.<br />It is a rational response to pressure in a system where correctness is the only visible metric.</p>
<hr />
<h2 id="heading-implementation-is-the-easy-part">Implementation Is the Easy Part</h2>
<p>Writing code that produces the correct result is rarely the hard problem.</p>
<p>Modern languages, frameworks, and tooling are extremely good at helping us <em>implement</em> behavior.</p>
<p>The difficult work is something else entirely:</p>
<ul>
<li><p>Understanding the domain</p>
</li>
<li><p>Discovering which rules actually matter</p>
</li>
<li><p>Learning which constraints are essential and which are incidental</p>
</li>
<li><p>Knowing where behavior truly belongs</p>
</li>
</ul>
<p>That understanding emerges slowly — through use, failure, and correction.</p>
<p>But when optimization is focused solely on “making it work,” that understanding is treated as overhead.</p>
<hr />
<h2 id="heading-optimizing-for-least-resistance">Optimizing for Least Resistance</h2>
<p>When resistance to implementation becomes the primary concern, certain patterns appear everywhere:</p>
<ul>
<li><p>Behavior collapses into declarative annotations</p>
</li>
<li><p>Constructors are reduced to wiring points</p>
</li>
<li><p>Objects lose responsibility and become data carriers</p>
</li>
<li><p>Framework conventions replace explicit structure</p>
</li>
</ul>
<p>Each of these choices reduces friction <em>today</em>.</p>
<p>They allow engineers to focus almost exclusively on tooling — the mechanically easy part of software construction.</p>
<p>And each of them quietly removes something far more valuable.</p>
<p>They erase information about <strong>why</strong> the system behaves the way it does.</p>
<hr />
<h2 id="heading-what-gets-lost-is-not-functionality-it-is-meaning">What Gets Lost Is Not Functionality — It Is Meaning</h2>
<p>The system continues to function.</p>
<p>Features ship.<br />Tests pass.<br />Users are served.</p>
<p>But the code stops explaining itself.</p>
<p>It no longer communicates:</p>
<ul>
<li><p>Why a rule exists</p>
</li>
<li><p>Why a boundary matters</p>
</li>
<li><p>Why a concept deserves to be modeled explicitly</p>
</li>
</ul>
<p>That knowledge migrates into:</p>
<ul>
<li><p>The heads of a few people</p>
</li>
<li><p>Tribal conventions</p>
</li>
<li><p>Framework internals</p>
</li>
<li><p>Historical accidents</p>
</li>
</ul>
<p>The Ferrari still plows the field.</p>
<hr />
<h2 id="heading-why-this-undermines-endurance">Why This Undermines Endurance</h2>
<p>Software endures not because it was easy to write, but because it remains possible to <strong>re-understand</strong>.</p>
<p>Long-lived systems must absorb new knowledge:</p>
<ul>
<li><p>New edge cases</p>
</li>
<li><p>New constraints</p>
</li>
<li><p>New interpretations of old rules</p>
</li>
</ul>
<p>If the code was optimized only for minimal resistance during implementation, it offers no structure to integrate that learning.</p>
<p>Change becomes additive instead of corrective.<br />Patches accumulate.<br />Workarounds replace design.</p>
<p>The system still works — but it can no longer evolve cleanly.</p>
<hr />
<h2 id="heading-essential-vs-accidental-complexity">Essential vs. Accidental Complexity</h2>
<p>Every system contains <strong>essential complexity</strong> — the irreducible complexity of the domain itself.</p>
<p>Good engineering keeps that complexity:</p>
<ul>
<li><p>Explicit</p>
</li>
<li><p>Legible</p>
</li>
<li><p>Close to the code</p>
</li>
</ul>
<p>Bad engineering replaces it with <strong>accidental complexity</strong>:</p>
<ul>
<li><p>Framework indirection</p>
</li>
<li><p>Implicit behavior</p>
</li>
<li><p>Generated wiring</p>
</li>
<li><p>Convention-heavy design</p>
</li>
</ul>
<p>In a singleton system, accidental complexity is especially dangerous.</p>
<p>Because there is no steering mechanism.</p>
<p>If essential complexity is no longer legible, the system can evolve only by:</p>
<ul>
<li><p>Upgrading dependencies</p>
</li>
<li><p>Adding patches</p>
</li>
<li><p>Introducing workarounds</p>
</li>
</ul>
<p>When essential complexity is no longer visible, corrections are no longer possible — only compensations. Workarounds emerge in place of design.</p>
<hr />
<h2 id="heading-code-changes-because-understanding-changes">Code Changes Because Understanding Changes</h2>
<p>Software does not change primarily because developers make mistakes.</p>
<p>It changes because <strong>understanding grows</strong>.</p>
<p>Teams learn:</p>
<ul>
<li><p>Which rules actually matter</p>
</li>
<li><p>Which edge cases are fundamental</p>
</li>
<li><p>Where earlier assumptions were wrong</p>
</li>
</ul>
<p>If the system was written only for today’s understanding, that new knowledge has nowhere to live.</p>
<p>It gets bolted on.<br />Hidden behind flags.<br />Embedded in conditionals.</p>
<p>The system still works — but its internal coherence degrades.</p>
<p>Expressive systems behave differently.</p>
<p>They give new understanding a place to land.</p>
<hr />
<h2 id="heading-expressiveness-as-a-survival-strategy">Expressiveness as a Survival Strategy</h2>
<p>Expressive code does not exist to be verbose.</p>
<p>It exists to:</p>
<ul>
<li><p>Name concepts explicitly</p>
</li>
<li><p>Encode invariants visibly</p>
</li>
<li><p>Make responsibility undeniable</p>
</li>
<li><p>Preserve the shape of the domain over time</p>
</li>
</ul>
<p>In a world of repeatable systems, this might be optional.</p>
<p>In a world of unique singleton applications, it is essential.</p>
<p>Because expressiveness preserves <strong>engineering intent</strong> when outcomes alone cannot tell us whether we chose the right machine.</p>
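<p>To make this concrete, here is a minimal Java sketch (the CreditLimit concept is invented for illustration, not taken from any real system) of what naming a concept explicitly and encoding its invariant visibly can look like:</p>

```java
// Hypothetical domain concept: the invariant is stated in the code itself,
// where the concept lives, instead of in a validation framework or a wiki.
final class CreditLimit {
    private final long amountCents;

    CreditLimit(long amountCents) {
        // The business rule is undeniable and survives team turnover:
        // a credit limit can never be negative.
        if (amountCents < 0) {
            throw new IllegalArgumentException("A credit limit cannot be negative");
        }
        this.amountCents = amountCents;
    }

    long amountCents() {
        return amountCents;
    }
}
```

<p>Nothing here depends on tooling or convention; a reader five years from now sees the rule, its owner, and its reason in one place.</p>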
<hr />
<h2 id="heading-tooling-is-not-architecture">Tooling Is Not Architecture</h2>
<p>One of the clearest signs of singleton failure is when:</p>
<ul>
<li><p>Removing a framework collapses the system</p>
</li>
<li><p>Upgrading a dependency requires redesign</p>
</li>
<li><p>Behavior lives in annotations instead of code</p>
</li>
<li><p>The runtime behaves in ways the source does not explain</p>
</li>
</ul>
<p>This is not leverage.</p>
<p>It is hidden coupling.</p>
<p>The system works — until it doesn’t.<br />And when it breaks, understanding is nowhere to be found.</p>
<hr />
<h2 id="heading-conclusion-engineering-without-second-chances">Conclusion: Engineering Without Second Chances</h2>
<p>The singleton reality does not make quality impossible.</p>
<p>It means quality is <strong>never proven by success alone</strong>.</p>
<p>A working system may still be:</p>
<ul>
<li><p>Overengineered</p>
</li>
<li><p>Underfit</p>
</li>
<li><p>Fragile</p>
</li>
<li><p>Economically irrational</p>
</li>
</ul>
<p>Just like a Ferrari plowing a field.</p>
<p>Good software engineering, under singleton conditions, is not about modernity or brevity.</p>
<p>It is about preserving:</p>
<ul>
<li><p>Legible essential complexity</p>
</li>
<li><p>Explicit assumptions</p>
</li>
<li><p>Structural honesty</p>
</li>
</ul>
<p>Because in a world where every system is built once,<br /><strong>the only thing that survives is what continues to explain itself.</strong></p>
<p>And if we never ask whether we’re building tractors —<br />we will keep celebrating Ferraris that merely happen to work.</p>
]]></content:encoded></item><item><title><![CDATA[The Integration Tax: Why Distributed Systems Hide the Truth Until It’s Too Late]]></title><description><![CDATA[Large, integrated codebases have long been framed as a liability.
They are described as entangled, brittle, slow to change, and resistant to scaling. These observations are often factually correct. Integrated systems do create friction.
But friction ...]]></description><link>https://blog.leonpennings.com/the-integration-tax-why-distributed-systems-hide-the-truth-until-its-too-late</link><guid isPermaLink="true">https://blog.leonpennings.com/the-integration-tax-why-distributed-systems-hide-the-truth-until-its-too-late</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[Java]]></category><category><![CDATA[Microservices]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[ROI (Return on Investment)]]></category><dc:creator><![CDATA[Leon Pennings]]></dc:creator><pubDate>Tue, 27 Jan 2026 13:33:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/M5tzZtFCOfs/upload/74890b75eee0b8fda12bb790f474537f.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Large, integrated codebases have long been framed as a liability.</p>
<p>They are described as entangled, brittle, slow to change, and resistant to scaling. These observations are often factually correct. Integrated systems <em>do</em> create friction.</p>
<p>But friction is not always a defect. In many systems, friction is a signal.</p>
<p>This article argues that much of what was labeled “entanglement” in integrated systems was actually an <strong>early-warning mechanism</strong>. When that mechanism was removed through distribution, the underlying problems did not disappear. They became quieter, slower, and significantly more expensive.</p>
<p>The failure moved from code into data.</p>
<hr />
<h2 id="heading-the-alarm-that-integrated-systems-produce">The Alarm That Integrated Systems Produce</h2>
<p>In physical systems, noise is rarely neutral. A rattling engine or grinding gearbox indicates misalignment. The noise is not the problem; it is evidence of one.</p>
<p>Integrated software systems behave similarly.</p>
<p>When business logic is tightly integrated, several forms of friction emerge:</p>
<ul>
<li><p>Changes in one area break assumptions elsewhere</p>
</li>
<li><p>Builds fail when invariants no longer align</p>
</li>
<li><p>Tests surface unexpected dependencies</p>
</li>
<li><p>Transactions roll back when rules conflict</p>
</li>
</ul>
<p>This friction is commonly interpreted as a sign that the system is “too coupled” or “badly designed.” The usual response is structural separation: splitting the codebase into independently deployable units.</p>
<p>The immediate effect is predictable. The noise stops.</p>
<p>What is often overlooked is that the absence of noise does not imply the absence of misalignment. It only implies that misalignment is no longer detected early.</p>
<hr />
<h2 id="heading-integrated-code-and-fail-fast-reality">Integrated Code and Fail-Fast Reality</h2>
<p>An integrated codebase has one defining property: <strong>assumptions are forced to reconcile early</strong>.</p>
<p>When domain logic is integrated:</p>
<ul>
<li><p>There is a shared model of state</p>
</li>
<li><p>Business rules are enforced atomically</p>
</li>
<li><p>Invariants are validated before data is committed</p>
</li>
</ul>
<p>If a rule changes, dependent logic is affected immediately.<br />If a concept becomes inconsistent, the compiler or transaction boundary rejects it.<br />If two parts of the system disagree, progress stops.</p>
<p>This is not a matter of code quality or architectural purity. It is a structural consequence of integration.</p>
<p>Integrated systems fail fast not because they are fragile, but because they <strong>refuse to accept inconsistency</strong>. The system either makes sense, or it does not proceed.</p>
<p>This early resistance is often experienced as development pain. In reality, it is risk surfacing at the lowest possible cost.</p>
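<p>As a small, invented illustration of this property, consider an integrated model in which a rule and the state it protects meet in a single call. The class and the specific rule are hypothetical; the point is where the failure happens:</p>

```java
// Illustrative sketch: in an integrated model, disagreement between the
// refund rule and the order's state fails immediately, at the call site,
// rather than surfacing weeks later as inconsistent data.
class Order {
    private boolean shipped;
    private boolean refunded;

    void ship() {
        this.shipped = true;
    }

    void refund() {
        // The invariant is checked before any state changes: an order that
        // was never shipped cannot be refunded. The system either makes
        // sense, or it does not proceed.
        if (!shipped) {
            throw new IllegalStateException("Only shipped orders can be refunded");
        }
        this.refunded = true;
    }

    boolean isRefunded() {
        return refunded;
    }
}
```

<p>In a distributed design, the same disagreement would typically travel across a queue or an API and be committed on both sides before anyone notices.</p>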
<hr />
<h2 id="heading-the-myth-of-the-clean-split">The Myth of the Clean Split</h2>
<p>Distributed architectures are frequently justified by the promise of independence: teams can move faster, deploy separately, and avoid stepping on each other’s work.</p>
<p>What is actually being split, however, is not just code.</p>
<p>What is split is the <strong>contract of truth</strong>.</p>
<p>In an integrated system:</p>
<ul>
<li><p>Integration happens in code</p>
</li>
<li><p>Assumptions collide during development</p>
</li>
<li><p>Failure is immediate and visible</p>
</li>
</ul>
<p>In a distributed system:</p>
<ul>
<li><p>Integration moves to runtime</p>
</li>
<li><p>Assumptions no longer collide synchronously</p>
</li>
<li><p>Failure becomes delayed and ambiguous</p>
</li>
</ul>
<p>This creates a structural visibility gap:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Aspect</td><td>Integrated System</td><td>Distributed System</td></tr>
</thead>
<tbody>
<tr>
<td>Integration point</td><td>Code</td><td>Data</td></tr>
<tr>
<td>Failure mode</td><td>Loud, early</td><td>Quiet, late</td></tr>
<tr>
<td>Verification</td><td>Compiler, transactions</td><td>Logs, dashboards</td></tr>
<tr>
<td>Cost of misalignment</td><td>Minutes (CI build)</td><td>Weeks or months (misaligned data discovered in production)</td></tr>
</tbody>
</table>
</div><p>When integration leaves code, it does not disappear.<br />It reappears in production data.</p>
<hr />
<h2 id="heading-why-distribution-feels-easier">Why Distribution Feels Easier</h2>
<p>Distributed systems often feel easier to work with, especially at scale. This perception is not accidental. Distribution optimizes for a different kind of effectiveness.</p>
<p>Modern distributed architectures reward:</p>
<ul>
<li><p>Framework proficiency</p>
</li>
<li><p>Infrastructure and deployment literacy</p>
</li>
<li><p>API boundary design</p>
</li>
<li><p>Local correctness within a bounded context</p>
</li>
<li><p>Tooling fluency (CI/CD, observability, orchestration)</p>
</li>
</ul>
<p>These skills are valuable and necessary. But they share a defining characteristic: <strong>they allow productivity without global understanding</strong>.</p>
<p>An engineer can be effective inside a service without understanding how the broader domain behaves as a whole.</p>
<p>Integrated systems do not permit this mode of work.</p>
<p>To make meaningful changes in an integrated codebase, it is necessary to:</p>
<ul>
<li><p>Understand upstream and downstream effects</p>
</li>
<li><p>Reason about invariants across modules</p>
</li>
<li><p>Grasp end-to-end data flow</p>
</li>
<li><p>Understand why rules exist, not just where they are implemented</p>
</li>
</ul>
<p>This is not a question of intelligence or seniority. It is a question of <strong>cognitive scope</strong>.</p>
<p>Integrated systems enforce holistic reasoning.<br />Distributed systems allow local reasoning.</p>
<hr />
<h2 id="heading-skill-distribution-and-the-integration-tax">Skill Distribution and the Integration Tax</h2>
<p>This difference in cognitive scope has architectural consequences.</p>
<p>Distributed systems scale teams more easily than they scale coherence. They lower the barrier to entry by allowing work to be partitioned into technically isolated units. This is often a deliberate organizational choice.</p>
<p>However, the cost is subtle and delayed.</p>
<p>When engineers are incentivized to reason locally:</p>
<ul>
<li><p>Decisions are optimized for individual services</p>
</li>
<li><p>Tooling validates only local correctness</p>
</li>
<li><p>Tests confirm behavior in isolation</p>
</li>
<li><p>Deployment pipelines signal success prematurely</p>
</li>
</ul>
<p>Nothing in the system enforces semantic alignment across services.</p>
<p>Data misalignment is therefore not caused by poor engineering. It is caused by <strong>locally correct decisions made without global constraint</strong>.</p>
<p>Integrated systems make this kind of drift difficult. Distributed systems make it likely.</p>
<p>This increased probability of misalignment is part of the integration tax.</p>
<hr />
<h2 id="heading-data-duplication-and-semantic-drift">Data Duplication and Semantic Drift</h2>
<p>To function independently, distributed systems duplicate data.</p>
<p>Each service maintains:</p>
<ul>
<li><p>Its own schema</p>
</li>
<li><p>Its own representation of shared concepts</p>
</li>
<li><p>Its own interpretation of business state</p>
</li>
</ul>
<p>Initially, everything works. APIs respond. Events flow. Tests pass.</p>
<p>Over time, meanings diverge.</p>
<p>One service treats “Cancelled” as refunded.<br />Another treats it as pending return.<br />A third treats it as archived and immutable.</p>
<p>Each interpretation is internally consistent. None are aligned.</p>
<p>APIs do not renegotiate meaning when assumptions change elsewhere. They preserve contracts long after those contracts no longer reflect reality.</p>
<p>This divergence is known as <strong>semantic drift</strong>.</p>
<p>It is invisible during development, invisible during deployment, and invisible to monitoring systems.</p>
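<p>A deliberately simplified Java sketch (the service names are invented for this example) shows how the same stored value can carry three different meanings, each locally consistent:</p>

```java
// Illustrative only: three services read the same wire value "CANCELLED"
// and each draws a different, internally consistent conclusion.
class SemanticDrift {
    // Billing learned, long ago, that a cancelled order was refunded.
    static String billingAction(String status) {
        return "CANCELLED".equals(status) ? "issue-refund" : "no-op";
    }

    // Logistics treats the same value as a return still in flight.
    static String logisticsAction(String status) {
        return "CANCELLED".equals(status) ? "await-return" : "no-op";
    }

    // Archiving treats it as a terminal, immutable state.
    static String archiveAction(String status) {
        return "CANCELLED".equals(status) ? "freeze-record" : "no-op";
    }

    public static void main(String[] args) {
        // Every answer is "correct" inside its own service boundary.
        System.out.println(billingAction("CANCELLED"));
        System.out.println(logisticsAction("CANCELLED"));
        System.out.println(archiveAction("CANCELLED"));
    }
}
```

<p>No compiler, contract test, or dashboard flags this, because each service honors its own schema perfectly.</p>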
<hr />
<h2 id="heading-data-decay-failure-without-alarms">Data Decay: Failure Without Alarms</h2>
<p>Semantic drift leads to a more dangerous failure mode: <strong>data decay</strong>.</p>
<p>Data decay is the gradual corruption of business truth caused by delayed semantic misalignment in distributed systems.</p>
<p>Its defining traits are:</p>
<ul>
<li><p>No crashes</p>
</li>
<li><p>No failed builds</p>
</li>
<li><p>No immediate customer-facing errors</p>
</li>
<li><p>No alerts</p>
</li>
</ul>
<p>Instead, it surfaces indirectly:</p>
<ul>
<li><p>Financial reports fail to reconcile</p>
</li>
<li><p>Regulatory numbers drift</p>
</li>
<li><p>Manual correction jobs become permanent</p>
</li>
<li><p>“Temporary” analytics fixes accumulate</p>
</li>
</ul>
<p>By the time the problem is detected, the system has often been producing incorrect data for months.</p>
<p>The failure did not happen at the moment of discovery. It happened when assumptions silently diverged.</p>
<hr />
<h2 id="heading-continuous-deployment-as-an-accelerator">Continuous Deployment as an Accelerator</h2>
<p>Continuous Deployment is often presented as a safety mechanism: smaller changes, deployed more frequently, reduce risk.</p>
<p>What actually changes is <strong>where integration happens</strong>.</p>
<p>In distributed systems, continuous deployment accelerates the rate at which assumptions enter production. Integration no longer happens before release; it happens in live data.</p>
<p>Conflicts are not rejected. They are accumulated.</p>
<p>The system appears stable because nothing crashes. But stability is not correctness.</p>
<p>Deployment speed increases, while semantic alignment lags behind.</p>
<hr />
<h2 id="heading-integrated-truth-and-transactional-boundaries">Integrated Truth and Transactional Boundaries</h2>
<p>This is not an argument against distribution in all forms. It is an argument against <strong>distribution without an integrated core of truth</strong>.</p>
<p>Somewhere in the system, there must be:</p>
<ul>
<li><p>A place where invariants are enforced</p>
</li>
<li><p>A boundary where business rules meet</p>
</li>
<li><p>A transaction where assumptions are forced to align</p>
</li>
</ul>
<p>When such a boundary exists:</p>
<ul>
<li><p>Changes propagate before data is committed</p>
</li>
<li><p>Misalignment fails early</p>
</li>
<li><p>Truth remains atomic</p>
</li>
</ul>
<p>When it does not, coherence must be enforced organizationally rather than architecturally. That is a far more expensive mechanism.</p>
<hr />
<h2 id="heading-the-real-integration-tax">The Real Integration Tax</h2>
<p>The integration tax is rarely paid in performance or build times.</p>
<p>It is paid in:</p>
<ul>
<li><p>Growing headcount to manage inconsistencies</p>
</li>
<li><p>Reconciliation teams and data cleanup pipelines</p>
</li>
<li><p>Manual exception handling</p>
</li>
<li><p>Loss of trust in reporting</p>
</li>
<li><p>Regulatory exposure</p>
</li>
<li><p>Permanent compensating processes</p>
</li>
</ul>
<p>Integrated systems force discipline early.<br />Distributed systems defer discipline until it becomes unavoidable.</p>
<hr />
<h2 id="heading-conclusion-the-singleton-trap">Conclusion: The Singleton Trap</h2>
<p>If integration is difficult in code, the design requires work.<br />If integration is difficult in data, the organization is already paying for failure.</p>
<p>The industry-wide shift toward distributed systems was an attempt to bypass the friction of integrated codebases. That friction, however, was not eliminated; it was displaced. The result is a quieter, more pervasive crisis.</p>
<p>Every instance of data decay is effectively a <strong>singleton</strong>: a unique outcome of a specific architectural sprawl combined with a particular set of organizational boundaries. Because no two failures look the same, there is no shared baseline for comparison and no obvious signal that something systemic is wrong.</p>
<p>In the absence of a universal yardstick, the consequences are normalized. Expanding headcount, permanent reconciliation teams, and continuous data-cleaning pipelines are often treated as the natural cost of software at scale.</p>
<p>They are not.</p>
<p>They are the measurable price of silencing an architectural alarm.</p>
<p>Integrated systems make misalignment audible while the cost of correction is still low. Distributed systems render it silent, allowing it to accumulate until it manifests as institutionalized overhead. Silence is not safety; it is deferred truth.</p>
<p>The interest on that debt is paid in the long-term integrity of the business.</p>
]]></content:encoded></item><item><title><![CDATA[The Silent Profit Killer: Why High Cognitive Load is Harming Your Software Development]]></title><description><![CDATA[Most organizations believe their software delivery problems stem from a lack of time, talent, or discipline. Teams are told to work harder, plan better, or adopt yet another framework. The reality is more uncomfortable: much of the lost productivity ...]]></description><link>https://blog.leonpennings.com/the-silent-profit-killer-why-high-cognitive-load-is-harming-your-software-development</link><guid isPermaLink="true">https://blog.leonpennings.com/the-silent-profit-killer-why-high-cognitive-load-is-harming-your-software-development</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[software development]]></category><category><![CDATA[Java]]></category><category><![CDATA[Software Efficiency]]></category><dc:creator><![CDATA[Leon Pennings]]></dc:creator><pubDate>Thu, 22 Jan 2026 07:23:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/MiSPnHknw4w/upload/28e3b65aca64ae4f7c4723a3c1494022.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most organizations believe their software delivery problems stem from a lack of time, talent, or discipline. Teams are told to work harder, plan better, or adopt yet another framework. The reality is more uncomfortable: much of the lost productivity is self-inflicted.</p>
<p>The culprit is not technical debt in the abstract, nor insufficient process maturity. It is <strong>cognitive load</strong>—the amount of mental effort required for developers to understand, reason about, and safely change a system.</p>
<p>High cognitive load does not appear on balance sheets. It does not trigger alerts or dashboards. But it quietly drains delivery speed, inflates costs, and erodes morale. Over time, it becomes a direct and measurable profit killer.</p>
<hr />
<h2 id="heading-what-is-cognitive-load-in-software">What Is Cognitive Load (in Software)?</h2>
<p>In software development, cognitive load is the mental overhead required to answer seemingly simple questions:</p>
<ul>
<li><p>What is this code <em>trying</em> to do?</p>
</li>
<li><p>Why does it exist?</p>
</li>
<li><p>What business problem does it solve?</p>
</li>
<li><p>What will break if I change it?</p>
</li>
</ul>
<p>When these answers are obvious, change is cheap. Developers can act with confidence, make localized modifications, and move on.</p>
<p>When the answers are hidden behind layers of indirection, speculative abstractions, and unclear responsibility, every change becomes slow and risky. Developers must reconstruct intent before they can touch behavior.</p>
<p>Crucially, cognitive load compounds. Each unnecessary abstraction, ambiguous name, or misplaced responsibility increases the effort required for the <em>next</em> person. Over time, the system becomes harder not because it does more, but because it explains less.</p>
<hr />
<h2 id="heading-the-busy-paradox">The “Busy” Paradox</h2>
<p>Consider a familiar situation.</p>
<p>You have a team of ten senior developers. Everyone works full-time. Standups are attended. Tickets move across the board. And yet, after two weeks, the sprint produces a single minor feature.</p>
<p>Where is the time going?</p>
<p>It is not laziness.<br />It is not incompetence.<br />It is <strong>cognitive friction</strong>.</p>
<p>Most of the team’s energy is spent rebuilding mental models: reloading context, rediscovering intent, and validating assumptions that the code itself should have made explicit. Progress is slow not because developers are typing less, but because they are thinking harder than necessary.</p>
<p>Organizations are not blind to this problem. A common response is standardization. Codebases are aligned around familiar structures: services contain logic, repositories handle persistence, controllers orchestrate flows. Frameworks are standardized so that “everything looks the same.”</p>
<p>This helps with <em>orientation</em>, but not with <em>understanding</em>.</p>
<p>The problem is rarely the technology or the mechanics of execution—the <em>how</em>. The real cost lies in the <em>why</em> and the <em>what</em>: why this behavior exists, what business rule it enforces, what outcome it protects.</p>
<p>That information is often missing from the code itself. Instead, it is learned slowly through debugging sessions, tribal knowledge, and historical accidents. Weeks or months pass before a developer truly understands what the system is doing and why. This is where the real time and money are lost.</p>
<hr />
<h2 id="heading-1-the-backlog-tax-why-deferring-fixes-is-financial-suicide">1. The Backlog Tax: Why Deferring Fixes Is Financial Suicide</h2>
<p>Most organizations treat their backlog as a neutral to-do list. In practice, it behaves like a high-interest credit card.</p>
<h3 id="heading-the-logic">The Logic</h3>
<p>A developer is implementing a feature and notices a small bug, an awkward name, or a local design flaw. At that moment, the relevant mental context is fully loaded:</p>
<ul>
<li><p>They understand why the code exists</p>
</li>
<li><p>They know which constraints matter</p>
</li>
<li><p>They can judge the safety of a change immediately</p>
</li>
</ul>
<p>This is the cheapest possible moment to improve the system.</p>
<h3 id="heading-the-waste">The Waste</h3>
<p>Instead, the issue is logged in Jira.</p>
<p>Two months later, another developer picks it up. Before making a five-minute fix, they spend hours:</p>
<ul>
<li><p>Reconstructing domain knowledge</p>
</li>
<li><p>Tracing execution paths</p>
</li>
<li><p>Rebuilding trust in unfamiliar code</p>
</li>
</ul>
<p>The cost was not deferred—it was multiplied.</p>
<p>This problem compounds further because these issues are rarely urgent. They are perpetually deprioritized in favor of “more important work.” The backlog grows, the friction increases, and the same irritations are encountered sprint after sprint, each time with the silent promise: “We’ll fix this next time.”</p>
<p>The result is a system that is increasingly expensive to work on, not because it is complex, but because its intent has decayed.</p>
<h3 id="heading-the-rule">The Rule</h3>
<p><strong>If you see it, and you are already there, fix it now.</strong></p>
<p>Deferring known improvements is not prudence. It is a deliberate increase in future cognitive load.</p>
<hr />
<h2 id="heading-2-the-alignment-gap-why-how-is-expensive-without-why">2. The Alignment Gap: Why “How” Is Expensive Without “Why”</h2>
<p>Few things increase cognitive load faster than unclear intent.</p>
<h3 id="heading-the-logic-1">The Logic</h3>
<p>When developers do not fully understand the business goal, they compensate by writing <strong>defensive code</strong>:</p>
<ul>
<li><p>Over-general abstractions</p>
</li>
<li><p>Configuration points for hypothetical futures</p>
</li>
<li><p>“Just in case” extension mechanisms</p>
</li>
</ul>
<p>This is not foresight. It is fear.</p>
<p>The code attempts to remain flexible because the developer cannot be confident about what actually matters. The result is <strong>speculative complexity</strong>—structures designed to support scenarios that never occur, but must still be understood and maintained.</p>
<h3 id="heading-the-cost">The Cost</h3>
<p>Speculative complexity bloats the codebase and obscures meaning. Every reader must mentally filter out irrelevant paths and imagined use cases to locate the real behavior.</p>
<p>When the <em>why</em> is clear, the code can be written with purpose. It describes business intent rather than technical possibility. Behavior becomes traceable to outcomes, and understanding becomes dramatically easier.</p>
<h3 id="heading-the-rule-1">The Rule</h3>
<p><strong>Clarity on the “Why” dramatically simplifies the “How.”</strong></p>
<p>In practice, complete clarity on intent often results in far less code—and far less cognitive load—because the system only expresses what is necessary.</p>
<hr />
<h2 id="heading-3-mechanics-vs-meaning-where-design-becomes-a-cognitive-problem">3. Mechanics vs. Meaning: Where Design Becomes a Cognitive Problem</h2>
<p>At this point, cognitive load stops being a process issue and becomes a design issue.</p>
<h3 id="heading-the-mechanic-procedural-thinking-in-disguise">The Mechanic: Procedural Thinking in Disguise</h3>
<p>In many modern systems—often Spring-based—business logic is implemented procedurally:</p>
<ul>
<li><p>Service layers orchestrate behavior</p>
</li>
<li><p>DTOs move state between components</p>
</li>
<li><p>Methods mutate data from the outside</p>
</li>
</ul>
<p>To understand what is happening, the reader must behave like a debugger: jumping between files, tracking values, and inferring intent from side effects.</p>
<p>The code explains <em>how</em> things happen, but not <em>why</em>.</p>
<h3 id="heading-the-meaning-domain-first-design">The Meaning: Domain-First Design</h3>
<p>In a domain-first design, meaning is explicit and behavior lives where it belongs.</p>
<p>Compare:</p>
<p><strong>orderService.updateStatus(orderId, Status.CANCELLED, currentUser);</strong></p>
<p>with:</p>
<p><strong>order.cancel(currentUser);</strong></p>
<p>The second version immediately reduces cognitive load:</p>
<ul>
<li><p>The intent is explicit</p>
</li>
<li><p>Responsibility is unambiguous</p>
</li>
<li><p>Invariants are enforced at the right level</p>
</li>
</ul>
<p>The reader no longer needs to infer meaning from mechanics. The code communicates its purpose directly.</p>
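<p>A hedged sketch of what the second version implies (the status values and the specific rule are illustrative, not prescribed by the article): the Order owns its transition rules instead of exposing a writable status:</p>

```java
// Illustrative domain-first sketch: cancellation is a named behavior with
// its own invariant, not an external mutation of a status field.
class Order {
    enum Status { OPEN, SHIPPED, CANCELLED }

    private Status status = Status.OPEN;
    private String cancelledBy;

    void cancel(String currentUser) {
        // The rule lives with the concept it protects: a shipped order
        // can no longer be cancelled. No reader has to rediscover this
        // by tracing a service layer.
        if (status == Status.SHIPPED) {
            throw new IllegalStateException("Shipped orders cannot be cancelled");
        }
        this.status = Status.CANCELLED;
        this.cancelledBy = currentUser;
    }

    void ship() {
        this.status = Status.SHIPPED;
    }

    Status status() {
        return status;
    }

    String cancelledBy() {
        return cancelledBy;
    }
}
```

<p>Callers cannot produce an invalid state even by accident, which is precisely what removes the need for defensive re-verification elsewhere.</p>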
<h3 id="heading-the-knowledge-leak-when-intent-lives-outside-the-code">The Knowledge Leak: When Intent Lives Outside the Code</h3>
<p>If your code only encodes <em>how</em> to perform a task, but not <em>why</em> it is doing it, you are losing money every time a developer presses “Save.”</p>
<p>When business intent is not embedded in the domain model, it leaks out of the codebase and into external systems: Jira tickets, Slack threads, pull request comments, and long-forgotten emails. The system may still work, but its meaning no longer lives where developers need it most.</p>
<p>Every future change now requires <strong>archaeology</strong>.</p>
<p>Developers must dig through history to reconstruct intent:</p>
<ul>
<li><p>Why was this implemented this way?</p>
</li>
<li><p>What constraint was it protecting?</p>
</li>
<li><p>Is this behavior accidental or deliberate?</p>
</li>
</ul>
<p>This makes teams brittle. Bugs become expensive because intent must be rediscovered before it can be corrected. Developers become non-interchangeable because knowledge accumulates in people rather than in code. Staff turnover turns into operational risk.</p>
<h3 id="heading-the-cost-of-manual-abstraction">The Cost of Manual Abstraction</h3>
<p>Procedural code forces developers to perform <strong>manual abstraction</strong> every time they revisit it.</p>
<p>They must translate mechanical steps—loops, flags, state mutations—back into business meaning in their own heads. This translation is repeated by every developer, every time, slightly differently. It is pure waste.</p>
<p>A Rich Domain Model performs that abstraction <strong>once</strong>.</p>
<p>By encoding the <em>why</em> directly into the code, the system communicates its purpose at a glance. Intent is no longer inferred; it is declared. From that point on, understanding is cheap, bugs are easier to identify, and developers become far more interchangeable.</p>
<p>Lower cognitive load does not just improve readability.<br />It makes the system resilient—to team changes, shifting priorities, and market pressure.</p>
<h3 id="heading-the-rule-2">The Rule</h3>
<p><strong>Contextualize everything.</strong></p>
<p>If the code does not explain its own business meaning, it is technical debt—regardless of how clean, testable, or well-structured it appears.</p>
<hr />
<h2 id="heading-a-useful-analogy-action-without-context">A Useful Analogy: Action Without Context</h2>
<p>Procedural or purely functional code often resembles a script containing only actions:</p>
<ul>
<li><p>Do this</p>
</li>
<li><p>Then that</p>
</li>
<li><p>Then something else</p>
</li>
</ul>
<p>It is like reading a comic book with only motion lines and no panels explaining who is acting or why.</p>
<p>Contextualized code provides both <strong>action and context</strong>. That context dramatically reduces the effort required to understand, change, and trust the system.</p>
<hr />
<h2 id="heading-the-financial-audit-calculating-the-profit-killer">The Financial Audit: Calculating the Profit Killer</h2>
<p>Cognitive load is a recurring line item on your balance sheet. To understand the true cost, compare two organizations:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Feature</strong></td><td><strong>The "High-Load" Organization</strong></td><td><strong>The "Low-Load" Organization</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>Onboarding</strong></td><td>2 months or more before a hire is "safe" to touch logic.</td><td>A week or less to understand the domain and ship.</td></tr>
<tr>
<td><strong>Maintenance</strong></td><td>80% of capacity spent "re-learning" context.</td><td>20% of capacity; intent is explicit.</td></tr>
<tr>
<td><strong>The Backlog Tax</strong></td><td>Every bug costs 10x more due to Jira cycles.</td><td>Quality dividend: Fix-it-now keeps debt at zero.</td></tr>
<tr>
<td><strong>Staffing</strong></td><td>Relies on "Hero" devs with tribal knowledge.</td><td>Any competent engineer can contribute.</td></tr>
</tbody>
</table>
</div><h3 id="heading-the-real-world-roi">The Real-World ROI</h3>
<p>If you have a team of 10 developers and you reduce their cognitive load by just <strong>25%</strong>, you aren't just making them happier. You are effectively gaining <strong>2.5 full-time senior engineers</strong> without increasing your payroll by a single cent.</p>
<p>Conversely, every hour your team spends deciphering "mechanical" code or grooming a stale backlog is an hour of capital investment that yields zero business value.</p>
<h2 id="heading-final-thought">Final Thought</h2>
<p>The most profitable optimization you can make is not faster infrastructure or better tooling. It is the ruthless pursuit of <strong>software that is simply easier to think about.</strong> Reducing cognitive load lowers the barrier to entry for new talent, reduces the risk of knowledge silos, and—most importantly—returns your team to a state of high-velocity delivery. If your codebase has become a puzzle that only your most senior developers can solve, you are dealing with a bottleneck that is actively harming your bottom line.</p>
]]></content:encoded></item><item><title><![CDATA[CVE-2026-0603 – Hibernate security issue: Should you be worried?]]></title><description><![CDATA[CVE-2026-0603: Second-Order SQL Injection in Hibernate ORM – Risk Assessment
Abstract CVE-2026-0603, disclosed on January 19, 2026, describes a high-severity (CVSS 8.3) second-order SQL injection vulnerability in Hibernate ORM’s InlineIdsOrClauseBuil...]]></description><link>https://blog.leonpennings.com/cve-2026-0603-hibernate-security-issue-should-you-be-worried</link><guid isPermaLink="true">https://blog.leonpennings.com/cve-2026-0603-hibernate-security-issue-should-you-be-worried</guid><category><![CDATA[CVE-2026-0603]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[Java]]></category><category><![CDATA[CVE]]></category><dc:creator><![CDATA[Leon Pennings]]></dc:creator><pubDate>Tue, 20 Jan 2026 06:32:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/Ap0alm8xpxw/upload/5d0cd2790c91246da1fb411a4e1f541e.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>CVE-2026-0603: Second-Order SQL Injection in Hibernate ORM – Risk Assessment</strong></p>
<p><strong>Abstract</strong> CVE-2026-0603, disclosed on January 19, 2026, describes a high-severity (CVSS 8.3) second-order SQL injection vulnerability in Hibernate ORM’s <code>InlineIdsOrClauseBuilder</code>. The issue arises when unsanitized string values are incorporated into dynamic <code>IN</code> or <code>OR</code> clauses during query construction. This article examines the vulnerability’s mechanics, affected versions, and conditions under which applications remain unaffected, with particular emphasis on systems that employ sequence-generated numeric primary keys.</p>
<p><strong>Introduction</strong> SQL injection vulnerabilities remain a persistent threat in object-relational mapping (ORM) frameworks. CVE-2026-0603 affects Hibernate ORM when string-based identifiers are processed in a manner that permits injection of malicious SQL fragments. The vulnerability is classified as second-order: malicious input must first be stored in the database and later retrieved for use in a dynamic query.</p>
<p><strong>Vulnerability Description</strong> The flaw resides in the construction of SQL clauses that combine multiple identifier values. When identifiers are strings, Hibernate may concatenate values without sufficient sanitization, enabling an authenticated attacker with low privileges to store SQL metacharacters (e.g., quotes, comments, semicolons) in a field that later serves as an identifier. Subsequent query execution can result in arbitrary SQL statements, leading to data exposure (high confidentiality impact), unauthorized modification (high integrity impact), or limited denial-of-service (low availability impact).</p>
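<p>The two-step mechanics can be illustrated with a deliberately simplified sketch. To be clear, this is not Hibernate’s actual internal code; it only shows why inlining stored string identifiers by concatenation is dangerous, using a hypothetical <code>unsafeInClause</code> helper:</p>

```java
import java.util.List;
import java.util.stream.Collectors;

public class InlineClauseSketch {

    // Unsafe: quotes each id by plain string concatenation, as a naive
    // inline-ids clause builder might. No escaping, no bind parameters.
    static String unsafeInClause(List<String> ids) {
        return "delete from orders where id in ("
                + ids.stream().map(id -> "'" + id + "'").collect(Collectors.joining(", "))
                + ")";
    }

    public static void main(String[] args) {
        // Step 1 of a second-order attack: this value was stored earlier
        // through a perfectly legitimate, parameterized insert.
        String storedId = "x') or ('1'='1";

        // Step 2: the stored value is read back and inlined into a query.
        String sql = unsafeInClause(List.of("a1", storedId));
        System.out.println(sql);
        // The quote in the stored value closes the IN list early, and the
        // injected OR widens the delete to every row in the table.
    }
}
```

<p>The point of the sketch: the malicious value never touches the query at insert time, so input validation at the original entry point is easy to overlook; the damage happens later, when the stored value is treated as a trusted identifier.</p>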
<p><strong>Affected Versions</strong> The vulnerability impacts Hibernate ORM versions 5.2.8 through 5.6.15. Red Hat products incorporating vulnerable Hibernate builds (e.g., JBoss EAP, Fuse, OpenShift Application Runtimes) are also exposed.</p>
<p><strong>Conditions for Non-Exposure</strong> Applications are not vulnerable when primary keys are exclusively numeric (Long/BIGINT) and generated by database sequences. In such cases:</p>
<ul>
<li><p>Identifier values are controlled by the database and never originate from user input.</p>
</li>
<li><p>Hibernate processes numeric identifiers as bind parameters or literal numbers, bypassing string concatenation logic.</p>
</li>
<li><p>The vulnerable <code>InlineIdsOrClauseBuilder</code> path is not executed.</p>
</li>
</ul>
<p>This configuration effectively prevents exploitation, even on unpatched versions.</p>
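<p>A minimal sketch of why numeric identifiers are inert in this scenario (again illustrative, not Hibernate internals): a <code>long</code> rendered with <code>Long.toString</code> can only contain digits and an optional leading minus sign, so no SQL metacharacter can survive the round trip through the database and back into a clause.</p>

```java
import java.util.List;
import java.util.stream.Collectors;

public class NumericIdsSketch {

    // Inline a list of sequence-generated numeric ids. Long.toString can
    // only emit digits and an optional leading '-', so the resulting
    // clause cannot contain quotes, comments, or semicolons.
    static String inClause(List<Long> ids) {
        return "id in ("
                + ids.stream().map(Long::toString).collect(Collectors.joining(", "))
                + ")";
    }

    public static void main(String[] args) {
        String clause = inClause(List.of(101L, 102L, 103L));
        System.out.println(clause); // id in (101, 102, 103)
    }
}
```

<p>This is the defense-in-depth property the advisory conditions describe: even if the inline-clause code path were reached, a numeric identifier offers the attacker no characters to work with.</p>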
<p><strong>Discussion</strong> The use of user-supplied values as primary keys or identifiers introduces unnecessary risk. Numeric, sequence-generated keys provide a robust defense-in-depth layer against injection and enumeration attacks. When user-controlled values (e.g., usernames, slugs, codes) are required, they should be stored in separate indexed columns with strict validation, normalization, and sanitization. Direct use of such values as identifiers violates separation of concerns and increases the attack surface.</p>
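<p>One way to apply that advice, sketched below under illustrative assumptions (the whitelist pattern and length limit are example choices, not requirements): keep the user-facing value in its own column, and reject anything outside a strict whitelist before it is ever stored, so a second-order payload never enters the database in the first place.</p>

```java
import java.util.regex.Pattern;

public class SlugValidation {

    // Strict whitelist: lowercase letters, digits, and hyphens, 1-64 chars.
    // Quotes, semicolons, and comment markers are rejected before the
    // value can be stored, let alone echoed back into a query.
    private static final Pattern SLUG = Pattern.compile("[a-z0-9-]{1,64}");

    static String normalize(String raw) {
        String slug = raw.trim().toLowerCase();
        if (!SLUG.matcher(slug).matches()) {
            throw new IllegalArgumentException("invalid slug: " + raw);
        }
        return slug;
    }

    public static void main(String[] args) {
        System.out.println(normalize("Spring-Sale-2026")); // spring-sale-2026
        try {
            normalize("x') or ('1'='1"); // second-order payload
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");
        }
    }
}
```

<p>The sequence-generated numeric key remains the sole join and lookup identifier; the validated slug is only ever a display attribute queried through bind parameters.</p>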
<p><strong>Conclusion</strong> For applications that rely solely on sequence-generated numeric primary keys, CVE-2026-0603 presents no practical risk. Patching remains advisable for general hygiene, but urgency is low in these environments. Systems employing string-based identifiers, particularly those influenced by user input, require prompt remediation.</p>
<p><strong>References</strong></p>
<ul>
<li><p>Red Hat Security Advisory: <a target="_blank" href="https://access.redhat.com/security/cve/cve-2026-0603">https://access.redhat.com/security/cve/cve-2026-0603</a></p>
</li>
<li><p>Related discussion on dependency update practices: <a target="_blank" href="https://blog.leonpennings.com/why-blind-dependency-updates-are-costing-java-teams-more-than-they-save">https://blog.leonpennings.com/why-blind-dependency-updates-are-costing-java-teams-more-than-they-save</a></p>
</li>
</ul>
<p><strong>Correction (Feb 2026)</strong>: Affected versions are Hibernate ORM 5.2.8 through 5.6.15 (per Red Hat advisory). Earlier reports listed 6.x in error — 6.x is not impacted. Core advice unchanged.</p>
]]></content:encoded></item></channel></rss>