PresentationDomainDataLayering

tags: team organization · database · encapsulation · application architecture · web development

One of the most common ways to modularize an information-rich program is to separate it into three broad layers: presentation (UI), domain logic (aka business logic), and data access. So you often see web applications divided into a web layer that knows about handling HTTP requests and rendering HTML, a business logic layer that contains validations and calculations, and a data access layer that sorts out how to manage persistent data in a database or remote services.
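
Here is a minimal sketch of the three layers in Python, using a hypothetical order-summary feature; the names, the data shapes, and the in-memory stand-in for the database are all illustrative rather than prescriptive.

  # Data access layer: knows how to fetch persistent data. In a real system
  # this would query a database or remote service; here it is stubbed with
  # in-memory data so the sketch stays runnable.
  _ORDERS = [
      {"id": 1, "customer_id": 42, "amount": 120.0},
      {"id": 2, "customer_id": 42, "amount": 80.0},
  ]

  def find_orders_for_customer(customer_id):
      return [o for o in _ORDERS if o["customer_id"] == customer_id]

  # Domain layer: validations and calculations, no HTML and no SQL.
  def order_summary(customer_id):
      orders = find_orders_for_customer(customer_id)
      return {"count": len(orders), "total": sum(o["amount"] for o in orders)}

  # Presentation layer: turns domain results into HTML.
  def render_order_summary(customer_id):
      summary = order_summary(customer_id)
      return f"<p>{summary['count']} orders totalling {summary['total']:.2f}</p>"

  print(render_order_summary(42))  # <p>2 orders totalling 200.00</p>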

On the whole I've found this to be an effective form of modularization for many applications and one that I regularly use and encourage. Its biggest advantage (for me) is that it reduces the scope of my attention, letting me think about the three topics relatively independently. When I'm working on domain logic code I can mostly ignore the UI and treat any interaction with data sources as an abstract set of functions that give me the data I need and update it as I wish. When I'm working on the data access layer I focus on the details of wrangling the data into the form required by my interface. When I'm working on the presentation I can focus on the UI behavior, treating any data to display or update as magically appearing through function calls. By separating these elements I narrow the scope of my thinking in each piece, which makes it easier for me to follow what I need to do.

This narrowing of scope doesn't imply any sequence to programming them - I usually find I need to iterate between the layers. I might build the data and domain layers off my initial understanding of the UX, but when refining the UX I need to change the domain which necessitates a change to the data layer. But even with that kind of cross-layer iteration, I find it easier to focus on one layer at a time as I make changes. It's similar to the switching of thinking modes you get with refactoring's two hats.

Another reason to modularize is to allow me to substitute different implementations of modules. This separation allows me to build multiple presentations on top of the same domain logic without duplicating it. Multiple presentations could be separate pages in a web app, a web app plus mobile native apps, an API for scripting purposes, or even an old-fashioned command line interface. Modularizing the data source allows me to cope gracefully with a change in database technology, or to support services for persistence that may change with little notice. However I have to mention that while I often hear about data access substitution being a driver for separating the data source layer, I rarely hear of someone actually doing it.

Modularity also supports testability, which naturally appeals to me as a big fan of SelfTestingCode. Module boundaries expose seams that are good affordances for testing. UI code is often tricky to test, so it's good to get as much logic as you can into a domain layer which is easily tested without having to do gymnastics to access the program through a UI [1]. Data access is often slow and awkward, so using TestDoubles around the data layer often makes domain logic testing much easier and more responsive.
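
As a sketch of how that plays out in tests, the domain function below (a variant of the earlier sketch) receives its data access as a parameter, so a test can hand it a simple fake instead of a real database; the names are hypothetical and this is only one of several ways to introduce a test double.

  import unittest

  # Domain logic takes its data access function as a parameter,
  # so a test can substitute a fake for the real database-backed version.
  def order_summary(customer_id, find_orders):
      orders = find_orders(customer_id)
      return {"count": len(orders), "total": sum(o["amount"] for o in orders)}

  class OrderSummaryTest(unittest.TestCase):
      def test_totals_orders_for_customer(self):
          def fake_find_orders(customer_id):
              return [{"amount": 120.0}, {"amount": 80.0}]  # no database involved
          summary = order_summary(42, fake_find_orders)
          self.assertEqual(2, summary["count"])
          self.assertEqual(200.0, summary["total"])

  if __name__ == "__main__":
      unittest.main()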

While substitutability and testability are certainly benefits of this layering, I must stress that even without either of these reasons I would still divide into layers like this. The reduced scope of attention reason is sufficient on its own.

When talking about this we can either look at it as one pattern (presentation-domain-data) or split it into two patterns (presentation-domain, and domain-data). Both points of view are useful - I think of presentation-domain-data as a composite of presentation-domain and domain-data.

I consider these layers to be a form of module, which is a generic word I use for how we clump our software into relatively independent pieces. Exactly how this corresponds to code depends on the programming environment we're in. Usually the lowest level is some form of subroutine or function. An object-oriented language will have a notion of class that collects functions and data structure. Most languages have some form of higher level called packages or namespaces, which often can be formed into a hierarchy. Modules may correspond to separately deployable units: libraries, or services, but they don't have to.

Layering can occur at any of these levels. A small program may just put separate functions for the layers into different files. A larger system may have layers corresponding to namespaces with many classes in each.

I've mentioned three layers here, but it's common to see architectures with more than three layers. A common variation is to put a service layer between the domain and presentation, or to split the presentation layer into separate layers with something like Presentation Model. I don't find that more layers breaks the essential pattern, since the core separations still remain.

The dependencies generally run from top to bottom through the layer stack: presentation depends on the domain, which then depends on the data source. A common variation is to arrange things so that the domain does not depend on its data sources by introducing a mapper between the domain and data source layers. This approach is often referred to as a Hexagonal Architecture.
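
A hedged sketch of that variation: the domain declares the interface it needs (a "port") in its own terms, and the data access layer supplies an adapter that maps stored records into that shape, so the dependency points from data access towards the domain rather than the other way round. All the names here are hypothetical.

  from typing import Protocol

  # Port: declared by the domain, expressed in domain terms.
  class OrderRepository(Protocol):
      def orders_for_customer(self, customer_id: int) -> list: ...

  # Domain logic depends only on the port, never on database code.
  def order_total(customer_id, repository):
      return sum(o["amount"] for o in repository.orders_for_customer(customer_id))

  # Adapter in the data access layer: implements the port and maps
  # raw storage rows into the shape the domain expects.
  class StoredOrderRepository:
      def __init__(self, rows):
          self._rows = rows  # stand-in for rows from a database or service

      def orders_for_customer(self, customer_id):
          return [{"amount": r["amt"]} for r in self._rows if r["cust"] == customer_id]

  repo = StoredOrderRepository([{"cust": 42, "amt": 120.0}, {"cust": 42, "amt": 80.0}])
  print(order_total(42, repo))  # 200.0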

Although presentation-domain-data separation is a common approach, it should only be applied at a relatively small granularity. As an application grows, each layer can get sufficiently complex on its own that you need to modularize further. When this happens it's usually not best to use presentation-domain-data as the higher level of modules. Often frameworks encourage you to have something like view-model-data as the top level namespaces; that's ok for smaller systems, but once any of these layers gets too big you should split your top level into domain oriented modules which are internally layered.
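
For illustration, here is how the two top-level arrangements might look for a hypothetical application with catalog and pricing areas; the names are made up, and the point is only where the layering sits in the hierarchy.

  Layer-first top level (workable while the system is small):

    app/presentation/...
    app/domain/...
    app/data/...

  Domain-first top level, with each module layered internally:

    app/catalog/presentation/...
    app/catalog/domain/...
    app/catalog/data/...
    app/pricing/presentation/...
    app/pricing/domain/...
    app/pricing/data/...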

One common way I've seen this layering lead organizations astray is the AntiPattern of separating development teams by these layers. This looks appealing because front-end and back-end development require different frameworks (or even languages) making it easy for developers to specialize in one or the other. Putting those people with common skills together supports skill sharing and allows the organization to treat the team as a provider of a single, well-delineated type of work. In the same way, putting all the database specialists together fits in with the common centralization of databases and schemas. But the rich interplay between these layers necessitates frequent swapping between them. This isn't too hard when you have specialists in the same team who can casually collaborate, but team boundaries add considerable friction, as well as reducing an individual's motivation to develop the important cross-layer understanding of a system. Worse, separating the layers into teams adds distance between developers and users. Developers don't have to be full-stack (although that is laudable) but teams should be.

Further Reading

I've written about this separation from a number of different angles elsewhere. This layering drives the structure of P of EAA and chapter 1 of that book talks more about this layering. I didn't make this layering a pattern in its own right in that book but have toyed with that territory with Separated Presentation and PresentationDomainSeparation.

For more on why presentation-domain-data shouldn't be the highest level modules in a larger system, take a look at the writing and speaking of Simon Brown. I also agree with him that software architecture should be embedded in code.

I had a fascinating discussion with my colleague Badri Janakiraman about the nature of hexagonal architectures. The context was mostly around applications using Ruby on Rails, but much of the thinking applies to other cases when you may be considering this approach.

Acknowledgements

James Lewis, Jeroen Soeters, Marcos Brizeno, Rouan Wilsenach, and Sean Newham discussed drafts of this post with me.

Notes

1: A PageObject is also an important tool to help testing around UIs.


AntiPattern

tags: bad things · writing

Andrew Koenig first coined the term "antipattern" in an article in JOOP [1], which is sadly not available on the internet. The essential idea (as I remember it [2]) was that an antipattern was something that seems like a good idea when you begin, but leads you into trouble. Since then the term has often been used just to indicate any bad idea, but I think the original focus is more useful.

In the paper Koenig said

An antipattern is just like a pattern, except that instead of a solution it gives something that looks superficially like a solution but isn't one.

-- Andrew Koenig

This is what makes a good antipattern something separate from just a bad thing to point and laugh at. The fact that it looks like a good solution is its essential danger. Since it looks good, sensible people will take the path - only once you've put a lot of effort into it will you discover that it leads to a bad result.

When writing a description of an antipattern it's valuable to describe how to get out of trouble if you've taken the bad path. I see that as useful but not necessary. If there's no good way to get out of it, that doesn't reduce the value of the warning.

It's useful to remember that the same solution can be a good pattern in some contexts and an antipattern in others. The value of a solution depends on the context that you use it.

Notes

1: Journal of Object-Oriented Programming, Vol 8, no. 1. March/April 1995. It was then reprinted in "The Patterns Handbook", edited by Linda Rising (Cambridge University Press)

2: I don't have a copy of the paper, so I'm going primarily off memory and some old notes.


AlignmentMap

tags: team organization · project planning · collaboration

Alignment maps are organizational information radiators that help visualize the alignment of ongoing work with business outcomes. The work may be regular functionality addition or technical work such as re-architecting, repaying technical debt, or improving the build and deployment pipeline. Team members use alignment maps to understand what business outcomes their day-to-day work is meant to improve. Business and IT sponsors use them to understand how ongoing work relates to the business outcomes they care about.

Here’s an example scenario (inspired by real life) that illustrates how these maps may be useful. A team of developers had inefficiently implemented a catalog search function as N+1 calls. The first call to the catalog index returned a set of SKU IDs. For each ID returned, a query was then made to retrieve product detail. The implementation came to the attention of an architect when it failed performance tests. He advised the team to get rid of the N+1 implementation.

“Search-in-one” was the mantra he offered the team as a way to remember their objective. Given the organizational boundary between architects and developers and the low frequency of communication between them, the mantra was taken literally. The team moved heaven and earth to implement a combined index query and detail query in a single call. They lost sight of the real objective of improving search performance and slogged away in an attempt to achieve acceptable performance in exactly one call. Funding ran out in a few months and after some heated discussions, the project was cancelled and the team disbanded.
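
To make the shape of the problem concrete, here is a hedged sketch of the two call patterns; the function names and data are hypothetical stand-ins for the catalog index and product detail services, and the point is that the real objective was about call count and performance, not literally a single call.

  _PRODUCTS = {"sku-1": {"name": "anchor"}, "sku-2": {"name": "rope"}}

  def query_catalog_index(term):
      return list(_PRODUCTS)  # 1 call: returns matching SKU IDs

  def fetch_product_detail(sku_id):
      return _PRODUCTS[sku_id]  # 1 call per SKU

  def fetch_product_details(sku_ids):
      return [_PRODUCTS[s] for s in sku_ids]  # a single bulk call

  # The N+1 shape that failed the performance tests: one index call plus
  # one detail call for every SKU returned.
  def search_n_plus_one(term):
      return [fetch_product_detail(sku_id) for sku_id in query_catalog_index(term)]

  # What "improve search performance" really asked for: fetch the details
  # in bulk, so the number of calls stays flat as the result set grows.
  def search_batched(term):
      return fetch_product_details(query_catalog_index(term))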

The above example may seem absurd but sadly, enterprise IT is no stranger to architecture and business projects that are cancelled after a while because they lost sight of why they were funded in the first place. In the terminology of organizational design, these are problems of alignment.


Visualizing Alignment

Broadly, IT strategy has to align with business strategy and IT outcomes with desired business outcomes. A business outcome may be supported (in part) by one or more IT outcomes. Each IT outcome may be realized by one or more initiatives (programs of work—architectural or business). At this point, it may also be useful to identify an owner for each initiative who then sponsors work (action items) across multiple teams as part of executing the initiative. Depending on the initiative the owner may be a product owner, architect, tech lead or manager. Here's an alignment map for the “search-in-one” case. Had it been on public display in the team’s work area, it might have prompted someone to take a step back and ask what their work was really meant to achieve.
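
A real alignment map is a drawn radiator rather than code, but as a rough sketch of the structure, the hierarchy for the “search-in-one” case might be captured like this; every entry below is hypothetical.

  alignment_map = {
      "business_outcome": "Grow online revenue",
      "it_outcomes": [{
          "outcome": "Responsive catalog search",
          "initiatives": [{
              "name": "Search performance improvement",
              "owner": "Architect",  # could equally be a product owner, tech lead or manager
              "action_items": [
                  "Replace N+1 catalog queries with batched retrieval",
                  "Add search response-time monitoring",
              ],
          }],
      }],
  }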


Global Map

A global alignment map for the IT (appdev+ops) organization may look more like this (although real maps tend to be much larger).

As with all information radiators, such a map is a snapshot in time and needs to be updated regularly (say once a month). Each team displays a big printout of the global map in its work area.

Big organizations are likely to realize value early in this exercise by collaborating to come up with a version 1.0 of such a map that everyone agrees to. The discussions around who owns what initiatives and what outcomes an initiative contributes to lead to a fair bit of organizational clarity about what everyone is up to. Usually, the absence of well-articulated and commonly understood business and IT strategies gets in the way of converging on a set of business and IT outcomes. Well-facilitated workshops with deep and wide participation across the relevant parts of the organization can help address this.


Tracing alignment paths

Once a global alignment map is in place, it allows us to trace alignment from either end. IT and business sponsors can trace what action items are in play under a given initiative. Development team members can trace through the map to understand the real purpose of items they are working on. In addition to in-progress items, we could also include action items that are planned, done or blocked.

As illustrated in the map above, each team highlights their section of the map on their copy of the global map.


Qualitative benefits validation

Once a month (or quarter), IT and business people get together to validate whether all the IT activity has made any difference to business outcomes. Business people come to the meeting with red-amber-green (RAG) statuses for business outcomes and IT people may come with RAG statuses for their side of the map. Both parties need to be able to back up their RAG assessments with data and/or real stories from the trenches (narrative evidence).

With this the group may realize that:

  • Some outcomes have turned green as compared to the previous meeting. Perhaps customer retention turned green after the last release of the responsive rewrite initiative.
  • Not all IT activity is making the expected difference to business outcomes. This provides an opportunity to discuss why this may be the case. Perhaps because:
    • It is a little early in the day. Other planned items need to complete before we can expect a difference. This is probably why, in the map above, customer acquisition is red even though site UX is green. Platform unbundling is still incomplete.
    • The initiatives and action items are sensible but a different execution approach is needed (this is the reason in case of “search-in-one”).
    • A different initiative or set of actions is required and existing ones are better cancelled. Something outside of IT has to fall into place before the business can realize value.
  • A few business outcomes are green even though the related IT initiatives aren’t. This probably means IT matters less to this outcome than other non-IT factors. In the map above, this is probably why customer retention is green even though site performance isn’t. Perhaps IT means to say that performance isn’t where it should be although it hasn’t affected retention just yet.

To summarize, alignment maps provide a common organization-wide tool to discuss the extent to which different IT initiatives are paying off. They could also improve the ability to make sense of ongoing work and bring it into greater alignment with business objectives. I haven't used this technique enough yet to claim general effectiveness, although I do think it shows enough promise. If you try it out I'd be glad to hear about your experiences with it.

Acknowledgements

Thanks to Jim Gumbley, Kief Morris and Vinod Sankaranarayanan for their inputs. Special thanks to Martin Fowler for his guidance with the content and help with publishing.

Further Reading

I describe other information radiators that help the cause of organizational agility in my book Agile IT Organization Design. My companion web site at www.agileorgdesign.com contains links to further writing and my talks.

Lars Barkman has posted details on how to construct an alignment map with graphviz.


DevOpsCulture

tags: delivery · agile adoption · team organization · collaboration

Agile software development has broken down some of the silos between requirements analysis, testing and development. Deployment, operations and maintenance are other activities which have suffered a similar separation from the rest of the software development process. The DevOps movement is aimed at removing these silos and encouraging collaboration between development and operations.

DevOps has become possible largely due to a combination of new operations tools and established agile engineering practices [1], but these are not enough to realize the benefits of DevOps. Even with the best tools, DevOps is just another buzzword if you don't have the right culture.

The primary characteristic of DevOps culture is increased collaboration between the roles of development and operations. There are some important cultural shifts, within teams and at an organizational level, that support this collaboration.

An attitude of shared responsibility is an aspect of DevOps culture that encourages closer collaboration. It’s easy for a development team to become disinterested in the operation and maintenance of a system if it is handed over to another team to look after. If a development team shares the responsibility of looking after a system over the course of its lifetime, they are able to share the operations staff’s pain and so identify ways to simplify deployment and maintenance (e.g. by automating deployments and improving logging). They may also gain additional ObservedRequirements from monitoring the system in production. When operations staff share responsibility of a system’s business goals, they are able to work more closely with developers to better understand the operational needs of a system and help meet these. In practice, collaboration often begins with an increased awareness from developers of operational concerns (such as deployment and monitoring) and the adoption of new automation tools and practices by operations staff.

Some organizational shifts are required to support a culture of shared responsibilities. There should be no silos between development and operations. Handover periods and documentation are a poor substitute for working together on a solution from the start. It is helpful to adjust resourcing structures to allow operations staff to get involved with teams early. Having the developers and operations staff co-located will help them to work together. Handovers and sign-offs discourage people from sharing responsibility and contribute to a culture of blame. Instead, developers and operations staff should both be responsible for the successes and failures of a system. DevOps culture blurs the line between the roles of developer and operations staff and may eventually eliminate the distinction. One common anti-pattern when introducing DevOps to an organization is to assign someone the role of 'DevOps' or to call a team a 'DevOps team'. Doing so perpetuates the kinds of silos that DevOps aims to break down and prevents DevOps culture and practices from spreading and being adopted by the wider organization.

Another valuable organizational shift is to support autonomous teams. In order to collaborate effectively, developers and operations staff need to be able to make decisions and apply changes without convoluted decision making processes. This involves trusting teams, changing the way risk is managed and creating an environment that is free of a fear of failure. For example, a team that has to produce a list of changes for sign-off in order to deploy to a testing environment is likely to be delayed frequently. Instead of requiring such a manual check, it is possible to rely on version control, which is fully auditable. Changes in version control can even be linked to tickets in the team's project management tool. Without the manual sign-off, the team can automate their deployments and speed up their testing cycle.

One effect of a shift towards DevOps culture is that it becomes easier to put new code in production. This necessitates some further cultural changes. In order to ensure that changes in production are sound, the team needs to value building quality into the development process. This includes cross-functional concerns such as performance and security. The techniques of ContinuousDelivery, including SelfTestingCode, form a basis which allows regular, low-risk deployments.

It is also important for the team to value feedback, in order to continuously improve the way in which developers and operations staff work together as well as the system itself. Production monitoring is a helpful feedback loop for diagnosing issues and spotting potential improvements.

Automation is a cornerstone of the DevOps movement and facilitates collaboration. Automating tasks such as testing, configuration and deployment frees people up to focus on other valuable activities and reduces the chance of human error. A helpful side effect of automation is that automated scripts and tests serve as useful, always up-to-date documentation of the system. Automating server configuration, for example, removes the guesswork associated with a SnowflakeServer and means that developers and operations staff are equally able to know and change how a server is configured.

Notes

1: Operations tools include virtualization, cloud computing and automated configuration management. These are often supported by engineering practices such as Continuous Integration, evolutionary design and clean code.


MonolithFirst

tags: evolutionary design · microservices

As I hear stories about teams using a microservices architecture, I've noticed a common pattern.

  1. Almost all the successful microservice stories have started with a monolith that got too big and was broken up
  2. In almost all the cases where I've heard of a system built as a microservice system from scratch, it has ended up in serious trouble.

This pattern has led many of my colleagues to argue that you shouldn't start a new project with microservices, even if you're sure your application will be big enough to make it worthwhile.

Microservices are a useful architecture, but even their advocates say that using them incurs a significant MicroservicePremium, which means they are only useful with more complex systems. This premium, essentially the cost of managing a suite of services, will slow down a team, favoring a monolith for simpler applications. This leads to a powerful argument for a monolith-first strategy, where you should build a new application as a monolith initially, even if you think it's likely that it will benefit from a microservices architecture later on.

The first reason for this is classic Yagni. When you begin a new application, how sure are you that it will be useful to your users? It may be hard to scale a poorly designed but successful software system, but that's still a better place to be than its inverse. As we're now recognizing, often the best way to find out if a software idea is useful is to build a simplistic version of it and see how well it works out. During this first phase you need to prioritize speed (and thus cycle time for feedback), so the premium of microservices is a drag you should do without.

The second issue with starting with microservices is that they only work well if you come up with good, stable boundaries between the services - which is essentially the task of drawing up the right set of BoundedContexts. Any refactoring of functionality between services is much harder than it is in a monolith. But even experienced architects working in familiar domains have great difficulty getting boundaries right at the beginning. By building a monolith first, you can figure out what the right boundaries are, before a microservices design brushes a layer of treacle over them. It also gives you time to develop the MicroservicePrerequisites you need for finer-grained services.

I've heard different ways to execute a monolith-first strategy. The logical way is to design a monolith carefully, paying attention to modularity within the software, both at the API boundaries and how the data is stored. Do this well, and it's a relatively simple matter to make the shift to microservices. However I'd feel much more comfortable with this approach if I'd heard a decent number of stories where it worked out that way. [1]
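
A minimal sketch of what that modularity might look like in code, using hypothetical pricing and sales modules: each module keeps its data private and exposes a narrow facade, so extracting one of them behind a network boundary later is a change to plumbing rather than to its callers.

  # Hypothetical modules inside a single deployable monolith.
  class PricingModule:
      def __init__(self):
          self._rates = {"storm": 0.05}  # data owned by this module alone

      def quote(self, risk_type, insured_value):
          return self._rates[risk_type] * insured_value

  class SalesModule:
      # Depends only on the pricing facade, never on its internal data,
      # so pricing could later sit behind an HTTP API with the same interface.
      def __init__(self, pricing):
          self._pricing = pricing

      def create_offer(self, risk_type, insured_value):
          return {"risk": risk_type, "premium": self._pricing.quote(risk_type, insured_value)}

  sales = SalesModule(PricingModule())
  print(sales.create_offer("storm", 100000))  # {'risk': 'storm', 'premium': 5000.0}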

A more common approach is to start with a monolith and gradually peel off microservices at the edges. Such an approach can leave a substantial monolith at the heart of the microservices architecture, but with most new development occurring in the microservices while the monolith is relatively quiescent.

Another common approach is to just replace the monolith entirely. Few people look at this as an approach to be proud of, yet there are advantages to building a monolith as a SacrificialArchitecture. Don't be afraid of building a monolith that you will discard, particularly if a monolith can get you to market quickly.

Another route I've run into is to start with just a couple of coarse-grained services, larger than those you expect to end up with. Use these coarse-grained services to get used to working with multiple services, while enjoying the fact that such coarse granularity reduces the amount of inter-service refactoring you have to do. Then as boundaries stabilize, break down into finer-grained services. [2]

While the bulk of my contacts lean toward the monolith-first approach, it is by no means unanimous. The counter argument says that starting with microservices allows you to get used to the rhythm of developing in a microservice environment. It takes a lot, perhaps too much, discipline to build a monolith in a sufficiently modular way that it can be broken down into microservices easily. By starting with microservices you get everyone used to developing in separate small teams from the beginning, and having teams separated by service boundaries makes it much easier to scale up the development effort when you need to. This is especially viable for system replacements where you have a better chance of coming up with stable-enough boundaries early. Although the evidence is sparse, I feel that you shouldn't start with microservices unless you have reasonable experience of building a microservices system in the team.

I don't feel I have enough anecdotes yet to get a firm handle on how to decide whether to use a monolith-first strategy. These are early days in microservices, and there are relatively few anecdotes to learn from. So anybody's advice on these topics must be seen as tentative, however confidently they argue.

Further Reading

Sam Newman describes a case study of a team considering using microservices on a greenfield project.

Notes

1: You cannot assume that you can take an arbitrary system and break it into microservices. Most systems acquire too many dependencies between their modules, and thus can't be sensibly broken apart. I've heard of plenty of cases where an attempt to decompose a monolith has quickly ended up in a mess. I've also heard of a few cases where a gradual route to microservices has been successful - but these cases required a relatively good modular design to start with.

2: I suppose that strictly you should call this a "duolith", but I think the approach follows the essence of monolith-first strategy: start with coarse-granularity to gain knowledge and split later.

Acknowledgements

I stole much of this thinking from my colleagues: James Lewis, Sam Newman, Thiyagu Palanisamy, and Evan Bottcher. Stefan Tilkov's comments on an earlier draft played a pivotal role in clarifying my thoughts. Chad Currie created the lovely glyphy dragons. Steven Lowe, Patrick Kua, Jean Robert D'amore, Chelsea Komlo, Ashok Subramanian, Dan Siwiec, Prasanna Pendse, Kief Morris, Chris Ford, and Florian Sellmayr discussed drafts on our internal mailing list.

Yagni

tags: process theory · project planning · evolutionary design · clean code

Yagni originally is an acronym that stands for "You Aren't Gonna Need It". It is a mantra from ExtremeProgramming that's often used generally in agile software teams. It's a statement that some capability we presume our software needs in the future should not be built now because "you aren't gonna need it".

Yagni is a way to refer to the XP practice of Simple Design (from the first edition of The White Book, the second edition refers to the related notion of "incremental design"). [1] Like many elements of XP, it's a sharp contrast to elements of the widely held principles of software engineering in the late 90s. At that time there was a big push for careful up-front planning of software development.

Let's imagine I'm working with a startup in Minas Tirith selling insurance for the shipping business. Their software system is broken into two main components: one for pricing, and one for sales. The dependencies are such that they can't usefully build sales software until the relevant pricing software is completed.

At the moment, the team is working on updating the pricing component to add support for risks from storms. They know that in six months time, they will need to also support pricing for piracy risks. Since they are currently working on the pricing engine they consider building the presumptive feature [2] for piracy pricing now, since that way the pricing service will be complete before they start working on the sales software.

Yagni argues against this: it says that since you won't need piracy pricing for six months you shouldn't build it until it's necessary. So if you think it will take two months to build this software, then you shouldn't start for another four months (neglecting any buffer time for schedule risk and updating the sales component).

The first argument for yagni is that while we may now think we need this presumptive feature, it's likely that we will be wrong. After all, the context of agile methods is an acceptance that we welcome changing requirements. A plan-driven requirements guru might counter-argue that this is because we didn't do a good-enough job of our requirements analysis, and that we should have put more time and effort into it. I counter that by pointing out how difficult and costly it is to figure out your needs in advance; but even if you can, you can still be blind-sided when the Gondor Navy wipes out the pirates, thus undermining the entire business model.

In this case, there's an obvious cost of the presumptive feature - the cost of build: all the effort spent on analyzing, programming, and testing this now useless feature.

But let's consider that we were completely correct with our understanding of our needs, and the Gondor Navy didn't wipe out the pirates. Even in this happy case, building the presumptive feature incurs two serious costs. The first cost is the cost of delayed value. By expending our effort on the piracy pricing software we didn't build some other feature. If we'd instead put our energy into building the sales software for weather risks, we could have put a full storm risks feature into production and be generating revenue two months earlier. This cost of delay due to the presumptive feature is two months revenue from storm insurance.

The common reason why people build presumptive features is because they think it will be cheaper to build it now rather than build it later. But that cost comparison has to be made at least against the cost of delay, preferably factoring in the probability that you're building an unnecessary feature, for which your odds are at least ⅔. [3]
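
A back-of-the-envelope version of that comparison, with every number invented purely to show the shape of the reasoning:

  feature_cost = 100     # effort to build the piracy pricing now (arbitrary units)
  later_premium = 25     # extra effort if we instead build it later, when needed
  monthly_value = 50     # value unlocked by shipping the storm sales feature sooner
  months_delayed = 2     # delay to that feature if we build piracy pricing first
  p_needed = 1 / 3       # chance the presumptive feature is ever needed (see note 3)

  cost_build_now = feature_cost + months_delayed * monthly_value          # 200
  expected_cost_build_later = p_needed * (feature_cost + later_premium)   # ~42

  print(cost_build_now, round(expected_cost_build_later, 1))

Even granting a hefty premium for building later, the cost of delay and the odds of never needing the feature dominate the comparison (with these made-up numbers, at least).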

Often people don't think through the comparative cost of building now versus building later. One approach I use when mentoring developers in this situation is to ask them to imagine any refactoring they would have to do later to introduce the capability when it's needed. Often that thought experiment is enough to convince them that it won't be significantly more expensive to add it later. Another result from such an imagining is to add something that's easy to do now, adds minimal complexity, yet significantly reduces the later cost. Using lookup tables for error messages rather than inline literals is an example that is simple yet makes later translations easier to support.
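
As a small illustration of that error-message example (the keys and messages here are invented):

  # Inline literal: each message would have to be hunted down and translated in place.
  def check_quantity_inline(quantity):
      return "Quantity must be greater than zero" if quantity <= 0 else None

  # Lookup table: messages live in one place, so adding translations later
  # is a localized change rather than a trawl through the code.
  ERROR_MESSAGES = {
      "quantity_not_positive": "Quantity must be greater than zero",
  }

  def check_quantity(quantity):
      return ERROR_MESSAGES["quantity_not_positive"] if quantity <= 0 else None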

The cost of delay is one cost that a successful presumptive feature imposes, but another is the cost of carry. The code for the presumptive feature adds some complexity to the software; this complexity makes it harder to modify and debug that software, thus increasing the cost of other features. The extra complexity from having the piracy-pricing feature in the software might add a couple of weeks to how long it takes to build the storm insurance sales component. That two weeks hits two ways: the additional cost to build the feature, plus the additional cost of delay since it took longer to put it into production. We'll incur a cost of carry on every feature built between now and the time the piracy insurance software starts being useful. Should we never need the piracy-pricing software, we'll incur a cost of carry on every feature built until we remove the piracy-pricing feature (assuming we do), together with the cost of removing it.

So far I've divided presumptive features in two categories: successful and unsuccessful. Naturally there's really a spectrum there, and with one point on that spectrum that's worth highlighting: the right feature built wrong. Development teams are always learning, both about their users and about their code base. They learn about the tools they're using and these tools go through regular upgrades. They also learn about how their code works together. All this means that you often realize that a feature coded six months ago wasn't done the way you now realize it should be done. In that case you have accumulated TechnicalDebt and have to consider the cost of repair for that feature or the on-going costs of working around its difficulties.

So we end up with three classes of presumptive features, and four kinds of costs that occur when you neglect yagni for them.

My insurance example talks about relatively user-visible functionality, but the same argument applies for abstractions to support future flexibility. When building the storm risk calculator, you may consider putting in abstractions and parameterizations now to support piracy and other risks later. Yagni says not to do this, because you may not need the other pricing functions, or if you do your current ideas of what abstractions you'll need will not match what you learn when you do actually need them. This doesn't mean to forego all abstractions, but it does mean any abstraction that makes it harder to understand the code for current requirements is presumed guilty.

Yagni is at its most visible with larger features, but you see it more frequently with small things. Recently I wrote some code that allows me to highlight part of a line of code. For this, I allow the highlighted code to be specified using a regular expression. One problem I see with this is that since the whole regular expression is highlighted, I'm unable to deal with the case where I need the regex to match a larger section than what I'd like to highlight. I expect I can solve that by using a group within the regex and letting my code only highlight the group if a group is present. But I haven't needed to use a regex that matches more than what I'm highlighting yet, so I haven't extended my highlighting code to handle this case - and won't until I actually need it. For similar reasons I don't add fields or methods until I'm actually ready to use them.
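
To make the idea concrete, here is a sketch of how such group-based highlighting might work; it is purely illustrative, and (in keeping with yagni) not an extension that has actually been built.

  import re

  def highlight(line, pattern):
      # Wrap the interesting text in markers. If the regex has a capturing
      # group, highlight just the group; otherwise highlight the whole match.
      def mark(match):
          whole = match.group(0)
          start, end = match.span(1) if match.lastindex else match.span(0)
          offset = match.start(0)
          s, e = start - offset, end - offset
          return whole[:s] + "[" + whole[s:e] + "]" + whole[e:]
      return re.sub(pattern, mark, line)

  print(highlight("total = price * quantity", r"price \* (quantity)"))
  # total = price * [quantity]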

Small yagni decisions like this fly under the radar of project planning. As a developer it's easy to spend an hour adding an abstraction that we're sure will soon be needed. Yet all the arguments above still apply, and a lot of small yagni decisions add up to significant reductions in complexity to a code base, while speeding up delivery of features that are needed more urgently.

Now we understand why yagni is important we can dig into a common confusion about yagni. Yagni only applies to capabilities built into the software to support a presumptive feature, it does not apply to effort to make the software easier to modify. Yagni is only a viable strategy if the code is easy to change, so expending effort on refactoring isn't a violation of yagni because refactoring makes the code more malleable. Similar reasoning applies for practices like SelfTestingCode and ContinuousDelivery. These are enabling practices for evolutionary design, without them yagni turns from a beneficial practice into a curse. But if you do have a malleable code base, then yagni reinforces that flexibility. Yagni has the curious property that it is both enabled by and enables evolutionary design.

Yagni is not a justification for neglecting the health of your code base. Yagni requires (and enables) malleable code.

I also argue that yagni only applies when you introduce extra complexity now that you won't take advantage of until later. If you do something for a future need that doesn't actually increase the complexity of the software, then there's no reason to invoke yagni.

Having said all this, there are times when applying yagni does cause a problem, and you are faced with an expensive change when an earlier change would have been much cheaper. The tricky thing here is that these cases are hard to spot in advance, and much easier to remember than the cases where yagni saved effort [4]. My sense is that yagni-failures are relatively rare and their costs are easily outweighed by when yagni succeeds.

Further Reading

My essay Is Design Dead talks in more detail about the role of design and architecture in agile projects, and thus the role yagni plays as an enabling practice.

This principle was first discussed and fleshed out on Ward's Wiki.

Notes

1: The origin of the phrase is an early conversation between Kent Beck and Chet Hendrickson on the C3 project. Chet came up to Kent with a series of capabilities that the system would soon need, to each one Kent replied "you aren't going to need it". Chet's a fast learner, and quickly became renowned for his ability to spot opportunities to apply yagni. Although "yagni" began life as an acronym, I feel it's now entered our lexicon as a regular word, and thus forego the capital letters.

2: In this post I use "presumptive feature" to refer to any code that supports a feature that isn't yet being made available for use.

3: The ⅔ number is suggested by Kohavi et al, who analyzed the value of features built and deployed on products at Microsoft and found that, even with careful up-front analysis, only ⅓ of them improved the metrics they were designed to improve.

4: This is a consequence of availability bias.

Acknowledgements

Rachel Laycock talked through this post with me and played a critical role in its final organization. Chet Hendrickson and Steven Lowe reminded me to discuss small-scale yagni decisions. Rebecca Parsons, Alvaro Cavalcanti, Mark Taylor, Aman King, Rouan Wilsenach, Peter Gillard-Moss, Kief Morris, Ian Cartwright, James Lewis, Kornelis Sietsma, and Brian Mason participated in an insightful discussion about drafts of this article on our internal mailing list.

MicroservicePremium

tags: microservices

The microservices architectural style has been the hot topic over the last year. At the recent O'Reilly software architecture conference, it seemed like every session talked about microservices. Enough to get everyone's over-hyped-bullshit detector up and flashing. One of the consequences of this is that we've seen teams be too eager to embrace microservices, [1] not realizing that microservices introduce complexity on their own account. This adds a premium to a project's cost and risk - one that often gets projects into serious trouble.

While this hype around microservices is annoying, I do think it's a useful bit of terminology for a style of architecture which has been around for a while, but needed a name to make it easier to talk about. The important thing here is not how annoyed you feel about the hype, but the architectural question it raises: is a microservice architecture a good choice for the system you're working on?

"It depends" must start my answer, but then I must shift the focus to what factors it depends on. The fulcrum of whether or not to use microservices is the complexity of the system you're contemplating. The microservices approach is all about handling a complex system, but in order to do so the approach introduces its own set of complexities. When you use microservices you have to work on automated deployment, monitoring, dealing with failure, eventual consistency, and other factors that a distributed system introduces. There are well-known ways to cope with all this, but it's extra effort, and nobody I know in software development seems to have acres of free time.

So my primary guideline would be don't even consider microservices unless you have a system that's too complex to manage as a monolith. The majority of software systems should be built as a single monolithic application. Do pay attention to good modularity within that monolith, but don't try to separate it into separate services.

The complexity that drives us to microservices can come from many sources including dealing with large teams [2], multi-tenancy, supporting many user interaction models, allowing different business functions to evolve independently, and scaling. But the biggest factor is that of sheer size - people finding they have a monolith that's too big to modify and deploy.

At this point I feel a certain frustration. Many of the problems ascribed to monoliths aren't essential to that style. I've heard people say that you need to use microservices because it's impossible to do ContinuousDelivery with monoliths - yet there are plenty of organizations that succeed with a cookie-cutter deployment approach: Facebook and Etsy are two well-known examples.

I've also heard arguments that say that as a system increases in size, you have to use microservices in order to have parts that are easy to modify and replace. Yet there's no reason why you can't make a single monolith with well-defined module boundaries. At least there's no reason in theory; in practice it seems too easy for module boundaries to be breached and monoliths to get tangled as well as large.

We should also remember that there's a substantial variation in service-size between different microservice systems. I've seen microservice systems vary from a team of 60 with 20 services to a team of 4 with 200 services. It's not clear to what degree service size affects the premium.

As size and other complexity boosters kick into a project I've seen many teams find that microservices are a better place to be. But unless you're faced with that complexity, remember that the microservices approach brings a high premium, one that can slow down your development considerably. So if you can keep your system simple enough to avoid the need for microservices: do.

Notes

1: It's a common enough problem that our recent radar called it out as Microservice Envy.

2: Conway's Law says that the structure of a system follows the organization of the people that built it. Some examples of microservice usage had organizations deliberately split themselves into small, loosely coupled groups in order to push the software into a similar modular structure - a notion that's called the Inverse Conway Maneuver.

Acknowledgements

I stole much of this thinking from my colleagues: James Lewis, Sam Newman, Thiyagu Palanisamy, and Evan Bottcher. Stefan Tilkov's comments on an earlier draft were instrumental in sharpening this post. Rob Miles, David Nelson, Brian Mason, and Scott Robinson discussed drafts of this article on our internal mailing list.