Yagni

tags: process theory · project planning · evolutionary design · clean code

Yagni originally is an acronym that stands for "You Aren't Gonna Need It". It is a mantra from ExtremeProgramming that's often used generally in agile software teams. It's a statement that some capability we presume our software needs in the future should not be built now because "you aren't gonna need it".

Yagni is a way to refer to the XP practice of Simple Design (from the first edition of The White Book, the second edition refers to the related notion of "incremental design"). [1] Like many elements of XP, it's a sharp contrast to elements of the widely held principles of software engineering in the late 90s. At that time there was a big push for careful up-front planning of software development.

Let's imagine I'm working with a startup in Minas Tirith selling insurance for the shipping business. Their software system is broken into two main components: one for pricing, and one for sales. The dependencies are such that they can't usefully build sales software until the relevant pricing software is completed.

At the moment, the team is working on updating the pricing component to add support for risks from storms. They know that in six months time, they will need to also support pricing for piracy risks. Since they are currently working on the pricing engine they consider building the presumptive feature [2] for piracy pricing now, since that way the pricing service will be complete before they start working on the sales software.

Yagni argues against this: since you won't need piracy pricing for six months, you shouldn't build it until it's necessary. So if you think it will take two months to build this software, you shouldn't start for another four months (neglecting any buffer time for schedule risk and updating the sales component).

The first argument for yagni is that while we may now think we need this presumptive feature, it's likely that we will be wrong. After all, the context of agile methods is an acceptance that we welcome changing requirements. A plan-driven requirements guru might counter-argue that this is because we didn't do a good enough job of our requirements analysis; we should have put more time and effort into it. I counter that by pointing out how difficult and costly it is to figure out your needs in advance, but even if you can, you can still be blind-sided when the Gondor Navy wipes out the pirates, thus undermining the entire business model.

In this case, there's an obvious cost of the presumptive feature - the cost of build: all the effort spent on analyzing, programming, and testing this now useless feature.

But let's consider that we were completely correct with our understanding of our needs, and the Gondor Navy didn't wipe out the pirates. Even in this happy case, building the presumptive feature incurs two serious costs. The first cost is the cost of delayed value. By expending our effort on the piracy pricing software we didn't build some other feature. If we'd instead put our energy into building the sales software for weather risks, we could have put a full storm risks feature into production and be generating revenue two months earlier. This cost of delay due to the presumptive feature is two months revenue from storm insurance.

The common reason why people build presumptive features is because they think it will be cheaper to build it now rather than build it later. But that cost comparison has to be made at least against the cost of delay, preferably factoring in the probability that you're building an unnecessary feature, for which your odds are at least ⅔. [3]

Often people don't think through the comparative cost of building now versus building later. One approach I use when mentoring developers in this situation is to ask them to imagine any refactoring they would have to do later to introduce the capability when it's needed. Often that thought experiment is enough to convince them that it won't be significantly more expensive to add it later. Another result from such an imagining is to add something that's easy to do now, adds minimal complexity, yet significantly reduces the later cost. Using lookup tables for error messages rather than inline literals is a simple example that makes later translation easier to support.
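To make that concrete, here is a minimal sketch of the kind of low-cost preparation I mean, with hypothetical message keys and texts: error messages kept in a lookup table rather than as inline literals, so translation can be added later without touching every call site.

```python
# Hypothetical sketch: error messages in a lookup table rather than inline
# literals. The keys and texts are illustrative, not from any real system.

ERROR_MESSAGES = {
    "quote.missing_cargo_value": "A cargo value is required to price this risk.",
    "quote.unknown_route": "We don't have pricing data for this route yet.",
}

def error_message(key: str) -> str:
    # Today this is a plain dictionary lookup; if translation is needed later,
    # only this function has to learn about locales.
    return ERROR_MESSAGES.get(key, "An unexpected error occurred.")

print(error_message("quote.unknown_route"))
```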

The cost of delay is one cost that a successful presumptive feature imposes, but another is the cost of carry. The code for the presumptive feature adds some complexity to the software; this complexity makes it harder to modify and debug that software, thus increasing the cost of other features. The extra complexity from having the piracy-pricing feature in the software might add a couple of weeks to how long it takes to build the storm insurance sales component. That two weeks hits us two ways: the additional cost to build the feature, plus the additional cost of delay since it took longer to put it into production. We'll incur a cost of carry on every feature built between now and the time the piracy insurance software starts being useful. Should we never need the piracy-pricing software, we'll incur a cost of carry on every feature built until we remove the piracy-pricing feature (assuming we do), together with the cost of removing it.

So far I've divided presumptive features into two categories: successful and unsuccessful. Naturally there's really a spectrum there, with one point on that spectrum that's worth highlighting: the right feature built wrong. Development teams are always learning, both about their users and about their code base. They learn about the tools they're using, and these tools go through regular upgrades. They also learn about how their code works together. All this means that you often realize that a feature coded six months ago wasn't done the way you now realize it should be done. In that case you have accumulated TechnicalDebt, and have to consider the cost of repair for that feature or the ongoing costs of working around its difficulties.

So we end up with three classes of presumptive features, and four kinds of costs that occur when you neglect yagni for them.

My insurance example talks about relatively user-visible functionality, but the same argument applies for abstractions to support future flexibility. When building the storm risk calculator, you may consider putting in abstractions and parameterizations now to support piracy and other risks later. Yagni says not to do this, because you may not need the other pricing functions, or if you do your current ideas of what abstractions you'll need will not match what you learn when you do actually need them. This doesn't mean to forego all abstractions, but it does mean any abstraction that makes it harder to understand the code for current requirements is presumed guilty.
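To illustrate the kind of abstraction that's presumed guilty, here is a hypothetical sketch (not code from any real pricing system): a direct function that serves today's storm-risk requirement, next to a speculative strategy-style pricer parameterized for risk types nobody needs yet.

```python
# Hypothetical illustration. The simple version serves today's requirement directly:

def storm_risk_price(cargo_value: float) -> float:
    return cargo_value * 0.02   # illustrative rate, not a real pricing rule

# The speculative version parameterizes for risk types nobody needs yet,
# so every reader has to understand the indirection now:

class RiskPricer:
    def __init__(self, strategies: dict):
        self.strategies = strategies   # e.g. {"storm": ..., "piracy": ...}

    def price(self, risk_type: str, cargo_value: float) -> float:
        return self.strategies[risk_type](cargo_value)

# When piracy pricing actually arrives, we'll know far more about what shape
# the abstraction should take than we do today.
```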

Yagni is at its most visible with larger features, but you see it more frequently with small things. Recently I wrote some code that allows me to highlight part of a line of code. For this, I allow the highlighted code to be specified using a regular expression. One problem I see with this is that since the whole regular expression is highlighted, I'm unable to deal with the case where I need the regex to match a larger section than what I'd like to highlight. I expect I can solve that by using a group within the regex and letting my code only highlight the group if a group is present. But I haven't needed to use a regex that matches more than what I'm highlighting yet, so I haven't extended my highlighting code to handle this case - and won't until I actually need it. For similar reasons I don't add fields or methods until I'm actually ready to use them.
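To make that thought experiment concrete, here is a rough sketch of what the group-based extension could look like if it were ever needed. This is a hypothetical illustration, not my actual highlighting code, but it shows how small the later change is likely to be.

```python
import re

def highlight_span(line, pattern):
    """Return the (start, end) span to highlight in line, or None if no match.

    If the regex contains a capturing group, only the group is highlighted;
    otherwise the whole match is.
    """
    match = re.search(pattern, line)
    if match is None:
        return None
    if match.lastindex:          # a capturing group matched
        return match.span(1)
    return match.span()          # no group: highlight the whole match

# Whole match highlighted:
print(highlight_span("total = price * quantity", r"price \* quantity"))
# Only the group highlighted, even though the regex matches a larger section:
print(highlight_span("total = price * quantity", r"= (price) \*"))
```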

Small yagni decisions like this fly under the radar of project planning. As a developer it's easy to spend an hour adding an abstraction that we're sure will soon be needed. Yet all the arguments above still apply, and a lot of small yagni decisions add up to significant reductions in complexity to a code base, while speeding up delivery of features that are needed more urgently.

Now that we understand why yagni is important, we can dig into a common confusion about it. Yagni only applies to capabilities built into the software to support a presumptive feature; it does not apply to effort to make the software easier to modify. Yagni is only a viable strategy if the code is easy to change, so expending effort on refactoring isn't a violation of yagni, because refactoring makes the code more malleable. Similar reasoning applies for practices like SelfTestingCode and ContinuousDelivery. These are enabling practices for evolutionary design; without them yagni turns from a beneficial practice into a curse. But if you do have a malleable code base, then yagni reinforces that flexibility. Yagni has the curious property that it is both enabled by and enables evolutionary design.

Yagni is not a justification for neglecting the health of your code base. Yagni requires (and enables) malleable code.

I also argue that yagni only applies when you introduce extra complexity now that you won't take advantage of until later. If you do something for a future need that doesn't actually increase the complexity of the software, then there's no reason to invoke yagni.

Having said all this, there are times when applying yagni does cause a problem, and you are faced with an expensive change when an earlier change would have been much cheaper. The tricky thing here is that these cases are hard to spot in advance, and much easier to remember than the cases where yagni saved effort [4]. My sense is that yagni failures are relatively rare, and their costs are easily outweighed by the savings from the cases where yagni succeeds.

Further Reading

My essay Is Design Dead talks in more detail about the role of design and architecture in agile projects, and thus the role yagni plays as an enabling practice.

This principle was first discussed and fleshed out on Ward's Wiki.

Notes

1: The origin of the phrase is an early conversation between Kent Beck and Chet Hendrickson on the C3 project. Chet came up to Kent with a series of capabilities that the system would soon need; to each one Kent replied "you aren't going to need it". Chet's a fast learner, and quickly became renowned for his ability to spot opportunities to apply yagni. Although "yagni" began life as an acronym, I feel it's now entered our lexicon as a regular word, and thus forego the capital letters.

2: In this post I use "presumptive feature" to refer to any code that supports a feature that isn't yet being made available for use.

3: The ⅔ number is suggested by Kohavi et al, who analyzed the value of features built and deployed on products at Microsoft and found that, even with careful up-front analysis, only ⅓ of them improved the metrics they were designed to improve.

4: This is a consequence of availability bias.

Acknowledgements

Rachel Laycock talked through this post with me and played a critical role in its final organization. Chet Hendrickson and Steven Lowe reminded me to discuss small-scale yagni decisions. Rebecca Parsons, Alvaro Cavalcanti, Mark Taylor, Aman King, Rouan Wilsenach, Peter Gillard-Moss, Kief Morris, Ian Cartwright, James Lewis, Kornelis Sietsma, and Brian Mason participated in an insightful discussion about drafts of this article on our internal mailing list.


MicroservicePremium

tags: microservices

The microservices architectural style has been the hot topic over the last year. At the recent O'Reilly software architecture conference, it seemed like every session talked about microservices. Enough to get everyone's over-hyped-bullshit detector up and flashing. One of the consequences of this is that we've seen teams be too eager to embrace microservices, [1] not realizing that microservices introduce complexity on their own account. This adds a premium to a project's cost and risk - one that often gets projects into serious trouble.

While this hype around microservices is annoying, I do think it's a useful bit of terminology for a style of architecture which has been around for a while, but needed a name to make it easier to talk about. The important thing here is not how annoyed you feel about the hype, but the architectural question it raises: is a microservice architecture a good choice for the system you're working on?

"It depends" must start my answer, but then I must shift the focus to what factors it depends on. The fulcrum of whether or not to use microservices is the complexity of the system you're contemplating. The microservices approach is all about handling a complex system, but in order to do so the approach introduces its own set of complexities. When you use microservices you have to work on automated deployment, monitoring, dealing with failure, eventual consistency, and other factors that a distributed system introduces. There are well-known ways to cope with all this, but it's extra effort, and nobody I know in software development seems to have acres of free time.

So my primary guideline would be don't even consider microservices unless you have a system that's too complex to manage as a monolith. The majority of software systems should be built as a single monolithic application. Do pay attention to good modularity within that monolith, but don't try to separate it into separate services.

The complexity that drives us to microservices can come from many sources including dealing with large teams [2], multi-tenancy, supporting many user interaction models, allowing different business functions to evolve independently, and scaling. But the biggest factor is that of sheer size - people finding they have a monolith that's too big to modify and deploy.

At this point I feel a certain frustration. Many of the problems ascribed to monoliths aren't essential to that style. I've heard people say that you need to use microservices because it's impossible to do ContinuousDelivery with monoliths - yet there are plenty of organizations that succeed with a cookie-cutter deployment approach: Facebook and Etsy are two well-known examples.

I've also heard arguments that say that as a system increases in size, you have to use microservices in order to have parts that are easy to modify and replace. Yet there's no reason why you can't make a single monolith with well-defined module boundaries. At least there's no reason in theory; in practice it seems too easy for module boundaries to be breached and monoliths to get tangled as well as large.

We should also remember that there's a substantial variation in service-size between different microservice systems. I've seen microservice systems vary from a team of 60 with 20 services to a team of 4 with 200 services. It's not clear to what degree service size affects the premium.

As size and other complexity boosters kick into a project, I've seen many teams find that microservices are a better place to be. But unless you're faced with that complexity, remember that the microservices approach brings a high premium, one that can slow down your development considerably. So if you can keep your system simple enough to avoid the need for microservices: do.

Notes

1: It's a common enough problem that our recent radar called it out as Microservice Envy.

2: Conway's Law says that the structure of a system follows the organization of the people that built it. Some examples of microservice usage had organizations deliberately split themselves into small, loosely coupled groups in order to push the software into a similar modular structure - a notion that's called the Inverse Conway Maneuver.

Acknowledgements

I stole much of this thinking from my colleagues: James Lewis, Sam Newman, Thiyagu Palanisamy, and Evan Bottcher. Stefan Tilkov's comments on an earlier draft were instrumental in sharpening this post. Rob Miles, David Nelson, Brian Mason, and Scott Robinson discussed drafts of this article on our internal mailing list.


BeckDesignRules

tags: extreme programming · clean code · refactoring

Kent Beck came up with his four rules of simple design while he was developing ExtremeProgramming in the late 1990s. I express them like this. [1]

  • Passes the tests
  • Reveals intention
  • No duplication
  • Fewest elements

The rules are in priority order, so "passes the tests" takes priority over "reveals intention".

Kent Beck developed Extreme Programming and Test Driven Development, and can always be relied on for good Victorian facial hair for his local ballet.

The most important of the rules is "passes the tests". XP was revolutionary in how it raised testing to a first-class activity in software development, so it's natural that testing should play a prominent role in these rules. The point is that whatever else you do with the software, the primary aim is that it works as intended and tests are there to ensure that happens.

"Reveals intention" is Kent's way of saying the code should be easy to understand. Communication is a core value of Extreme Programing, and many programmers like to stress that programs are there to be read by people. Kent's form of expressing this rule implies that the key to enabling understanding is to express your intention in the code, so that your readers can understand what your purpose was when writing it.

The "no duplication" is perhaps the most powerfully subtle of these rules. It's a notion expressed elsewhere as DRY or SPOT [2], Kent expressed it as saying everything should be said "Once and only Once." Many programmers have observed that the exercise of eliminating duplication is a powerful way to drive out good designs. [3]

The last rule tells us that anything that doesn't serve the three prior rules should be removed. At the time these rules were formulated there was a lot of design advice around adding elements to an architecture in order to increase flexibility for future requirements. Ironically the extra complexity of all of these elements usually made the system harder to modify and thus less flexible in practice.

People often find there is some tension between "no duplication" and "reveals intention", leading to arguments about which order those rules should appear in. I've always seen their order as unimportant, since they feed off each other in refining the code. Adding duplication to increase clarity is often just papering over a problem, when it would be better to solve it. [4]

What I like about these rules is that they are very simple to remember, yet following them improves code in any language or programming paradigm that I've worked with. They are an example of Kent's skill in finding principles that are generally applicable and yet concrete enough to shape my actions.

At the time there was a lot of “design is subjective”, “design is a matter of taste” bullshit going around. I disagreed. There are better and worse designs. These criteria aren’t perfect, but they serve to sort out some of the obvious crap and (importantly) you can evaluate them right now. The real criteria for quality of design, “minimizes cost (including the cost of delay) and maximizes benefit over the lifetime of the software,” can only be evaluated post hoc, and even then any evaluation will be subject to a large bag full of cognitive biases. The four rules are generally predictive.

-- Kent Beck

Further Reading

There are many expressions of these rules out there; here are a few that I think are worth exploring:

Acknowledgements

Kent reviewed this post and sent me some very helpful feedback, much of which I appropriated into the text.

Notes

1: Authoritative Formulation

There are many expressions of the four rules out there, Kent stated them in lots of media, and plenty of other people have liked them and phrased them their own way. So you'll see plenty of descriptions of the rules, but each author has their own twist - as do I.

If you want an authoritative formulation from the man himself, probably your best bet is from the first edition of The White Book (p 57) in the section that outlines the XP practice of Simple Design.

  • Runs all the tests
  • Has no duplicated logic. Be wary of hidden duplication like parallel class hierarchies
  • States every intention important to the programmer
  • Has the fewest possible classes and methods

(Just to be confusing, there's another formulation on page 109 that omits "runs all the tests" and splits "fewest classes" and "fewest methods" over the last two rules. I recall this was an earlier formulation that Kent improved on while writing the White Book.)

2: DRY stands for Don't Repeat Yourself, and comes from The Pragmatic Programmer. SPOT stands for Single Point Of Truth.

3: This principle was the basis of my first design column for IEEE Software.

4: When reviewing this post, Kent said "In the rare case they are in conflict (in tests are the only examples I can recall), empathy wins over some strictly technical metric." I like his point about empathy - it reminds us that when writing code we should always be thinking of the reader.


DataLake

tags: database · big data

Data Lake is a term that's appeared in this decade to describe an important component of the data analytics pipeline in the world of Big Data. The idea is to have a single store for all of the raw data that anyone in an organization might need to analyze. Commonly people use Hadoop to work on the data in the lake, but the concept is broader than just Hadoop.

When I hear about a single point to pull together all the data an organization wants to analyze, I immediately think of the notion of the data warehouse (and data mart [1]). But there is a vital distinction between the data lake and the data warehouse. The data lake stores raw data, in whatever form the data source provides. There are no assumptions about the schema of the data; each data source can use whatever schema it likes. It's up to the consumers of that data to make sense of it for their own purposes.

This is an important step: many data warehouse initiatives didn't get very far because of schema problems. Data warehouses tend to go with the notion of a single schema for all analytics needs, but I've taken the view that a single unified data model is impractical for anything but the smallest organizations. To model even a slightly complex domain you need multiple BoundedContexts, each with its own data model. In analytics terms, you need each analytics user to use a model that makes sense for the analysis they are doing. Shifting to storing only raw data firmly puts the responsibility on the data analyst.
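A small sketch of that "schema on read" idea, with hypothetical field names: two consumers read the same raw record from the lake, each mapping it into the model that suits their own analysis.

```python
import json

# A raw record as some source system might deposit it (field names hypothetical).
raw_record = '{"sys": "policy-admin", "premium": "1250.00", "ccy": "USD", "ts": "2014-11-03T10:22:05Z"}'

def revenue_view(raw):
    # One analyst only cares about money amounts.
    data = json.loads(raw)
    return {"amount": float(data["premium"]), "currency": data["ccy"]}

def timeliness_view(raw):
    # Another analysis only cares about which system produced the data, and when.
    data = json.loads(raw)
    return {"source": data["sys"], "produced_at": data["ts"]}

print(revenue_view(raw_record))
print(timeliness_view(raw_record))
```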

Another source of problems for data warehouse initiatives is ensuring data quality. Trying to get an authoritative single source for data requires lots of analysis of how the data is acquired and used by different systems. System A may be good for some data, and system B for another. You run into rules where system A is better for more recent orders but system B is better for orders of a month or more ago, unless returns are involved. On top of this, data quality is often a subjective issue, different analysis has different tolerances for data quality issues, or even a different notion of what is good quality.

This leads to a common criticism of the data lake - that it's just a dumping ground for data of widely varying quality, better named a data swamp. The criticism is both valid and irrelevant. The hot title of the New Analytics is "Data Scientist". Although it's a much-abused title, many of these folks do have a solid background in science. And any serious scientist knows all about data quality problems. Consider what you might think is the simple matter of analyzing temperature readings over time. You have to take into account weather stations that are relocated in ways that may subtly affect the readings, anomalies due to problems in equipment, and missing periods when the sensors aren't working. Many of the sophisticated statistical techniques out there were created to sort out data quality problems. Scientists are always skeptical about data quality and are used to dealing with questionable data. So for them the lake is important because they get to work with raw data and can be deliberate about applying techniques to make sense of it, rather than relying on some opaque data cleansing mechanism that probably does more harm than good.

Data warehouses usually would not just cleanse but also aggregate the data into a form that made it easier to analyze. But scientists tend to object to this too, because aggregation implies throwing away data. The data lake should contain all the data because you don't know what people will find valuable, either today or in a couple of years time.

One of my colleagues illustrated this thinking with a recent example: "We were trying to compare our automated predictive models versus manual forecasts made by the company's contract managers. To do this we decided to train our models on year old data and compare our predictions to the ones made by managers at that time. Since we now know the correct results, this should be a fair test of accuracy. When we started to do this, it appeared that the manager's predictions were horrible and that even our simple models, made in just two weeks, were crushing them. We suspected that this out-performance was too good to be true. After a lot of testing and digging we discovered that the time stamps associated with those manager predictions were incorrect. They were being modified by some end-of-month processing report. So in short, these values in the data warehouse were useless; we feared that we would have no way of performing this comparison. After more digging we found that these reports had been stored and so we could extract the real forecasts made at that time. (We're crushing them again but it's taken many months to get there)."

The complexity of this raw data means that there is room for something that curates the data into a more manageable structure (as well as reducing the considerable volume of data). The data lake shouldn't be accessed directly very much. Because the data is raw, you need a lot of skill to make any sense of it. Relatively few people work in the data lake; as they uncover generally useful views of the data in the lake, they can create a number of data marts, each of which has a specific model for a single bounded context. A larger number of downstream users can then treat these lakeshore marts as an authoritative source for that context.

So far I've described the data lake as a singular point for integrating data across an enterprise, but I should mention that isn't how it was originally intended. The term was coined by James Dixon in 2010; he intended a data lake to be used for a single data source, with multiple data sources instead forming a "water garden". Despite its original formulation, the prevalent usage now is to treat a data lake as combining many sources. [2]

You should use a data lake for analytic purposes, not for collaboration between operational systems. When operational systems collaborate they should do this through services designed for the purpose, such as RESTful HTTP calls, or asynchronous messaging. The lake is too complex to trawl for operational communication. It may be that analysis of the lake can lead to new operational communication routes, but these should be built directly rather than through the lake.

It is important that all data put in the lake has a clear provenance in place and time. Every data item should have a clear trace to what system it came from and when the data was produced. The data lake thus contains a historical record. This might come from feeding Domain Events into the lake, a natural fit with Event Sourced systems. But it could also come from systems doing a regular dump of current state into the lake - an approach that's valuable when the source system doesn't have any temporal capabilities but you want a temporal analysis of its data. A consequence of this is that data put into the lake is immutable; an observation once stated cannot be removed (although it may be refuted later), and you should also expect ContradictoryObservations.
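A minimal sketch of what that provenance and immutability might look like in practice - the structure and names are assumptions for illustration, not a prescribed design:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)      # frozen: an observation, once stated, never changes
class Observation:
    source_system: str       # which system the data came from
    produced_at: datetime    # when the source produced it
    payload: dict            # the raw data, exactly as the source supplied it

lake = []                    # append-only; nothing is ever removed

def record(source_system, produced_at, payload):
    lake.append(Observation(source_system, produced_at, payload))

# A later, contradictory observation is appended alongside the earlier one;
# it's up to consumers to decide how to resolve the contradiction.
record("policy-admin", datetime(2014, 11, 3, tzinfo=timezone.utc),
       {"policy": "P42", "status": "active"})
record("claims", datetime(2014, 11, 9, tzinfo=timezone.utc),
       {"policy": "P42", "status": "lapsed"})
```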

The data lake is schemaless; it's up to the source systems to decide what schema to use and for consumers to work out how to deal with the resulting chaos. Furthermore, the source systems are free to change their inflow data schemas at will, and again the consumers have to cope. Obviously we prefer such changes to be as minimally disruptive as possible, but scientists prefer messy data to losing data.

Data lakes are going to be very large, and much of the storage is oriented around the notion of a large schemaless structure - which is why Hadoop and HDFS are usually the technologies people use for data lakes. One of the vital tasks of the lakeshore marts is to reduce the amount of data you need to deal with, so that downstream analytics doesn't have to trawl through the full volume of the lake.

The Data Lake's appetite for a deluge of raw data raises awkward questions about privacy and security. The principle of Datensparsamkeit is very much in tension with the data scientists' desire to capture all data now. A data lake makes a tempting target for crackers, who might love to siphon choice bits into the public oceans. Restricting direct lake access to a small data science group may reduce this threat, but doesn't avoid the question of how that group is kept accountable for the privacy of the data they sail on.

Notes

1: The usual distinction is that a data mart is for a single department in an organization, while a data warehouse integrates across all departments. Opinions differ on whether a data warehouse should be the union of all data marts or whether a data mart is a logical subset (view) of data in the data warehouse.

2: In a later blog post, Dixon emphasizes the lake versus water garden distinction, but (in the comments) says that it is a minor change. For me the key point is that the lake stores a large body of data in its natural state, the number of feeder streams isn't a big deal.

Acknowledgements

My thanks to Anand Krishnaswamy, Danilo Sato, David Johnston, Derek Hammer, Duncan Cragg, Jonny Leroy, Ken Collier, Shripad Agashe, and Steven Lowe for discussing drafts of this post on our internal mailing lists.


DiversityMediocrityIllusion

tags: diversity

I've often been involved in discussions about deliberately increasing the diversity of a group of people. The most common case in software is increasing the proportion of women. Two examples are in hiring and conference speaker rosters where we discuss trying to get the proportion of women to some level that's higher than usual. A common argument against pushing for greater diversity is that it will lower standards, raising the spectre of a diverse but mediocre group.

To understand why this is an illusionary concern, I like to consider a little thought experiment. Imagine a giant bucket that contains a hundred thousand marbles. You know that 10% of these marbles have a special sparkle that you can see when you carefully examine them. You also know that 80% of these marbles are blue and 20% pink, and that sparkles exist evenly across both colors [1]. If you were asked to pick out ten sparkly marbles, you know you could confidently go through some and pick them out. So now imagine you're told to pick out ten marbles such that five were blue and five were pink.

I don't think you would react by saying "that's impossible". After all, there are two thousand pink sparkly marbles in there; getting five of them is not beyond the wit of even a man. Similarly in software, there may be fewer women in the software business, but there are still enough good women to fit the roles a company or a conference needs.

The point of the marbles analogy, however, is to focus on the real consequence of the demand for a 50:50 split. Yes, it's possible to find the appropriate marbles, but the downside is that it takes longer. [2]

That notion applies to finding the right people too. Getting a better-than-base proportion of women isn't impossible, but it does require more work, often much more work. This extra effort reinforces the rarity: if people have difficulty finding good people as it is, it takes determined effort to spend the extra time to get a higher proportion of the minority group — even if you are only trying to raise the proportion of women up to 30%, rather than a full 50%.

In recent years we've made increasing our diversity a high priority at ThoughtWorks. This has led to a lot of effort trying to go to where we are more likely to run into the talented women we are seeking: women's colleges, women-in-IT groups and conferences. We encourage our women to speak at conferences, which helps let other women know we value a diverse workforce.

When interviewing, we make a point of ensuring there are women involved. This gives women candidates someone to relate to, and someone to ask questions which are often difficult to ask men. It's also vital to have women interview men, since we've found that women often spot problematic behaviors that men miss as we just don't have the experiences of subtle discriminations. Getting a diverse group of people inside the company isn't just a matter of recruiting, it also means paying a lot of attention to the environment we have, to try to ensure we don't have the same AlienatingAtmosphere that much of the industry exhibits. [3]

One argument I've heard against this approach is that if everyone did this, then we would run out of pink, sparkly marbles. We'll know this is something to be worried about when women are paid significantly more than men for the same work.

One anecdote that stuck in my memory was from a large, traditional company who wanted to improve the number of women in senior management positions. They didn't impose a quota on appointing women to those positions, but they did impose a quota for women on the list of candidates. (Something like: "there must be at least three credible women candidates for each post".) This candidate quota forced the company to actively seek out women candidates. The interesting point was that just doing this, with no mandate to actually appoint these women, correlated with an increased proportion of women in those positions.

For conference planning it's a similar strategy: just putting out a call for papers and saying you'd like a diverse speaker lineup isn't enough. Neither are such things as blind review of proposals (and I'm not sure that's a good idea anyway). The important thing is to seek out women and encourage them to submit ideas. Organizing conferences is hard enough work as it is, so I can sympathize with those that don't want to add to the workload, but those that do can get there. FlowCon is a good example of a conference that made this an explicit goal and did far better than the industry average (and in case you were wondering, there was no difference between men's and women's evaluation scores).

So now that we recognize that getting greater diversity is a matter of application and effort, we can ask ourselves whether the benefit is worth the cost. In a broad professional sense, I've argued that it is, because our DiversityImbalance is reducing our ability to bring the talent we need into our profession, and reducing the influence our profession needs to have on society. In addition I believe there is a moral argument to push back against long-standing wrongs faced by HistoricallyDiscriminatedAgainst groups.

Conferences have an important role to play in correcting this imbalance. The roster of speakers is, at least subconsciously, a statement of what the profession should look like. If it's all white guys like me, then that adds to the AlienatingAtmosphere that pushes women out of the profession. Therefore I believe that conferences need to strive to get an increased proportion of historically-discriminated-against speakers. We, as a profession, need to push them to do this. It also means that women have an extra burden to become visible and act as part of that better direction for us. [4]

For companies, the choice is more personal. For me, ThoughtWorks's efforts to improve its diversity are a major factor in why I've been an employee here for over a decade. I don't think it's a coincidence that ThoughtWorks is also a company that has a greater open-mindedness, and a lack of political maneuvering, than most of the companies I've consulted with over the years. I consider those attributes to be a considerable competitive advantage in attracting talented people, and providing an environment where we can collaborate effectively to do our work.

But I'm not holding ThoughtWorks up as an example of perfection. We've made a lot of progress over the decade I've been here, but we still have a long way to go. In particular we are very short of senior technical women. We've introduced a number of programs around networks, and leadership development, to help grow women to fill those gaps. But these things take time - all you have to do is look at our Technical Advisory Board to see that we are a long way from the ratio we seek.

Despite my knowledge of how far we still have to climb, I can glimpse the summit ahead. At a recent AwayDay in Atlanta I was delighted to see how many younger technical women we've managed to bring into the company. While struggling to keep my head above water as the sole male during a late night game of Dominion, I enjoyed a great feeling of hope for our future.

Notes

1: That is, 10% of blue marbles are sparkly, as are 10% of pink ones.

2: Actually, if I dig around for a while in that bucket, I find that some marbles are neither blue nor pink, but some engaging mixture of the two.

3: This is especially tricky for a company like us, where so much of our work is done in client environments, where we aren't able to exert as much of an influence as we'd like. Some of our offices have put together special training to educate both sexes on how to deal with sexist situations with clients. As a man, I feel it's important for me to know how I can be supportive; it's not something I do well, but it is something I want to learn to improve.

4: Many people find the pressure of public speaking intimidating (I've come to hate it, even with all my practice). Feeling that you're representing your entire gender or race only makes it worse.

Acknowledgements

Camila Tartari, Carol Cintra, Dani Schufeldt, Derek Hammer, Isabella Degen, Korny Sietsma, Lindy Stephens, Mridula Jayaraman, Nikki Appleby, Rebecca Parsons, Sarah Taraporewalla, Stefanie Tinder, and Suzi Edwards-Alexander commented on drafts of this article.


SacrificialArchitecture

tags: process theory · evolutionary design · application architecture

You're sitting in a meeting, contemplating the code that your team has been working on for the last couple of years. You've come to the decision that the best thing you can do now is to throw away all that code, and rebuild on a totally new architecture. How does that make you feel about that doomed code, about the time you spent working on it, about the decisions you made all that time ago?

For many people throwing away a code base is a sign of failure, perhaps understandable given the inherent exploratory nature of software development, but still failure.

But often the best code you can write now is code you'll discard in a couple of years time.

Often we think of great code as long-lived software. I'm writing this article in an editor that dates back to the 1980s. Much thinking on software architecture is about how to facilitate that kind of longevity. Yet success can also be built on top of code long since sent to /dev/null.

Consider the story of eBay, one of the web's most successful large businesses. It started as a set of perl scripts built over a weekend in 1995. In 1997 it was all torn down and replaced with a system written in C++ on top of the Windows tools of the time. Then in 2002 the application was rewritten again in Java. Were these early versions an error because they were replaced? Hardly. eBay is one of the great successes of the web so far, but much of that success was built on the discarded software of the 90s. Like many successful websites, eBay has seen exponential growth - and exponential growth isn't kind to architectural decisions. The right architecture to support 1996-eBay isn't going to be the right architecture for 2006-eBay. The 1996 one won't handle 2006's load, but the 2006 version is too complex to build, maintain, and evolve for the needs of 1996.

Indeed this guideline can be baked into an organization's way of working. At Google, the explicit rule is to design a system for ten times its current needs, with the implication that if the needs exceed an order of magnitude then it's often better to throw away and replace from scratch [1]. It's common for subsystems to be redesigned and thrown away every few years.

Indeed it's a common pattern to see people coming into a maturing code base denigrating its lack of performance or scalability. But often in the early period of a software system you're less sure of what it really needs to do, so it's important to put more focus on flexibility for changing features than on performance or availability. Later on you need to switch priorities as you get more users, but getting too many users on an unperformant code base is usually a better problem to have than its inverse. Jeff Atwood coined the phrase "performance is a feature", which some people read as saying that performance is always priority number one. But any feature is something you have to choose versus other features. That's not saying you should ignore things like performance - software can get sufficiently slow and unreliable to kill a business - but the team has to make the difficult trade-offs with other needs. Often these are more business decisions than technology ones.

So what does it mean to deliberately choose a sacrificial architecture? Essentially it means accepting now that in a few years time you'll (hopefully) need to throw away what you're currently building. This can mean accepting limits to the cross-functional needs of what you're putting together. It can mean thinking now about things that can make it easier to replace when the time comes - software designers rarely think about how to design their creation to support its graceful replacement. It also means recognizing that software that's thrown away in a relatively short time can still deliver plenty of value.

Knowing your architecture is sacrificial doesn't mean abandoning the internal quality of the software. Usually sacrificing internal quality will bite you more rapidly than the replacement time, unless you're already working on retiring the code base. Good modularity is a vital part of a healthy code base, and modularity is usually a big help when replacing a system. Indeed one of the best things to do with an early version of a system is to explore what the best modular structure should be so that you can build on that knowledge for the replacement. While it can be reasonable to sacrifice an entire system in its early days, as a system grows it's more effective to sacrifice individual modules - which you can only do if you have good module boundaries.

One thing that's easily missed when it comes to handling this problem is accounting. Yes, really — we've run into situations where people have been reluctant to replace a clearly unviable system because of the way they were amortizing the codebase. This is more likely to be an issue for big enterprises, but don't forget to check it if you live in that world.

You can also apply this principle to features within an existing system. If you're building a new feature it's often wise to make it available to only a subset of your users, so you can get feedback on whether it's a good idea. To do that you may initially build it in a sacrificial way, so that you don't invest the full effort on a feature that you find isn't worth full deployment.
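As a rough sketch of that idea (the names and rollout rule here are hypothetical), a feature can be routed to a small, stable slice of users while the sacrificial version earns its feedback:

```python
import zlib

ROLLOUT_PERCENT = 10   # expose the trial feature to roughly 10% of users

def new_pricing_enabled(user_id):
    # Hash the user id so the same user consistently gets the same behaviour.
    return zlib.crc32(user_id.encode()) % 100 < ROLLOUT_PERCENT

def quote(user_id, base):
    if new_pricing_enabled(user_id):
        return experimental_quote(base)   # the sacrificial implementation
    return established_quote(base)        # the current production path

def experimental_quote(base):
    return base * 1.05   # illustrative only

def established_quote(base):
    return base * 1.08   # illustrative only

print(quote("user-42", 1000.0))
```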

Modular replaceability is a principal argument in favor of a microservices architecture, but I'm wary to recommend that for a sacrificial architecture. Microservices imply distribution and asynchrony, which are both complexity boosters. I've already run into a couple of projects that took the microservice path without really needing to — seriously slowing down their feature pipeline as a result. So a monolith is often a good sacrificial architecture, with microservices introduced later to gradually pull it apart.

The team that writes the sacrificial architecture is the team that decides it's time to sacrifice it. This is a different case from a new team coming in, hating the existing code, and wanting to rewrite it. It's easy to hate code you didn't write, without an understanding of the context in which it was written. Knowingly sacrificing your own code is a very different dynamic, and knowing you're going to be sacrificing the code you're about to write is a useful variant on that.

Acknowledgements

Conversations with Randy Shoup encouraged and helped me formulate this post, in particular describing the history of eBay (and some similar stories from Google). Jonny Leroy pointed out the accounting issue. Kief Morris, Jason Yip, Mahendra Kariya, Jessica Kerr, Rahul Jain, Andrew Kiellor, Fabio Pereira, Pramod Sadalage, Jen Smith, Charles Haynes, Scott Robinson and Paul Hammant provided useful comments.

Notes

1: As Jeff Dean puts it "design for ~10X growth, but plan to rewrite before ~100X"


MicroservicePrerequisites

tags: microservices

As I talk to people about using a microservices architectural style I hear a lot of optimism. Developers enjoy working with smaller units and have expectations of better modularity than with monoliths. But as with any architectural decision there are trade-offs. In particular with microservices there are serious consequences for operations, who now have to handle an ecosystem of small services rather than a single, well-defined monolith. Consequently if you don't have certain baseline competencies, you shouldn't consider using the microservice style.

Rapid provisioning: you should be able to fire up a new server in a matter of hours. Naturally this fits in with CloudComputing, but it's also something that can be done without a full cloud service. To be able to do such rapid provisioning, you'll need a lot of automation - it may not have to be fully automated to start with, but to do serious microservices later it will need to get that way.

Basic Monitoring: with many loosely-coupled services collaborating in production, things are bound to go wrong in ways that are difficult to detect in test environments. As a result it's essential that a monitoring regime is in place to detect serious problems quickly. The baseline here is detecting technical issues (counting errors, service availability, etc) but it's also worth monitoring business issues (such as detecting a drop in orders); a rough sketch of such a baseline check appears after these prerequisites. If a sudden problem appears then you need to ensure you can quickly roll back, hence…

Rapid application deployment: with many services to manage, you need to be able to quickly deploy them, both to test environments and to production. Usually this will involve a DeploymentPipeline that can execute in no more than a couple of hours. Some manual intervention is alright in the early stages, but you'll be looking to fully automate it soon.
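As a rough illustration of the basic monitoring baseline above (the URLs and thresholds are placeholders, not recommendations), a check might combine simple service availability with a business-level signal such as a drop in orders:

```python
import urllib.request

SERVICES = {
    "pricing": "http://pricing.internal/health",   # placeholder URLs
    "sales": "http://sales.internal/health",
}

def service_available(url):
    try:
        with urllib.request.urlopen(url, timeout=2) as response:
            return response.status == 200
    except OSError:
        return False

def orders_look_healthy(orders_last_hour, typical_orders_per_hour):
    # A sudden drop in orders deserves an alert even if every health check is green.
    return orders_last_hour >= 0.5 * typical_orders_per_hour

for name, url in SERVICES.items():
    if not service_available(url):
        print(f"ALERT: {name} is not responding")
```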

These capabilities imply an important organizational shift - close collaboration between developers and operations: the DevOps culture. This collaboration is needed to ensure that provisioning and deployment can be done rapidly; it's also important to ensure you can react quickly when your monitoring indicates a problem. In particular any incident management needs to involve the development team and operations, both in fixing the immediate problem and in the root-cause analysis to ensure the underlying problems are fixed.

With this kind of setup in place, you're ready for a first system using a handful of microservices. Deploy this system and use it in production, expect to learn a lot about keeping it healthy and ensuring the devops collaboration is working well. Give yourself time to do this, learn from it, and grow more capability before you ramp up your number of services.

If you don't have these capabilities now, you should ensure you develop them so they are ready by the time you put a microservice system into production. Indeed these are capabilities that you really ought to have for monolithic systems too. While they aren't universally present across software organizations, there are very few places where they shouldn't be a high priority.

Going beyond a handful of services requires more. You'll need to trace business transactions through multiple services and automate your provisioning and deployment by fully embracing ContinuousDelivery. There's also the shift to product-centered teams that needs to be started. You'll need to organize your development environment so developers can easily swap between multiple repositories, libraries, and languages. Some of my contacts are sensing that there could be a useful MaturityModel here that can help organizations as they take on more microservice implementations - we should see more conversation on that in the next few years.

Acknowledgements

This list originated in discussions with my ThoughtWorks colleagues, particularly those who attended the microservice summit earlier this year. I then structured and finalized the list in discussion with Evan Bottcher, Thiyagu Palanisamy, Sam Newman, and James Lewis.

And as usual there were valuable comments from our internal mailing list from Chris Ford, Kief Morris, Premanand Chandrasekaran, Rebecca Parsons, Sarah Taraporewalla, and Ian Cartwright.
