ExploratoryTesting

18 November 2019

Exploratory testing is a style of testing that emphasizes a rapid cycle of learning, test design, and test execution. Rather than trying to verify that the software conforms to a pre-written test script, exploratory testing explores the characteristics of the software, raising discoveries that will then be classified as reasonable behavior or failures.

The exploratory testing mindset is a contrast to that of scripted testing. In scripted testing, test designers create a script of tests, where each manipulation of the software is written down, together with the expected behavior of the software. These scripts are executed separately, usually many times, and usually by different actors than those who wrote them. If any test demonstrates behavior that doesn't match the expected behavior written into the script, then we consider this a failure.

For a long time scripted tests were usually executed by testers, and you'd see lots of relatively junior folks in cubicles clicking through screens following the script and checking the result. In large part due to the influence of communities like Extreme Programming, there's been a shift to automating scripted testing. This allows the tests to be executed faster, and eliminates the human error involved in evaluating the expected behavior. I've long been a firm advocate of automated testing like this, and have seen great success with its use drastically reducing bugs.

But even the most determined automated testers realize that there are fundamental limitations with the technique, which are limitations of any form of scripted testing. Scripted testing can only verify what is in the script, catching only conditions that are known about. Such tests can be a fine net that catches any bugs that try to get through it, but how do we know that the net covers all it ought to?

Exploratory testing seeks to test the boundaries of the net, finding new behaviors that aren't in any of the scripts. Often it will find new failures that can be added to the scripts; sometimes it exposes behaviors that are benign, even welcome, but not thought of before.

Exploratory testing is a much more fluid and informal process than scripted testing, but it still requires discipline to be done well. A good way to do this is to carry out exploratory testing in time-boxed sessions. These sessions focus on a particular aspect of the software. A charter that identifies the target of the session and what information you hope to find is a fine mechanism to provide this focus.

Elisabeth Hendrickson is one of the most articulate exponents of exploratory testing, and her book is the first choice to dig for more information on how to do this well.

Such a charter can act as focus, but shouldn't attempt to define details of what will happen in the session. Exploratory testing involves trying things, learning more about what the software does, applying that learning to generate questions and hypotheses, and generating new tests in the moment to gather more information. Often this will spur questions outside the bounds of the charter, which can be explored in later sessions.

Exploratory testing requires skilled and curious testers, who are comfortable with learning about the software and coming up with new test designs during a session. They also need to be observant, on the lookout for any behavior that might seem odd, and worth further investigation. Often, however, they don't have to be full-time testers. Some teams like to have the whole team carry out exploratory testing, perhaps in pairs or in a single mob.

Exploratory testing should be a regular activity occurring throughout the software development process. Sadly it's hard to find any guidelines on how much should be done within a project. I'd suggest starting with a one hour session every couple of weeks and see what kinds of information the sessions unearth. Some teams like to arrange half-an-hour or so of exploratory testing whenever they complete a story.

If you find bugs are getting through to production, that's a sign that there are gaps in the testing regimen. It's worth looking at any bug that escapes to production and thinking about what measures could be taken to either prevent the bug from getting there, or detecting it rapidly when in production. This analysis will help you decide whether you need more exploratory testing. Bear in mind that it will take time to build up the skill to do exploratory testing well, if you haven't done much exploratory testing before.

I would consider it a red flag if a team isn't doing exploratory testing at all - even if its automated testing is excellent. Even the best automated testing is inherently scripted testing - and that alone is not good enough.

Acknowledgements

Almost all I know about Exploratory Testing comes from Elisabeth Hendrickson's fine book, which is also where I pinched the net metaphor from.

Aida Manna, Alex Fraser, Bharath Kumar Hemachandran, Chris Ford, Claire Sudbery, Daniel Mondria, David Corrales, David Cullen, David Salazar Villegas, Lina Zubyte, and Philip Peter discussed drafts of this article on our internal mailing list.


WaterfallProcess

13 November 2019

In the software world, “waterfall” is commonly used to describe a style of software process, one that contrasts with iterative, or agile, styles. Like many well-known terms in software, its meaning is ill-defined and its origins are obscure - but I find its essential theme is breaking down a large effort into phases based on activity.

It's not clear how the word “waterfall” became so prevalent, but most people trace its origin to a paper by Winston Royce, in particular a figure from that paper depicting a downward cascade of development tasks.

Although this paper seems to be universally acknowledged as the source of the notion of waterfall (based on the shape of the downward cascade of tasks), the term “waterfall” never appears in the paper. It's not clear how the name appeared later.

Royce’s paper describes his observations on the software development process of the time (the late 60s) and how the usual implementation steps could be improved. [1] But “waterfall” has gone much further, becoming a general description of a style of software development. For people like me, who speak at software conferences, it almost always appears in a derogatory manner - I can’t recall hearing any conference speaker say anything good about waterfall for many years. However, when talking to practitioners in enterprises, I do hear it spoken of as a viable, even preferred, development style. Certainly less so now than in the 90s, but more frequently than one might assume by listening to process mavens.

But what exactly is “waterfall”? That’s not an easy question to answer as, like so many things in software, there is no clear definition. In my judgment, there is one common characteristic that dominates any definition folks use for waterfall, and that’s the idea of decomposing effort into phases based on activity.

Let me unpack that phrase. Let’s say I have some software to build, and I think it’s going to take about a year to build it. Few people are going to happily say “go away for a year and tell me when it’s done”. Instead, most people will want to break down that year into smaller chunks, so they can monitor progress and have confidence that things are on track. The question then is how do we perform this breakdown?

The waterfall style, as suggested by the Royce sketch, does it by the activity we are doing. So our 1 year project might be broken down into 2 months of analysis, followed by 4 months of design, 3 months of coding, and 3 months of testing. The contrast here is with an iterative style, where we would take some high level requirements (build a library management system), and divide them into subsets (search catalog, reserve a book, check-out and return, assess fines). We'd then take one of these subsets and spend a couple of months building working software to implement that functionality, delivering either into a staging environment or preferably into a live production setting. Having done that with one subset, we'd continue with further subsets.

In this thinking waterfall means “do one activity at a time for all the features” while iterative means “do all activities for one feature at a time”.

If the origin of the word “waterfall” is murky, so is the notion of how this phase-based breakdown originated. My guess is that it’s natural to break down a large task into different activities, especially if you look to activities such as building construction as an inspiration. Each activity requires different skills, so getting all the analysts to complete analysis before you bring in all the coders makes intuitive sense. It seems logical that a misunderstanding of requirements is cheaper to fix before people begin coding - especially considering the state of computers in the late 60s. Finally the same activity-based breakdown can be used as a standard for many projects, while a feature-based breakdown is harder to teach. [2]

Although it isn’t hard to find people explaining why this waterfall thinking isn’t a good idea for software development, I should summarize my primary objections to the waterfall style here. The waterfall style usually has testing and integration as two of the final phases in the cycle, but these are the most difficult elements to predict in a development project. Problems at these stages lead to rework of many steps of earlier phases, and to significant project delays. It's too easy to declare all but the late phases as "done", with much work missing, and thus it's hard to tell if the project is going well. There is no opportunity for early releases before all features are done. All this introduces a great deal of risk to the development effort.

Furthermore, a waterfall approach forces us into a predictive style of planning: it assumes that once you are done with a phase, such as requirements analysis, the resulting deliverable is a stable platform for later phases to base their work on. [3] In practice the vast majority of software projects find they need to change their requirements significantly within a few months, due to everyone learning more about the domain, the characteristics of the software environment, and changes in the business environment. Indeed we've found that delivering a subset of features does more than anything to help clarify what needs to be done next, so an iterative approach allows us to shift to an adaptive planning approach, where we update our plans as we learn what the real software needs are. [4]

These are the major reasons why I've glibly said that "you should use iterative development only in projects that you want to succeed".

Waterfalls and iterations may nest inside each other. A six year project might consist of two 3 year projects, where each of the two projects is structured in a waterfall style, but the second project adds additional features. You can think of this as a two-iteration project at the top level with each iteration as a waterfall. Due to the large size and small number of iterations, I'd regard that as primarily a waterfall project. In contrast you might see a project with 16 iterations of one month each, where each iteration is planned in a waterfall style. That I'd see as primarily iterative. While in theory there's a middle ground of projects that are hard to classify, in practice it's usually easy to tell that one style predominates.

It is possible to mix waterfall and iterative styles, with early phases (requirements analysis, high level design) done in a waterfall style while later phases (detailed design, code, test) are done in an iterative manner. This reduces the risks inherent in late testing and integration phases, but does not enable adaptive planning.

Waterfall is often cast as the alternative to agile software development, but I don't see that as strictly true. Certainly agile processes require an iterative approach and cannot work in a waterfall style. But it is possible to follow an iterative approach (i.e. non-waterfall) without being agile. [5] I might do this by taking 100 features and dividing them up into ten iterations over the next year, and then expecting that each iteration should complete on time with its planned set of features. If I do this, my initial plan is a predictive plan; if all goes well I should expect the work to closely follow the plan. But adaptive planning is an essential element of agile thinking. I expect features to move between iterations, new features to appear, and many features to be discarded as no longer valuable enough.

My rule of thumb is that anyone who says “we were successful because we were on-time and on-budget” is thinking in terms of predictive planning, even if they are following an iterative process, and thus is not thinking with an agile mindset. In the agile world, success is all about business value - regardless of what was written in a plan months ago. Plans are made, but updated regularly. They guide decisions on what to do next, but are not used as a success measure.

Notes

1: There have been quite a few people seeking to interpret the Royce paper. Some argue that his paper opposes waterfall, pointing out that it discusses flaws in the kind of process suggested by the figure I've mentioned here. Certainly he does discuss flaws, but he also says the illustrated approach is "fundamentally sound". And certainly this activity-based decomposition of projects became the accepted model in the decades that followed.

2: This leads to another common characteristic that goes with the term “waterfall” - rigid processes that tell everyone in detail what they should do. Certainly the software process folks in the 90s were keen on coming up with prescriptive methods, but such prescriptive thinking also affected many who advocated iterative techniques. Indeed although agile methods explicitly disavow this kind of Taylorist thinking, I often hear of faux-agile initiatives following this route.

3: The notion that a phase should be finished before the next one is started is a convenient fiction. Even the most eager waterfall proponent would agree that some rework on prior stages is necessary in practice, although I think most would say that if executed perfectly, each activity wouldn't need rework. Royce's paper explicitly discussed how iteration was expected between adjacent steps (eg Analysis and Program Design in his figure). However Royce argued that longer backtracks (eg between Program Design and Testing) were a serious problem.

4: This does raise the question of whether there are contexts where the waterfall style is actually better than the iterative one. In theory, waterfall might well work better in situations where there is a deep understanding of the requirements, and of the technologies being used - and neither of those things would significantly change during the life of the product. I say "in theory" because I've not come across such a circumstance, so I can't judge if waterfall would be appropriate in practice. And even then I'd be reluctant to follow the waterfall style for the later phases (code-test-integrate), as I've found so much value in interleaving testing with coding while doing continuous integration.

5: In the 90s it was generally accepted in the object-oriented world that waterfall was a bad idea and should be replaced with an iterative style. However I don't think there was the degree of embracing changing requirements that appeared with the agile community.

Acknowledgements

My thanks to Ben Noble, Clare Sudbury, David Johnston, Karl Brown, Kyle Hodgson, Pramod Sadalage, Prasanna Pendse, Rebecca Parsons, Sriram Narayan, Sriram Narayanan, Tiago Griffo, Unmesh Joshi, and Vidhyalakshmi Narayanaswamy who discussed drafts of this post on our internal mailing list.


TechnicalDebt

21 May 2019

Software systems are prone to the build up of cruft - deficiencies in internal quality that make it harder than it would ideally be to modify and extend the system further. Technical Debt is a metaphor, coined by Ward Cunningham, that frames how to think about dealing with this cruft, thinking of it like a financial debt. The extra effort that it takes to add new features is the interest paid on the debt.

Imagine I have a confusing module structure in my code base. I need to add a new feature. If the module structure was clear, then it would take me four days to add the feature but with this cruft, it takes me six days. The two day difference is the interest on the debt.

What most appeals to me about the debt metaphor is how it frames how I think about how to deal with this cruft. I could take five days to clean up the modular structure, removing that cruft, metaphorically paying off the principal. If I only do it for this one feature, that's no gain, as I'd take nine days instead of six. But if I have two more similar features coming up, then I'll end up faster by removing the cruft first.
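
To make that arithmetic concrete, here is a minimal sketch in Java. The day counts are the hypothetical ones from the example above, and three similar upcoming features are assumed.

  public class TechnicalDebtArithmetic {
    public static void main(String[] args) {
      int withCruft = 6;        // days per feature while the cruft remains
      int withoutCruft = 4;     // days per feature once the cruft is removed
      int cleanup = 5;          // days to remove the cruft (pay off the principal)
      int upcomingFeatures = 3; // this feature plus two more similar ones

      int keepTheCruft = upcomingFeatures * withCruft;             // 18 days
      int payOffFirst = cleanup + upcomingFeatures * withoutCruft; // 17 days

      System.out.println("keep the cruft:     " + keepTheCruft + " days");
      System.out.println("pay off debt first: " + payOffFirst + " days");
    }
  }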

Stated like that, this sounds like a simple matter of working the numbers, and any manager with a spreadsheet should be able to figure out the choices. Sadly, since we CannotMeasureProductivity, none of these costs are objectively measurable. We can estimate how long it takes to do a feature, estimate what it might be like if the cruft were removed, and estimate the cost of removing the cruft. But the accuracy of such estimates is pretty low.

Given this, usually the best route is to do what we usually do with financial debts, pay the principal off gradually. On the first feature I'll spend an extra couple of days to remove some of the cruft. That may be enough to reduce the interest rate on future enhancements to a single day. That's still going to take extra time, but by removing the cruft I'm making it cheaper for future changes to this code. The great benefit of gradual improvement like this is that it naturally means we spend more time on removing cruft in those areas that we modify frequently, which are exactly those areas of the code base where we most need the cruft to be removed.

Thinking of this as paying interest versus paying off principal can help decide which cruft to tackle. If I have a terrible area of the code base, one that's a nightmare to change, it's not a problem if I don't have to modify it. I only trigger an interest payment when I have to work with that part of the software (this is a place where the metaphor breaks down, since financial interest payments are triggered by the passage of time). So crufty but stable areas of code can be left alone. In contrast, areas of high activity need a zero-tolerance attitude to cruft, because the interest payments are cripplingly high. This is especially important since cruft accumulates where developers make changes without paying attention to internal quality - the more changes, the greater risk of cruft building up.

The metaphor of debt is sometimes used to justify neglecting internal quality. The argument is that it takes time and effort to stop cruft from building up. If there are new features that are needed urgently, then perhaps it's best to take on the debt, accepting that this debt will have to be managed in the future.

The danger here is that most of the time this analysis isn't done well. Cruft has a quick impact, slowing down the very new features that are needed quickly. Teams who do this end up maxing out all their credit cards, but still delivering later than they would have done had they put the effort into higher internal quality. Here the metaphor often leads people astray, as the dynamics don't really match those for financial loans. Taking on debt to speed delivery only works if you stay below the design payoff line of the DesignStaminaHypothesis, and teams hit that line in weeks rather than months.

There are regular debates whether different kinds of cruft should be considered as debt or not. I found it useful to think about whether the debt is acquired deliberately and whether it is prudent or reckless - leading me to the TechnicalDebtQuadrant.

Further Reading

As far as I can tell, Ward first introduced this concept in an experience report for OOPSLA 1992. It has also been discussed on the wiki.

Ward Cunningham has a video talk where he discusses this metaphor he created.

Dave Nicolette expands on Ward's view of technical debt with a fine case study of what I refer to as Prudent Intentional debt.

A couple of readers sent in some similarly good names. David Panariti refers to ugly programming as deficit programming. Apparently he originally started using the term a few years ago when it fitted in with government policy; I suppose it's natural again now.

Scott Wood suggested "Technical Inflation could be viewed as the ground lost when the current level of technology surpasses that of the foundation of your product to the extent that it begins losing compatibility with the industry. Examples of this would be falling behind in versions of a language to the point where your code is no longer compatible with main stream compilers."

Steve McConnell brings out several good points in the metaphor, particularly how keeping your unintended debt down gives you more room to intentionally take on debt when it's useful to do so. I also like his notion of minimum payments (which are very high to fix issues with embedded systems as opposed to web sites).

Aaron Erickson talks about Enron financing.

Henrik Kniberg argues that it's older technical debt that causes the greatest problem and that it's wise to establish a qualitative debt ceiling to help manage it.

Erik Dietrich discusses the human cost of technical debt: team infighting, atrophied skills, and attrition.

Revisions

I originally published this post on October 1 2003. I gave it a thorough rewrite in April 2019.


LockInCost

5 March 2019

In a recent client engagement, I foresaw that a serverless architecture would be a perfect fit. The idea of adopting serverless architecture, though, didn't go down well with our client due to the fear of vendor lock-in. It was an interesting situation for a retailer: staying on AWS might mean giving Amazon, as another retail business, a competitive advantage. Not wanting to support a competitor, my client wanted to ensure that the solution we chose was fully portable to other cloud vendors.

From a technical perspective, ensuring that we have the ability to move our system from one platform to another is definitely desirable. With the advent of containerization, why would anyone want to be locked into a specific platform? A high lock-in cost is not something we would like to report back to the business when we decide to move to another platform. We therefore need to make sure that the migration cost is as low as possible should that scenario arise. If I were to write a simple formula for lock-in cost with this understanding, it would look like this:

Lock-in cost = Migration cost (?)

This formula is correct when we look at it only from a technical perspective. The business perspective, however, should not be overlooked. Remember that the technical solutions we deliver are always designed to solve business problems. Most of the time the business gets a benefit when a particular technology is adopted. One of the most significant benefits is a faster time to market. Faster time to market can be formulated as an opportunity gain:

Lock-in cost = Migration cost - Opportunity gain

Opportunity gain is very difficult to measure because you are dealing with unknown unknowns. Migration cost can be analyzed and reasoned about; opportunity gain, in contrast, is not as easy to analyze. You can theorize about how to migrate from one platform to another, but how would you calculate the gain of seizing a market opportunity before your competitors? By looking at your decision-making process from a holistic view, combining both the technical and the business perspective, the lock-in decision you are taking might well result in a profit.

Let’s look at an example: building an event-driven architecture, where you will need to choose a distributed messaging system. If you have already chosen AWS as your platform, you have the option of vendor-specific services like Kinesis. These services are fully managed and you can get them running in no time, giving you an opportunity gain. In comparison with a vendor-agnostic system like Kafka, these vendor-specific services will incur a higher migration cost. Setting up your own distributed messaging system, however, will take more time to harden and make production-ready, especially if you are not yet experienced in building such a platform. Instead of looking at the decision purely in terms of migration cost, focus on how you can reduce the migration cost by making your system more adaptable. Especially in this cloud example, this is similar to the reason we recommend avoiding the practice of generic cloud usage.
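
As one possible illustration of such adaptability - my own sketch, not something from the engagement - the vendor-specific messaging can be kept behind a small interface of your own, so that only a thin adapter needs rewriting if you do migrate. The names here are hypothetical; a Kinesis- or Kafka-backed adapter would wrap the respective vendor client behind the same interface.

  import java.util.ArrayList;
  import java.util.HashMap;
  import java.util.List;
  import java.util.Map;

  // The rest of the system depends only on this small, vendor-neutral interface.
  interface EventPublisher {
    void publish(String topic, String payload);
  }

  // One adapter per platform; this in-memory version stands in for a
  // Kinesis- or Kafka-backed implementation, which would wrap the vendor SDK
  // here and keep vendor types out of the rest of the codebase.
  class InMemoryEventPublisher implements EventPublisher {
    private final Map<String, List<String>> topics = new HashMap<>();

    @Override
    public void publish(String topic, String payload) {
      topics.computeIfAbsent(topic, t -> new ArrayList<>()).add(payload);
    }
  }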

Acknowledgements

Thanks to Chris Ford, Matt Newman, Luciano Ramalho, Tobias Vogel, Zhamak Dehghani, Kitson Kelly, and Peter Gillard-Moss for their inputs.

Special thanks to Martin Fowler for his support, suggestions, and time spent with the content and help with publishing.


IntegrationTest

16 January 2018

Integration tests determine if independently developed units of software work correctly when they are connected to each other. The term has become blurred even by the diffuse standards of the software industry, so I've been wary of using it in my writing. In particular, many people assume integration tests are necessarily broad in scope, while they can be more effectively done with a narrower scope.

As often with these things, it's best to start with a bit of history. When I first learned about integration testing, it was in the 1980s, when waterfall was the dominant influence on software development thinking. In a larger project, we would have a design phase that specified the interface and behavior of the various modules in the system. Modules would then be assigned to developers to program. It was not unusual for one programmer to be responsible for a single module, but it would be big enough that it could take months to build. All this work was done in isolation, and when the programmer believed it was finished they would hand it over to QA for testing.

The first part of testing would be unit testing, which would test that module on its own, against the specification that had been written in the design phase. Once that was complete, we then moved to integration testing, where the various modules were combined together, either into the entire system, or into significant sub-systems.

The point of integration testing, as the name suggests, is to test whether many separately developed modules work together as expected. It was performed by activating many modules and running higher level tests against all of them to ensure they operated together. These modules could be parts of a single executable, or separate ones.

Looking at it from a more 2010s perspective, these conflated two different things:

  • testing that separately developed modules worked together properly
  • testing that a system of multiple modules worked as expected.

These two things were easy to conflate, after all how else would you test the frobile and twibbler modules without activating them both into a single environment and running tests that exercised both modules?

The 2010s perspective offers another alternative, one that was rarely considered in the 1980s. In this alternative, we test the integration of the frobile and twibbler modules by exercising the portion of the code in frobile that interacts with twibbler, executing it against a TestDouble of twibbler. Provided the test double is a faithful double of twibbler, we can then test all of frobile's interaction with twibbler without activating a full twibbler instance. This may not be a big deal if they are separate modules of a monolithic application, but it is a big deal if twibbler is a separate service, which requires its own build tools, environments, and network connections. For services, such tests may run against an in-process test double, or against an over-the-wire double, using something like mountebank.
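
To make this more concrete, here is a minimal sketch of a narrow integration test using JUnit. The frobile and twibbler names come from the discussion above; the interface, the rating logic, and everything else in it are hypothetical inventions for the sake of the example.

  import org.junit.jupiter.api.Test;
  import static org.junit.jupiter.api.Assertions.assertEquals;

  // The interface frobile uses to talk to twibbler.
  interface TwibblerGateway {
    int scoreFor(String user);
  }

  // The code under test: only the part of frobile that interacts with twibbler.
  class Frobile {
    private final TwibblerGateway twibbler;
    Frobile(TwibblerGateway twibbler) { this.twibbler = twibbler; }

    String ratingFor(String user) {
      return twibbler.scoreFor(user) > 10 ? "gold" : "standard";
    }
  }

  class FrobileNarrowIntegrationTest {
    @Test
    void usesTwibblerScoreToAssignRating() {
      TwibblerGateway stubTwibbler = user -> 42;  // test double for twibbler
      Frobile frobile = new Frobile(stubTwibbler);
      assertEquals("gold", frobile.ratingFor("alice"));
    }
  }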

An obvious catch with integration testing against a double is whether that double is truly faithful. But we can test that separately using ContractTests.
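
One common shape for such a contract test - again only an illustrative sketch, reusing the hypothetical TwibblerGateway interface from the example above - is to run the same expectations against both the test double and, in a separate and slower pipeline stage, a gateway wired to a real twibbler instance.

  import org.junit.jupiter.api.Test;
  import static org.junit.jupiter.api.Assertions.assertTrue;

  abstract class TwibblerGatewayContract {
    abstract TwibblerGateway createGateway();

    @Test
    void scoresAreNeverNegative() {
      assertTrue(createGateway().scoreFor("alice") >= 0);
    }
  }

  // Runs the contract against the in-process double used by the narrow tests.
  class StubTwibblerContractTest extends TwibblerGatewayContract {
    TwibblerGateway createGateway() { return user -> 42; }
  }

  // A RealTwibblerContractTest would return a gateway pointing at a live
  // twibbler instance, and run only in a stage with network access to it.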

Using this combination of narrow integration tests and contract tests, I can be confident of integrating against an external service without ever running tests against a real instance of that service, which greatly eases my build process. Teams that do this may still do some form of end-to-end system test with all real services, but if so it's only a final smoke test with a very limited range of paths tested. It also helps to have a QA in Production capability, and if that is mature enough, there may be no end-to-end system testing done at all.

The problem is that we have (at least) two different notions of what constitutes an integration test.

narrow integration tests

  • exercise only that portion of the code in my service that talks to a separate service
  • use test doubles of those services, either in process or remote
  • thus consist of many narrowly scoped tests, often no larger in scope than a unit test (and usually run with the same test framework that's used for unit tests)

broad integration tests

  • require live versions of all services, and hence substantial test environments and network access
  • exercise code paths through all services, not just code responsible for interactions

And there is a large population of software developers for whom “integration test” only means “broad integration tests”, leading to plenty of confusion when they run into people who use the narrow approach.

If your only integration tests are broad ones, you should consider exploring the narrow style, as it's likely to significantly improve your testing speed, ease of use, and resiliency. Since narrow integration tests are limited in scope, they often run very fast, so can run in early stages of a DeploymentPipeline, providing faster feedback should they go red.

All this is why I'm wary of the term “integration test”. When I read it, I look for more context so I know which kind the author really means. [1] If I talk about broad integration tests, I prefer to use “system test” or “end-to-end test”. I don’t have any better name for narrow integration tests, so I do use that (but with “narrow” to help signal to the reader the nature of these tests).

Acknowledgements

Birgitta Böckeler, Brian Oxley, Dave Rice, Deepti Mittal, Jonny Leroy, Kief Morris, Raimund Klein, Rogerio Chaves, and Tiago Griffo discussed drafts of this post on our internal mailing list.

Notes

1: Although I prefer to focus the definition on the interaction of separately built modules, I do occasionally see “integration test” used to mean anything bigger than a unit test. And for some users of solitary unit tests, I’ve seen them describe sociable unit tests as “integration tests”.


MachineJustification

14 November 2017

I remember in my teens being told of the wonderful things Artificial Intelligence (AI) would do in the next few years. Now, several decades later, some of these seem to be happening. The most recent triumph was computers teaching each other to play Go by playing against each other, rapidly becoming more proficient than any human, with strategies human experts could barely comprehend. It's natural to wonder what will happen over the next few years: will computers soon have greater intelligence than humanity? (Given some recent election results, that may not be too hard a bar to cross.)

But as I hear of these, I recall Pablo Picasso's comment about computers many decades ago: "Computers are useless. They can only give you answers". The kind of reasoning that techniques such as Machine Learning can produce is truly impressive in its results, and will be useful to us as users and developers of software. But answers, while useful, aren't always the whole picture. I learned this in my early days of school - just providing the answer to a math problem would only get me a couple of marks; to get the full score I had to show how I got it. The reasoning that got to the answer was more valuable than the result itself. That's one of the limitations of the self-taught Go AIs. While they can win, they cannot explain their strategies.

Given this world, one of the big challenges I see for AI is that while we may have figured out Machine Learning in order to teach them to get answers, we haven't got systems that can do Machine Justification for their answers. As AIs make more judgments for us, we'll increasingly run into situations where the answer isn't enough. An AI might be trained in such a way to rule on legal cases, but could we accept a judgment where the AI cannot explain its reasoning?

Given this, it seems likely that we will need a new class of "programmer" in the future, one whose job is to figure out why AIs get the answers they do, to deduce the reasoning underlying the AIs' skills. We could see many fields where AIs make opaque judgments that we can see are good, but where we need another approach to really learn the theory that underlies their decisions.

This problem is particularly acute since we've discovered that it's awfully easy for these machines to learn undesirable behaviors from their training data, such as discriminating against racial minorities when judging credit ratings.

Like many, I see much of the opportunity of computers as lying in collaboration with humans. Good use of computers means understanding where the computer is strong (rapidly doing constrained work) and where humans are better, and using a mix. Computers are, at their most intellectual, a tool for the mind. In programming I'm happy to lean on the compiler to help me find errors or suggest alternatives, a practice which I was scolded for as a young programmer. The boundary between where the two are strongest is fluid, and one of the fascinations of the future is how we can best take advantage of its movement.

Further Reading

MIT Technology Review looks at the broad topic of explainability for AI.

Some articles on the dangers of machine learning and undesirable bias from The Atlantic, NPR, and Tech Republic.

Acknowledgements

Brandon Byars, Chris Ford, Christoph Windheuser, Danilo Sato, Dave Elliman, Ian Cartwright, Kent Rahman, Saleem Siddiqui, Sallie Walecka, Tito Sarrionandia, and Vishal Bardoloi discussed drafts of this post on our internal mailing lists.

SelfEncapsulation

9 March 2017

Data encapsulation is a central tenet of the object-oriented style. It says that the fields of an object should not be exposed publicly; instead all access from outside the object should go via accessor methods (getters and setters). There are languages that allow publicly accessible fields, but we usually caution programmers not to do this. Self-encapsulation goes a step further, indicating that all internal access to a data field should also go through accessor methods. Only the accessor methods should touch the data value itself. If the data field isn't exposed to the outside, this will mean adding additional private accessors.

Here's an example of a reasonably encapsulated Java class

class Charge…

  private int units;
  private double rate;

  public Charge(int units, double rate) {
    this.units = units;
    this.rate = rate;
  }
  public int getUnits() { return units; }
  public Money getAmount() { return Money.usd(units * rate); }

Both fields are immutable. The units field is exposed to clients of the class via a getter, but the rate field is only used internally, so doesn't need a getter.

Here is a version using self-encapsulation.

class ChargeSE…

  private int units;
  private double rate;

  public ChargeSE(int units, double rate) {
    this.units = units;
    this.rate = rate;
  }
  public int getUnits()    { return units; }
  private double getRate() { return rate; }
  public Money getAmount() { return Money.usd(getUnits() * getRate()); }

Self-encapsulation means that getAmount needs to access both fields through getters. This also means I have to add a getter for rate, which I should make private.

Encapsulating mutable data is generally a good idea. Update functions can contain code to execute validations and consequential logic. By restricting access through functions, we can support the UniformAccessPrinciple, allowing us to hide which data is computed and which is stored. These accessors allow us to modify the data structures while retaining the same public interface. Different languages differ in details of what is "outside" for an object by various kinds of AccessModifier, but most environments support data encapsulation to some degree.
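
As a small illustration of that point - my own sketch, not from the original examples - an update function on mutable data gives us a place to hang validation and consequential logic:

  class MutableCharge {
    private int units;

    public void setUnits(int units) {
      if (units < 0) {
        throw new IllegalArgumentException("units must not be negative");
      }
      this.units = units;
      recalculateAmount();  // hypothetical consequential logic
    }

    private void recalculateAmount() {
      // e.g. refresh a cached amount, notify observers, and so on
    }
  }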

I've come across a few organizations that mandated self-encapsulation, and whether to use it or not has been a regular topic of debate since the 90s. Its advocates said that encapsulation was such a benefit that you wanted to extend it to internal access too. Critics argued that it was unnecessary ceremony leading to unnecessary code that obscured what was going on.

My view on this is that most of the time there's little value in self-encapsulation. The value of encapsulation is proportional to the scope of the data access. Classes are usually small (at least mine are) so direct access isn't going to be an issue within that scope. Most accessors are simple assignments for the setter and retrieval for the getter, so there's little value in using them internally.

But there are common circumstances where self-encapsulation is worth the effort. If there is logic in the setter, then it's wise to consider it for any internal updates too. Another circumstance is when the class is part of an inheritance structure, in which case the accessors provide valuable hook points for subclasses to override behavior.
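
As a hedged sketch of the inheritance case - assuming the rate accessor in ChargeSE is made protected rather than private so that it can act as a hook point - a subclass of my own invention could adjust the calculation by overriding only the accessor:

  class DiscountedChargeSE extends ChargeSE {
    public DiscountedChargeSE(int units, double rate) {
      super(units, rate);
    }
    // Because getAmount reads the rate through its accessor, overriding the
    // accessor is enough to change the calculation in the subclass.
    @Override
    protected double getRate() {
      return super.getRate() * 0.9;  // a hypothetical 10% discount
    }
  }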

So my usual first move is to use direct access to fields, but refactor using Self Encapsulate Field should circumstances demand it. Often the forces that lead me to consider self-encapsulation I can resolve by extracting a new class.

Further Reading

Kent Beck discusses these trade-offs under the names Direct Access and Indirect Access in both Implementation Patterns and Smalltalk Best Practice Patterns

Acknowledgements

Ian Cartwright, Matteo Vaccari, and Philip Duldig commented on drafts of this post