InfrastructureAsCode

delivery · microservices


Infrastructure as code is the approach to defining computing and network infrastructure through source code that can then be treated just like any software system. Such code can be kept in source control to allow auditability and ReproducibleBuilds, subject to testing practices, and the full discipline of ContinuousDelivery. It's an approach that's been used over the last decade to deal with growing CloudComputing platforms and will become the dominant way to handle computing infrastructure in the next.

I grew up in the Iron Age, when releasing a new server application meant finding some physical hardware to run it on, configuring that hardware to support the needs of the application, and deploying that application to the hardware. Getting hold of that hardware was usually expensive, and also long-winded, typically a matter of months. But now we live in the Cloud Age, where firing up a new server is a matter of seconds, requiring no more than an internet connection and a credit card. This is a dynamic infrastructure where software commands are used to create servers (often virtual machines, but they can be installations on bare metal), provision them, and tear them down, all without going anywhere near a screwdriver.


Practices

Infrastructure as Code is based on a few practices:

  • Use Definition Files: all configuration is defined in executable configuration definition files, such as shell scripts, Ansible playbooks, Chef recipes, or Puppet manifests. At no time should anyone log into a server and make on-the-fly adjustments. Any such tinkering risks creating SnowflakeServers, and so should only be done while developing the code that acts as the lasting definition. This means that applying an update with the code should be fast. Fortunately computers execute code quickly, allowing them to provision hundreds of servers faster than any human could type.
  • Self-documented systems and processes: rather than instructions in documents for humans to execute with the usual level of human reliability, code is more precise and consistently executed. If necessary, other human readable documentation can be generated from this code.
  • Version all the things: Keep all this code in source control. That way every configuration and every change is recorded for audit and you can make ReproducibleBuilds to help diagnose problems.
  • Continuously test systems and processes: tests allow computers to rapidly find many errors in infrastructure configuration. As with any modern software system, you can set up DeploymentPipelines for your infrastructure code which allows you to practice ContinuousDelivery of infrastructure changes.
  • Small changes rather than batches: the bigger the infrastructure update, the more likely it is to contain an error and the harder it is to detect that error, particularly if several errors interact. Small updates make it easier to find errors and are easier to revert. When changing infrastructure FrequencyReducesDifficulty.
  • Keep services available continuously: increasingly systems cannot afford downtime for upgrades or fixes. Techniques such as BlueGreenDeployment and ParallelChange can allow small updates to occur without losing availability.

Benefits

All of this allows us to take advantage of dynamic infrastructure by starting up new servers easily, and safely disposing of servers when they are replaced by newer configurations or when load decreases. Creating new servers is just a case of running the script to create as many server instances as needed. This approach is a good fit with PhoenixServers and ImmutableServers.




Using code to define the server configuration means that there is greater consistency between servers. With manual provisioning different interpretations of imprecise instructions (let alone errors) lead to snowflakes with subtly different configurations, which often leads to tricky faults that are hard to debug. Such difficulties are often made worse by inconsistent monitoring, and again using code ensures that monitoring is consistent too.

Most importantly using configuration code makes changes safer, allowing upgrades of applications and system software with less risk. Faults can be found and fixed more quickly and at worst changes can be reverted to the last working configuration.

Having your infrastructure defined as version-controlled code aids with compliance and audit. Every change to your configuration can be logged and isn't susceptible to faulty record keeping.

All of this increases in importance as you need to handle more servers, making infrastructure as code a necessary capability if you're moving to a serious adoption of microservices. Infrastructure as Code techniques scale effectively to manage large clusters of servers, both in configuring the servers and specifying how they should interact.

Further Reading

My colleague Kief Morris has spent the last year working on a book that goes into more detail about infrastructure as code, which is currently in the final stages of production. The list of practices is taken directly from this book.

Acknowledgements

This post is based on the writing of Kief Morris and on many conversations with him.

Ananthapadmanabhan Ranganathan, Danilo Sato, Ketan Padegaonkar, Piyush Srivastava, Rafael Gomes, Ranjan D Sakalley, Sina Jahangirizadeh, and Srivatsa Katta discussed drafts of this post on our internal mailing list.


ListAndHash

language feature


It's now common in many programming environments to represent data structures as a composite of lists and hashmaps. Most major languages now provide standard versions of these data structures, together with a rich range of operations, in particular Collection Pipelines, to manipulate them. These data structures are very flexible, allowing us to represent most forms of hierarchy in a manner that's easy to process and manipulate. [1]

The essence of this data structure is that there are (usually) two composite data types:

  • lists: ordered sequences of values, accessed by position
  • hashes: collections of key-value pairs, accessed by key (also known as maps, dictionaries, or associative arrays)

The leaves of the tree can be any other element, commonly the basic primitives in the language (such as integers and strings), but also any other structure that isn't treatable as a list or hash.

In most cases there are separate data types for the list and hash, since their access operations differ. However, as any lisper can tell you, it's easy to represent a hash as a list of key-value pairs. Similarly you can treat a hash with numeric indexes as a list (which is what Lua's tables do).

A list 'n' hash structure is by default schemaless: the lists can contain disparate elements and the hashes any combination of keys. This allows the data structure to be very flexible, but we must remember that we nearly always have an implicit schema when we manipulate a schemaless data structure, in that we expect certain data to be represented with certain keys.

A strength of the list and hash structure is that you can manipulate it with generic operations which know nothing of the actual keys present. These operations can then be parameterized with the keys that you wish to manipulate. The generic operations, usually arranged into a collection pipeline, provide a lot of navigation features to allow you to pluck what you need from the data structure without having to manipulate the individual pieces.
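
To make this concrete, here is a minimal Java sketch of my own (not taken from the original article, and assuming a recent JDK) of a list of hashes being queried by a generic operation that is parameterized only by a key:

  import java.util.List;
  import java.util.Map;

  public class ListAndHashExample {
    public static void main(String[] args) {
      // a list of hashes: each hash is a loose bundle of key-value pairs
      List<Map<String, Object>> albums = List.of(
          Map.<String, Object>of("title", "An Awesome Wave", "year", 2012),
          Map.<String, Object>of("title", "The Bends", "year", 1995));

      // a generic operation: it knows nothing about albums, only the key passed in
      System.out.println(pluck(albums, "title"));   // [An Awesome Wave, The Bends]
    }

    // collect the value stored under the given key from every hash in the list
    static List<Object> pluck(List<Map<String, Object>> rows, String key) {
      return rows.stream().map(row -> row.get(key)).toList();
    }
  }

Nothing in pluck is specific to albums; the same operation works on any list of hashes, which is exactly the flexibility being described here.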

Although the usual way is to use flexible hashes for records, you can take a structure that uses defined record structures (or objects) and manipulate it in the same way as a hash if those record structures provide reflective operations. While such a structure will restrict what you can put in it (which is often a Good Thing), using generic operations to manipulate it can be very useful. But this does require the language environment to provide the mechanism to query records as if they are hashes.
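
Java records, for example, don't present themselves as hashes, but the reflection API is enough to build a small bridge. This is a sketch of my own (assuming Java 16 or later for records and java.lang.reflect.RecordComponent); the asMap helper is illustrative, not standard library code.

  import java.lang.reflect.RecordComponent;
  import java.util.LinkedHashMap;
  import java.util.Map;

  public class RecordAsHash {
    record Album(String title, int year) {}

    // expose a record's components as a key-value view, so generic hash
    // operations can be applied to a statically defined structure
    static Map<String, Object> asMap(Object record) {
      if (!record.getClass().isRecord()) {
        throw new IllegalArgumentException("not a record: " + record);
      }
      Map<String, Object> result = new LinkedHashMap<>();
      for (RecordComponent component : record.getClass().getRecordComponents()) {
        try {
          result.put(component.getName(), component.getAccessor().invoke(record));
        } catch (ReflectiveOperationException e) {
          throw new RuntimeException(e);
        }
      }
      return result;
    }

    public static void main(String[] args) {
      System.out.println(asMap(new Album("The Bends", 1995)));  // {title=The Bends, year=1995}
    }
  }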

List and hash structures can easily be serialized, commonly into a textual form. JSON is a particularly effective form of serialization for such a data structure, and is my default choice for this. XML is often used to serialize list 'n' hash structures too; it does a serviceable job, but is verbose, and the distinction between attributes and elements makes no sense for these structures (although it makes plenty of sense for marking up text).
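
As an illustration, here is a round-trip sketch of my own; it assumes the Jackson data-binding library is available, though any JSON library with similar binding would do the same job.

  import java.util.List;
  import java.util.Map;
  import com.fasterxml.jackson.core.type.TypeReference;
  import com.fasterxml.jackson.databind.ObjectMapper;

  public class JsonRoundTrip {
    public static void main(String[] args) throws Exception {
      ObjectMapper mapper = new ObjectMapper();

      // a list 'n' hash structure serializes naturally to JSON...
      List<Map<String, Object>> albums = List.of(
          Map.<String, Object>of("title", "The Bends", "year", 1995));
      String json = mapper.writeValueAsString(albums);
      System.out.println(json);   // something like [{"title":"The Bends","year":1995}]

      // ...and JSON deserializes straight back into lists and hashes
      List<Map<String, Object>> parsed =
          mapper.readValue(json, new TypeReference<List<Map<String, Object>>>() {});
      System.out.println(parsed.get(0).get("title"));   // The Bends
    }
  }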

Despite the fact that list 'n' hashes are very common, there are times I wish I were using a more thoughtful tree representation. Such a model can provide richer navigation operations. When working with serialized XML structures in Nokogiri, I find it handy to be able to use XPath or CSS selectors to navigate the data structure. Some kind of common path specification like this is handy for larger documents. Another issue is that it can be more awkward than it should be to find the parent or ancestors of a given node in the tree. The presence of rich lists and hashes as standard equipment in modern languages has been one of the definite improvements in my programming life since I started programming in Fortran IV, but there's no need to stop there.

Acknowledgements

David Johnston, Marzieh Morovatpasand, Peter Gillard-Moss, Philip Duldig, Rebecca Parsons, Ryan Murray, and Steven Lowe discussed this post on our internal mailing list.

Notes

1: I find it awkward that there's no generally accepted, cross-language term for this kind of data structure. I could do with such a term, hence my desire to make a Neologism.


EvolvingPublication

writing


When I was starting out on my writing career, I began by writing articles for technical magazines. Now, when I write article-length pieces, they are all written for the web. Paper magazines still exist, but they are a shrinking minority, probably doomed to extinction. Yet despite the withering of paper magazines, many of their assumptions still exert a hold on writers and publishers. This has come up particularly in some recent conversations with people working on articles I want to publish on my site.

Most web sites still follow the model of the Paper Age. These sites consist of articles that are grouped primarily by when they were published. Such articles are usually written in one episode and published as a whole. Occasionally longer articles are split into parts, so they can be published in stages over time (if so, they may also be written in parts).

Yet these are constraints of a paper medium, where updating something already published is mostly impossible. [1] There's no reason to have an article split over distinct parts on the web; instead you can publish the first part and revise it by adding material later on. You can also substantially revise an existing article by changing the sections you've already published.

I do this whenever I feel the need on my site. Most of the longer-form articles that I've published on my site in the last couple of years were published in installments. For example, the popular article on Microservices was originally published over nine installments in March 2014. Yet it was written and conceived as a single article, and since that final installment, it's existed on the web as a single article.

Our first rationale for publishing in installments is the notion that people tend to prefer reading shorter snippets these days, so by releasing a 6000-word article in nine parts, we could keep each new slug to a size that people would prefer to read. A second reason is that publishing multiple times allows more opportunities to grab people's attention, making it more likely that an article will find interested readers.

When I publish in installments, I add an item to my news feed and tweet for each installment. Since I'm describing an update, I link with a fragment URL to take readers to the new section (in future I may link to a temporary explanatory box to highlight what's in the new installment).

But whatever the way the article is released to the world, it is still a single conceptual item, so its best permanent form is a single article. Many people have read the microservices article since that March, and I suspect hardly any of them knew or cared that it was originally published in installments.

In that case we wrote the entire article before we started the installment publishing, but there's no reason not to write it in stages too. For my collection pipelines article, I wrote and published the original article over five installments in July 2014. As I was writing it, I was conscious that there were additional sections I could add. I decided to wait to see how the article was received before I put the effort into writing those sections. Since it was pretty popular, I made a number of revisions, announcing each one with a tweet and an item on my feed.

Letting an article evolve like this is the kind of thing that's difficult in a print medium, but exactly the right thing to do on the web. A reader doesn't care that I revised the article to improve it, she just wants to read the best explanation of the topic at hand.

I do like to provide some traces of such revisions. At the end of each article, I include a revision history which briefly summarizes the changes. For a couple of revisions, such as the 2006 revision of my article on Continuous Integration, I made the original article available on a different URL with a link from the revised article. I don't think the original article is useful to most readers, only really to those tracing the intellectual history of the idea, so shifting the original to a new URL makes sense.

The role of the feed is important in this. The traditional blog reinforces the Paper Age model by encouraging people to equate an article with its feed entry. For longer articles, I prefer to consider them as different things: the feed entry is a notice of a new article or revision, which links to the article concerned. That way I generate a feed entry for each installment, with the entry summarizing what's been added.

The point of all this is that we should consider web articles as information resources, resources that can and should be extended and revised as our understanding increases and as time and energy allow. We shouldn't let the Print Age notions of how articles should be constructed dictate the patterns of the Internet Age.

Notes

1: There is a sort of an update mechanism, in that a series of articles might be republished as a single work. But that is relatively rare.


RequiredInterface

API design · object collaboration design


A required interface is an interface defined by the client of an interaction; it specifies what a supplier component needs to do so that it can be used in that interaction.

A good example of a required interface is the one commonly referred to as “comparable”. Such an interface is usually required by a sort function. Imagine I have a set of albums, and I want to sort them by title, but ignoring articles such as "The", "A", and "An". I can arrange for them to be sorted in this way by implementing the required interface of the sort function.

In Java it would look something like this.

class Album...

  public class Album implements Comparable<Album> {
    private String title;
  
    public Album(String title) {
      this.title = title;
    }
    public String getTitle() {
      return title;
    }
  
    @Override
    public int compareTo(Album o) {
      return this.sortKey().compareTo(o.sortKey());
    }
    private String sortKey() {
      return ignoreSortPrefixes(title).toLowerCase();
    }
    private static String ignoreSortPrefixes(String arg) {
      final String[] prefixes = {"an", "a", "the"};
      return Arrays.stream(prefixes)
              .map(s -> s + " ")
              .filter(s -> arg.toLowerCase().startsWith(s))
              .findFirst()
              .map(s -> arg.substring(s.length(), arg.length()))
              .orElse(arg)
              ;
    }
  }

In this case Comparable is the required interface of the various Java sort functions. More complicated examples can have a richer interface with several methods defined on it.
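
For example, given the Album class above, the standard Java sort functions can now order albums without knowing anything about them beyond Comparable. A small usage sketch of my own:

  import java.util.ArrayList;
  import java.util.Collections;
  import java.util.List;

  public class SortingAlbums {
    public static void main(String[] args) {
      // Album is the class defined above; it satisfies the Comparable required interface
      List<Album> albums = new ArrayList<>(List.of(
          new Album("The Bends"), new Album("An Awesome Wave"), new Album("Parachutes")));

      Collections.sort(albums);                       // relies only on Comparable
      albums.forEach(a -> System.out.println(a.getTitle()));
      // prints: An Awesome Wave, The Bends, Parachutes
    }
  }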

Often people think about interfaces as a decision by the supplier about what to expose to clients. But required interfaces are specified (and often defined) by the client. You often get more useful interfaces by thinking about what clients require - leading towards thinking about RoleInterfaces.


Using an Adapter

A common problem comes up if I want to plug together two modules that have been defined independently. Here we can run into difficulties even if we get names that match.

Consider a task list with a required interface of tasks.

class TaskList...

  private List<Task> tasks;
  private LocalDate deadline;
  public LocalDate latestStart() {
    return deadline.minusDays(tasks.stream().mapToInt(t -> t.shortestLength()).sum());
  }
}

interface Task…

  int shortestLength();

Let's imagine I want to integrate it with an Activity class I got from a different supplier.

class Activity…

  public int shortestLength() {
    …

Even though the activity has a method whose signature happens to match the required interface's, I (rightly) can't create a task list of activities because the type definitions don't match. If I can't modify the activity class I need to use an adapter.

public class ActivityAdapter implements Task {
  private Activity activity;

  public ActivityAdapter(Activity activity) {
    this.activity = activity;
  }
  @Override
  public int shortestLength() {
    return activity.shortestLength();
  }
}

In the software world we use the term adapter pretty freely, but here I'm using it strictly in the sense of the Gang of Four book. In this usage an adapter is an object that maps one object to the required interface of another.
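
Using the adapter is then just a matter of wrapping each activity before it goes into the task list. A small sketch of my own (the adapt helper is hypothetical, not part of either supplier's code):

  import java.util.List;
  import java.util.stream.Collectors;

  public class Integration {
    // wrap each Activity so it satisfies the Task required interface of TaskList
    static List<Task> adapt(List<Activity> activities) {
      return activities.stream()
          .map(ActivityAdapter::new)
          .collect(Collectors.toList());
    }
  }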

In this case I wouldn't need an adapter if I were using a dynamic language, but I would if the activity class used a method with a different signature.

Acknowledgements

Alexander Zagniotov and Bruno Trecenti commented on drafts of this post.

PresentationDomainDataLayering

team organization · database · encapsulation · application architecture · web development


One of the most common ways to modularize an information-rich program is to separate it into three broad layers: presentation (UI), domain logic (aka business logic), and data access. So you often see web applications divided into a web layer that knows about handling HTTP requests and rendering HTML, a business logic layer that contains validations and calculations, and a data access layer that sorts out how to manage persistent data in a database or remote services.

On the whole I've found this to be an effective form of modularization for many applications and one that I regularly use and encourage. Its biggest advantage (for me) is that it reduces the scope of my attention, allowing me to think about the three topics relatively independently. When I'm working on domain logic code I can mostly ignore the UI and treat any interaction with data sources as an abstract set of functions that give me the data I need and update it as I wish. When I'm working on the data access layer I focus on the details of wrangling the data into the form required by my interface. When I'm working on the presentation I can focus on the UI behavior, treating any data to display or update as magically appearing by function calls. By separating these elements I narrow the scope of my thinking in each piece, which makes it easier for me to follow what I need to do.
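
To show the shape of this, here is a deliberately tiny Java sketch of my own devising (assuming a recent JDK); the class names and the album example are illustrative assumptions, not from any particular framework:

  import java.util.Comparator;
  import java.util.List;

  // presentation layer: knows about requests and rendering, delegates the logic downwards
  class AlbumController {
    private final AlbumService service;
    AlbumController(AlbumService service) { this.service = service; }

    String topRatedPage() {                        // imagine this wired to an HTTP route
      StringBuilder html = new StringBuilder("<ul>");
      for (Album album : service.topRatedAlbums(10)) {
        html.append("<li>").append(album.title()).append("</li>");
      }
      return html.append("</ul>").toString();
    }
  }

  // domain layer: calculations and rules, no knowledge of HTTP or SQL
  class AlbumService {
    private final AlbumRepository repository;
    AlbumService(AlbumRepository repository) { this.repository = repository; }

    List<Album> topRatedAlbums(int limit) {
      return repository.findAll().stream()
          .sorted(Comparator.comparingDouble(Album::rating).reversed())
          .limit(limit)
          .toList();
    }
  }

  // data access layer: sorts out how the albums are actually fetched and stored
  interface AlbumRepository {
    List<Album> findAll();
  }

  record Album(String title, double rating) {}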

This narrowing of scope doesn't imply any sequence to programming them - I usually find I need to iterate between the layers. I might build the data and domain layers off my initial understanding of the UX, but when refining the UX I need to change the domain which necessitates a change to the data layer. But even with that kind of cross-layer iteration, I find it easier to focus on one layer at a time as I make changes. It's similar to the switching of thinking modes you get with refactoring's two hats.

Another reason to modularize is to allow me to substitute different implementations of modules. This separation allows me to build multiple presentations on top of the same domain logic without duplicating it. Multiple presentations could be separate pages in a web app, a web app plus native mobile apps, an API for scripting purposes, or even an old-fashioned command-line interface. Modularizing the data source allows me to cope gracefully with a change in database technology, or to support services for persistence that may change with little notice. However I have to mention that while I often hear about data access substitution being a driver for separating the data source layer, I rarely hear of someone actually doing it.

Modularity also supports testability, which naturally appeals to me as a big fan of SelfTestingCode. Module boundaries expose seams that are good affordances for testing. UI code is often tricky to test, so it's good to get as much logic as you can into a domain layer which is easily tested without having to do gymnastics to access the program through a UI [1]. Data access is often slow and awkward, so using TestDoubles around the data layer often makes domain logic testing much easier and more responsive.
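
For instance, with the repository interface from the sketch above, a domain-level test can supply a hand-rolled test double and never touch a database. Again, this is my own illustrative sketch:

  import java.util.List;

  public class AlbumServiceTest {
    public static void main(String[] args) {
      // a hand-rolled test double standing in for the real data access layer
      AlbumRepository stub = () -> List.of(
          new Album("OK Computer", 4.8),
          new Album("Pablo Honey", 3.1));

      AlbumService service = new AlbumService(stub);

      // domain logic is exercised without a database in sight
      List<Album> top = service.topRatedAlbums(1);
      System.out.println(top.get(0).title());   // OK Computer
    }
  }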

While substitutability and testability are certainly benefits of this layering, I must stress that even without either of these reasons I would still divide into layers like this. The reduced scope of attention reason is sufficient on its own.

When talking about this we can either look at it as one pattern (presentation-domain-data) or split it into two patterns (presentation-domain, and domain-data). Both points of view are useful - I think of presentation-domain-data as a composite of presentation-domain and domain-data.

I consider these layers to be a form of module, which is a generic word I use for how we clump our software into relatively independent pieces. Exactly how this corresponds to code depends on the programming environment we're in. Usually the lowest level is some form of subroutine or function. An object-oriented language will have a notion of class that collects functions and data structure. Most languages have some form of higher level called packages or namespaces, which often can be formed into a hierarchy. Modules may correspond to separately deployable units: libraries, or services, but they don't have to.

Layering can occur at any of these levels. A small program may just put separate functions for the layers into different files. A larger system may have layers corresponding to namespaces with many classes in each.

I've mentioned three layers here, but it's common to see architectures with more than three layers. A common variation is to put a service layer between the domain and presentation, or to split the presentation layer into separate layers with something like Presentation Model. I don't find that more layers breaks the essential pattern, since the core separations still remain.

The dependencies generally run from top to bottom through the layer stack: presentation depends on the domain, which then depends on the data source. A common variation is to arrange things so that the domain does not depend on its data sources by introducing a mapper between the domain and data source layers. This approach is often referred to as a Hexagonal Architecture.
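
One way to see the difference is in which layer owns the repository interface. In the inverted arrangement the domain package defines the interface it requires and the data access package implements it, so the source dependency points from data to domain. A sketch of my own, with hypothetical package names, shown as two files:

  // file myapp/domain/AlbumRepository.java: the domain owns the interface it requires
  package myapp.domain;

  import java.util.List;

  public interface AlbumRepository {
    List<Album> findAll();
  }

  // file myapp/data/JdbcAlbumRepository.java: data access depends on the domain, not vice versa
  package myapp.data;

  import java.util.List;
  import myapp.domain.Album;
  import myapp.domain.AlbumRepository;

  public class JdbcAlbumRepository implements AlbumRepository {
    @Override
    public List<Album> findAll() {
      // run the SQL query and map each row to an Album; elided in this sketch
      throw new UnsupportedOperationException("not implemented in this sketch");
    }
  }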

Although presentation-domain-data separation is a common approach, it should only be applied at a relatively small granularity. As an application grows, each layer can get sufficiently complex on its own that you need to modularize further. When this happens it's usually not best to use presentation-domain-data as the higher level of modules. Often frameworks encourage you to have something like view-model-data as the top level namespaces; that's ok for smaller systems, but once any of these layers gets too big you should split your top level into domain oriented modules which are internally layered.


One common way I've seen this layering lead organizations astray is the AntiPattern of separating development teams by these layers. This looks appealing because front-end and back-end development require different frameworks (or even languages) making it easy for developers to specialize in one or the other. Putting those people with common skills together supports skill sharing and allows the organization to treat the team as a provider of a single, well-delineated type of work. In the same way, putting all the database specialists together fits in with the common centralization of databases and schemas. But the rich interplay between these layers necessitates frequent swapping between them. This isn't too hard when you have specialists in the same team who can casually collaborate, but team boundaries add considerable friction, as well as reducing an individual's motivation to develop the important cross-layer understanding of a system. Worse, separating the layers into teams adds distance between developers and users. Developers don't have to be full-stack (although that is laudable) but teams should be.

Further Reading

I've written about this separation from a number of different angles elsewhere. This layering drives the structure of P of EAA and chapter 1 of that book talks more about this layering. I didn't make this layering a pattern in its own right in that book but have toyed with that territory with Separated Presentation and PresentationDomainSeparation.

For more on why presentation-domain-data shouldn't be the highest level modules in a larger system, take a look at the writing and speaking of Simon Brown. I also agree with him that software architecture should be embedded in code.

I had a fascinating discussion with my colleague Badri Janakiraman about the nature of hexagonal architectures. The context was mostly around applications using Ruby on Rails, but much of the thinking applies to other cases when you may be considering this approach.

Acknowledgements

James Lewis, Jeroen Soeters, Marcos Brizeno, Rouan Wilsenach, and Sean Newham discussed drafts of this post with me.

Notes

1: A PageObject is also an important tool to help testing around UIs.


AntiPattern

bad things · writing


Andrew Koenig first coined the term "antipattern" in an article in JOOP[1], which is sadly not available on the internet. The essential idea (as I remember it [2]) was that an antipattern was something that seems like a good idea when you begin, but leads you into trouble. Since then the term has often been used just to indicate any bad idea, but I think the original focus is more useful.

In the paper Koenig said

An antipattern is just like a pattern, except that instead of a solution it gives something that looks superficially like a solution but isn't one.

-- Andrew Koenig

This is what makes a good antipattern something distinct from just a bad thing to point and laugh at. The fact that it looks like a good solution is its essential danger. Since it looks good, sensible people will take the path; only once you've put a lot of effort into it will you discover its bad results.

When writing a description of an antipattern it's valuable to describe how to get out of trouble if you've taken the bad path. I see that as useful but not necessary. If there's no good way to get out of it, that doesn't reduce the value of the warning.

It's useful to remember that the same solution can be a good pattern in some contexts and an antipattern in others. The value of a solution depends on the context in which you use it.

Notes

1: Journal of Object-Oriented Programming, Vol 8, no. 1. March/April 1995. It was then reprinted in "The Patterns Handbook", edited by Linda Rising (Cambridge University Press)

2: I don't have a copy of the paper, so I'm going primarily off memory and some old notes.


AlignmentMap

team organization · project planning · collaboration


Alignment maps are organizational information radiators that help visualize the alignment of ongoing work with business outcomes. The work may be regular functionality addition or technical work such as re-architecting or repaying technical debt or improving the build and deployment pipeline. Team members use alignment maps to understand what business outcomes their day-to-day work is meant to improve. Business and IT sponsors use them to understand how ongoing work relates to the business outcomes they care about.

Here’s an example scenario (inspired by real life) that illustrates how these maps may be useful. A team of developers had inefficiently implemented a catalog search function as N+1 calls. The first call to the catalog index returned a set of SKU IDs. For each ID returned, a query was then made to retrieve product detail. The implementation came to the attention of an architect when it failed performance tests. He advised the team to get rid of the N+1 implementation.

“Search-in-one” was the mantra he offered the team as a way to remember their objective. Given the organizational boundary between architects and developers and the low frequency of communication between them, the mantra was taken literally. The team moved heaven and earth to implement a combined index query and detail query in a single call. They lost sight of the real objective of improving search performance and slogged away in an attempt to achieve acceptable performance in exactly one call. Funding ran out in a few months and after some heated discussions, the project was cancelled and the team disbanded.

The above example may seem absurd but sadly, enterprise IT is no stranger to architecture and business projects that are cancelled after a while because they lost sight of why they were funded in the first place. In the terminology of organizational design, these are problems of alignment.


Visualizing Alignment

Broadly, IT strategy has to align with business strategy and IT outcomes with desired business outcomes. A business outcome may be supported (in part) by one or more IT outcomes. Each IT outcome may be realized by one or more initiatives (a program of work, architectural or business). At this point, it may also be useful to identify an owner for each initiative who then sponsors work (action items) across multiple teams as part of executing the initiative. Depending on the initiative the owner may be a product owner, architect, tech lead or manager. Here's an alignment map for the “search-in-one” case. Had it been on public display in the team’s work area, it might have prompted someone to take a step back and ask what their work was really meant to achieve.


Global Map

A global alignment map for the IT (appdev+ops) organization may look more like this (although real maps tend to be much larger).

As with all information radiators, such a map is a snapshot in time and needs to be updated regularly (say once a month). Each team displays a big printout of the global map in its work area.

Big organizations are likely to realize value early in this exercise by collaborating to come up with a version 1.0 of such a map that everyone agrees to. The discussions around who owns what initiatives and what outcomes an initiative contributes to lead to a fair bit of organizational clarity about what everyone is up to. Usually, the absence of well-articulated and commonly understood business and IT strategies gets in the way of converging on a set of business and IT outcomes. Well-facilitated workshops with deep and wide participation across the relevant parts of the organization can help address this.


Tracing alignment paths

Once a global alignment map is in place, it allows us to trace alignment from either end. IT and business sponsors can trace what action items are in play under a given initiative. Development team members can trace through the map to understand the real purpose of items they are working on. In addition to in-progress items, we could also include action items that are planned, done or blocked.

As illustrated in the map above, each team highlights their section of the map on their copy of the global map.


Qualitative benefits validation

Once a month (or quarter), IT and business people get together to validate whether all the IT activity has made any difference to business outcomes. Business people come to the meeting with red-amber-green (RAG) statuses for business outcomes and IT people may come with RAG statuses for their side of the map. Both parties need to be able to back up their RAG assessments with data and/or real stories from the trenches (narrative evidence).

These maps can be combined.




With this the group may realize that:

  • Some outcomes have turned green as compared to the previous meeting. Perhaps customer retention turned green after the last release of the responsive rewrite initiative.
  • Not all IT activity is making the expected difference to business outcomes. This provides an opportunity to discuss why this may be the case. Perhaps because:
    • It is a little early in the day. Other planned items need to complete before we can expect a difference. This is probably why, in the map above, customer acquisition is red even though site UX is green. Platform unbundling is still incomplete.
    • The initiatives and action items are sensible but a different execution approach is needed (this is the reason in case of “search-in-one”).
    • A different initiative or set of actions is required and the existing ones are better cancelled.
    • Something outside of IT has to fall into place before the business can realize value.
  • A few business outcomes are green even though the related IT initiatives aren’t. This probably means IT matters less to this outcome than other non-IT factors. In the map above, this is probably why customer retention is green even though site performance isn’t. Perhaps IT means to say that performance isn’t where it should be although it hasn’t affected retention just yet.

To summarize, alignment maps provide a common organization-wide tool to discuss the extent to which different IT initiatives are paying off. They can also improve the ability to make sense of ongoing work and bring it into greater alignment with business objectives. I haven't used this technique enough yet to claim general effectiveness, although I do think it shows enough promise. If you try this out I'd be glad to hear about your experiences with it.

Acknowledgements

Thanks to Jim Gumbley, Kief Morris and Vinod Sankaranarayanan for their inputs. Special thanks to Martin Fowler for his guidance with the content and help with publishing.

Further Reading

I describe other information radiators that help the cause of organizational agility in my book Agile IT Organization Design. My companion web site at www.agileorgdesign.com contains links to further writing and my talks.

Lars Barkman has posted details on how to construct an alignment map with graphviz.
