Recent Changes

Here is a list of recent updates to the site. You can also get this information as an RSS feed, and I announce new articles on Mastodon, X (Twitter), and LinkedIn.

I use this page to list both new articles and additions to existing articles. Since I often publish articles in installments, many entries on this page will be new installments to recently published articles. Such announcements are indented and don't show up in the recent changes section of my home page.


Design Token-Based UI Architecture

Thu 12 Dec 2024 10:36

Design tokens are fundamental design decisions represented as data. Andreas Kutschmann explains how they work and how to organize them to balance scalability, maintainability and developer experience.

more…


Designing Data Products: next steps

Tue 10 Dec 2024 15:22

Once we've designed our initial data products, Kiran Prakash finishes his article by leading us through the next steps: identifying common patterns, improving the developer experience, and handling governance.

more…


Generalizing the design of data products

Wed 04 Dec 2024 10:36

Having got an initial data product, Kiran Prakash leads us through the next steps: covering similar use cases to generalize the data product, determining which domains the products fit into, and considering service level objectives.

more…


Designing data products: Working backwards from use cases

Tue 03 Dec 2024 09:04

Increasingly the industry is seeing the value of creating data products as a core organizing principle for analytic data. Kiran Prakash has helped many clients design their data products, and shares what he's learned. In particular, his methodical approach doesn't begin by thinking about some data that might be handy to share, but instead works from what consumers of a data product need.

more…


Exploring Gen AI: Copilot's new multi-file editing

Tue 19 Nov 2024 10:17

A very powerful new coding assistance feature made its way into GitHub Copilot at the end of October. This new “multi-file editing” capability expands the scope of AI assistance from small, localized suggestions to larger implementations across multiple files. Birgitta Böckeler tries out this new capability, finds out how useful its changes tend to be, and considers what feedback loops are needed when working with them.

more…


Posting on Bluesky, and other thoughts on social media

Wed 13 Nov 2024 12:33

With the recent uptick in tech activity on Bluesky, I've decided that I will start posting there in addition to my current locations.

I've also put together my general thoughts on the state of social media, and how I'm using it, now that it's two years since The Muskover.

more…


Assessing the results of using the Strangler Fig on a Mobile App

Tue 05 Nov 2024 08:55

Matthew Foster and John Mikel Amiel Regida finish their account of how they incrementally modernized a mobile application by looking at the results of their work. They achieved a significant shortening of time to new value, and found that changes in the new application could be prepared in about half the time it took on the old codebase.

more…


Diving deeper into using the Strangler Fig with Mobile Apps

Wed 30 Oct 2024 10:29

Matthew Foster and John Mikel Amiel Regida dive into the details of incrementally modernizing a legacy mobile application. They look at how to implant the strangler fig into the existing app, setting up bi-directional communication between the new app and the legacy, and ensuring effective regression testing on the overall system.

more…


Using the Strangler Fig with Mobile Apps

Tue 29 Oct 2024 10:34

My colleagues are often involved in modernizing legacy systems, and our approach is to do this in an incremental fashion. Doing this with a mobile application raises some specific challenges. Matthew Foster and John Mikel Amiel Regida share their experiences of a recent engagement to do this, shifting from a monolithic legacy application to one using a modular micro-app architecture.

more…


Interviewed by Book Overflow podcast on Refactoring

Fri 04 Oct 2024 09:16

I was interviewed on the Book Overflow podcast about the Refactoring book. We talked about the origins of the book, the relationship between refactoring, testing, and extreme programming, how refactoring is used in the wild, and the role of books and long-form prose today.

more…


Using GenAI to build a capability map and translate legacy systems

Tue 24 Sep 2024 09:51

Alessio Ferri, Tom Coggrave, and Shodhan Sheth complete their article on what they have learned from using GenAI with legacy systems. They describe how GenAI's ability to process unstructured information makes it much easier to build a capability map of a legacy system, tying the capabilities of a system to the relevant parts of the source code. GenAI is also useful for identifying areas of dead code, and has promise for better translations of a system between platforms and languages.

more…


Using GenAI to extract low-level details and high-level explanations from legacy systems

Wed 18 Sep 2024 08:08

Alessio Ferri, Tom Coggrave, and Shodhan Sheth use their combination of an AST-fueled knowledge graph and LLMs to gain understanding of legacy systems. They have found it aids them both in extracting the low-level details of the code and in providing high-level explanations to support human-centered approaches such as event storming.

more…


Legacy Modernization meets GenAI

Tue 17 Sep 2024 09:07

Most of the talk about the impact of GenAI on software development is about its ability to write (messy) code. But many of us think it's going to be much more useful to help us understand existing messy code, as part of a modernization effort. My colleagues Alessio Ferri, Tom Coggrave, and Shodhan Sheth have been considering how GenAI can do this, including building an internal tool to help explore the possibilities. The tool uses an LLM to enhance a knowledge graph based on the AST of the code base. It also uses an LLM to help users query this knowledge graph.

more…


Governing data products using fitness functions

Thu 05 Sep 2024 09:37

Decentralized data management requires automation to scale governance effectively. Fitness functions are a powerful automated governance technique my colleagues have applied to data products within the context of a Data Mesh. Since data products serve as the foundational building blocks of a data strategy, ensuring robust governance around them significantly increases the chances of success. Kiran Prakash explains how to do this, starting with simple tests for key architectural characteristics and moving on to leveraging metadata and Large Language Models.

more…


Bliki: Cycle Time

Wed 04 Sep 2024 00:00 BST

Cycle Time is a measure of how long it takes to get a new feature in a software system from idea to running in production. In Agile circles, we try to minimize cycle time. We do this by defining and implementing very small features and minimizing delays in the development process. Although the rough notion of cycle time, and the importance of reducing it, is common, there are many variations in how cycle time is measured.

A key characteristic of agile software development is a shift from a Waterfall Process, where work is decomposed based on activity (analysis, coding, testing) to an Iterative Process where work is based on a subset of functionality (simple pricing, bulk discount, valued-customer discount). Doing this generates a feedback loop where we can learn from putting small features in front of users. This learning allows us to improve our development process and allows us to better understand where the software product can provide value for our customers. 1


This feedback is a core benefit of an iterative approach, and like most such feedback loops, the quicker I get the feedback, the happier I am. Thus agile folks put a lot of emphasis on how fast we can get a feature through the entire workflow and into production. The phrase cycle time refers to a measure of that.

But here we run into difficulties. When do we start and stop the clock on cycle time?

The stopping time is the easiest: most glibly, it's when the feature is put into production and helping its users. But there are circumstances where this can get muddy. If a team is using a Canary Release, should it be when used by the first cohort, or only when released to the full population? Do we count only when the app store has approved its release, thus adding an unpredictable delay that's mostly outside the control of the development team?

The start time has even more variations. A common marker is when a developer makes a first commit to that feature, but that ignores any time spent in preparatory analysis. Many people would go further back and say: “when the customer first has the idea for a feature”. This is all very well for a high-priority feature, but how about something that isn't that urgent, and thus sits in a triage area for a few weeks before being ready to enter development? Do we start the clock when the team first places the feature on the card wall and starts to seriously work on it?

I also run into the phrase lead time, sometimes used instead of “cycle time”, but often together with it, where people make a distinction between the two, usually based on a different start time. However, there isn't any consistency in how people distinguish between them. So in general, I treat “lead time” as a synonym for “cycle time”, and if someone is using both, I make sure I understand how that individual is making the distinction.

The different bands of cycle time all have their advantages, and it's often handy to use different bands in the same situation, to highlight differences. In that situation, I'd use a distinguishing adjective (e.g. “first-commit cycle time” vs “idea cycle time”) to tell them apart. There are no generally accepted terms for such adjectives, but I think they are better than trying to create a distinction between “cycle time” and “lead time”.
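To make the bands concrete, here's a small sketch that computes differently-adjectived cycle times from a feature's lifecycle events. The event names and timestamps are invented for illustration; a real team would pull them from its tracker and deployment logs.

```python
from datetime import datetime

# Hypothetical lifecycle events for a single feature.
events = {
    "idea": datetime(2024, 7, 1, 9, 0),          # customer first raises it
    "card-wall": datetime(2024, 7, 15, 10, 0),   # team starts serious work
    "first-commit": datetime(2024, 7, 16, 14, 0),
    "production": datetime(2024, 7, 19, 17, 0),
}

def cycle_time(events, start_marker):
    """A cycle-time band: elapsed time from the chosen start marker to production."""
    return events["production"] - events[start_marker]

print("first-commit cycle time:", cycle_time(events, "first-commit"))
print("idea cycle time:", cycle_time(events, "idea"))
```

The point of the distinguishing adjective is visible in the output: the same feature has a first-commit cycle time of about three days but an idea cycle time of over two weeks.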

What these questions tell us is that cycle time, while a useful concept, is inherently slippery. We should be wary of comparing cycle times between teams, unless we can be confident we have consistent notions of their stop and start times.

But despite this, thinking in terms of cycle time, and trying to minimize it, is a useful activity. It's usually worthwhile to build a value stream map that shows every step from idea to production, identifying the steps in the workflow, how much time is spent on them, and how much waiting occurs between them. Understanding this flow of work allows us to find ways to reduce the cycle time. Two commonly effective interventions are to reduce the size of features and (counter-intuitively) increase Slack. Doing the work to understand flow in order to improve it is worthwhile because the faster we get ideas into production, the more rapidly we gain the benefits of the new features, and get the feedback to learn and improve our ways of working.
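A value stream map boils down to a simple tally of working time versus waiting time. This sketch uses invented step names and durations purely to show the arithmetic:

```python
# Hypothetical value stream for one feature:
# (step, days of active work, days spent waiting before the step starts)
stream = [
    ("analysis", 1.0, 0.0),
    ("development", 3.0, 2.0),
    ("testing", 1.0, 4.0),
    ("deployment", 0.5, 1.5),
]

work = sum(w for _, w, _ in stream)
wait = sum(q for _, _, q in stream)

print(f"cycle time: {work + wait} days, of which {wait} days are waiting")
```

In maps like this, waiting often dominates the total, which is why reducing feature size and adding Slack tend to shorten cycle time more than trying to make the active work itself go faster.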

Further Reading

The best grounding on understanding cycle time and how to reduce it is The Principles of Product Development Flow

Notes

1: It also avoids work being stuck in late activities such as testing and integration, which were notoriously difficult to estimate.

Acknowledgements

Andrew Harmel-Law, Chris Ford, James Lewis, José Pinar, Kief Morris, Manoj Kumar M, Matteo Vaccari, and Rafael Ferreira discussed this post on our internal mailing list

Rewriting Strangler Fig

Thu 22 Aug 2024 11:51

Two decades ago, I posted that I found that the strangler fig plant was an interesting metaphor for the gradual replacement of a legacy system. I didn't refer to the metaphor much since then, but meanwhile it grew a life of its own. Other people increasingly referred to the strangler fig approach to modernization, and traffic to that post steadily increased: currently it gets about 5000 page views a month, making it one of the more popular pages on this site. So I decided I needed to update that page, and have rewritten it, focusing on the core activities we need to do to make a success of such a venture.

(This summarizes more detailed writing from Ian Cartwright, Rob Horn, and James Lewis.)


Onboarding to a “legacy” codebase with the help of AI

Thu 15 Aug 2024 10:32

Much of the attention to generative AI in software development is about generating code. But it may have a more useful role in helping us understand existing code. This is especially true for older codebases that are getting hard to maintain (“legacy”) or to improve onboarding in teams that have a lot of fluctuation.

Birgitta Böckeler demonstrates the possibilities here by picking an issue from an open-source hospital management system and exploring how AI could help her deal with it.

more…


Refresh of the PoEAA catalog page

Wed 31 Jul 2024 16:11

From time to time I take a look at my site analytics to see how much traffic various bits of this site get. When doing this I saw that I continue to get a lot of traffic to the Catalog of Patterns of Enterprise Application Architecture. I put this together not long after writing the book, and it’s rather minimal. Since it still gets traffic I felt it was time for some sprucing up. The content is the same, summaries of the patterns in the book. I’ve changed the main catalog page to include the pattern intents. I’ve also added deep links to the book on oreilly.com, so if you have a subscription to that, it will jump directly to that text. Hopefully these changes will make the catalog page a bit more useful.


Instead of restricting AI and algorithms, make them explainable

Tue 30 Jul 2024 12:10

There's a lot of discussion about using regulation to restrict the use of AI and other software algorithms. I think that the better regulation would be to ensure that decisions made by software must be explainable.

more…


Testing server calls in generated HTML

Wed 05 Jun 2024 15:00

Matteo Vaccari completes his article on testing template-generated HTML, by looking at how to use TDD with pages that make calls to the server.

more…


Testing the behavior of generated HTML

Thu 30 May 2024 08:42

In the story so far, Matteo Vaccari has shown how to test the behavior of the HTML templates, by checking the structure of the generated HTML. That's good, but what if we want to test the behavior of the HTML itself, plus any CSS and JavaScript it may use?

more…


Parameterizing HTML template tests

Wed 29 May 2024 10:44

Testing templates for generating HTML leads to tests that are very similar. Matteo Vaccari wisely likes to separate the common elements of tests from those that vary. He continues his article to show how he does this by parameterizing the tests. The resulting tests are easier to write, and more importantly, faster to understand and modify.

more…


Prefetching in Single-Page Applications

Wed 29 May 2024 10:24

Juntao Qiu completes his set of data fetching patterns for single-page applications. Prefetching involves fetching data before it's called for in the application flow. Although this can mean data is fetched unnecessarily, it reduces latency should the data be needed.

more…


Code Splitting in Single-Page Applications

Thu 23 May 2024 10:26

Single-Page Applications often require a lot of code to be downloaded to the browser, which can delay a page's initial appearance. Juntao Qiu's next pattern, Code Splitting, describes how this code can be divided up, so that modules are only loaded if they are going to be needed, and the dangers of doing so.

more…


A short note on how I use and render footnotes

Wed 22 May 2024 14:17

Last week I added a small feature to this website, changing the way it renders footnotes. That prompted me to write this quick note about how I use footnotes, and how that influences the best way to render them.

more…


Testing the contents of generated HTML

Wed 22 May 2024 10:14

Matteo Vaccari continues his testing of template-generated HTML by describing tests for the contents of that HTML. He shows how to gradually build up the template, using Test-Driven Development in Go and Java.

more…


Using markup for fallbacks when fetching data

Tue 21 May 2024 11:30

Juntao Qiu's next data fetching pattern looks at how to specify fallback behavior using markup. This allows developers to pull such declarations out of the JavaScript components and into the markup they use while laying out the rest of the page. Juntao's React example shows how this works with the Suspense element, with a similar approach in vue.js.

more…


Test-Driving HTML Templates

Tue 21 May 2024 11:06

When building a server-side rendered web application, it's valuable to test the HTML that's generated through templates. While these can be tested through end-to-end tests running in the browser, such tests are slow and more work to maintain than unit tests. My colleague Matteo Vaccari has written an article on how to use TDD to test drive these templates using xunit-style tools which can be run easily from the command line or as part of build scripts.

In this first installment Matteo describes how such tests can check the generated HTML for validity, with examples in Java and Go.

more…


Parallel Data Fetching

Wed 15 May 2024 10:47

The second pattern in Juntao Qiu's series on data fetching is on how to avoid the dreaded Request Waterfall. Parallel Data Fetching queries multiple sources in parallel, so that the latency of the overall fetch is the largest of the queries' latencies rather than their sum.

more…


Data Fetching Patterns in Single-Page Applications

Tue 14 May 2024 09:49

Juntao Qiu is a thoughtful front-end developer experienced with the React programming environment. He's contributed a couple of useful articles to this site, describing helpful patterns for front-end programming. In this article he describes patterns for how single-page applications fetch data. This first installment describes how asynchronous queries can be wrapped in a handler to provide information about the state of the query.

more…