Technology Radar
Volume 29 | September 2023

An opinionated guide to today's technology landscape

The Technology Radar is a snapshot of tools, techniques, platforms, languages and frameworks based on the practical experiences of Thoughtworkers around the world. Published twice a year, it provides insights on how the world builds software today. Use it to identify and evaluate what’s important to you.

 
  • Quadrants: Techniques, Platforms, Tools, Languages & Frameworks
  • Rings: Adopt, Trial, Assess, Hold
  • Blip markers: New, Moved in/out, No change
Blips can be new to the volume or move between rings from a previous volume.

Download Technology Radar Volume 29

English | Español | Português | 中文

Themes for this volume

AI-assisted software development

To no one's surprise, AI-related topics dominated our conversation for this edition of the Radar. For the first time ever, we needed a visual guide to untangle the different categories and capabilities (something we never had to resort to even in the heyday of chaos in the JavaScript ecosystem). As a software consultancy with a history of pioneering engineering practices like CI and CD, one of the categories of particular interest to us is using AI to assist in software development. As part of the Radar, we therefore discussed many coding assistance tools, like GitHub Copilot, Tabnine and Codeium. We're also excited about how open-source LLMs for coding might shake up the tooling landscape, and we see great promise in the explosion of tools and capabilities for assistance beyond coding as well, such as user story writeup assistance, user research, elevator pitches and other language-based chores. At the same time, we hope developers use all of these tools responsibly and stay firmly in the driver's seat, with things like hallucinated dependencies being just one of the security and quality risks to be aware of.
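To make the hallucinated-dependency risk concrete, here is a minimal sketch (not a tool from the Radar) of one way to stay in the driver's seat: confirm that an AI-suggested Python package actually exists on PyPI before adding it to a project. The package names below are hypothetical examples chosen for illustration.

```
# A minimal sketch, assuming Python and PyPI: verify that an AI-suggested
# dependency is a real, published package before installing it. The package
# names used below are hypothetical examples, not recommendations.
import json
import urllib.error
import urllib.request


def exists_on_pypi(package: str) -> bool:
    """Return True if PyPI has metadata for this package name."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            json.load(response)  # parse to confirm we received real metadata
        return True
    except urllib.error.HTTPError as error:
        if error.code == 404:  # unknown to PyPI: possibly hallucinated
            return False
        raise


if __name__ == "__main__":
    for suggestion in ("requests", "super-handy-but-nonexistent-lib"):
        verdict = "found on PyPI" if exists_on_pypi(suggestion) else "NOT FOUND"
        print(f"{suggestion}: {verdict}")
```

A check like this is no substitute for reviewing what a coding assistant proposes, but it catches the simplest failure mode: a dependency that was never published at all.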

How productive is measuring productivity?

Software development can sometimes seem like magic to non-technologists, which leads managers to strive to measure just how productive developers are at their mysterious tasks. Our chief scientist, Martin Fowler, wrote about this topic as long ago as 2003, but it hasn't gone away. For this Radar, we discussed many modern tools and techniques that take more nuanced approaches to measuring the creative process of building software, yet even these remain inadequate. Fortunately, the industry has moved away from using lines of code as a measure of output. However, alternative ways to measure the A ("Activity") in the SPACE framework, such as the number of pull requests or issues resolved, are still poor indicators of productivity. Instead, the industry has started focusing on engineering effectiveness: rather than measure productivity, we should measure the things we know contribute to or detract from flow. And instead of focusing on an individual's activities, we should focus on the sources of waste in the system and the conditions we can empirically show have an impact on developers' perception of "productivity." New tools such as DX DevEx 360 address this by focusing on the developer experience rather than some specious measure of output. Even so, many leaders continue to refer to developer "productivity" in a vague, qualitative way. We suspect that at least some of this resurgence of interest is driven by AI-assisted software development, which raises the inevitable question: is it having a positive impact? While measurements may be gaining some nuance, real measures of productivity remain elusive.

A large number of LLMs

Large language models (LLMs) form the basis for many modern breakthroughs in AI. Much current experimentation involves prompting chat-like user interfaces such as ChatGPT or Bard, and the core competing ecosystems (OpenAI's ChatGPT, Google's Bard, Meta's LLaMA and Amazon's Bedrock, among others) featured heavily in our discussions. More broadly, LLMs are tools that can solve a variety of problems, ranging from content generation (text, images and videos) to code generation to summarization and translation, to name a few. With natural language serving as a powerful abstraction layer, these models present a universally appealing tool set and are therefore being used by many information workers. Our discussions covered various facets of LLMs, including self-hosting, which allows customization and greater control than cloud-hosted LLMs. Given the growing complexity of LLMs, we deliberated on the ability to quantize them and run them on small form factors, especially on edge devices and in constrained environments. We touched upon ReAct prompting, which holds promise for improved performance, along with LLM-powered autonomous agents that can be used to build dynamic applications that go beyond question-and-answer interactions. We also mentioned several vector databases (including Pinecone) that are seeing a resurgence thanks to LLMs. The underlying capabilities of LLMs, including specialized and self-hosted models, continue their explosive growth.
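To make the ReAct idea more concrete, here is a minimal sketch of a reasoning-and-acting loop. It is illustrative only: call_llm is a scripted stand-in for whichever hosted or self-hosted model you would actually use, and lookup is a toy tool; both are assumptions made for this example rather than part of any product's API.

```
# A minimal ReAct-style prompting loop (illustrative sketch only).
# `call_llm` is a scripted stand-in for a real hosted or self-hosted model;
# `lookup` is a toy tool. Both are assumptions made for this example.
import re

PROMPT_TEMPLATE = """Answer the question by alternating Thought, Action and
Observation steps. The only available action is lookup[term].
Finish with: Final Answer: <answer>

Question: {question}
{scratchpad}"""

TOY_FACTS = {"technology radar": "Published twice a year by Thoughtworks."}


def lookup(term: str) -> str:
    return TOY_FACTS.get(term.lower().strip(), "no entry found")


def call_llm(prompt: str) -> str:
    # Scripted replies so the sketch runs end to end; swap in a real model call.
    if "Observation:" not in prompt:
        return ("Thought: I should look up the Technology Radar.\n"
                "Action: lookup[technology radar]")
    return "Final Answer: It is published twice a year."


def react_answer(question: str, max_steps: int = 5) -> str:
    scratchpad = ""
    for _ in range(max_steps):
        reply = call_llm(PROMPT_TEMPLATE.format(question=question,
                                                scratchpad=scratchpad))
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        action = re.search(r"Action:\s*lookup\[(.+?)\]", reply)
        observation = lookup(action.group(1)) if action else "no recognizable action"
        scratchpad += f"{reply}\nObservation: {observation}\n"
    return "no answer within the step budget"


if __name__ == "__main__":
    print(react_answer("How often is the Technology Radar published?"))
```

The same loop structure underlies LLM-powered autonomous agents: the model's "Action" lines are parsed and executed against real tools, and the observations are fed back into the prompt until the model produces a final answer.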

Remote delivery workarounds mature

Even though remote software development teams have leveraged technology to overcome geographic constraints for years now, the pandemic's impact fueled innovation in this area, solidifying fully remote or hybrid work as an enduring trend. For this Radar, we discussed how remote software development practices and tools have matured, and teams keep pushing boundaries with a focus on effective collaboration in an environment that is more distributed and dynamic than ever. Some teams keep coming up with innovative solutions using new collaborative tools. Others continue to adapt and improve existing in-person practices for activities like real-time pair programming or mob programming, distributed workshops (e.g., remote Event Storming) and both asynchronous and synchronous communication. Although remote work offers numerous benefits (including a more diverse talent pool), the value of face-to-face interactions is clear. Teams shouldn't let critical feedback loops lapse and need to be aware of the trade-offs they incur when transitioning to remote settings.

 

Contributors

 

The Technology Radar is prepared by the Thoughtworks Technology Advisory Board, comprising:

 

Rebecca Parsons (CTO Emerita) • Rachel Laycock (CTO) • Martin Fowler (Chief Scientist) • Bharani Subramaniam • Birgitta Böckeler • Brandon Byars • Camilla Falconi Crispim • Erik Doernenburg • Fausto de la Torre • Hao Xu • Ian Cartwright • James Lewis • Marisa Hoenig • Maya Ormaza • Mike Mason • Neal Ford • Pawan Shah • Scott Shaw • Selvakumar Natesan • Shangqi Liu • Sofia Tania • Vanya Seth

Subscribe. Stay informed.

Sign up to receive emails about future Technology Radar releases and bi-monthly tech insights from Thoughtworks.


Visit our archive to read previous volumes