5. Use test-driven development for early and continuous focus on verification. This practice can be summarized as “write your test before you write the system.” When there is an exclusive focus on “sunny-day” scenarios (a typical developer’s mindset), the project becomes overly reliant on extensive testing at the end of development to identify overlooked scenarios and interactions. Therefore, be sure to focus on rainy-day scenarios (e.g., different system failure modes) as well as sunny-day scenarios. The practice of writing tests first, especially at the business or system level (known as acceptance test-driven development), reinforces the other practices that identify the more challenging aspects and properties of the system, especially quality attributes and architectural concerns (see the architectural runway and quality-attribute scenarios practices above).
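As a minimal sketch of the test-first mindset (all names and the seat-reservation domain are hypothetical, not from any particular system), the tests below would be written before the implementation exists, and they deliberately cover rainy-day failure modes alongside the sunny-day path:

```python
# Hypothetical seat-reservation function. In true test-driven
# development, the tests below are written first and fail until
# this implementation exists.
def reserve_seat(inventory, flight, seats):
    """Reserve seats on a flight, returning the seats remaining."""
    if flight not in inventory:
        raise KeyError("unknown flight: " + flight)
    if seats <= 0:
        raise ValueError("seat count must be positive")
    if inventory[flight] < seats:
        raise RuntimeError("not enough seats available")
    inventory[flight] -= seats
    return inventory[flight]

# Sunny-day scenario: the path a feature-focused developer tests first.
def test_sunny_day_reservation():
    assert reserve_seat({"UA10": 5}, "UA10", 2) == 3

# Rainy-day scenarios: failure modes that end-of-project testing
# would otherwise be left to discover.
def test_unknown_flight_rejected():
    try:
        reserve_seat({"UA10": 5}, "UA99", 1)
        assert False, "expected KeyError"
    except KeyError:
        pass

def test_overbooking_rejected():
    try:
        reserve_seat({"UA10": 1}, "UA10", 2)
        assert False, "expected RuntimeError"
    except RuntimeError:
        pass

test_sunny_day_reservation()
test_unknown_flight_rejected()
test_overbooking_rejected()
```

Note that two of the three tests exercise failure modes; enumerating those cases up front is precisely what an exclusive sunny-day focus tends to omit.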
6. Use end-to-end testing for early insight into emerging system properties. To successfully derive the full benefit from test-driven development at scale, consider early and continuous end-to-end testing of system scenarios. When teams test only the features for which they are responsible, they lose insight into overall system behavior (and how their efforts contribute to achieving it). Each small team could be successful against its own backlog, but someone needs to be looking after broader or emergent system properties and implications. For example, who is responsible for the fault tolerance of the system as a whole? Answering such questions requires careful orchestration of development with verification activities early and throughout development. When testing end to end, take into account different operational contexts, environments, and system modes.
At scale, understanding end-to-end functionality requires explicit elicitation and documentation. This can be achieved through agile requirements-management techniques such as stories, as well as through architecturally significant requirements. However, when multiple systems must be orchestrated, a more deliberate elicitation of end-to-end functionality as mission/business threads should produce a better result.
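To make the idea concrete, here is an illustrative sketch (the subsystems, names, and "degraded mode" behavior are hypothetical) of an end-to-end check that exercises a mission thread across two subsystems, rather than testing each in isolation. A real end-to-end test would drive deployed components, but the shape is the same:

```python
# Hypothetical mission thread spanning two subsystems: a sensor
# feed and an alerting service. Team-level tests would cover each
# class separately; only the end-to-end path reveals how the
# system as a whole behaves when the sensor fails.

class SensorFeed:
    def __init__(self, healthy=True):
        self.healthy = healthy

    def read(self):
        if not self.healthy:
            raise IOError("sensor offline")
        return 42.0

class AlertService:
    def evaluate(self, reading, threshold=40.0):
        return "ALERT" if reading > threshold else "OK"

def mission_thread(feed, alerts):
    """End-to-end path: acquire a reading, then classify it.
    Falls back to a degraded mode when the sensor fails, an
    emergent property no single team's backlog covers."""
    try:
        reading = feed.read()
    except IOError:
        return "DEGRADED"
    return alerts.evaluate(reading)

# Exercise the thread in both a nominal and a failure context.
assert mission_thread(SensorFeed(), AlertService()) == "ALERT"
assert mission_thread(SensorFeed(healthy=False), AlertService()) == "DEGRADED"
```

The second assertion is the one that answers a question like "who is responsible for fault tolerance of the system as a whole?": it tests a property that belongs to no single subsystem.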
7. Use continuous integration for consistent attention to integration issues. This basic Agile practice becomes even more important at scale, given the increased number of subsystems that must work together and whose development must be orchestrated. One implication is that the underlying infrastructure that developers use day to day must support continuous integration. Another is that developers must focus on integration earlier, identifying the subsystems and existing frameworks that will need to integrate. This identification has implications for the architectural runway, quality-attribute scenarios, and the orchestration of development and verification activities. Useful measures for managing continuous integration include rework rate and scrap rate. It is also important to begin identifying potential integration issues early in the project. More broadly, both integration and the ability to integrate must be managed in the Agile environment.
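The two measures named above are simple ratios; the following sketch shows one plausible way to compute them (the function names and the sprint figures are hypothetical, and organizations vary in exactly what they count as rework or scrap):

```python
def rework_rate(hours_reworking, total_hours):
    """Fraction of effort spent redoing previously completed work."""
    return hours_reworking / total_hours

def scrap_rate(units_discarded, units_produced):
    """Fraction of produced work (e.g., stories, modules) discarded."""
    return units_discarded / units_produced

# Hypothetical sprint figures: 30 of 200 hours spent on rework,
# and 2 of 25 completed stories scrapped during integration.
print(rework_rate(30, 200))  # 0.15
print(scrap_rate(2, 25))     # 0.08
```

Tracked sprint over sprint, a rising rework or scrap rate is an early signal that integration is being deferred rather than managed continuously.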
8. Consider technical-debt management as an approach to strategically manage system development. The concept of technical debt arose naturally from the use of Agile methods, where the emphasis on delivering features quickly often creates a need for rework later. At scale, there may be many opportunities for shortcuts, and understanding technical debt and its implications becomes a means of strategically managing the development of the system. For example, there might be cases where, to accelerate delivery, certain architectural selections are made that have long-term consequences. Such tradeoffs must be understood and managed based on both qualitative and quantitative measurements of the system. Qualitatively, architecture evaluations can be used as part of the product demos or retrospectives that Agile advocates. Quantitative measures are harder to obtain but can arise from understanding productivity, system uncertainty, and measures of rework (e.g., when uncertainty is greater, you might be more willing to accept more rework later). Several larger organizations have started to look into technical-debt management practices organizationally.
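One illustrative (and deliberately simplistic, not a standard industry model) way to make the uncertainty/rework tradeoff quantitative is to weight the cost of future rework by the probability that the rework will ever be needed:

```python
def expected_debt_cost(rework_cost, p_rework_needed):
    """Expected future cost of a shortcut: the rework cost weighted
    by the probability that the rework is ever required."""
    return rework_cost * p_rework_needed

def shortcut_pays_off(savings_now, rework_cost, p_rework_needed):
    """Take the shortcut when today's savings exceed the expected
    future rework cost. When uncertainty is high (i.e., today's
    architectural choice is unlikely to survive), the expected
    cost falls, and deferring rework becomes more attractive."""
    return savings_now > expected_debt_cost(rework_cost, p_rework_needed)

# Hypothetical figures: save 10 person-days now against a 40-day
# rework that has a 20% chance of ever being needed.
print(shortcut_pays_off(10, 40, 0.2))  # True: 10 > 8
```

The point is not the arithmetic but the discipline: recording the savings, the estimated rework, and the uncertainty makes each shortcut a managed decision rather than an accident.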
9. Use prototyping to rapidly evaluate and resolve significant technical risks. To address significant technical issues, teams employing Agile methods sometimes perform what Scrum refers to as a technical spike, in which a team branches out from the rest of the project to investigate a specific technical issue, develops one or more prototypes to evaluate possible solutions, and brings what was learned back to the project so that it can proceed with greater likelihood of success. A technical spike may extend over multiple sprints, depending on the seriousness of the issue and how much time is needed to investigate it and return usable information to the project.
At scale, technical risks with severe consequences are typically more numerous, and so prototyping (and other approaches to evaluating candidate solutions, such as simulation and demonstration) can be an essential activity, both in early planning and recurrently throughout development. A goal of Agile methods is increased early visibility. From that perspective, prototyping is a valuable means of achieving visibility into technical risks and their mitigations more quickly. The Scrum of Scrums practice mentioned earlier has a role here too, helping to orchestrate bringing what was learned from prototyping back to the overall system.
10. Use architectural evaluations to ensure that architecturally significant requirements are being addressed. While not considered part of mainstream Agile practice, architecture evaluations have much in common with Agile methods: both seek to bring a project’s stakeholders together to increase their visibility into and commitment to the project, and to identify overlooked risks. At scale, architectural issues become even more important, and architecture evaluations thus have a critical role in the project. Architecture evaluation can be formal, as in the Software Engineering Institute’s Architecture Tradeoff Analysis Method, which can be performed, for example, early in the Agile project lifecycle before the project’s development teams are launched, or recurrently. There is also an important role for lighter-weight evaluations in project retrospectives to evaluate progress against architecturally significant requirements.