How to excel in agile software development

You need to augment the agile process with a set of disciplines and technologies to get the full value of the agile methodology

If you are leading or participating in an agile development process and have selected an agile model like the scrum methodology, you have a fundamental process to help align product owners with customer needs and teams on delivering results. You have the team’s responsibilities outlined, a meeting structure defined and scheduled, and an agile collaboration tool to manage the backlog.

All this structure, process, and collaboration helps teams of any kind execute. In fact, agile practices are applied to many other nontechnical disciplines such as agile marketing.

So, is the agile process itself sufficient to deliver good working software?

The answer is no. To get the full value of the agile methodology, that is, to excel with it in your enterprise, you need to augment the agile process with a set of disciplines, often supported by technology.

Among the issues to address are:

  • What technical considerations ensure that agile collaboration tools can support key software development life cycle (SDLC) practices?
  • How does a software development team ensure that applications are production-ready and that there is a streamlined process to push changes into production and other computing environments?

Defining and addressing technical debt in agile development

Agile development often requires many compromises when trying to get a user story done. Functionality compromises are often debated up front when writing the user story, especially when teams use agile estimation practices to estimate story points or other measures.

Once a user story is committed to, the development team must make sound technical decisions around its implementation. Those decisions require implementation compromises even when there are strict technology standards in place. These compromises create a technical debt that must be fixed or improved later.

Technical debt may not be visible during the development process. It can be usage-driven when user behavior exposes a technical limitation in the implementation. It can be driven by performance or scalability. It can also be driven by the life cycle of any underlying software components that require upgrades.

It's a critical responsibility for developers on an agile team to record technical debt. For small issues, this can be done in code with TODO comments that can be addressed when the next developer works on that code. Larger code issues that require refactoring should be itemized on the backlog. Application life cycle needs—such as upgrading the underlying software architecture and components that might require multiple code changes and additional testing—might be best captured as epics.
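For example, a small compromise can be flagged right where it lives in the code. The sketch below is in Python; the function and the backlog reference it points to are hypothetical:

```python
# Hypothetical example of recording small technical debt in code.
# The TODO comment names the compromise and points to the backlog item
# that tracks the larger fix.
def calculate_invoice_total(line_items):
    # TODO(tech-debt): tax rate is hard-coded for the initial release;
    # replace with the tax-service lookup tracked in backlog item ABC-123.
    TAX_RATE = 0.08
    subtotal = sum(item["quantity"] * item["unit_price"] for item in line_items)
    return round(subtotal * (1 + TAX_RATE), 2)
```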

Disciplined agile teams will find ways to prioritize technical debt. When I lead agile programs, I ask product owners to dedicate at least 30 percent of their backlog to addressing technical debt. That target is based on the 20 to 30 percent average that software vendors charge on support contracts, but the target can be lower for new applications—and a lot higher for legacy ones.

You can address technical debt at several levels:

  • For larger application life cycle issues, it is often better to schedule one or more releases to perform these upgrades. In addition, I advise executing these upgrades in a release cycle that doesn’t introduce any new or changed functionality. This makes it easier for testing teams to identify issues caused by the upgrade, and it avoids complications in which finding root causes can be challenging.
  • In a release, the product owner works with the team to identify the technical debt that affects users the most, that has the most impact on developer productivity, or that exposes other risks. These are then prioritized and should be itemized as user stories in the backlog.
  • In a single user story, the team can recommend acceptance criteria so underlying technical debt can be addressed.

Highly disciplined agile teams measure technical debt and create indicators for when the debt exceeds accepted levels.
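As one illustration, a team might track the simplest available signal, the number of TODO markers in the source tree, and raise a flag when it passes an agreed threshold. The sketch below shows the idea in Python; the source directory and threshold are assumptions, and real teams typically combine several debt measures (static analysis findings, refactoring stories on the backlog, and so on):

```python
# Minimal sketch: count TODO markers in a codebase and warn when they
# exceed an agreed-upon level. The directory and threshold are illustrative.
from pathlib import Path

SOURCE_DIR = Path("src")   # project source directory (assumption)
TODO_THRESHOLD = 50        # accepted level agreed with the product owner (assumption)

def count_todos(source_dir: Path) -> int:
    count = 0
    for path in source_dir.rglob("*.py"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        count += sum(1 for line in text.splitlines() if "TODO" in line)
    return count

if __name__ == "__main__":
    todos = count_todos(SOURCE_DIR)
    print(f"TODO markers found: {todos}")
    if todos > TODO_THRESHOLD:
        print("Warning: technical debt exceeds the accepted level.")
```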

Enabling QA in agile development

One of the most common questions I get from agile development leaders is how to fit quality assurance (QA) practices into the agile development process, especially for teams using the scrum methodology. Being “done” at the end of the sprint implies that QA can execute any new functional tests and regression tests within the sprint. But that’s not easy when the sprint duration is short and developers want to code all the way up to the time of the demo.

Fitting in QA is also not easy when the line of responsibility between QA and development has been blurred by the emergence of unit testing, automation, and test-driven development practices. Furthermore, testing spans many practices, including testing APIs, functionality, data, mobile interfaces, application security, performance, and scalability. All this testing can be hard to accomplish or even justify when the expectation is that agile will help bring new capabilities to market faster.

So, it’s important for agile leaders to remind everyone that the agile methodology needs to bring quality capabilities to market more safely. To do this, application development needs a quality assurance practice that aligns with risk, defines responsibilities between developers and QA, and requires that testing be scheduled into the agile development process.

To align testing with the development practice, consider when in the development process testing can be introduced. For example, in a scrum process:

  • Initial testing of the user stories should begin as they are being developed. But it’s important that developers try to complete the higher-risk stories and the ones that require more testing earlier in the sprint.
  • Additional testing is usually done at the end of the sprint. Depending on how long it takes to perform functional and regression testing, developers should schedule a code freeze several days before the end of the sprint to test and address the reported issues.
  • Security tests, code analysis, and performance tests can be scheduled to run on the code completed in the previous sprint. A final set of tests should be scheduled before the code is released to production.

Doing all the desired QA and testing within the time constraints of sprint releases requires a good amount of automation. Test cases written for functionality developed in the current sprint need to be automated and added to the regression tests. A subset of these tests needs to be repurposed for performance testing. Strong teams measure themselves on test coverage and on the percentage of test cases that are automated.
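For example, a functional test written during the sprint can be automated with a framework such as pytest and tagged so that subsets can be selected for regression or performance runs. This is a minimal sketch, not a prescribed approach; the create_order function and its module are hypothetical:

```python
# Minimal sketch of an automated functional test that also joins the
# regression and performance suites. The application code under test
# (orders.create_order) is hypothetical.
import time
import pytest

from orders import create_order  # hypothetical module under test

@pytest.mark.regression
def test_create_order_returns_confirmation():
    order = create_order(customer_id=42, items=[{"sku": "A1", "qty": 2}])
    assert order["status"] == "confirmed"

@pytest.mark.performance
def test_create_order_completes_quickly():
    start = time.perf_counter()
    create_order(customer_id=42, items=[{"sku": "A1", "qty": 2}])
    assert time.perf_counter() - start < 0.5  # illustrative time budget
```

Tagged tests can then be selected in an automated run with a command such as pytest -m regression.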

Identifying a code-branching standard

A foundational agile development practice is the ability to version code, branch development into multiple tracks of activity, and package code for release into separate test and production environments.

Tools have improved significantly over the last several years, and many teams have learned how to use tools like Git for what used to be considered advanced source code management practices. High-performing agile teams know how to use these tools to drive efficiency and enable a flexible development process for different types of development activities.

The heart of this flexibility is the ability to branch and merge code. Branching provides multiple development tracks for different business needs, letting developers work on independent copies of the code. Development teams can use a mix of permanent and episodic branches, and merge branches when it makes sense. Teams often have standard branches to support development, testing, and production, but they can also create episodic branches to support:

  • Feature development, where a feature is developed in its own branch and then merged when completed.
  • Patches for when a production issue needs to be fixed and deployed without incorporating any code that is under development.
  • Component or architecture upgrades that require a significant amount of code changes and where testing can be done in separate branches until ready for a production release.
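As a minimal illustration of the branch-and-merge workflow behind these practices, the sketch below drives Git from a Python script. It assumes an existing repository with a develop branch, and the branch name is illustrative; most teams run these same Git commands by hand or through their collaboration tool rather than from a script.

```python
# Minimal sketch of an episodic (feature) branch workflow driven from Python.
# Assumes an existing Git repository with a "develop" branch; the branch
# name is illustrative, not a prescribed standard.
import subprocess

def git(*args: str) -> None:
    """Run a Git command and stop if it fails."""
    subprocess.run(["git", *args], check=True)

def deliver_feature(feature_branch: str) -> None:
    git("checkout", "-b", feature_branch)       # start the episodic branch
    # ... developers commit work on the branch during the sprint ...
    git("checkout", "develop")                  # return to the shared branch
    git("merge", "--no-ff", feature_branch)     # merge the completed work
    git("branch", "-d", feature_branch)         # remove the episodic branch

if __name__ == "__main__":
    deliver_feature("feature/customer-search")
```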

Instituting code reviews to improve quality and developer collaboration

Another aspect of using source-code management tools like Git is the ability to formalize code reviews. In Git, developers typically work in a development or feature branch and commit code as needed or based on policies. When the work is complete, the developer opens a pull request to move these changes out of the development or feature branch and into the testing branch. A second developer can then review and merge the pull request into the testing branch.

This collaboration process lets developers perform code reviews before merging branches. Code reviews help identify bad code or code that isn’t up to standards, but they also provide a way to transfer knowledge and to ensure the code is readable and understandable by a colleague. This is particularly important in agile development, where you want team members to share responsibilities and where you want senior developers to mentor junior ones into more productive contributors.
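For teams that want to fold pull requests into their own tooling, a pull request can also be opened through the repository host's API. Below is a minimal sketch against the GitHub REST API; it assumes a GitHub-hosted repository, and the organization, repository, branch names, and token variable are illustrative:

```python
# Minimal sketch: open a pull request via the GitHub REST API so a colleague
# can review the changes before the merge. The owner, repo, branches, and
# GITHUB_TOKEN environment variable are illustrative assumptions.
import os
import requests

def open_pull_request(owner: str, repo: str, head: str, base: str, title: str) -> str:
    response = requests.post(
        f"https://api.github.com/repos/{owner}/{repo}/pulls",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={"title": title, "head": head, "base": base},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["html_url"]  # link the reviewer can open

if __name__ == "__main__":
    url = open_pull_request(
        owner="example-org",
        repo="example-app",
        head="feature/customer-search",
        base="testing",
        title="Add customer search",
    )
    print(f"Pull request ready for review: {url}")
```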

Implementing continuous integration and continuous delivery

The next place agile development teams look to improve the quality and efficiency of their process is in automating how code is packaged and delivered to runtime environments.

Historically, many applications were built and delivered with manual steps: Developers run a build in an integrated development environment (IDE), then package the output using scripts. They FTP the file to a repository, where someone on the operations team, after receiving an approved change request, takes multiple steps to deliver the code to the target runtime environment. This manual process is both error-prone and inefficient for business and technology teams that want to deploy changes to production environments on a frequent schedule.

The solution is to automate these steps. When the development and operations teams collaborate on improving agility and operational stability, a devops practice emerges. Key devops practices include:

  • Continuous integration (CI), where branches are merged frequently and the application build process is automated.
  • Continuous delivery (CD), where the software is pushed to a selected runtime environment at the push of a button.

Continuous integration and continuous delivery (CI/CD) require automated testing. CI/CD is part of a release management strategy that every organization should establish as part of its agile development process. The most advanced development organizations, those that want to deploy code daily or even more often, take the final step to continuous deployment, in which changes that pass the automated tests are pushed to production without manual intervention.
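Teams usually define these stages in a CI server such as Jenkins, GitLab CI, or GitHub Actions, but the underlying idea is a sequence of automated stages that stops at the first failure. The sketch below expresses that idea in Python; the build, test, and deployment commands are assumptions, and the deploy script is hypothetical:

```python
# Minimal sketch of a staged pipeline that builds, tests, and delivers an
# application, stopping at the first failing stage. The specific commands
# and the deploy.py script are illustrative assumptions.
import subprocess
import sys

STAGES = [
    ("build", ["python", "-m", "build"]),                       # package the application
    ("test", ["python", "-m", "pytest", "-m", "regression"]),   # automated regression tests
    ("deliver", ["python", "deploy.py", "--env", "staging"]),   # hypothetical deployment step
]

def run_pipeline() -> int:
    for name, command in STAGES:
        print(f"--- running stage: {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"Stage '{name}' failed; stopping the pipeline.")
            return result.returncode
    print("All stages passed; the build is ready for the selected environment.")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```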

Using agile to drive automation and process improvement

It may sound magical to have this automation and collaboration practice in place, but I can assure you that development organizations don’t achieve these results overnight. These capabilities are developed over time, usually in response to business issues.

The best way to get started is to think about the customer impact of having these solutions in place. Articulate the automation and collaboration needs as user stories and prioritize them on the backlog as if they were a software development project. Then you can start implementing these solutions and developing a roadmap for future improvements.
