Testing with Continuous Delivery
Since fully automated (unit and integration) testing and quality assessment are a requirement for CD to provide any business value – otherwise, you're just shipping garbage quickly – QA teams can start assessing the work ahead of them. The best methodology I've seen is to tackle this on two different fronts. On one, start acculturating the teams to the necessity of writing an automated unit or integration test for each and every filed defect, beginning with your most critical components. This will start chipping away at the uncertainty and regression risk from bugs that have already been found.
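As a concrete illustration, here is a minimal sketch of what a defect-pinned regression test might look like in Python with pytest; the bug ID, the billing module, and the behaviors being locked down are all hypothetical stand-ins for whatever your defect tracker and codebase actually contain.

```python
# Sketch: regression tests pinned to a filed defect (hypothetical BUG-1234).
# The module under test and its behavior are assumptions for illustration.
import pytest

from billing import parse_invoice  # hypothetical module under test


def test_bug_1234_empty_line_items_do_not_crash():
    """BUG-1234: the parser crashed on invoices with no line items."""
    invoice = parse_invoice({"id": "INV-42", "line_items": []})
    assert invoice.total == 0


def test_bug_1234_negative_quantity_rejected():
    """BUG-1234 follow-up: negative quantities must be rejected, not silently summed."""
    with pytest.raises(ValueError):
        parse_invoice({"id": "INV-43", "line_items": [{"sku": "A", "qty": -1}]})
```

The specifics matter far less than the habit: every closed defect leaves behind an executable check that the bug stays fixed.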
On the other front, integration and functional tests can be written that check not for aberrant behaviors but for the intended ones. Many of these tests may currently be manual or require human intervention, so the focus is on automating them to the point that no human is involved in their execution. In many cases, this requires the QA team to evaluate and communicate test environment requirements to the team responsible for the CI infrastructure or, if they operate their own test infrastructure, to go through the operational exercises above themselves. For certain types of software, dedicating time to automated "fuzz tests" can also provide a lot of value in a CD context.
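For the fuzzing side, property-based testing tools can generate the awkward inputs for you. The sketch below uses the Python Hypothesis library against a hypothetical configuration serializer; the module and functions are stand-ins, not a prescribed design.

```python
# Sketch: automated "fuzz"-style tests with Hypothesis.
# myapp.config and its dumps()/loads() functions are hypothetical.
from hypothesis import given, strategies as st

from myapp.config import dumps, loads  # hypothetical serializer under test


@given(st.dictionaries(keys=st.text(min_size=1), values=st.integers()))
def test_config_round_trips(data):
    """For any generated mapping, serializing and re-parsing yields the same data."""
    assert loads(dumps(data)) == data


@given(st.binary())
def test_loader_never_crashes_on_garbage(blob):
    """Arbitrary bytes may be rejected, but must never crash the process."""
    try:
        loads(blob)
    except ValueError:
        pass  # rejecting malformed input is acceptable; crashing is not
```

Because these tests run with no human in the loop, they slot straight into the CI stage of the pipeline and keep paying out on every commit.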
When analyzing what work may need to be done on your software's build process, looking at the two endpoints of the build/release pipeline is particularly useful:
Is the coupling between the source control system and the CI infrastructure, the root of the CD pipeline, stable? It sounds like a silly question, but in the world of cloud-based source control and many tiny Git repositories instead of one monolithic repo, I run into all sorts of failure modes where a commit doesn't reliably kick off a build when one would be expected. In addition to the source control system itself, are packaged dependencies managed reliably?
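One way to audit that coupling is to periodically ask the source control system which recent commits never received a build status. The rough sketch below assumes GitHub-hosted repositories and a CI system that reports commit statuses back to GitHub; the repository names and the GITHUB_TOKEN environment variable are placeholders.

```python
# Sketch: find recent commits that never received a CI status on GitHub.
import os
import requests

API = "https://api.github.com"
HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}


def commits_missing_builds(owner: str, repo: str, count: int = 20) -> list[str]:
    """Return SHAs of recent commits with no reported CI status at all."""
    commits = requests.get(
        f"{API}/repos/{owner}/{repo}/commits",
        headers=HEADERS,
        params={"per_page": count},
    ).json()
    missing = []
    for commit in commits:
        status = requests.get(
            f"{API}/repos/{owner}/{repo}/commits/{commit['sha']}/status",
            headers=HEADERS,
        ).json()
        # A combined status with zero entries means no CI system ever reported in.
        if status["total_count"] == 0:
            missing.append(commit["sha"])
    return missing


if __name__ == "__main__":
    for sha in commits_missing_builds("example-org", "example-repo"):
        print(f"no build recorded for commit {sha}")
```

Wired into a scheduled job, a check like this turns "a commit didn't trigger a build" from an anecdote into an alert.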
On the other end of the build process, is the final package that gets produced entirely consumable by either customers or the deployment automation? Do those artifacts reside somewhere from which deployments to a production environment, or delivery to customers, can easily be automated? What is the artifact retention and management story for those builds?
As a fan of Law and Order, I often dub these questions, collectively, the "Build Chain of Evidence." Environments still exist today where there is no clear way to figure out which commit(s) went into a particular artifact, where the test data that illustrated a critical regression lives or which build it relates to, or whether a particular artifact is important and should be kept. A CD pipeline relies on this chain containing all of the important (meta-)data and, obviously, on that "chain of custody" not breaking at any point within the pipeline.
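One lightweight way to keep that chain intact is to publish a small manifest next to every artifact, recording the commit, build number, checksum, and timestamp. The sketch below is just one possible shape; the BUILD_NUMBER and GIT_COMMIT environment variables follow common CI conventions but are assumptions here, not a requirement of any particular tool.

```python
# Sketch: write a "chain of evidence" manifest alongside a build artifact.
import hashlib
import json
import os
import subprocess
from datetime import datetime, timezone
from pathlib import Path


def write_manifest(artifact: Path) -> Path:
    """Record commit, build number, checksum, and timestamp for an artifact."""
    commit = os.environ.get("GIT_COMMIT") or subprocess.check_output(
        ["git", "rev-parse", "HEAD"], text=True
    ).strip()
    manifest = {
        "artifact": artifact.name,
        "sha256": hashlib.sha256(artifact.read_bytes()).hexdigest(),
        "commit": commit,
        "build_number": os.environ.get("BUILD_NUMBER", "local"),
        "built_at": datetime.now(timezone.utc).isoformat(),
    }
    out = artifact.with_suffix(artifact.suffix + ".manifest.json")
    out.write_text(json.dumps(manifest, indent=2))
    return out


if __name__ == "__main__":
    print(write_manifest(Path("dist/myapp-1.4.2.tar.gz")))
```

With something like this published into the same artifact store, deployment automation and QA can trace any artifact back to its commit and build without guesswork.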
While all of these suggestions might sound like good practice, one might ask why it is worth investing so much effort in CI. "We want CD! Why spend time on CI?!" The answer: because they build on each other, and that old analogy about foundations is relevant when building your "CD house."
That is why investing in a rock-solid CI infrastructure not only pays dividends as you work toward CD, but is actually a requirement if you are to build a CD pipeline that won't spring leaks and burst open under increased pressure and development flow. Once you have a good CI foundation built, you can start looking at the next steps to move toward CD in your organization. And building that foundation already knowing that it is indeed the foundation – not the house itself – puts you and your team at an advantage.