As software development moves from the traditional “waterfall” model to the continuous-improvement “agile” style, another important aspect of the discipline has had to change as well: software testing. Moreover, as testing becomes more complex and frequent, companies and their developers have been looking for ways to automate the process.

Traditionally, developers wrote programs. When the programs were “done,” they were tossed over the wall to the Quality Assurance team for testing. But this process took a long time, particularly as organizations worked to make their software development more nimble. In addition, that style of testing doesn’t work well with frequent updates, A/B testing, and other modern software development techniques.

In addition, keeping development and testing in separate “silos” didn’t always work that well. Testers might discover something during the QA process that pointed to a problem in the original program specifications.

Consequently, one of the first changes companies are making in software testing is to bring the testing team into the process earlier. “Get the programmer, tester, and product owner in a room to talk about what they need to be successful, to create examples, to define what the automation strategy will be, and to create a shared understanding,” writes Matthew Heusser in TechBeacon.

This technique is also known as Acceptance Test Driven Development (ATDD), writes testing author Elisabeth Hendrickson. “[ATDD] is a practice in which the whole team collaboratively discusses acceptance criteria, with examples, and then distills them into a set of concrete acceptance tests before development begins,” she writes. “It’s the best way I know to ensure that we all have the same shared understanding of what it is we’re actually building.”
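To make the idea concrete, here is a minimal sketch of what one of those distilled examples might look like as an executable acceptance test, written in Python with the standard unittest module. The shipping rule and the shipping_cost function are invented for illustration; they do not come from Hendrickson or the other sources.

    # A hypothetical ATDD example: the team agrees that "orders over $50
    # ship free; otherwise shipping is a flat $5," and writes that shared
    # agreement down as concrete tests before development begins.
    import unittest

    def shipping_cost(order_total):
        """The implementation under test, written after the tests below."""
        return 0.00 if order_total > 50.00 else 5.00

    class ShippingAcceptanceTests(unittest.TestCase):
        def test_orders_over_fifty_dollars_ship_free(self):
            self.assertEqual(shipping_cost(50.01), 0.00)

        def test_orders_at_or_under_fifty_dollars_pay_flat_rate(self):
            self.assertEqual(shipping_cost(50.00), 5.00)
            self.assertEqual(shipping_cost(10.00), 5.00)

    if __name__ == "__main__":
        unittest.main()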

In that way, the tester can actually act as a liaison between the business group and the development team. “If they get that agreement before actually building, doing it in little pieces, then that helps build the right thing better,” development consultant George Dinwiddie tells Todd Charron of InfoQ. “So the business knows more or less what they are looking for, what advantage they are trying to get, the programmers know something about how to achieve this, what’s possible. The testers are the ones bringing the point of view of what can go wrong, what are the edge cases, where are the things to watch out for.”

Bringing the tester into the process earlier also helps ensure that testing is considered as a project criterion from the beginning, writes Hans Buwalda in TechWell. “When a plan is made for a system or a feature, one of the first questions should be ‘how do we test this?’” he writes. “Incorporating a testing and automation strategy early in the life cycle pays off down the line.”

In fact, some organizations, such as Yahoo!, are eliminating separate testing organizations and staff altogether. That doesn’t mean, of course, that the need for testing goes away. Indeed, given the increasingly complex environments in which programs interact (just look at the number of potential Android smartphone platforms, for example) and the speed with which bad news about a product can travel, testing is more important than ever.

Testing Automation?

Instead, some organizations are looking at automating some aspects of the software testing process, using open source or commercial tools to write modules that can systematically exercise the different parts of a program and report back on the results.
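Such a module can be quite small. The following sketch uses Python’s built-in unittest framework to exercise a single function against a table of inputs and report the results; the slugify function and its test cases are hypothetical, invented here for illustration.

    # A sketch of systematically exercising one part of a program and
    # reporting back on the results. Each failing case is reported
    # individually thanks to subTest.
    import unittest

    def slugify(title):
        """Turn an article title into a lowercase, hyphenated URL slug."""
        return "-".join(title.lower().split())

    class SlugifyTests(unittest.TestCase):
        def test_known_inputs(self):
            cases = [
                ("Hello World", "hello-world"),
                ("  Extra   Spaces  ", "extra-spaces"),
                ("already-a-slug", "already-a-slug"),
            ]
            for title, expected in cases:
                with self.subTest(title=title):
                    self.assertEqual(slugify(title), expected)

    if __name__ == "__main__":
        unittest.main()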

Test automation doesn’t work for everything, of course. Some aspects are still best tested by humans: the user interface, where an automated test might not notice a problem such as a transparent button, and performance, where a testing program doesn’t get frustrated by latency the way a user would.

In fact, some suggest separate testing passes: one for functionality (first, make sure everything works) and one for performance (second, make sure it’s acceptable to the people who will be using it). For example, in the book Beautiful Testing, Heusser makes a distinction between “checking,” or ensuring that the code does what it’s supposed to, and “investigating,” or looking for things like memory leaks that can cause a program to fail intermittently.
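That distinction can be sketched in code. In the hypothetical Python example below, the check asserts one expected behavior, while the investigation runs the same code many times and watches for memory growth using the standard-library tracemalloc module; process_request and its deliberate leak are invented for illustration.

    # "Checking" vs. "investigating," per Heusser's distinction.
    import tracemalloc

    _cache = {}

    def process_request(key):
        # Hypothetical handler with a deliberate leak: the cache never evicts.
        _cache[key] = "response for %s" % key
        return _cache[key]

    def check_process_request():
        # Checking: a single assertion that the behavior is correct.
        assert process_request("a") == "response for a"

    def investigate_memory_growth(iterations=100_000):
        # Investigating: run many unique requests and watch the footprint,
        # the kind of trouble that only shows up over time.
        tracemalloc.start()
        before, _ = tracemalloc.get_traced_memory()
        for i in range(iterations):
            process_request("key-%d" % i)
        after, _ = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        print("memory grew by %d bytes over %d calls" % (after - before, iterations))

    check_process_request()
    investigate_memory_growth()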

In addition, spending a lot of time developing automated test cases might not be the best use of a developer’s time, compared with hiring a less costly staffer to manually test components that aren’t used often, writes QA manager Lena Kalz in TechBeacon.

Components that change often, including the user interface, are probably best tested manually rather than by continually writing new test scripts. In fact, a number of testing experts, such as consultant Yvette Francino, recommend testing the user interface last, after verifying that all the internal components work.

But automating test components that are performed frequently can save a lot of time, Heusser writes. “Look at the layers of the application. See what processes are actually repetitive and how much time you’d save by automating them,” he writes. “Review your playbook for test automation and pick something that could give you a big win for small effort.”

One simple function might be performed hundreds or thousands of times in a test suite, so automating it results in a significant improvement in maintenance and readability, writes Chris McMahon in TechTarget. Examples of such actions include logging in, selecting all the elements of a list, or checking for a set of errors, he writes.
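For instance, a login step that hundreds of tests repeat can be pulled into one shared helper, so a change to the login page means one fix instead of hundreds. The sketch below assumes the Selenium WebDriver library; the URL and field IDs are hypothetical.

    # One shared helper for a step the whole suite repeats.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def log_in(driver, username, password):
        """The single, maintainable place where every test logs in."""
        driver.get("https://example.com/login")
        driver.find_element(By.ID, "username").send_keys(username)
        driver.find_element(By.ID, "password").send_keys(password)
        driver.find_element(By.ID, "submit").click()

    # In any individual test:
    driver = webdriver.Chrome()
    log_in(driver, "testuser", "secret")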

Chances are, no programming organization worth its salt will ever feel it has been able to do as much testing as it would like. But by bringing the testing function into the development process earlier, and by developing a systematic, automated way of testing program functionality and performance, it can come closer.
