
Myths of test automation – debunked!

By Jim Grey (about)

I wrote a post last year criticizing test automation when it’s used to cover for piles of technical debt and poor development practices. But I still think there’s a place for automation in post-development testing. There are two keys to using it well: knowing what it’s good at, and counting the costs. Without those keys, it’s easy to fall prey to several myths of test automation. I aim to debunk them here.

Myth: Automation is cheap and easy

It is seductive to think that just by recording your manual tests you can build a comprehensive regression-test suite. But it never seems to really work that way. Every time I’ve used record and playback, the resulting scripts wouldn’t perfectly execute the test, and I’ve had to write custom code to make it work.


What I’ve found is that it takes 3 to 10 times longer to automate one test than to execute it manually. And then, especially for automation that exercises the UI, the tests can be brittle: you have to keep modifying scripts to keep them running as the system under test changes.

I’ve done straight record and playback. I’ve created automated modules that can be arranged into specific checks. I’ve led a team that created tests on a keyword-driven framework. And I currently lead a team that writes code that directly exercises a product’s API. The amount of maintenance has decreased with each successive approach.
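To make that last approach concrete, here’s a minimal sketch of what an API-level check can look like, written with pytest and the requests library. The endpoint, payload, and field names are hypothetical, not from any product I’ve actually worked on.

import requests

BASE_URL = "https://test-env.example.com/api"  # hypothetical test environment

def test_create_customer_returns_id():
    # Exercise the product's API directly instead of driving the UI.
    payload = {"name": "Acme Corp", "region": "Midwest"}
    response = requests.post(f"{BASE_URL}/customers", json=payload, timeout=10)

    assert response.status_code == 201
    body = response.json()
    assert body["id"]                        # the service assigned an identifier
    assert body["name"] == payload["name"]   # the data round-trips intact

Because a check like this never touches the UI, a cosmetic change to a screen can’t break it; only a change to the API contract can.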

A side note: given the cost of automating a single test, it only makes sense to automate what you’re going to run over and over again; otherwise the investment doesn’t pay.

Myth: Automation can test anything, and is as good as human testing

Automation is really good at repeating sets of actions, performing calculations, iterating over many data sets, addressing APIs, and doing database reads and writes. I love to automate these things, because humans executing them over and over is a waste of their potential.
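Here’s a small, hedged example of that kind of repetition: one check iterated over many data sets. The shipping rule and the numbers are invented for illustration; the pattern, a parameterized pytest check, is the point.

import pytest

def shipping_cost(order_total_cents):
    # Invented business rule: free shipping at $50 and up, otherwise a flat $5.
    return 0 if order_total_cents >= 5000 else 500

@pytest.mark.parametrize("order_total_cents, expected_cents", [
    (0,     500),
    (4999,  500),   # one cent under the threshold
    (5000,  0),     # exactly at the threshold
    (12345, 0),
])
def test_shipping_cost(order_total_cents, expected_cents):
    assert shipping_cost(order_total_cents) == expected_cents

A human stepping through those cases by hand, release after release, is wasted effort; the machine does it in milliseconds.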

This gets at a whole philosophical discussion about what testing is. I think that running predetermined scripts, whether automated or not, is just checking, as in, “Let me check whether clicking Save actually saves the record.” This subset of testing just evaluates the software based on predefined criteria that were determined in the past, presumably based on the state of the software and/or its specification or set of user stories as they were then.

The rest of testing involves human testers experimenting and learning, evaluating the software in its context now. This is critical work if for no other reason than the software and its context (environment, hardware, related software, customer needs, business needs, and so on) changes. An exploring human can find critical problems that no automated test can.

I want human testers to be free to test creatively and deeply. I love automated checks because they take this boring, repetitive work away from humans so they have more time to explore.

Myth: When the automation passes, you can ship!

It’s seductive to think that if testing is automated, a passing run is some sort of Seal of Approval that takes out all the risk. It’s as if “tested” is a final destination, an assurance that all bets are covered, a promise that nothing will go wrong with the software.

But automation is only as good as its coverage. And if nobody outside your automation team understands what the automation covers, saying “the automation passed” has no fixed meaning.

It’s hard to overcome this myth, but to the extent I have, it’s because as an automation lead and manager I’ve required engineers to write detailed coverage statements into each test. I’ve then aggregated them into broad, brief coverage statements over all of the parts of the software under test. Then I’ve shared that information — sometimes in meetings with PowerPoint decks, always in a central repository that others can access and to which I can link in an email when I inevitably need to explain why passing automation isn’t enough. Keeping this myth at bay takes constant upkeep and frequent reminders.
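One way to do this, sketched here under the assumption of a pytest suite: a wording convention for a COVERS statement in each test’s docstring, plus a small script that sweeps those statements into a single report. Both the convention and the script are my own invention, not any standard.

def test_invoice_tax_rounding():
    """COVERS: invoice totals -- sales tax is rounded per line item,
    not on the invoice total, for US orders only."""
    ...  # the actual check goes here

# aggregate_coverage.py: collect every COVERS statement into one readable list
import ast
import pathlib

for path in sorted(pathlib.Path("tests").rglob("test_*.py")):
    tree = ast.parse(path.read_text())
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            doc = ast.get_docstring(node)
            if doc and doc.startswith("COVERS:"):
                print(f"{path.name}::{node.name}\n  {doc}\n")

The output of a script like that is the raw material for the broad, brief coverage statements I share outside the team.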

Myth: Automation is always ready to go


“Hey, we want to upgrade to the next version of the database in the sandbox environment. Can you run the automation against that and see what happens?”

My answer: “Let’s assume I can even run the automation in sandbox. If it passes, what do you think you will know about the software?” The answer almost always involves feelings: “Well, I’ll feel like things are basically okay.” See “When the automation passes, you can ship!” above.

Automation is software, full of tradeoffs aimed at meeting a set of implicit and explicit goals. Unless one of those goals was “must be able to run against any environment,” it probably won’t run in sandbox. The automation might count on particular test data existing (or not existing). It might not clean up after itself, leaving lots of data behind, and that might not be welcome in the target environment. It might depend on a particular configuration of the product and its environment that isn’t present.

Even in the environment the automation usually runs in, it might not be ready to go at a moment’s notice. Another goal would need to be, “must be able to run at any time.” There are often setup tasks to perform before the automation can run: a reset of the database the automation uses, or the execution of scripts that seed data that the automation needs.
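As a hedged illustration, here’s the kind of setup a suite quietly depends on, written as a pytest session fixture. The database commands, script names, and the assumption that psql is the right client are all hypothetical.

import subprocess
import pytest

@pytest.fixture(scope="session", autouse=True)
def prepared_database():
    # Reset the automation's database to a known state...
    subprocess.run(["psql", "-f", "scripts/reset_automation_db.sql"], check=True)
    # ...then seed the records the checks count on finding.
    subprocess.run(["psql", "-f", "scripts/seed_test_data.sql"], check=True)
    yield
    # No teardown: this suite assumes it owns its environment,
    # which is exactly why it won't run cleanly in yours.

Every one of those assumptions is invisible until someone asks you to point the suite at a different environment.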

Myth: Just running the automation is enough

When I run automated tests, part of me secretly hopes they all pass. That’s because when there’s a failure, I have to comb through the automation logs to find what happened, figure out what the automation was doing when it failed, and log into the software myself and try to recreate the problem manually. Sometimes the automation finds just the tip of a bug iceberg and I spend hours exploring to fully understand the problem. Some portion of the time, the failure is a bug in the automation that must be fixed. When it’s a legitimate product bug, then I have to write the bug in the bug tracker.

I am endlessly amused by how often I’ve had to explain that just running the automation isn’t the end of it: that if there are any failures, the automation doesn’t automatically generate bug reports. The standard response is some variation of “What? …ohhhhhh,” as it dawns on them. So far, thankfully, it has always dawned on them.

Myth: Automated tests can make up for years of bad development practices

I’ve just got to restate my point from my older post on this subject. If your development team doesn’t follow good practices such as writing lots of automated unit tests (to achieve about 80% code coverage), code reviews, paired testing, or test-driven development, automation from QA is not going to fix it. You can’t test in quality — you have to build it in.

If you’re sitting on a messy legacy codebase, one where your test team plays whack-a-mole with bugs every time you make changes to it, you are far, far better served investing in the code itself. Refactor, and write piles of automated unit tests.
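At the smallest scale, that investment looks something like this: a bit of business logic pulled out of a tangled code path into a pure function, with unit tests pinning its behavior down. The late-fee rule here is invented; the shape of the tests is what matters.

def late_fee(days_overdue, balance_cents):
    # Invented rule: 1.5% of the balance per 30-day period overdue, capped at $25.
    if days_overdue <= 0:
        return 0
    periods = (days_overdue + 29) // 30
    fee = int(balance_cents * 0.015) * periods
    return min(fee, 2500)

def test_not_overdue_charges_nothing():
    assert late_fee(0, 10_000) == 0

def test_one_period_fee():
    assert late_fee(15, 10_000) == 150

def test_fee_is_capped():
    assert late_fee(365, 1_000_000) == 2500

Multiply that by every rule in the codebase and you get the piles of automated unit tests I’m talking about, each one cheap to write and nearly free to run.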

You want on the order of thousands of automated unit tests, hundreds of automated business-rule tests (which ideally exercise an API directly rather than driving a UI, for resiliency and maintainability), and tens of automated checks to make sure the UI is functioning.

I’ll belabor this point: Invest in better code and better development practices first. When you deliver better quality to QA, you’ll keep the cost of testing as low as possible and more easily and reliably deliver better quality to your customers and users.


When test automation is nothing more than turdpolishing

By Jim Grey (about)

I used to think that writing a fat suite of automated regression tests was the way to hold the line on software quality release over release. But after 12 years of pursuing that goal at various companies, I’ve given up. It was always doomed to fail.

In part, it’s because I’ve always had to automate tests through a UI. When I did straight record-and-playback automation, the tests were enormously fragile. Even when I designed the tests as reusable modules, and even when I worked with a keyword-driven framework, the tests were still pretty fragile. My automation teams always ended up spending more time maintaining the test suite than building new tests. It’s tedious and expensive to keep UI-level test automation running.

But the bigger reason is that I’ve made a fundamental shift in how I think about software quality. Namely, you can’t test in quality – you have to build it in. Once code reaches the test team, it’s garbage in, garbage out. The test team can’t polish a turd.

Writing an enormous pile of automated tests through the UI? Turdpolishing.

I’ve worked in some places where turdpolishing was the best that could be done. Company leadership couldn’t bear the thought of spending the time and money necessary to pay down years of technical debt, and hoped that building out a big pile of automated tests would hold the line on quality well enough. I’ve led the effort at a couple companies to do just that. We never developed the breadth and depth of coverage necessary to prevent every critical bug from reaching customers, but the automation did find some bugs and that made company leadership feel better. So I guess the automation had some value.

But if you want to deliver real value, you have to improve the quality of the code that reaches your test team. Even if the software you’re building is sitting on a mountain of technical debt, better new code can be delivered to the test team starting today. I’m a big believer in unit testing. If your software development team writes meaningful unit tests for all new code that cover 60, 70, 80 percent of the code, you will see initial code quality skyrocket. Other practices such as continuous integration, pair programming, test-driven development, and even good old code reviews can really help, too.
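If you want to know whether you’re actually hitting those numbers, coverage tooling will tell you. With pytest and the pytest-cov plugin, for example, something like the following fails the build when coverage dips below the bar; the package name and threshold are placeholders.

pytest --cov=yourpackage --cov-report=term-missing --cov-fail-under=70

The specific number matters less than the habit: meaningful tests for all new code, measured so the whole team can see it.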

But whatever you do, don’t expect your software test team to be a magic filter through which working software passes. You will always be disappointed.


Giving testers less to do

By Jim Grey (about)

I hear legends of companies who hire nothing but programmers in their test departments, and rely almost entirely on code-based tests in their software development methodologies. I’ve never seen such a shop in person. Out here in the Midwest, most testing involves humans directly exercising user interfaces.

Eric Jacobson recently wondered aloud on his blog whether testers are simply too busy finding bugs through the UI to move into more technical or programmatic testing. My typical experience has been that Development doesn’t deliver software to QA solid enough for testers to avoid spending the bulk of their time making sure the UI and the immediate interface with the database are working.

For years my schtick was to join a company as it transitioned from small to mid-sized, when it was feeling crushed by quality problems caused by mounting technical and defect debt. These companies had focused on getting to market fast but had grown a tangled mess of code. My response was always to grow the QA team, primarily by hiring automation engineers to build out large automated regression suites.

After doing that at a couple companies, I lost interest in the strategy. It was expensive, it took too much time, and it never moved the quality needle enough. It just seems absurd to me now to prop up years of thin development practices with more post-development testing, especially given that automated tests in QA generally work through the UI and are therefore brittle and slow.

The view from my deck after a particularly heavy rain. You’d better believe the sump pump was running.

It’s like when my home’s crawl space used to flood after each heavy rain. A company that specialized in drying out crawl spaces recommended $6,000 in a French drain, multiple sump pumps, and encapsulation to move the water out and keep the moisture from seeping up into the house. But a buddy of mine who builds houses said, “You’ve got a negative grade around your foundation. Buy $300 in topsoil and a couple cases of beer. Invite all your friends over and issue them shovels. Fix the grading and you’ll keep the water from getting in.” I went with the topsoil and the friends. I also put in one sump pump, just in case. It runs pretty much only when the rain is torrential.

In case my admittedly imperfect metaphor isn’t obvious, the graded topsoil is the unit testing, and the sump pump is the lightweight QA automation solution. Let’s try preventing the bugs from getting in as much as we can, shall we? But let’s still check for the odd and extreme cases that are bound to get by.

This is a hierarchy of testing similar to the one Mike Kelly recommends in this blog post. (He’s building on the work of Brian Marick, by the way, who gives a framework for testing in this blog post.) Mike recommends building on the order of thousands of automated unit and component tests, hundreds of automated business-logic tests, and tens of UI-level automated tests. The unit tests run fast and frequently. The inherently slow UI automated tests run far less often.
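One way to act on that split, sketched under the assumption of a pytest suite: mark the slow UI-level checks so that the fast tests run on every commit and the slow ones run on a schedule. The marker name and the commands are illustrative.

import pytest

def test_discount_math():
    # Unit-level check: runs in milliseconds, on every commit.
    assert 1000 * 85 // 100 == 850   # 15% off a $10.00 item, in cents

@pytest.mark.slow_ui
def test_checkout_through_browser():
    # UI-level check: slow and brittle, so it runs far less often.
    ...  # browser-driving steps would go here

Run pytest -m "not slow_ui" in continuous integration and pytest -m slow_ui nightly, and each layer of the pyramid gets exercised at a cadence it can sustain. (The marker would also need registering in pytest.ini to keep pytest from warning about it.)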

That’s what I resolved to try after I lost my will to build huge automated regression suites in QA. I deliberately took a QA leadership role with a company transitioning out of its startup phase. The product wasn’t yet too large or too saddled with technical and defect debt; I felt like we could make up the lost ground. After some encouragement from me, the fellow who ran engineering began insisting that developers write automated unit tests for all new code. We started building each release into an environment where developers could perform rudimentary testing on it themselves with realistic data. Their goal was to make sure all the happy paths worked, so that when my testers got in there they weren’t immediately stymied by obvious critical bugs.

Before we started to make this transition, I quietly started tracking a simple little metric. I counted the defects QA found, plus the defects created in the release that were found in production, and divided by the number of development hours in the release. The metric was a little mushy because I was working with estimated and not actual hours, and because I was having to make judgment calls about which defects in production were caused by the release and which were latent bugs. But it’s hard to ignore the order-of-magnitude improvement we got on this metric. We were tracking at about 0.3 defects per development hour in the couple of releases before we made these changes, and within two releases we dropped to about 0.05 to 0.09 defects per development hour and held steady.
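For concreteness, here’s the arithmetic with made-up but representative numbers; the real inputs were our bug tracker counts and the estimated hours from release planning.

def defects_per_dev_hour(qa_defects, prod_defects_from_release, dev_hours):
    return (qa_defects + prod_defects_from_release) / dev_hours

# Before the changes: roughly 0.3 defects per development hour.
defects_per_dev_hour(qa_defects=110, prod_defects_from_release=10, dev_hours=400)   # 0.30

# Two releases later: well under 0.1.
defects_per_dev_hour(qa_defects=25, prod_defects_from_release=3, dev_hours=400)     # 0.07

Mushy inputs and all, a drop of that size is hard to explain away as noise.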

This had incredible impact. Initial quality went way up in QA, meaning initial quality went way up in production. Just adding these two steps was like flipping a switch not only on many of the quality challenges we faced, but also on the amount of chaos and churn we experienced as an overall engineering team.

A side benefit was that developers seemed happier. It wasn’t that writing tests made them happy – it didn’t. They would rather have built more new stuff. But delivering better code into QA meant that they spent less time in the fix-test cycle and were interrupted by way fewer production crises. They told me that finally they could focus.

The reason why I titled this post as I did – and it’s meant to be tongue in cheek, by the way – is because my strategy means hiring more developers and fewer testers. But the benefit to testers is that they get to do far more interesting work, going deeper, thinking more creatively, and exploring more technical kinds of testing.

I can’t imagine ever moving to an all-code testing strategy. All automated testing can do is repeat series of actions. Skilled human testers can cope with complexity and adapt to change, gain and synthesize knowledge and apply it to their testing, know from experience where the product is likely to be broken, and explore the system creatively. The kinds of products I’ve always delivered and am likely to keep delivering will always need that.