Creating automated tests can be very difficult, especially when the code has gotten long in the tooth and was not created with automated tests to begin with. Many product development teams don’t invest in automated tests. They think they cannot afford them. They think their product is different and can’t be tested automatically. This thinking is flawed.
Back in the product's younger days, manual testing was not too time-consuming. But slowly that changed. As the system grows, the manual test effort grows. Eventually, it seems that no amount of manual test effort finds all the problems.
In this article I show a simple model that illustrates why manual testing is unsustainable and why a sustainable software product development effort must include considerable test automation.
For starters, let’s look at a simple model of a development process that relies mostly on manual tests. Let’s make an assumption, the assumption depicted in this graph:
Explaining the graph: if we spend 10 units of development effort to create a set of new features, let’s say it takes only 5 units of effort to test the new functionality. Said another way, test effort is proportional to development effort by some constant factor.
Wait a second; we know it’s not that simple. The coefficient is certainly wrong, and to think that all development effort is equal is wrong, but it’s a simple model that can help us explore the folly of manual testing. In addition, the simple model is implicit in the workings of most companies.
Companies tend to have fixed resources, I mean people, building and testing their products. There are the developers and the testers. Or maybe there are developers who switch hats and spend some of their time testing. Either way, there is a de facto develop:test coefficient in practice at your workplace.
What’s the problem?! Everyone knows that software, once you make it work, stays working without any further effort. [Huh, what did he say?] Please don’t quote me out of context, I’m just being sarcastic. Let’s pull a couple of bits of wisdom from The Systems Bible to make you more afraid of changing code that is not covered by automated tests.
The Systems Bible says, “If a system is working, leave it alone.” “Systems don’t like being fiddled with.” Well put, John Gall. I guess all us programmers are out of work. Our business is to change and evolve software. We need to find a way to accept the warnings from The Systems Bible, and continue to advance our systems.
RB Grady says that 25% of the bugs in your bug list are the consequence of someone adding a new feature or fixing a bug, while not realizing that they broke some other seemingly random thing. So, we can’t just assume that prior iterations’ features are going to work; we have to retest them. Let’s refine the model.
Every iteration, not only does the new functionality have to be tested, but the prior iterations’ working features have to be retested too. No problem, just add some more resources; I mean plug-compatible testing units; I mean people with the needed skills and knowledge.
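The refined model can be sketched as a short simulation. The function name, the 10:5 dev-to-test ratio, and the iteration count below are illustrative assumptions for the toy model, not numbers from any real project:

```python
def manual_test_load(iterations, dev_per_iteration=10, test_ratio=0.5):
    """Per-iteration and cumulative manual test effort when every prior
    iteration's features must be retested alongside the new ones."""
    per_iteration = []
    for i in range(1, iterations + 1):
        # This iteration we must test everything built so far: i batches
        # of features, each costing test_ratio * dev_per_iteration to test.
        per_iteration.append(test_ratio * dev_per_iteration * i)
    return per_iteration, sum(per_iteration)

per_iter, total = manual_test_load(10)
print(per_iter[0], per_iter[-1])  # 5.0 50.0 -- per-iteration effort grows linearly
print(total)                      # 275.0 -- cumulative effort grows quadratically
```

Notice that with a fixed team, the per-iteration test bill keeps rising while the people available to pay it do not.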
The reality of the situation is that we don’t have unlimited budgets, and even if we did, we could not hire people with the needed skills and knowledge fast enough to keep the product completely tested. That means we have to face the reality of the ever-growing untested code gap.
The untested code gap kills productivity and predictability. We don’t get to choose where the bugs go, making them easy to find; they choose their hiding places, and have no mercy on us. They flare up and put teams into firefighting mode.
Are we beat? Do we have to put up with buggy products and long test-and-fix cycles? No, we do not! We must accept that a product test strategy based on manual testing is unsustainable. Unsustainable systems eventually collapse. We can’t afford not to automate much of our software testing.
Well, simply put: if development effort grows like N(t), testing effort (due to regression tests) grows at least like N(t)^2.
I wrote N(t) because N is a function of the time argument t.
Then, it’s easy to see that manual testing is not sustainable.
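A quick sketch can spell out the arithmetic behind this claim. Assume, purely for illustration, a constant development effort d per iteration and a retest cost factor k (neither value appears in the post): after t iterations the total development effort is N(t) = d·t, while retesting everything each iteration costs k·d·i at iteration i, so cumulative test effort is the triangular sum k·d·t(t+1)/2, quadratic in t and hence proportional to N(t)²:

```python
# Illustrative check of the quadratic-growth claim; d, k, t are assumed values.
d, k, t = 10, 0.5, 100                 # dev effort/iteration, test ratio, iterations
N = d * t                              # total development effort so far
cumulative = sum(k * d * i for i in range(1, t + 1))
closed_form = k * d * t * (t + 1) / 2  # triangular-number closed form
print(cumulative, closed_form)         # both 25250.0, roughly (k / (2 * d)) * N**2
```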
Please do consider the context of the application you’re talking about. Please do not over generalize the software testing effort.
Jaejin, please say more about your objection. Of course I am generalizing. I also am not saying that manual testing is not needed. My purpose with the post is to provide a simple model that helps to argue for investing in test automation. Not doing so means that defects will sneak in through the untested code gap.
Of course I fully endorse these statements; it is an absolute fact. However, most of our projects have until now relied on getting the systems to market a different way. This makes the justification for changing the development process harder. Time and budget constraints in the past have put more emphasis on integration testing. If it passes the integration test then we’re good, right? (Not all the time, as it happens, but a good percentage.)
So this year we will automate our testing; it’s been mandated. I can’t see anything but good coming out of this!
I think you’re oversimplifying the matter here.
As systems become larger and more complex the development effort needed would also grow at a steady pace. Especially with dependent code structures. This would increase the amount of automated testing & manual testing by default.
There are many things that automated tests cannot check without significant updates and maintenance, and while I’m not saying that you can do everything with manual testing, the need for manual testing will always be there.
Automated tests aren’t tests. They are checks: checks to be sure that something is still behaving like it used to. When systems change, these checks will often fail simply because what they were checking no longer behaves the same way.
This adds time to the automation process as well due to maintenance considerations to keep the checks up to date.
As your percentage of automated checks grows closer to 100% of your testing, the reliability of those checks and their coverage approaches zero unless you continuously have manual tests supporting new automated checks (by adding to them).
However, a good practice of developers building good unit tests around their code adds to the sustainability and reliability of the product.
To say that manual testing is unsustainable is irresponsible without the proper context surrounding the statement. I don’t feel you have provided the proper context, in either your statements or their scope.
Yes, it is a simplification. Typically, models are simplifications.
Having done all the things you are talking about, I’ll stand by my statement. I’ll clarify it a bit too: a product development team that relies on manual tests as its way to assure that the product is working is unsustainable.
Does this mean that there are no products that can be and are being tested manually? That would be silly. Or that no manual tests are needed? I never claimed that; it also would be silly. We agree that manual tests are needed even though I did not state it. I thought it could go without saying.
My motivation in writing the article was to show a simple model that illustrates that a manual test strategy leads to an ever-increasing hunger for more manual tests. We cannot satisfy that hunger, so we’ll get bugs, and then more bugs. It seems that way too many organizations have concluded that they cannot afford to automate tests, unit or otherwise. I am suggesting that they cannot afford not to automate.
Thanks for the comments.
I agree with Dez. Very well said; manual testing needs will always be there. I would like to get some comments from someone who has expertise in manual software testing; he or she can certainly confirm how manual testing is always required. We can use automation, but not everywhere and anywhere.
I think you overlook 2 key points:
1. Automated tests are more than just checks; they help the developers create a better design. By creating unit tests, it will be in the developers’ interest to create more decoupled code. It’s in their own interest to make a design that is easier to test. As a result, all bugfixes are easier in the future, because the code is flatter and simpler. If something is hard to test, it means there are too many dependencies in the architecture.
2. You are going too far in saying that James’ statement on manual testing is irresponsible, given that it was just part of a thought experiment. Manual testing is useful primarily because it gives the output a fresh look. Developers are usually too close to the code to spot certain types of bugs, and this can’t be automated at least initially, because you can’t fix a bug you don’t see. However, any bug found in manual testing could theoretically be included in an automation suite if it makes sense. I do agree with you though, that having zero manual testing is not a good idea.
As has been said, the context of the post is key here.
To get a modern software testing view of this issue I would implore people to read the following article describing what software testers view as the difference between testing and checking http://www.satisfice.com/blog/archives/856
Just to call out one comparison from the article, checking is to testing as compiling is to programming.
One of the great benefits of TDD is that it has removed a lot of the “old-school mindless factory style” testing from software testers’ workload, allowing us to focus on using our intelligence to test the areas that developers probably would never think of.
One of the great tragedies of TDD is that it seems to have become viewed as the be-all and end-all. It is not.
I agree with dez as well. This is overall a poor blog post and does not really give the reader much. Thanks anyway..