Testing software is an important step in the software development lifecycle. However, I sometimes think it’s overdone. A lot of time and effort, which translates into money, is wasted testing every corner case. The problem with software is that if there are N variables and each on average has M values, then there are M^N combinations, and it is simply not possible to test all of them. It is, however, possible to identify a small set of critical test cases that covers this exponential space well. In other words, coverage is not simply a matter of having more test cases: a carefully chosen set of distinct test cases can often cover a broader space than a much larger, haphazard one. But this is a difficult task, it requires much closer coordination between developers and QA teams, and the problem is that this seldom happens.
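The idea that a carefully chosen set of cases can beat a larger haphazard one is roughly what pairwise (combinatorial) testing does: instead of every combination of values, pick cases so that every *pair* of parameter values appears together at least once. Here is a minimal sketch in Python, with made-up parameters and a naive greedy selection rather than a real pairwise tool:

```python
from itertools import combinations, product

# Made-up parameters for illustration: 3 * 3 * 3 * 2 = 54 combinations.
params = {
    "browser": ["chrome", "firefox", "safari"],
    "os": ["windows", "macos", "linux"],
    "locale": ["en", "de", "ja"],
    "plan": ["free", "pro"],
}

# Exhaustive testing: one case per combination (the M^N blow-up).
exhaustive = list(product(*params.values()))

def pairwise_suite(params):
    """Greedy pairwise selection: keep a case only if it covers some
    (parameter, value) pair not yet seen together in the suite."""
    names = list(params)
    uncovered = set()
    for (i, a), (j, b) in combinations(list(enumerate(names)), 2):
        for va in params[a]:
            for vb in params[b]:
                uncovered.add((i, va, j, vb))
    suite = []
    for case in product(*params.values()):
        new = {(i, case[i], j, case[j])
               for i, j in combinations(range(len(names)), 2)} & uncovered
        if new:
            suite.append(case)
            uncovered -= new
    return suite

suite = pairwise_suite(params)
# Far fewer cases than the 54 exhaustive ones, yet every pair of
# parameter values still appears together in at least one case.
print(len(exhaustive), "exhaustive vs", len(suite), "pairwise cases")
```

This greedy version still walks the full product internally, so it is only a sketch of the idea; dedicated tools build near-minimal pairwise suites far more efficiently.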
Large corporations can usually afford to spend a lot of money on quality testing. Or maybe that’s not correct, given the quality of software we see from some of them. Then again, perhaps they are spending a lot of money and effort, just not in the right way.
So, the question really is: what if you are a small company or just an individual service provider? Can you afford to spend so much time on testing? Since software development is often (wrongly, I think) compared to the construction industry, with people asking why there can’t be building blocks that simply plug and play, I am going to borrow an analogy from construction as well.
Say someone got a new house built. When they move in, would they pour sand down their toilet to see if it clogs? The point is, there is an expected behavior, and then there are all sorts of “I-want-to-break-this-thing” actions. The builder obviously takes no special steps to keep the homeowners from shooting themselves in the foot by doing something stupid and clogging their own toilet.
Yet there is a lot of emphasis on corner-case testing, or in some cases monkey testing, where random data is fed into the system. Part of the reason is that a QA team is considered to be doing its job based on the number of bugs it discovers. However, the measure should not be how many bugs are found before release, but how few are found by customers after the product is released. What good is it if customers still find bugs in spite of all those zillion test cases that were verified, filed as bugs, and fixed by developers? Why were the test cases the customer actually cares about never run?
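For readers unfamiliar with the term, a monkey test really is this simple. The function under test and the harness below are both made up for illustration, but they show the pattern: throw random input at the code and log whatever crashes.

```python
import random
import string

# Hypothetical function under test (invented for this sketch): return the
# first word of whatever the user typed, upper-cased.
def first_word(text):
    return text.split()[0].upper()

# A minimal "monkey test": hammer the function with random printable junk
# and record every input that crashes it.
random.seed(0)  # fixed seed so the run is reproducible
surprises = []
for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 10)))
    try:
        first_word(junk)
    except Exception as exc:
        surprises.append((junk, type(exc).__name__))

print(len(surprises), "crashing inputs found")
```

It will dutifully discover that empty or whitespace-only input raises an IndexError. But that tells you nothing about whether the feature helps anyone: a customer who never submits an empty field will never hit this bug, while the bug they do hit may not be reachable by random input at all.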
For small companies and individual service providers, the best approach is to focus more on the features that help the customer and less on handling every possible incorrect-data scenario. Customers are not interested in putting a string where a number is required; they are only interested in using the software to run their business, do their homework, or finish their freelancing gig. Whatever the case may be, your software is just another tool to get things done for the day. They have no passion for discovering issues with either their toilet or your software.