Noteworthy Firm News

All Non-Trivial Software Has Bugs:
Yet Misconceptions Still Exist that Interfere with Preventing, Finding & Fixing Them

By Warren S. Reid, Managing Director, WSR Consulting Group, LLC
Copyright © 1997, 2006, 2017. All Rights Reserved

While the testing of computer systems and software has certainly improved over the last two decades, a dozen mainstay misunderstandings and causes of testing failure persist which, taken together, help explain why “two-thirds of systems projects are scrapped or challenged.”

In my experience, QA, QC, Defect Prevention, Static Testing, Dynamic Testing, User Acceptance Testing, and Defect Removal tasks, if done properly on large-scale systems projects (including new development, ERP, and new-systems-to-legacy-systems integration), will consume 40-50% of the total planned/actual budget for the work to be performed.

Testing Misconceptions

Each numbered entry below gives the misconception (The Statement), followed by the truth (The Reality).
1. The Statement: Best testing methodologies and practices are only a guideline to be used at the Tester’s discretion.
   The Reality: False! Best testing methods, metrics & measures have been a long time coming. Once your organization commits to them, follow them. Any deviation requires full explanation/documentation and the approval of the QC, QA, and Test Managers.
2. The Statement: Test metrics are simply counting the number of high-priority defects left and fixing them.
   The Reality: False! Finding defects is tricky and requires a lot of thought.
   - Errors cluster – and should guide the types and levels of testing needed. You will need to determine what level of “test coverage” is acceptable.
   - Dynamic testing only captures up to 85% of the defects at best.
   - Static testing (e.g., inspections, reviews, walkthroughs) is needed to flag the other 15%.
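The layered effect of combining dynamic and static testing can be sketched by modeling each test stage as a filter that removes a fraction of the remaining defects. This is an illustration only; the 85% figure comes from the article, while the 1,000-defect starting count and the 60% static-inspection efficiency are assumed numbers for the example.

```python
# Illustrative sketch (numbers partly assumed): each test stage is treated as
# a filter that removes a fraction of the defects still in the system.
def remaining_defects(initial: int, stage_efficiencies: list[float]) -> int:
    """Defects expected to survive a sequence of test stages."""
    remaining = float(initial)
    for eff in stage_efficiencies:
        remaining *= (1.0 - eff)  # each stage misses (1 - efficiency)
    return round(remaining)

# Assumed: 1,000 defects; dynamic testing catches 85%; a static inspection
# pass then catches 60% of what dynamic testing missed.
print(remaining_defects(1000, [0.85, 0.60]))  # 1000 * 0.15 * 0.40 = 60
```

The point of the sketch is that no single stage reaches zero: dynamic testing alone leaves 150 defects behind, and only stacking independent techniques drives the residue down.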
3. The Statement: You don’t need industry or platform experience to manage a large, industry-specific testing project/phase.
   The Reality: False! You do need industry and platform experience to manage test development. The test teams should also have specific training in testing, and be certified as required. Programmers are not testers: programmers are “makers;” testers are “breakers.” Each requires its own mindset.
4. The Statement: You can manage large test efforts and defect logs with Excel, Word, PowerPoint, and email.
   The Reality: False! There are many good automated testing tools available that provide far more metrics and allow deeper analyses: e.g., time to correct defects; duplicate defects; old, allegedly fixed defects popping up again; whether the mix of error severities and priorities is normal; and where the errors are coming from (a module? a design team? a programming team? the standards?).
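The kinds of analyses described above can be made concrete with a small sketch. The records and field names below are hypothetical, not from any particular tool; they simply show metrics (time to fix, reopen rate, errors by module) that a spreadsheet makes painful but a structured defect log makes trivial.

```python
# Hypothetical defect-log records; all field names and dates are illustrative.
from collections import Counter
from datetime import date

defects = [
    {"id": 1, "module": "billing", "opened": date(2017, 3, 1), "closed": date(2017, 3, 4), "reopened": 0},
    {"id": 2, "module": "billing", "opened": date(2017, 3, 2), "closed": date(2017, 3, 9), "reopened": 1},
    {"id": 3, "module": "reports", "opened": date(2017, 3, 3), "closed": date(2017, 3, 5), "reopened": 0},
]

# Average time to correct a defect, in days.
avg_fix_days = sum((d["closed"] - d["opened"]).days for d in defects) / len(defects)

# Reopen rate: old, "allegedly fixed" defects popping up again.
reopen_rate = sum(1 for d in defects if d["reopened"]) / len(defects)

# Where the errors are coming from, by module.
by_module = Counter(d["module"] for d in defects)

print(avg_fix_days, round(reopen_rate, 2), by_module.most_common(1))
```

With three records the numbers are toy-sized, but the same queries over thousands of defects are exactly what dedicated test-management tools automate.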
5. The Statement: We know the status of our testing efforts – thus we don’t need to manage risks.
   The Reality: False! Any reported status showing the project moving toward “success” assumes that (1) no significant controllable risks will arise, (2) the baseline of system scope, functionality, quality, and schedule is fixed, and (3) the proper team resources and automated test tools will be assigned at the right time.
6. The Statement: “Aggregate” testing (i.e., overlapping test phases) will shorten the test cycle.
   The Reality: Rarely! This is a very poor technique and very, very hard to pull off successfully. It almost always increases the cost and time to complete.
7. The Statement: We only have 30 moderately critical errors left, down from 1,000 – it’s time to go live! In any event, we can fix them once we go live.
   The Reality: Not necessarily! The Go-Live decision is not about the number of defects fixed or remaining, but about the impact of the remaining defects (known, latent, hidden) on the system’s ability to meet the agreed-upon organizational, departmental, and user/manager goals and needs: e.g., improving customer service, being more productive, increasing employee efficiency, shortening delivery times to customers, meeting new regulations, etc.
8. The Statement: On large ERP installations, software customization is the vendor’s problem; software configuration is the customer’s problem.
   The Reality: False! ERP installations are very difficult. Almost 50% of the larger ones fail outright, i.e., are never installed, or are uninstalled shortly after Go-Live.
9. The Statement: It’s not possible to estimate the number of “potential defects before Go-Live,” or the number of defects that will be found post-Go-Live based on “Defect Removal Efficiency.”
   The Reality: WRONG! Modern methods, data, and research now allow savvy IT and testing professionals to estimate both with strong accuracy, based upon the quality of the system development efforts before Go-Live and the effectiveness of the testing and defect removal process post-Go-Live. So far as I know, only WSRcg has used this successfully in litigation matters.
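Defect Removal Efficiency itself is a simple ratio: the share of all defects (pre- plus post-Go-Live) that were removed before Go-Live. A minimal sketch, with the 970/30 counts chosen purely for illustration:

```python
# Defect Removal Efficiency (DRE): percentage of total defects removed
# before Go-Live. The counts used below are assumed example numbers.
def defect_removal_efficiency(found_before: int, found_after: int) -> float:
    """DRE as a percentage; found_after counts defects surfacing post-Go-Live."""
    total = found_before + found_after
    return 100.0 * found_before / total

# Assumed: 970 defects removed pre-Go-Live, 30 discovered after.
print(defect_removal_efficiency(970, 30))  # 97.0
```

Run in the other direction, the same ratio is what lets a practitioner project post-Go-Live defect counts: given an estimated defect potential and a measured DRE, the expected leakage into production follows directly.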
10. The Statement: A defect is a defect is a defect … (Part 1)
    The Reality: WRONG! There are many causes of system and testing errors, including: functional errors (the system doesn’t do what it’s supposed to, or does what it’s NOT supposed to do), calculation errors, training/user errors, data errors, ambiguous-requirements errors, command errors, inconsistent GUI errors, and many more. Understanding the root causes will not only allow one to address a single specific error, but also to address system-wide errors earlier.
11. The Statement: A defect is a defect is a defect … (Part 2)
    The Reality: False! Defects have priority levels. Priority is the order in which defects should be fixed: higher-priority defects are fixed first, and errors that leave the system unusable are given the highest priority.
12. The Statement: A defect is a defect is a defect … (Part 3)
    The Reality: False! Defects are typically classified by severity levels.
    - Critical: affects critical functionality or critical data, with no workarounds. E.g., a failed install or a complete feature failure.
    - Major: affects major functionality or major data. A workaround exists, but it is neither obvious nor easy. E.g., a feature that is not functional in one module but is doable if 5 complex, indirect steps are followed in another module(s).
    - Minor: affects minor functionality or non-critical data. Easy workarounds are available. E.g., a minor feature that is not functional in one module, but the same task is easily doable from another module.
    - Trivial: does not affect functionality or data, need a workaround, or impact productivity or efficiency; it is merely an inconvenience. E.g., petty layout discrepancies or spelling/grammatical errors.
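The severity and priority scales above can be sketched as ordered enumerations driving a fix queue. The class names, level names, and sample defects are illustrative assumptions, not a standard; the code simply shows priority ordering with severity as the tie-breaker, as described in Parts 2 and 3.

```python
# Illustrative severity/priority model; names and ordering are assumptions
# mirroring the four severity levels described above.
from enum import IntEnum

class Severity(IntEnum):
    TRIVIAL = 1
    MINOR = 2
    MAJOR = 3
    CRITICAL = 4

class Priority(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

defect_log = [
    ("spelling error on login page", Severity.TRIVIAL, Priority.LOW),
    ("install fails on clean machine", Severity.CRITICAL, Priority.HIGH),
    ("report needs 5-step workaround", Severity.MAJOR, Priority.MEDIUM),
]

# Priority is the order in which defects are fixed: highest priority first,
# with severity breaking ties.
fix_queue = sorted(defect_log, key=lambda d: (d[2], d[1]), reverse=True)
print([title for title, _, _ in fix_queue])  # the install failure comes first
```

Keeping severity (impact) and priority (fix order) as separate axes is the point: a cosmetic error on the CEO's demo screen can be high priority yet trivial severity, and the two scales let the log express that.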
Bonus Point:
13. The Statement: Testing is the most underestimated and least understood of the SDLC phases.
    The Reality: TRUE! Especially when left to the end of the project, testing gets shortened or, worse, cut to meet a promised Go-Live date, oftentimes by someone without heavy test-planning experience who doesn’t understand the level of risk the client organization is willing to take.
