Book Review: Perfect Software and Other Illusions About Testing


This book, Perfect Software and Other Illusions About Testing by Gerald M. Weinberg, was also recommended to me, and it turned out to be an excellent book about software testing. I noticed that his books were extensively cited by the authors of the previous book I read, Lessons Learned in Software Testing: A Context-Driven Approach by Cem Kaner, James Bach and Bret Pettichord, so I am looking to read at least one more of Weinberg's books.

What I liked most about this book is the simplicity of the language and the examples. Weinberg really tries to appeal to a broad audience and explains his points through understandable scenarios. I especially enjoyed chapters 11 through 14, which deal heavily with information gathering and the psychology of getting, interpreting and acting on data. A lot of what's in this book also appeared in Lessons Learned in Software Testing: A Context-Driven Approach.

The 'scams' chapter was also a useful collection of teaching stories. One that stuck with me was about people who gamed a bugfest: they colluded with one another, one planting bugs and the other 'finding' them, and they apparently made enough money to buy a powerboat. In another case, testers were rewarded for being the fastest bug-clearers, so a QA person would tell the developer about bugs ahead of time, giving the developer extra time to fix each bug before it was officially reported.

I found chapter 8, which is about what makes a good test, particularly helpful to keep in mind.

Common Mistakes
1) Not thinking about what information you’re after: Testing is difficult enough when you do think about what you’re after, and more or less impossible when you don’t. You won’t often know in advance what information you seek—in fact, in most instances, you’ll have only an approximate idea. So, you need to think about your ultimate testing goals and about how you’re going to learn what other information you’re going to want.
2) Measuring testers by how many bugs they find: Testers will respond to this kind of measurement, but probably not the way you intend them to. The quantity of bugs found will increase, but the quality of information harvested will diminish.
3) Believing you can know for sure how good a test is: Be vigilant and skeptical when evaluating the accuracy and appropriateness of a test. If you aren’t, you’re going to get slapped down by the nature of the universe, which doesn’t favor perfectionism.
4) Failing to take context into account: There are few, if any, tests that are equally significant in all circumstances. If you don’t take potential usage patterns into account, your tests will be ineffectual.
5) Testing without knowledge of the product’s internal structure: There are an infinite number of ways to replicate specific behavior on a finite series of tests. Knowing about the structure of the software you’re testing can help you to identify special cases, subtle features, and important ranges to try—all of which help narrow the inference gap between what the software can do and what it will do during actual use. Charles Babbage, the designer of the first mechanical computer, knew this almost 200 years ago, so there’s no reason for you not to know it.
6) Testing with too much knowledge of the product’s internal structure: It’s too easy to make allowances for what you think you know is going on inside the black box. Typical users probably won’t know enough to do that, so some of your tests had better simulate activities likely to be performed by naive users.
7) Giving statistical estimates of bugs as if the numbers were fixed, certain numbers: Always give a range when stating the estimated number of bugs (for example, say, “There are somewhere in the range of thirty to forty bugs in this release.”). Even better, give a statistical distribution, or a graph.
8) Failing to apply measures of “badness” to your tests: Use a checklist, asking questions such as the ones in this chapter.
9) Not ensuring that development is done well: Poorly developed code needs good testing but usually receives poor testing, thus compounding problems. What use are good tests of shoddy code?
[..]
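Item 7's advice about reporting ranges rather than fixed numbers can be made concrete with a small sketch. This is my own illustration, not code from the book; the `estimate_range` helper and the sample numbers are hypothetical, and it assumes a simple normal approximation over several independent point estimates.

```python
# Hypothetical sketch of item 7: report a bug estimate as a range,
# not as a single fixed-sounding number. Uses a plain normal
# approximation around the mean of several independent estimates.
import statistics


def estimate_range(samples, z=1.96):
    """Return a (low, high) interval from repeated bug-count estimates."""
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)          # sample standard deviation
    margin = z * sd / len(samples) ** 0.5   # ~95% margin under normal assumption
    return (round(mean - margin), round(mean + margin))


# e.g. independent estimates of remaining bugs from five testers
estimates = [32, 38, 35, 41, 29]
low, high = estimate_range(estimates)
print(f"There are somewhere in the range of {low} to {high} bugs in this release.")
# → "There are somewhere in the range of 31 to 39 bugs in this release."
```

Weinberg's "even better" suggestion of a distribution or graph would follow the same idea: show the spread of the estimates rather than collapsing them into one number.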


