David Norton, Executive Director, CISQ
Note: This blog first appeared on Dave's LinkedIn on September 3, 2019
In the course of running the CISQ survey on Software Quality Analysis, I have had some very interesting discussions on the pros and cons of static and dynamic analysis. Discussions have touched on the problems of false negatives and positives, the cost of tooling, and so on, but one subject has come up repeatedly: the “Chlorinated Chicken” argument!
At this point you may be asking, what does chlorinated chicken have to do with quality analysis? Well, let me explain.
In the US, chicken is washed with chlorine to kill salmonella and other bacteria at the end of the poultry production process; it’s a safe and effective process. After all, we have chlorine in our water. In Europe, washing chicken in chlorine is banned. The EU argues it could lead to poor hygiene in production processes upstream of chlorine washing, for example, in practices at farms or abattoirs. The EU’s thinking is that if farmers and abattoir workers know the final chlorine wash will kill the bacteria, they will not be as careful with hygiene in their own part of the process.
Now, ignoring all the arguments about “is it really food hygiene or EU protectionism?” and “let’s just stop eating chickens anyway,” washing chicken with chlorine poses an interesting question: does an improvement in quality practices in one area lead to a drop in quality practices in another?
So, back to software quality analysis. A number of the scrum masters I have spoken to said they do not like static code analysers because they give their team a false sense of security, and that good developers should be able to cut quality code without a tool telling them whether it is good or not. Others have told me they insist their team run a static code analyser during each sprint, integrated into the toolchain, but still expect the developers to cut good code in the first place.
What surprises me is that this “Chlorinated Chicken” polarization exists at all. After all, using TDD or BDD with high levels of automation, i.e., tools, is an established best practice, so why the debate over code quality analysis? Is the number of false results too high? Do the tools take too long to run? Does it really lead to lazy programming? Or is it something more fundamental: our general attitude towards non-functional requirements?
For me, coming from a defence and safety-critical background, I will take all the help I can get, and at my age I can’t remember 800+ CWEs. (I celebrate if I go into the next room and remember why I went in.)
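To make that concrete, here is a minimal Java sketch (my own illustrative example, not from any particular tool or codebase; the class and method names are invented) of one entry in that catalogue, CWE-89, SQL injection. It is exactly the sort of pattern a static code analyser flags every time, whether or not the reviewer has had their coffee:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UserLookup {

    // CWE-89: user input is concatenated straight into the SQL text, so
    // input such as "x' OR '1'='1" returns every row in the table.
    public ResultSet findUserUnsafe(Connection conn, String name) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
    }

    // The remediation: a parameterised query keeps the input out of the
    // SQL text entirely, which is precisely what the analyser checks for.
    public ResultSet findUserSafe(Connection conn, String name) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        stmt.setString(1, name);
        return stmt.executeQuery();
    }
}

The point is not that a good developer could never spot this by eye; it is that a tool never gets tired of spotting it, across every one of those 800+ weakness categories.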
What do you think?
The survey is still open, so please take it and help us improve software quality practices and standards.
https://www.it-cisq.org/state-of-the-nation-survey.htm