Not planning is for Losers

Only the ignorant don’t plan their code pathways before they write them. Unless you are implementing classes made up of nothing but getters and setters, code needs to be planned.

The Path Least Traveled

The total number of pathways through a software system grows so quickly that it is hard to imagine their total number. If a function X() with 9 pathways calls function Y() which has 11 pathways, then the composition X() ° Y() will have up to 9 x 11 = 99 possible pathways. If function Y() calls function Z() with 7 pathways, then X() ° Y() ° Z() will have up to 9 x 11 x 7 = 693 pathways. The numbers multiply quickly: a call depth of 8 functions, each with 8 pathways, means 8^8, or roughly 16.8 million, different paths; the number of possible pathways in a system is exponential in the depth of the call tree. Programs, even simple ones, have hundreds, thousands, or even millions of pathways through them.
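As a rough sketch of the arithmetic (the function names and pathway counts are just the hypothetical ones from the paragraph above), the worst-case path count is simply the product of the per-function pathway counts along a call chain:

```java
// Hypothetical illustration: the worst-case number of paths through a call
// chain is the product of each function's pathway count.
public class PathCount {
    public static void main(String[] args) {
        int[] pathwaysPerFunction = {9, 11, 7};   // X(), Y(), Z() from the text
        long total = 1;
        for (int pathways : pathwaysPerFunction) {
            total *= pathways;                    // 9 * 11 = 99, then * 7 = 693
        }
        System.out.println(total);                // 693

        // A call depth of 8 functions, each with 8 pathways: 8^8 paths.
        long deepChain = 1;
        for (int i = 0; i < 8; i++) {
            deepChain *= 8;
        }
        System.out.println(deepChain);            // 16777216 (about 16.8 million)
    }
}
```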

Negative vs Positive Assurance

Quality assurance can only come from the developers, not the testing department. Testing provides negative assurance, which is only a statement that “I don’t see anything wrong”; it doesn’t mean that everything is correct, just that the testers can’t find a problem. Positive assurance is a guarantee that the code will execute down the correct pathways, and only the developer can provide that. Quality assurance comes from adopting solid practices to ensure that code pathways are laid down correctly the first time.

Any Line of Code can be Defective

If there are 10 pathways through a function then there must be branching statements based on variable values to direct program flow down each of those pathways. Each pathway may compute variable values that are used in calculations or decisions downstream, and each downstream function can potentially have its behavior modified by any upstream calculation. When code is not planned, an error may cause execution to compute a wrong value. If you are unlucky, that wrong value is used to make a decision that sends the program down the wrong pathway. If you are really unlucky, you go a very long way down the wrong pathways before you even identify the problem. If you are really, really, really unlucky, not only do you go down the wrong pathway but the data gets corrupted, and it takes you a long time to recognize the problem in the data. It takes less time to plan code and write it correctly than it takes to debug complex pathways.

Common Code Mistakes

Defects are generally caused by one of the following conditions:

  1. incorrect implementation of an algorithm
  2. missing pathways
  3. choosing the wrong pathway based on the variables

1) Incorrect implementation of an algorithm computes a wrong value from its inputs. The damage is localized if the value is computed inside a decision statement; however, if the value is stored in a variable then damage can happen everywhere that value is used. Example: a bad decision at node 1 causes execution to flow down path 3 instead of path 2.

2) Missing pathways have to do with conditions. If you have 5 business conditions and only 4 pathways then one of your business conditions will go down the wrong pathway and cause problems until you detect it. Example: there were really 5 pathways at node 1, but you only coded 4 (see the sketch below).

3a) The base values are correct but you select the wrong pathway, which can lead to future values being computed incorrectly. Example: at node 10 you correctly calculate that you should take pathway 11 but end up going down 12 instead.

3b) You might also select the wrong pathway because insufficient information existed at the time you needed to make the decision. Example: insufficient information at node 1 causes execution to flow down path 3 instead of path 4.

The last two issues (2 and 3) can be a failure of either development or requirements. In both cases somebody failed to plan…
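Here is a minimal sketch of the “missing pathway” case (the order types and routing strings are hypothetical): an exhaustive branch with a failing default makes the fifth, uncoded condition fail loudly at the decision node instead of silently flowing down the wrong path.

```java
// Hypothetical order-handling example: five business conditions exist, but
// only four pathways were planned. A failing default turns the missing
// pathway into an immediate, visible error instead of a silent wrong branch.
public class ShippingRouter {
    enum OrderType { STANDARD, EXPRESS, INTERNATIONAL, GIFT, BULK } // BULK added later

    static String route(OrderType type) {
        switch (type) {
            case STANDARD:      return "ground";
            case EXPRESS:       return "air";
            case INTERNATIONAL: return "customs-broker";
            case GIFT:          return "gift-wrap-then-ground";
            default:
                // Missing pathway detected at the decision node, not downstream.
                throw new IllegalStateException("No pathway coded for " + type);
        }
    }
}
```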

What if it is too late to Plan

Whenever you are writing a new section of code, take advantage of the ability to plan the code before you write it. If you are dealing with code that has already been written, take advantage of inspections to locate and remove defects. Don’t wait for defects to surface: proactively inspect all code, especially buggy modules, and fix all of the code pathways. Code inspections can raise productivity by 20.8% and quality by 30.8%.

Code Solutions

The Personal Software Process (PSP) has a specific focus that every code section should be planned by the developer before it is implemented. That means that you sit down and plan your code pathways on paper or a whiteboard before touching the keyboard. Ideally you should spend the first part of your day planning with your colleagues how best to write your code pathways. The time that you spend planning will pay you dividends in time saved. PSP can raise productivity by 21.2% and quality by 31.2%.

If you insist on writing code at the keyboard then you can use pair programming to reduce errors. With a second pair of eyes on your code, algorithmic mistakes are less likely and decisions about conditions are looked at by two people. The problem is that pair programming is not cost effective overall. Pair programming can raise productivity by 2.7% and quality by 4.5%.

Studies confirm that code sections of high cyclomatic complexity have more defects than other code sections. At a minimum, any code section that will have high cyclomatic complexity should be planned by two or more people. If this is not possible, then reviewing sections of high cyclomatic complexity can reduce downstream defects. Automated cyclomatic complexity analysis can raise productivity by 14.5% and quality by 19.5%.

Design Solutions

All large software projects benefit from planning pathways at the macroscopic level. Design or architectural planning is essential to making sure that the lower-level code pathways will work well. Formal architecture for large applications can raise productivity by 15.7% and quality by 21.8%.

Requirements Solutions

Most pathways are not invented in development. If there is insufficient information to choose a proper pathway, or there are insufficient pathways indicated, then this is a failure of requirements. Here are several techniques to make sure that the requirements are not the problem.

Joint application design (JAD) brings the end-users of the system together with the system architects to build the requirements. With end-users present you are unlikely to forget a pathway, and with the architects present you can put technical constraints on the end-users’ wish list for things that can’t be built. The resulting requirements should have all pathways properly identified along with their conditions. Joint application design can raise productivity by 15.5% and quality by 21.4%.

Requirements inspections are the best way to make sure that all necessary conditions are covered and that all decisions the code will need to make are identified before development. Not inspecting requirements is the surest way to discover that there is a missing pathway or calculation after testing. Requirements inspections can raise productivity by 18.2% and quality by 27.0%.

Making sure that all pathways have been identified by requirements planning is something that all organizations should do. Formal requirements planning will help to identify all the code pathways and necessary conditions; however, it only works when the business analysts/product managers are skilled (which is rare 🙁 ). Formal requirements analysis can raise productivity by 16.3% and quality by 23.2%.


Other articles in the “Loser” series

Want to see more sacred cows get tipped? Check out

Moo?

Make no mistake, I am the biggest “Loser” of them all.  I believe that I have made every mistake in the book at least once 🙂


Defects are for Losers

A developer is responsible for using any and all techniques to make sure that he produces defect free code.  The average developer does not take advantage of all of the following opportunities to prevent and eliminate defects:

  1. Before the code is written
  2. As the code is written
  3. Writing mechanisms for early detection
  4. Before the code is executed
  5. After the code is tested

The technique that is used most often is #5 above and will not be covered here.  It involves the following:

  1. Code is delivered to the test department
  2. The test department identifies defects and notifies development
  3. Developers fire up the debugger and try to chase down the defect

Like the ‘rinse and repeat‘ process on a shampoo bottle, this process is repeated until the code is cleaned or until you run out of time and are forced to deliver.

The almost ubiquitous use of #5 leads to CIOs and VPs of Engineering assuming that the metric of one tester to two developers is a good thing.  Before assuming that #5 is ‘the way to go‘ consider the other techniques and statistical evidence of their effectiveness.

Before the Code is Written



A developer has the most options available to him before the code is written.  The developer has an opportunity to plan his code, however, there are many developers who just ‘start coding’ on the assumption that they can fix it later.

How much of an effect can planning have?  Two methodologies that focus directly on planning at the personal and team level are the Personal Software Process (PSP) and the Team Software Process (TSP) invented by Watts Humphrey.

PSP can raise productivity by 21.2% and quality by 31.2%

TSP can raise productivity by 20.9% and quality by 30.9%

Not only does the PSP focus on code planning, it also makes developers aware of how many defects they actually create.  Here are two graphs that show the same group of developers and their defect injection rates before and after PSP training.

[Graphs: defect injection rates before PSP training and after PSP training]

The other planning techniques are:

  • Decision tables
  • Proper use of exceptions

Both are covered in the article Debuggers are for Losers and will not be covered here.

As the Code is Written

Many developers today use advanced IDEs that prevent common syntax errors from occurring. If you cannot use such an IDE, or the IDE does not provide that service, then some of the techniques in the PSP can be used to track the syntax errors you inject and reduce them.

Pair Programming

One technique that can be used while code is being written is Pair Programming. Pair programming is heavily used in eXtreme Programming (XP). Pair programming not only allows code to be reviewed by a peer right away but also makes sure that there are two people who understand the code pathways through any section of code.

Pair programming is not cost effective overall (see Capers Jones). For example, it makes little sense to pair program code that is mainly boilerplate, i.e. getter and setter classes. What does make sense is that during code planning it will become clear which routines are more involved and which ones are not. If the cyclomatic complexity of a routine is high (>15) then it makes sense to use pair programming.

If used for all development, Pair Programming can raise productivity by 2.7% and quality by 4.5%

Test Driven Development




Test driven development (TDD) is advocated by Kent Beck, who stated in 2003 that TDD encourages simple designs and inspires confidence. TDD fits into the category of automated unit testing.

Automated unit testing  can raise productivity by 16.5% and quality by 23.7%
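A minimal TDD-style sketch (JUnit 5 assumed; the Discount class and its rules are hypothetical): one test is written per intended pathway before the implementation exists, and the implementation is then written only to make the tests pass. Hooked into a build server, tests like these become the automated unit testing the statistic above refers to.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical example: three pathways (invalid, small, large order) means
// three tests written first; the implementation below is written afterwards,
// only to make these tests pass.
class DiscountTest {
    @Test void rejectsNegativeTotals() {
        assertThrows(IllegalArgumentException.class, () -> Discount.forTotal(-1.0));
    }
    @Test void smallOrdersGetNoDiscount() {
        assertEquals(0.0, Discount.forTotal(50.0), 1e-9);
    }
    @Test void largeOrdersGetTenPercent() {
        assertEquals(10.0, Discount.forTotal(100.0), 1e-9);
    }
}

class Discount {
    static double forTotal(double total) {
        if (total < 0) throw new IllegalArgumentException("total must be >= 0");
        return total >= 100.0 ? total * 0.10 : 0.0;
    }
}
```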

Writing Mechanisms for Early Detection

Defects are caused by programs either computing wrong values, going down the wrong pathway, or both. The nature of defects is that they tend to cascade and get bigger the greater the distance in time and space between the source of the defect and the first noticeable effects of the defect.

Design By Contract

One way to build checkpoints into code is to use Design By Contract (DbC), a technique that was pioneered by the Eiffel programming language. It would be tedious and overkill to use DbC in every routine in a program; however, there are key points in every software program that get used very frequently.

Just like the roads that we use have highways, secondary roads, and tertiary roads — DbC can be used on those highways and secondary roads to catch incorrect conditions and stop defects from being detected far away from the source of the problem.

Clearly very few of us program in Eiffel. If you have access to Aspect Oriented Programming (AOP) then you can implement DbC via AOP. Today there are AOP implementations, either as a language extension or as a library, for many current languages (Java, .NET, C++, PHP, Perl, Python, Ruby, etc.).
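Even without an AOP framework, plain guard clauses and assertions give some of the same benefit. A minimal sketch (the Account class is hypothetical; the postcondition assertions require running the JVM with -ea):

```java
// Minimal Design by Contract sketch in plain Java (no AOP framework assumed):
// preconditions and postconditions are checked at the entry and exit of a
// frequently used "highway" routine, so a violation is reported at the source.
public class Account {
    private long balanceCents;

    public void withdraw(long amountCents) {
        // Preconditions: the caller must respect the contract.
        if (amountCents <= 0) throw new IllegalArgumentException("amount must be positive");
        if (amountCents > balanceCents) throw new IllegalStateException("insufficient funds");

        long before = balanceCents;
        balanceCents -= amountCents;

        // Postconditions: the routine must respect its own contract.
        assert balanceCents == before - amountCents : "balance not reduced correctly";
        assert balanceCents >= 0 : "balance went negative";
    }
}
```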

Before the Code is Executed

Static Analysis

Most programming languages lend themselves to static analysis, and there are cost-effective static analysis tools for virtually every language.

Automated static analysis can raise productivity by 20.9% and quality by 30.9%

Inspections


Of all the techniques mentioned above, the most potent pre-debugger technique is inspections. Inspections are not sexy and they are very low tech, but the results achieved by organizations that do software inspections border on miraculous. The power of software inspections can be seen in these two articles:

Code inspections can raise productivity by 20.8% and quality by 30.8%

Design inspections can raise productivity by 16.9% and quality by 24.7%

From the Software Inspections book on p.22.

In one large IBM project, one half million lines of networked operating system, there were 11 development stages (document types: logic, test, user documentation) being Inspected.  The normal expectation at IBM, at that time, was that they would be happy only to experience about 800 defects in trial site operation.  They did in fact experience only 8 field trial defects.

Evidence suggests that every 1 hour of code inspection will reduce testing time by 4 hours

Conclusion

Overworked developers rarely have time to do research, even though it is clear that there is a wealth of information available on how to prevent and eliminate defects. The bottom line is that if you are only using technique #5 from the initial list, then you are not using every technique available to you to go after defects. My opinion only, but:

A professional software developer uses every technique at his disposal to prevent and eliminate defects

Other articles in the “Loser” series

Want to see more sacred cows get tipped? Check out:

Make no mistake, I am the biggest “Loser” of them all.  I believe that I have made every mistake in the book at least once 🙂

References

Gilb, Tom, and Graham, Dorothy. Software Inspections.

Jones, Capers. Scoring and Evaluating Software Methods, Practices, and Results. 2008.

Radice, Ronald A. High Quality Low Cost Software Inspections.


NO Experience Necessary!!!

Did you know that we have never found a relationship between a developer’s years of experience and code quality or productivity?

The original study that found huge variations in individual programming productivity was conducted in the late 1960s by Sackman, Erikson, and Grant (1968).

This study has been repeated at least 8 times over 30 years and the results have not changed! (see below)

Sackman et al studied professional programmers with an average of 7 years’ experience and found that:

  • the ratio of initial coding time was about 20 to 1
  • the ratio of debugging times was over 25 to 1
  • the ratio of program execution speed was about 10 to 1
  • the ratio of program size was about 5 to 1

They found no relationship between a programmer’s number of years of experience and code quality or productivity. That is, there was NO correlation between experience and productivity (i.e. the ability to produce code) and NO correlation between experience and quality (i.e. minimizing defects).

Think about that for a minute…

That is, the worst programmers and the best programmers formed distinct groups, and each group contained people with both low and high experience levels. Whether training helps developers is not indicated by these findings, only that years of experience do not matter.

Without considering legality, this means that it is simpler to get rid of expensive poor performers with many years of experience and hire good performers with few years of experience!

Results Have Been Confirmed for 30 Years!

There were flaws in the study; however, even after accounting for the flaws, their data still shows more than an order of magnitude difference between the best programmers and the worst, and that difference was not related to experience. In the years since the original study, the general finding that “there are order-of-magnitude differences among programmers” has been confirmed by many other studies of professional programmers (full references at the end of the article):

  • Curtis 1981
  • Mills 1983
  • DeMarco and Lister 1985
  • Curtis et al. 1986
  • Card 1987
  • Boehm and Papaccio 1988
  • Valett and McGarry 1989
  • Boehm et al 2000

Technology is More Sophisticated, Developers are not

You might think that we know much more about software development today than we knew in 1968; after all, today:

  • we have better computer languages
  • we have more sophisticated technology
  • we have better research on effective software patterns
  • we have formal software degrees available in university

It turns out that all these things are true, but we still have order-of-magnitude differences among programmers, and the difference is not related to years of experience. That means that there is some other x-factor that drives productive developers; that x-factor is probably the ability to plan and make good decisions.

The bad news is that if you are not a productive developer writing quality code, then you will probably not get better simply because of years of experience.

Developers face making decisions on how to structure their code every day.  There is always a choice when it comes to:

  • laying out code pathways
  • packaging functions into classes
  • packaging classes into packages/modules

Because developers face coding decisions, many of which are complex, the best developers plan their work and make good decisions. Bad developers just ‘jump in’; they assume that they can always rewrite code or make up for bad decisions later. Bad developers are not even aware that their decision processes are poor and that they could become much better by planning their work.

Solution might be PSP and TSP

Watts Humphrey tried to get developers to understand the value of estimating, planning development, and making good decisions through the Personal Software Process (PSP) for individuals and the Team Software Process (TSP) for teams, but only a handful of organizations have embraced them. Capers Jones has analyzed over 18,000 projects and discovered that1:

PSP can raise productivity by 21.2% and quality by 31.2%
TSP can raise productivity by 20.9% and quality by 30.9%

All of these findings should have a profound effect on the way that we build our teams. Rather than having large teams of mediocre developers, it makes much more sense to have smaller teams of highly productive developers that know how to plan and make good decisions.  The PSP and TSP do suggest that the best way to rehabilitate a poor developer is to teach them how to make better decisions.

Be aware, there is a difference between knowledge of technologies which is gained over time and the ability to be productive and write quality code.

Conclusion

We inherently know this, we just don’t do it.  If the senior management of organizations only knew about these papers, we could make sure that the productive people get paid what they are worth and the non-productive people could seek employment in some other field.  This would not only reduce the cost of building software but also increase the quality of the software that is produced.

Unfortunately, we are doomed to religious battles where people debate methodologies, languages, and technologies in the foreseeable future.  The way that most organizations develop code makes voodoo look like a science!

Eventually we’ll put the ‘science’ back in Computer Science, I just don’t know if it will be in my lifetime…

Check out Stop It! No… Really stop it. to learn about 5 worst practices that need to be stopped right now to improve productivity and quality.

Bibliography

Boehm, Barry W., and Philip N. Papaccio. 1988. “Understanding and Controlling Software Costs.” IEEE Transactions on Software Engineering SE-14, no. 10 (October): 1462-77.

Boehm, Barry, et al, 2000. Software Cost Estimation with Cocomo II, Boston, Mass.: Addison Wesley, 2000.

Card, David N. 1987. “A Software Technology Evaluation Program.” Information and Software Technology 29, no. 6 (July/August): 291-300.

Curtis, Bill. 1981. “Substantiating Programmer Variability.” Proceedings of the IEEE 69, no. 7: 846.

Curtis, Bill, et al. 1986. “Software Psychology: The Need for an Interdisciplinary Program.” Proceedings of the IEEE 74, no. 8: 1092-1106.

DeMarco, Tom, and Timothy Lister. 1985. “Programmer Performance and the Effects of the Workplace.” Proceedings of the 8th International Conference on Software Engineering. Washington, D.C.: IEEE Computer Society Press, 268-72.

1Jones, Capers. Scoring and Evaluating Software Methods, Practices, and Results. 2008.

Mills, Harlan D. 1983. Software Productivity. Boston, Mass.: Little, Brown.

Valett, J., and F. E. McGarry. 1989. “A Summary of Software Measurement Experiences in the Software Engineering Laboratory.” Journal of Systems and Software 9, no. 2 (February): 137-48.


Stop It! No… really stop it.

There are 5 worst practices that, if stopped immediately, will improve your productivity by a minimum of 12% and improve quality by a minimum of 15%. These practices are so common that people assume that they are normal. They are not; they are silent killers wherever they are present.

We often hear the term best practices enough to know that we all have different definitions for it.  Even when we agree on best practices we then disagree on how to implement and measure them. A best practice is one that increases the chance your project will succeed.

How often do we talk about worst practices?  More importantly, what about those worst practices in your organization that you don’t do anything about?


When it comes to a worst practice, just stop it.

If your company is practicing even one worst practice in the list below it will kill all your productivity and quality. It will leave you with suboptimal and defective software solutions and canceled projects.

To make matters worse, some of the worst practices will cause other worst practices to come into play. Capers Jones has statistics on over 18,000 projects and hard evidence on the worst practices1. The worst practices and their effect on productivity and quality are as follows:

Worst Practice Productivity Quality
Friction/antagonism among team members -12.0% -15.0%
Friction/antagonism among management -13.5% -18.5%
Inadequate communications with stakeholders -13.5% -18.5%
Layoffs/loss of key personnel -15.7% -21.7%
Excessive schedule pressure -16.0% -22.5%

Excessive Schedule Pressure

Excessive schedule pressure is present whenever any of the following are practiced:

Excessive schedule pressure causes the following to happen:

This alone can create a Death March project and virtually guarantee project failure.

Effect of excessive schedule pressure is that productivity will be down 16% and quality will be down 22%

Not only is excessive schedule pressure one of the worst practices it tends to drive the other worst practices:

  • Friction amongst managers
  • Friction amongst team members
  • Increases the chance that key people leave the organization

If your organization has a habit of imposing excessive schedule pressure — leave!

Friction Between People

Software development is a team activity in which we transform our intangible thoughts into tangible working code. Team spirit and collaboration are not optional if you want to succeed. The only sports teams that win championships are those that are cohesive and play well together.

You don’t have to like everyone on your team and you don’t have to agree with all their decisions.  However, you must understand that the team is more important than any single individual and learn to work through your differences.

Teams only work well when they are hard on the problem, not each other

Friction among managers arises because of different perspectives on resource allocation, objectives, and requirements. It is much more important for managers to come to a consensus than to fight for the sake of fighting. Not being able to come to a consensus will cave in projects and make ALL the managers look bad. Managers win together and lose together.

Effect of management friction is that productivity will be down 13.5% and quality will be down 18.5%

Friction among team members arises because of different perspectives on requirements, design, and priority. It is also much more important for the team to come to a consensus than to fight for the sake of fighting. Again, everyone wins together and loses together; you cannot win and have everyone else lose.

Effect of team friction is that productivity will be down 12% and quality will be down 15%

Any form of friction between managers or the team is deadly.

Inadequate Stakeholder Communication

Inadequate stakeholder communication comes in several forms:

  • Not getting enough information on business objectives
  • Not developing software in a transparent manner

If you have insufficient information on the business objectives of a project then you are unlikely to capture the correct requirements.  If you are not transparent in how you are developing the project then you can expect excessive schedule pressure from senior management.

Effect of inadequate stakeholder communication is that productivity will be down 13.5% and quality will be down 18.5%

Loss of Key Personnel

To add insult to injury, any of the other four worst practices above will lead to either:

  • Key personnel leaving your organization
  • Key personnel being laid off

Badly managed organizations and projects will cause the most competent people to leave the organization, simply because they can more easily get a job in another organization.

When organizations experience financial distress from late projects they will often cut key personnel because they are expensive.  The reality is that laying off key personnel will sandbag your ability to get back on track.  The correct thing to do is to find your least effective personnel and let them go.

Effect of layoffs/loss of key personnel is that productivity will be down 15.7% and quality will be down 21.7%

The loss of key personnel has a dramatic effect on team productivity and morale and a direct effect on product quality.

Conclusion

Any of the worst practices mentioned above will cause a project to be late and deliver defective code. Even worse, the worst practices tend to feed each other and cause a negative spiral. If you are in an organization that habitually practices any of these worst practices then your only real option is to quit.

The most deadly situation is when there is the following cascading of worst practices:

  • Excessive schedule pressure (leads to)
  • Management and team friction (leads to)
  • Loss of key personnel

If you are in senior management then none of these practices can be allowed if you want to avoid canceled projects or highly defective products.


1Jones, Capers. SCORING AND EVALUATING SOFTWARE METHODS, PRACTICES, AND RESULTS. 2008.


Testing departments are for Losers

I understand that this is a very strong statement and I’m definitely not trying to insult anyone, so apologies in advance. I’m trying to challenge the belief that testing is mandatory and that there should be one testing resource for every two developers. Quality development is about quality assurance and zero defects, not about having testing departments.

One testing resource for every two developers is not a good solution

Quality Assurance (QA) is about making a positive statement about product quality.  QA is about positive assurance, which is stating,  “We are certain that there are few, if any, defects in this product.”  Airplanes are a product with quality assured, the manufacturers will stand by their quality and the statistics back them up.  Contrast this with this article or this article which wonders what would happen if Microsoft made airplanes — would you fly in them?

The reality is that most testing departments simply discover defects and forward them back to the engineering department to fix.  By the time the software product gets released we are basically saying, “We got rid of as many defects as we could find before management forced us to release this product, however, we really have no idea how many other defects are in the code”.  This is not assuring quality; at best you get negative assurance out of this.

Everyone understands that buggy software kills sales (and start-ups :-)); however, testing is often an afterthought in many organizations. When software products take longer than expected they are forwarded to the testing department, which is then expected to test and bless the code in less time than was allocated.

To compound problems, many testing departments don’t even receive proper requirements against which to test the code and/or sufficient tools to work with. Large testing departments and/or large amounts of manual testing are not healthy or efficient.

Watts Humphrey was emphatic that calling defects “bugs” trivializes the issue and downplays the negative impact that defects have on a development organization.

Calling defects “bugs” trivializes an important issue

Goblins and Elves

Defects are not introduced into software by goblins and elves.  Defects are injected into the code by developers that:

  • don’t understand the requirements or architecture
  • misunderstand how to use their peer’s components
  • misunderstand 3rd party libraries
  • having a bad day because of home troubles or work environment
  • are careless because someone else will test their code

Defects are injected by the team

No one is more aware of how code can break down than the developer who writes it.   Any line of code that is written without concentration and planning becomes a potential defect.  It is impossible for testers to understand every pathway through the code and make sure that every possible combination of variables is properly taken care of.

There are many techniques that can increase code quality and dramatically reduce the amount of testing that is necessary:

Test Driven Development

Properly written tests require a developer not only to think about what a code section is supposed to do but also to plan how the code will be structured. If you know that there are five pathways through the code then you will write five tests ahead of time. A common problem is that you have coded n paths through the code when there are n+1 conditions.

TDD is white box testing and can reach every pathway that the developer codes. TDD is proactive and can test pathways from end to end; it does not have to be used only for unit testing. When TDD is hooked up to a continuous integration engine, defects are located and fixed before they make it to testing.

Database Driven Testing

Using actual test data to test existing routines during development is an excellent way to make sure that there are fewer production problems.  The test data needs to be a copy (or subset) of production data.

Database driven testing can also be hooked up to a continuous integration engine and prevent defects from getting to testing.
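A hedged sketch of what database-driven testing can look like (JUnit 5 parameterized tests and an H2 copy of production data are assumed; the JDBC URL, table, columns, and TaxCalculator are hypothetical): every row of the copied data becomes a test case that can run in the continuous integration build.

```java
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.MethodSource;
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.sql.*;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: each row copied from production becomes a test case,
// so the existing routine is exercised against real data shapes in CI.
class TaxCalculatorDataDrivenTest {

    static List<Object[]> productionRows() throws SQLException {
        List<Object[]> rows = new ArrayList<>();
        try (Connection c = DriverManager.getConnection("jdbc:h2:./testdata-copy");
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery(
                     "SELECT net_amount, region, expected_tax FROM invoice_test_copy")) {
            while (rs.next()) {
                rows.add(new Object[] { rs.getDouble(1), rs.getString(2), rs.getDouble(3) });
            }
        }
        return rows;
    }

    @ParameterizedTest
    @MethodSource("productionRows")
    void taxMatchesRecordedProductionValue(double net, String region, double expectedTax) {
        assertEquals(expectedTax, TaxCalculator.compute(net, region), 0.01);
    }
}

// Placeholder implementation so the sketch is self-contained; the real rules
// would live in production code.
class TaxCalculator {
    static double compute(double net, String region) {
        return "EU".equals(region) ? net * 0.20 : net * 0.05;
    }
}
```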

Design By Contract

The Eiffel programming language introduced design by contract (DbC). DbC is orthogonal to TDD because its goal is to ensure that the contract defined by the preconditions and postconditions of each function call is not violated. DbC can be used in virtually any language for which there is an Aspect Oriented Programming (AOP) solution.

During development, the minute a developer violates the expected contract of any function (his or a peers) then the developer will get feedback to fix the problem before it gets to testing.
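As a sketch of the AOP route (AspectJ annotation style assumed; the Account class and package are hypothetical), a contract check can be woven in front of every call to a routine so that a violating caller is identified immediately:

```java
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

// Hypothetical sketch: a precondition woven in front of Account.withdraw()
// so any caller that violates the contract fails at the call, not downstream.
@Aspect
public class WithdrawContract {

    @Before("execution(* com.example.Account.withdraw(long)) && args(amountCents)")
    public void amountMustBePositive(long amountCents) {
        if (amountCents <= 0) {
            throw new IllegalArgumentException(
                "Contract violated: withdraw() requires a positive amount, got " + amountCents);
        }
    }
}
```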

Inspections

Since the 1970s we have had statistical evidence that one of the best ways to eliminate defects from code is through inspections. Inspections can be applied to the requirements, design, and code artifacts, and projects that use inspections can eliminate 99% of the defects injected into the code. See Inspections are not Optional and Software Professionals do Inspections.

Each hour of inspections will save you 4 hours of testing

Pair Programming

Pair programming can be selectively used to prevent and eliminate defects from code.   When developers work in pairs they not only review code as quickly as possible but also learn productivity techniques from each other.  Pair programming should only be done on complex sections of code.

Pair programming not only eliminates defects but allows developers to get enough feedback that they can prevent defects in the future.

Minimizing Cyclomatic Complexity

There is evidence that routines with high cyclomatic complexity have more latent defects than other routines. This makes sense because the number of code pathways goes up dramatically as cyclomatic complexity increases, which increases the chance that the developer does not handle all of them. In most cases, testing departments cannot reproduce all of the pathways in routines of high cyclomatic complexity.
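One common way to pull a routine’s cyclomatic complexity back down is to replace a long chain of branches with a data structure; a sketch with hypothetical customer types and discount rates:

```java
import java.util.Map;

// Hypothetical example: a chain of if/else branches on customer type has one
// pathway per branch; a lookup table reduces the routine to a single pathway
// plus one guard, which is far easier to test exhaustively.
public class DiscountRates {
    private static final Map<String, Double> RATES = Map.of(
        "RETAIL", 0.00,
        "MEMBER", 0.05,
        "WHOLESALE", 0.12,
        "PARTNER", 0.20
    );

    static double rateFor(String customerType) {
        Double rate = RATES.get(customerType);
        if (rate == null) {
            throw new IllegalArgumentException("Unknown customer type: " + customerType);
        }
        return rate;
    }
}
```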

Use Dynamic and Static Code Checking

There are many code problems caused by careless use of pointers and other powerful language constructs. Many of these problems can be detected by having the development team use dynamic and static code checking tools.

Proper Code Planning Techniques

There are developers who try to write code at the keyboard without planning, which is neither efficient nor effective. This is like having to do errands in 5 locations and driving to the locations randomly: you might get your errands done, but odds are it won’t be efficient.

Watts Humphrey talks directly to the idea of planning in the Personal Software Process. In addition, techniques like diagramming with UML or using decision tables can go a long way toward thinking through code structure before it is implemented.

Conclusion

Developers are the ones who inject defects into the code, and therefore they are the best line of defense to remove them. The developer has the best information about what needs to be tested in his code at the time that he writes it. The longer it takes for testing or a customer to discover a code defect, the longer the developer will spend in a debugger chasing down the problem.

Developers need to be trained in preventing and eliminating defects. Developers who learn to get the code correct the first time will reduce and eliminate effort in testing.

The goal of development should be to catch defects early; this is the only way to assure quality.  Hence quality assurance starts and finishes in the development department, not the testing department.


Software Professionals do Inspections

Are you a software professional or not?

I’m not talking about having some kind of official certification here.  I’m asking whether creating high quality code on a repeatable basis is your top priority.

Professionals do everything possible to write quality code. Are you and your organization doing everything possible to write quality code?  Of course, whether you are a professional or not can only be answered by your peers.

If you are not doing software inspections then you are not doing everything possible to improve the quality of your code.  Software inspections are not the same as code walk throughs, which are used to inform the rest of the team about what you have written and are used mainly for educational purposes.  Walk throughs will find surface defects, but most walk throughs are not designed to find as many defects as possible.

How do defects get into the code?  It’s not like there are elves and goblins that come out at night and put defects into your code.  If the defects are there it is because the team injected them.

Many defects can be discovered and prevented before they cause problems for development.  Defects are only identified when you go looking for them, and that is typically only in QA.

Benefits of Inspections

Inspections involve several people and require intense preparation before conducting the review. The purpose of inspections is to find defects and eliminate them as early as possible. Inspections apply to every artifact of software development:

  • Requirements (use cases, user stories)
  • Design (high level and low level, UML diagrams)
  • Code
  • Test plans and cases

Inspections as a methodology have been around since the 1970s and have certainly been well codified since M. E. Fagan wrote a paper in the IEEE in 1986. The idea behind inspections is to find defects as early as possible in the software development process and eliminate them. Without inspections, defects accumulate in the code until testing, when you discover all the defects from every phase of development simultaneously.

This diagram from Radice shows that defects will accumulate until testing begins.  Your quality will be limited by the number of defects that you can find before you ship your software.

With inspections, you begin to inspect your artifacts (use cases, user stories, UML diagrams, code, test plans, etc) as they are produced.  You attempt to eliminate defects before they have a chance to cascade and cause other phases of software development to create defects.  For example, a defect during requirements or in the architecture can cause coding problems that are detected very late (see Inspections are not Optional).

With inspections the defect injection and removal curve looks like this:

When effective inspections are mandatory, the quality gap shrinks and the quality of the software produced goes up dramatically. In the Economics of Software Quality, Capers Jones and Olivier Bonsignour show that defect removal rates rarely top 80% without inspections but can reach 97% with inspections.

Why Don’t We Do Inspections?

There is a mistaken belief that inspections waste time. Yet study after study shows that inspections dramatically reduce the amount of time spent in quality assurance. There is no doubt that inspections require an up-front effort, but that up-front effort pays back with dividends. The hidden effect of inspections is as follows:

The issue is that people know that they make mistakes but don’t want to admit it, i.e. who wants to admit that they put the milk in the cupboard? They certainly don’t want their peers to know about it!

Many defects in a software system are caused by ignorance, a lack of due diligence, or simply a lack of concentration.  Most of these defects can be found by inspection, however, people feel embarrassed and exposed in inspections because simple errors become apparent to everyone.

For inspections to work, they must be conducted in a non-judgmental environment where the goal is to eliminate defects and improve quality.  When inspections turn into witch hunts and/or the focus is on style rather than on substance then inspections will fail miserably and they will become a waste of time.

Professional software developers are concerned with high quality code.  Finding out as soon as possible how you inject defects into code is the fastest way to learn how to prevent those defects in the future and become a better developer.

Professionals are always asking themselves how they can become better, do you?

Conclusion

Code inspections have been done for 40 years and offer conclusive proof that they greatly improve software quality without increasing cost or time to delivery. If you are not doing inspections then you are not producing the best quality software possible.


Inspections are not Optional

Every developer is aware that code inspections are possible, and some may have experienced the usefulness of code inspections; however, the fact is that inspections are not optional.

Without inspections your defect removal rate will stall out at 85% of defects removed; with inspections defect removal rates of 97% have been achieved.

Code inspections are only the most talked-about type of inspection; the reality is that all artifacts from all phases of development should be inspected prior to being used. Inspections are necessary because software is intangible, and you do not want to wait until everything is coded to notice problems.

In the physical world it is easier to spot problems because they can be tangible.  For example, if you have specified marble tiles for your bathroom and you see the contractor bring in a pile of ceramic tiles then you know something is wrong.  You don’t need the contractor to install the ceramic tiles to realize that there is a problem.

In software, we tend to code up an entire set of functionality, demonstrate it, and then find out that we have built the wrong thing! If you are working in a domain with many requirements then this is inevitable; however, many times we can find problems through inspection before we create the wrong solutions.

Let’s look at some physical examples and then discuss their software equivalents.

Requirement Defects

The requirements in software design are equivalent to the blueprints that are given to a contractor.  The requirements specify the system to be built, however, if those requirements are not inspected then you can end up with the following:

[Photos of construction failures: Balcony no Door, Missing Landing, Chimney Covering Window, Stairs Displaced, Stairs to Ceiling, Door no Balcony]

All of the above pictures represent physical engineering failures. Every one of these disasters could have been identified in the blueprints if a simple inspection had been done. At some point it must have become clear to the developers that the building features specified by the requirements were incompatible, but they completed the solution anyway.

Balcony no Door

This design flaw can be caused by changing requirements; here there is a balcony feature that has no access to it. In Balcony no Door it is possible that someone noticed that there was sufficient room for a balcony on the lower floor and put it into one set of plans. The problem was that the developers who install the sliding doors did not have their plans updated.

Here the changed requirement did not lead to an inspection to see if there was an inconsistency introduced by the change.

Door no Balcony

In Door no Balcony something similar probably happened; however, notice that the two issues are not symmetric. Balcony no Door represents a feature that is inaccessible because no access was created, i.e. the missing sliding door. In Door no Balcony we have a feature that is accessible but dangerous if used.

In this case a requirements inspection should have turned up the half implemented feature and either: 1) the door should have been removed from the requirements, or 2) a balcony should have been added.

Missing Landing

The Missing Landing occurs because the requirements show a need for stairs, but it does not occur to the architect that a landing is also needed. Looking at a set of blueprints gives you a two-dimensional view of the plan, and clearly they drew in the stairs. Making the stairs usable requires a landing so that changing direction is simple. This represents a missing requirement that causes another feature to be only partially usable.

This problem should have been caught by the architect when the blueprint was drawn up. However, barring the architect locating the issue, a simple checklist and inspection of the plans would have turned up the problem.

Stairs to Ceiling

The staircase goes up to a ceiling and is therefore a useless feature. Not only is the feature incomplete because it does not give access to the next level, but the developers also wasted time and effort in building the staircase.

If this problem had been caught in the requirements stage as an inconsistency then either the staircase could have been removed or an access created to the next floor.  The net effect is that construction starts and the developers find the inconsistency when it is too late.

At a minimum the developers should have noticed that the stairway did not serve any purpose and not built the staircase, which was a waste of time and materials.

Stairs Displaced

Here we have a clear case of changed requirements.  The stairs were supposed to be centered under the door, in all likelihood plans changed the location of the door and did not move the dependent feature.

When the blueprint was updated to move the door the designer should have looked to see if there was any other dependent feature that would be impacted by the change.

Architectural Defects

Architectural defects come from not understanding the requirements or the environment that you are working with.  In software, you need to understand the non-functional requirements behind the functional requirements — the ilities of the project (availability, scalability, reliability, customizability, stability, etc).

Architectural features are structural and connective.  In a building the internal structure must be strong enough to support the building, i.e. foundation and load bearing walls.

[Photos: Insufficient Foundation, Insufficient Structure, Connectivity Problem]

Insufficient Foundation

Here the building was built correctly, however, the architect did not check the environment to see if the foundation would be sufficient for the building.  The building is identical to the building behind it, so odds are they just duplicated the plan without checking the ground structure.

The equivalent in software is to design for an environment that can not support the functionality that you have designed.  Even if that functionality is perfect, if the environment doesn’t support it you will not succeed.

Insufficient Structure

Here the environment is sufficient to hold up the building, however, the architect did not design enough structural strength in the building.

The equivalent in software design is to choose architectural components that will not handle the load demanded by the system.  For example, distributed object technologies such as CORBA, EJB, and DCOM provided a way to make objects available remotely, however, the resulting architectures did not scale well under load.

Connectivity Problem

Here a calculation error was made when the two sides of the bridge were started.  When development got to the center they discovered that one side was off and you have an ugly problem joining the two sides.

The equivalent for this problem is when technologies don’t quite line up and require awkward and ugly techniques to join different philosophical approaches.

In software, a classic problem is mapping object-oriented structures into relational databases.  The philosophical mismatch accounts for large amounts of code to translate from one scheme into the other.

Coding Defects

Coding defects are better understood (or at least yelled about more :-)), so I won’t spend too much time on them. Classic coding defects include:

  • Uninitialized data
  • Uncaught exceptions
  • Memory leaks
  • Buffer overruns
  • Not freeing up resources
  • Concurrency violations
  • Insufficient pathways, i.e. 5 conditions but only 4 coded pathways

Many of these problems can be caught with code inspections.

Testing Defects

Testing defects occur when the test plan flags a defect that is a phantom problem or a false positive. This often occurs when requirements are poorly documented and/or poorly understood and QA perceives a defect when there is none.

The reverse also happens where requirements are not documented and QA does not perceive a defect, i.e. false negative.

Both false positives and negatives can be caught by inspecting the requirements and comparing them with the test cases.

False positives slow down development. False negatives can slip through to your customers…

Root Cause of Firefighting

When inspections are not done in all phases of software development there will be fire-fighting in the project in the zone of chaos. Most software organizations only record and test for defects during the testing phase. Unfortunately, at that point you will detect defects from all previous phases simultaneously.

QA has a tendency to assume that all defects are coding defects — however, the analysis of 18,000+ projects does not confirm this.  In The Economics of Software Quality, Capers Jones and Olivier Bonsignour show that defects fall into different categories. Below we give the category, the frequency of the defect, and the business role that will address the defect.

Note: only the rows marked with an asterisk (*) below are assigned to developers.

Defect Category | Frequency | Role
Requirements defect | 9.58% | BA/Product Management
Architecture or design defect | 14.58% | Architect
Code defect * | 16.67% | Developer
Testing defect | 15.42% | Quality Assurance
Documentation defect | 6.25% | Technical Writer
Database defect * | 22.92% | DBA
Website defect * | 14.58% | Operations/Webmaster

Notice that nearly 25% of the defects (requirements, architecture) occur before coding even starts. These defects are just like the physical defects shown above and only manifest themselves once the code needs to be written.

It is much less expensive to fix requirements and architecture problems before coding.

Also, only about 54% of defects are actually resolvable by developers, so by assigning all defects to the developers you will waste time 46% of the time when you discover that the developer can not resolve the issue.

Fire-fighting is basically when dozens of meetings are needed that pull together large numbers of people on the team to sort out inconsistencies. These inconsistencies lie dormant because there are no inspections. Of course, when all the issues come out, there are so many issues from all the phases of development that it is difficult to sort out the problems!

Learn how to augment your Bug Tracker to help you to understand where your defects are coming from in Bug Tracker Hell and How to Get Out!

Solutions

There are two basic solutions to reducing defects:

  1. Inspect all artifacts
  2. Shorten your development cycle

The second solution is the one adopted by organizations that are pursuing Agile software development.  However, shorter development cycles will reduce the amount of fire-fighting but they will only improve code quality to a point.

In The Economics of Software Quality the statistics show that defect removal is not effective in most organizations. In fact, on large projects the test coverage will drop below 80% and the defect removal efficiency is rarely above 85%. So even if you are using Agile development you will still not achieve a high level of defect removal and will be limited in the software quality that you can deliver.

Agile development can reduce fire-fighting but does not address defect removal

Inspect All Artifacts

Organizations that have formal inspections of all artifacts have achieved defect removal efficiencies of 97%!  If you are intent on delivering high quality software then inspections are not optional.

Of course, inspections are only possible for phases in which you have actual artifacts. Here are the artifacts that may be associated with each phase of development:

Phase | Artifact
Requirements | use cases, user stories, UML diagrams (Activity, Use Case)
Architecture or design | UML diagrams (Class, Interaction, Deployment)
Coding | UML diagrams (Class, Interaction, State), source code
Testing | Test plans and cases
Documentation | Documentation
Database | Entity-Relationship diagrams, stored procedures

Effective Inspections

Inspections are only effective when the review process involves people who know what they are looking for and who are accountable for the result of the inspection. People must be trained to understand what they are looking for, and effective checklists need to be developed for each artifact type that you review, e.g. use case inspections will be different from source code reviews.

Inspections must have teeth otherwise they are a waste of time.  For example, one way to put accountability into the process is to have someone other than the author be accountable for any problems found.  There are numerous resources available if you decide that you wish to do inspections.

Conclusion

The statistics overwhelmingly suggest that inspections will not only remove defects from a software system but also prevent defects from getting in. With inspections, software defect removal rates can reach 97%; without inspections you are lucky to get to 85%.

Since IBM investigated the issue in 1973, it is interesting to note that teams trained in performing inspections eventually learn how to prevent injecting defects into the software system.  Reduced injection of defects into a system reduces the amount of time spent in fire-fighting and in QA.

You can only inspect artifacts that you take the time to create.  Many smaller organizations don’t have any artifacts other than their requirements documents and source code. Discover which artifacts need to be created by augmenting your Bug Tracker (see Bug Tracker Hell and How To Get Out!).   Any phase of development where significant defects are coming from should be documented with an appropriate artifact and be subject to inspections.


Good books on how to perform inspections:


All statistics quoted from The Economics of Software Quality by Capers Jones and Olivier Bonsignour:

Capers Jones can be reached by sending me an email: Dalip Mahal



Bug Tracker Hell and How To Get Out!

Whether you call it a defect, bug, change request, issue, or enhancement, you need an application to record and track the life cycle of these problems. For brevity, let’s call it the Bug Tracker.

Bug trackers are like a roach motel: once defects get in, they don’t check out! Because they are append-only, shouldn’t we be careful and disciplined when we add “tickets” to the bug tracker? We should, but in the chaos of a release (especially at start-ups :-)) the bug tracker goes to hell.

Bug Tracker Hell happens when inconsistent usage of the tool leads to problems such as duplicate bugs and inconsistent priorities and severities. While 80% of defects are straightforward to add to the Bug Tracker, it is the other 20% of the defects that cause real problems.

The most important attribute of a defect is its DefectLifecycleStatus; not surprisingly every Bug Tracker makes this the primary field for sorting.   This primary field is used to generate reports and to manage the defect removal process.  If we manage this field carefully we can generate reports that not only help the current version but also provide key feedback for post-mortem analysis.

Every Bug Tracker has at least the states Open, Fixed, and Closed; however, due to special cases we are tempted to create new statuses for problems that have nothing to do with the life cycle. The creation of life cycle statuses that are not life cycle states is what causes inconsistent usage of the tool, because it then becomes unclear how to enter a defect.

It is much easier to have consistent life cycle states than to have a 10 page manual on how to enter a defect.


What Life Cycle States Do We Need?

Clearly we want to know how many Open defects need to be fixed for the  current release; after all, management is often breathing down our neck to get this information.

Ideally we would get the defects outstanding report by finding out how many defects are Open. Unfortunately, there are numerous open defects that will not be fixed in the current release (or ever 🙁 ) and so we seek ways to remove those defects from the defects outstanding.

Why complicate life?

In particular we are tempted to create states like Deferred,  WontFix, and FunctionsAsDesigned, to remove defects from the  defects outstanding.  These states have the apparent effect of simplifying the defects outstanding report but will end up complicating other matters.

For example, Deferred is simply an open defect that is not getting fixed in the current release; WontFix is an open defect that  the business has decided not to fix; and FunctionsAsDesigned indicates that either the requirements were faulty or QA saw a phantom problem, but once this defect gets into the Bug Tracker you can’t get it out.

All three states above are variants of the Open life cycle state, and creating these life cycle states will create more problems than they solve. The focus of this article is on how to fix the defect life cycle; however, other common issues are addressed as well.

 

Life cycle states for Deferred, WontFix, or FunctionsAsDesigned is like a “Go directly to Bug Tracker Hell” card!

Each Defect Must Be Unambiguous

The ideal state of a Bug Tracker is to be able to look at any defect in the system and have a clear answer to each of the following questions.

  • Where is the defect in the life-cycle?
  • Has the problem been verified?
  • How consistently can the problem be reproduced or is it intermittent?
  • Which team role will resolve the issue? (team role, not person)

The initial way to get out of hell is to be consistent with the life cycle state.

Defect Life Cycle

All defects go through the following life cycle (DefectLifecycleStatus) regardless of whether we track all of these states or not:

  • New
  • Verified
  • Open
  • Work in Progress
  • Work complete
  • Fixed
  • Closed
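
To make the states concrete, here is a minimal sketch in Python (purely illustrative; the status names come from the list above, everything else is an assumption about how your tracker might model them):

    from enum import Enum

    class DefectLifecycleStatus(Enum):
        # The full life cycle listed above; most trackers only ship with
        # Open, Fixed, and Closed out of the box.
        NEW = "New"
        VERIFIED = "Verified"
        OPEN = "Open"
        WORK_IN_PROGRESS = "Work in Progress"
        WORK_COMPLETE = "Work complete"
        FIXED = "Fixed"
        CLOSED = "Closed"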

Anyone should be able to enter a New defect, but just because someone thinks “I tawt I taw a defect!” in the system doesn’t mean that the defect is real.  In poorly specified software systems QA will often perceive a defect where there is none, the famous functions as designed (FAD)  issue.

Since there are duplicate and phantom issues that are entered into the Bug Tracker, we need to kick the tires on all New defects before assigning them to someone.  It is much faster and cheaper to verify defects than to simply throw them at the development team and assume that they can fix them.

Trust But Verify

New defects not entered by QA should be assigned to the QA role.  These defects should be verified by QA before the life cycle status is updated to Verified.  QA should also make sure that the steps to reproduce the defect are complete and accurate before moving the defect to the Verified life cycle status.  Ideally even defects entered by QA should be verified by someone else in QA to make sure that the defect is entered correctly.

By introducing a Verified state you separate out potential work from actual work. If a bug is a phantom then QA can mark it as Closed before it is assigned to someone and wastes their time.  If a bug is a duplicate then it can be marked as such, linked to the other defect, and Closed.

The advantage of the Verified status is that the intermittent bugs get more attention to figure out how to reproduce them.  If QA discovers that a defect is intermittent then a separate field in the Bug Tracker, Reproducibility, should be populated with one of the following values:

  • Always (default)
  • Sometimes
  • Rare
  • Can’t reproduce

Note: This means that bugs that cannot be reproduced stay in the New state until you can reproduce them.  If you can’t reproduce them then you can mark the issue as Closed without impacting the development team.
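
A small sketch of the Reproducibility attribute and the rule in the note above (the value names are from this article; the function and its return values are hypothetical):

    from enum import Enum

    class Reproducibility(Enum):
        ALWAYS = "Always"              # default
        SOMETIMES = "Sometimes"
        RARE = "Rare"
        CANT_REPRODUCE = "Can't reproduce"

    def status_after_qa_review(repro: Reproducibility) -> str:
        # Per the note above: a defect that cannot be reproduced stays New
        # (or is simply Closed) instead of being assigned to the development team.
        if repro is Reproducibility.CANT_REPRODUCE:
            return "New"
        return "Verified"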

Assign the Defect to a Role

QA has a tendency to assume that all defects are coding defects — however, the analysis of 18,000+ projects does not confirm this.  In The Economics of Software Quality, Capers Jones and Olivier Bonsignour show that defects fall into different categories. Below we give the category, the frequency of the defect, and the business role that will address the defect.

Note, only three of the rows below (architecture or design, code, and database defects) are typically assigned to developers.

Defect Role Category             Frequency   Role
Requirements defect              9.58%       BA/Product Management
Architecture or design defect    14.58%      Architect
Code defect                      16.67%      Developer
Testing defect                   15.42%      Quality Assurance
Documentation defect             6.25%       Technical Writer
Database defect                  22.92%      DBA
Website defect                   14.58%      Operations/Webmaster

Defect Role Categories are important to accelerating  your overall development speed!

Even if all architecture, design, coding, and database defects are handled by the development group this only represents 54% of all defects.  So assigning any New defect to the development group without verification is likely to cause problems inside the team.

Note, 25% of all defects are caused by poor requirements and bad test cases, not bad code.  This means that the business analysts and QA folks are responsible for fixing them.

Given that 46% of all defects are not resolved by the development team there needs to be a triage before a bug is assigned to a role.  Lack of bug triages is the Root cause of ‘Fire-Fighting’ in Software Projects.

The Bug Tracker should be extended to record the DefectRole in addition to the assigned attribute.  Just this attribute will help to straighten out the Bug Tracker!
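
As a rough illustration (the frequencies are the ones from the table above; the dictionary, names, and the choice of which categories development absorbs are assumptions for the sketch, not a real Bug Tracker API), recording the DefectRole lets you check the split for yourself:

    # Frequencies and owning roles from the table above (Jones & Bonsignour).
    DEFECT_CATEGORIES = {
        "Requirements":           (9.58,  "BA/Product Management"),
        "Architecture or design": (14.58, "Architect"),
        "Code":                   (16.67, "Developer"),
        "Testing":                (15.42, "Quality Assurance"),
        "Documentation":          (6.25,  "Technical Writer"),
        "Database":               (22.92, "DBA"),
        "Website":                (14.58, "Operations/Webmaster"),
    }

    # Categories typically absorbed by the development group.
    DEV_CATEGORIES = {"Architecture or design", "Code", "Database"}

    dev_share = sum(freq for name, (freq, _role) in DEFECT_CATEGORIES.items()
                    if name in DEV_CATEGORIES)
    print(f"Handled by development:  {dev_share:.1f}%")       # roughly 54%
    print(f"Handled by other roles:  {100 - dev_share:.1f}%")  # roughly 46%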

Non-development Defects

Most Bug Tracking systems have a category called enhancement.  Enhancements are simply defects in the requirements and should be recorded but not specified in the Bug Tracker; the defect should be Open with a DefectRole of ProductManagement.

Enhancements need to be assigned to product managers/BAs who should document and include a reference to that documentation in the defect.  The description for the defect is not the proper place to keep requirements documentation.  The life cycle of a product requirement is generally very different from a code defect because the requirement is likely to be deferred to a later release if you are late in your product cycle.

Business requirements may have to be confirmed with the end users and/or approved by the business.  As such, they generally take longer to become work items than code defects.

QA should not send enhancements to development without involvement of product management.

Note that  15.42% of the defects are a QA problem and are fixed in the test plans and test cases.

Bug Triage

The only way to correctly assign resources to fix a defect is to have a triage team meet regularly that can identify what the problem is.  A defect triage team needs to include a product manager, QA person, and developer.   The defect triage team should meet at least once a week during development and at least once a day during releases.  Defect triages save you time because only 54% of the defects can be fixed by the developers; correctly assigning defects avoids miscommunication.

Effective bug triage meetings are efficient when the only purpose of the meeting is to correctly assign defects.  Be aggressive and keep design discussions out of triages. 

Defects should be assigned to a role and not a specific person to allow maximum flexibility in getting the work done; they should only be assigned to a specific person when there is only one person who can resolve an issue.

Assigning unverified and intermittent defects to the wrong person will start your team playing the blame game.

As the defects are triaged, product management (not QA) should set the priority and severity as they represent the business.  With a multi-functional team these two values will be set consistently.  In addition, the triage team should set the version in which the defect will be fixed.  Some teams like to record the actual version number where a defect will be fixed (i.e. ExpectedFixVersion); I prefer to use the following values:

  • Next bug fix
  • Next minor release
  • Next major  release
  • Won’t fix

I like ExpectedFixVersion because it is conditional; it represents a desire.  Like it or not, it is very hard to guess when every defect will be fixed.  The reality is that if the release date gets pulled in, or the work turns out to be more involved than expected, the fix could be deferred (possibly indefinitely).  If you guess wrong then you will spend a considerable amount of time changing this field.
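
A minimal sketch of keeping ExpectedFixVersion conditional (the four values are the ones listed above; modelling them as an enumeration rather than a release number is the point, the rest is illustrative):

    from enum import Enum

    class ExpectedFixVersion(Enum):
        NEXT_BUG_FIX = "Next bug fix"
        NEXT_MINOR_RELEASE = "Next minor release"
        NEXT_MAJOR_RELEASE = "Next major release"
        WONT_FIX = "Won't fix"

    # Because the value is relative, a slipped release or a deferred defect
    # does not force anyone to go back and rewrite version numbers.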

Getting the Defect Resolved

Once the defects are in the system each functional role can assign the work to its resources.  At that point the defect life cycle state is Work In Progress.

All Work complete means is that the individual working on the defect believes that it is resolved.  When the work is resolved the FixVersion should be set as the next version that will be released.  Note, if you use release numbers in the ExpectedFixVersion field then you should update that field if it is wrong 🙂

Of course the defect may or may not actually be resolved; however, the status of Work complete acts as a signal that someone else has work to do.

If a requirements defect is fixed then the issue should be moved to Fixed and assigned to the development manager that will give the work to his team.  Once the team has verified their understanding of the requirement the defect can move from Fixed to Closed.

Work complete means that the fixer believes the problem is resolved; Fixed means that the team has acknowledged the fix!

For code defects the Work complete status is a signal to QA to retest the issue.  If QA establishes that the defect is fixed they should move the issue to Fixed.  If the issue is not fixed at all then the defect should move back to Open; if the defect is partially fixed then the defect should move to Verified so that it goes back through the bug triage process (i.e. severity and priority may have changed).

Once a release is complete, all Fixed items can be moved to Closed.
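
Taken together, the hand-offs above form a small state machine.  Here is a hedged sketch of the allowed transitions as I read them (your Bug Tracker may or may not let you enforce this; the function and structure are illustrative):

    # Allowed life cycle transitions, as described above.
    ALLOWED_TRANSITIONS = {
        "New":              {"Verified", "Closed"},            # verified by QA, or phantom/duplicate
        "Verified":         {"Open"},                          # accepted at bug triage
        "Open":             {"Work in Progress", "Verified"},  # assigned, or tagged Won't Fix
        "Work in Progress": {"Work complete"},
        "Work complete":    {"Fixed", "Open", "Verified"},     # fixed, not fixed, or partially fixed
        "Fixed":            {"Closed"},                        # closed once the release is complete
        "Closed":           set(),
    }

    def move(current: str, target: str) -> str:
        # Reject transitions that skip or reverse the life cycle.
        if target not in ALLOWED_TRANSITIONS.get(current, set()):
            raise ValueError(f"Illegal transition: {current} -> {target}")
        return target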

Tracking Defects Caused by Fixing Defects

Virtually all Bug Trackers allow you to link one or more issues together.  However, it is extremely important to know why bugs are linked; in most cases you link bugs because they are duplicates.

Bugs can also be linked together because fixing one defect caused another.  On average this happens once for every 14 defects fixed, but in the worst organizations it can happen once every 4 defects fixed.  Keeping a field called ResultedFromDefect, where you link the number of the other defect, allows you to determine how many new defects are the result of fixing other defects.

Summary

Let’s recap how the above mechanisms will help you get out of hell.

  1. By introducing the Verified step you make sure that bugs are vetted before anyone gets pulled into a wild goose chase.
    1. This will also catch intermittent defects and give them a home while you figure out how often they occur and whether there is a reliable way to reproduce them.
    2. If you can’t reproduce a defect then at least you can annotate it as Can’t Reproduce, i.e. the status stays New and it doesn’t clog the system.
  2. By conducting triage meetings with product management, QA, and development you will end up with very consistent uses of priority and severity
  3. Bug triages will end up categorizing defects according to the role that will fix them which will reduce or eliminate:
    1. The blame game
    2. Defects being assigned to the wrong people
  4. By having the ExpectedFixVersion be conditional you won’t have to run around fixing version numbers for defects that did not get fixed in a particular release.  It also gives you a convenient way to tag a defect as Won’t Fix; the status should then go back to Verified.
  5. By having the person who fixes a defect set the FixVersion then you will have an accurate picture of when defects are fixed
  6. When partially fixed defects go back to Verified the priority and severity can be updated properly during the release.

Benefits of the Process

By implementing the defect life cycle process above you will get the following benefits:

  • Phantom bugs and duplicates won’t sandbag the team
  • Intermittent bugs will receive more attention to determine their reproducibility
    • Reproducible bugs are much easier to fix
  • Proper triages will direct defects to the appropriate role
  • You will discover how many defects you create by fixing other defects

By having an extended set of life cycle states you will be able to start reporting on the following:

  • % of defects introduced while fixing defects (value in ResultedFromDefect)
  • % of New bugs that are phantoms or duplicates, relates to QA efficiency
  • % of defects that are NOT development problems, relates to extended team efficiency (i.e. DefectRole <> Development)
  • % of requirements defects which relates to the efficiency of your product management (i.e. DefectRole = ProductManagement)
  • % of defects addressed but not confirmed (Work Completed)
  • % of defects fixed and confirmed (Fixed)

It may sound like too much work to change your existing process, but if you are already in Bug Tracker hell, what is your alternative?

Need help getting out of Bug Tracker hell?  Write to me at dmahal@AcceleratedDevelopment.ca

Appendix: Importance of Capturing Requirements Defects

The report on the % of requirements defects is particularly important because it represents the amount of scope shift (creep) in your project; you can see this in the blog post Shift Happens.  Rates of scope shift of 2% per month are strong indicators of impending swarms of bugs and project failure.  Analysis shows that the probability of a project being canceled is highly correlated with the amount of scope shift.  Simply creating enhancements in the Bug Tracker hides this problem and does not help the team.


SR&ED and Eligibility

Most Canadian corporations know that the Canada Revenue Agency (CRA) gives out tax credits for Scientific Research and Experimental Development (SR&ED) work done by technical companies. This tax credit works out to 35% of the qualifying work and can be as high as 68% when overhead is factored in.

The Scientific Research part of the name leads some organizations to assume that they must be curing cancer, building rockets, or inventing a better mouse-trap, i.e. doing rocket science, to qualify for the tax credit; nothing could be further from the truth.

For organizations involved in software development the key to claiming SR&ED is the Experimental Development part of the title.  Assuming that you have a project which has technical uncertainty then you will qualify for the tax credit.

So what qualifies as technical uncertainty?

Technical uncertainty occurs when you face a well-specified business problem, you have skilled resources, and it is still unclear how to proceed.  Examples of technical uncertainty would be needing to:

  • double the number of transactions that you currently process
  • increase the efficiency of a compression algorithm
  • implement a security model that does not exist

Experimental development occurs when you hit a fork in the proverbial development road and it is unclear which direction to take.  

Sometimes you will know that there are multiple design alternatives and have to create prototypes for the different alternatives to determine the best solution. Sometimes you will choose a design alternative and have to abandon the choice and back-up and take another path.  In both cases there is a clear decision point where code needs to be tested for multiple alternatives.

There is actually an easy way to know if you are facing technical uncertainty and facilitate applying for SR&ED tax credits.  Most developers do not like reinventing the wheel; when faced with a requirement that is technically challenging most developers will Google it to see if there is a solution to the problem.  If your developers are regularly looking for technical solutions odds are that you have SR&ED eligible work.

Saving technical searches is the easiest way to figure out how much SR&ED eligible work you have.

If you search for a technical solution to a challenge and discover:

  1. There is a solution for the problem but it is proprietary
  2. There is an available solution that would cost too much to acquire

Then this work will be SR&ED eligible if it leads to experimental development.  The CRA does not require that you be the first to solve a technical problem, only that you search for public solutions before executing experimental development.

What isn’t technical uncertainty?

There are a few issues which can masquerade as technical uncertainty and the CRA will not pay SR&ED credits for them:

  • Training
  • Poor requirements

If a COBOL programmer starts to develop software in Java then you will end up with quite a bit of what looks like experimental development as the programmer learns the new language.  However, the CRA will not pay for you to train developers.  Experimental development only occurs when developers who are already familiar with the technologies you are using (language, O/S, IDE, API) still run into technical uncertainty.

To be explicit, the following would not qualify:

  • Language, Java developers needing to do C#
  • O/S, Developers familiar with Windows development developing on Android tablets
  • IDE, Developers familiar with Eclipse needing to use Sun’s NetBeans
  • API, Developers familiar with one SQL database switching to the API of another SQL database

Only competent developers that hit technical uncertainty and face experimental development qualify for SR&ED.

The CRA will not pay for you to figure out what your requirements are.  While you are working out your “business rules” you may look like you are resolving a technical challenge as you attempt multiple alternative paths.  However, creating code to solve a business problem does not qualify for SR&ED.

Changing requirements because of a technical challenge does qualify

How do I know if I did Experimental Development?

The CRA gives you up to 18 months from a fiscal year end to claim your tax credits.  The problem is that if you wait this long none of your developers will remember what they did!

If your year end was March 31, 2011 then as of today (August 27, 2012) you can still claim your tax credits from 2011.  The problem is that your developers will have trouble remembering what they did from March 31, 2010 to March 31, 2011.

Frankly speaking, you would be lucky to have your developers recall what they did last month. When looking back over time, there are two kinds of development that easily qualify for SR&ED:

  1. Work abandoned for a technical reason
  2. Building multiple prototypes to solve a technical problem

If you were trying to accomplish something, let’s say implementing a fine grain security model in a database and were forced to abandon the work for a technical reason then this will qualify for SR&ED.  If you abandoned the work because you no longer had the requirement, i.e. a business reason, then the work would not qualify for SR&ED.

If you ran up against a technical challenge and there were multiple design alternatives that lead to multiple prototypes being tried, then the work qualifies for SR&ED.  Even if the multiple design alternatives involved 3rd party software, as long as there were multiple prototypes and you had to write code then this work should qualify.

Document your technical challenges right away!

How do I Simplify the SR&ED Process?

The easiest way to simplify the SR&ED process is to track experimental development as it happens.  Once your developers solve a problem and use that solution for a few months then they will forget how difficult it was to solve the problem.

There are several techniques to help in the documentation of your SR&ED claim for the next year:

  1. Save your technical searches
  2. Tag your tasks in your project management system
  3. Train project managers to recognize SR&ED tasks

When the developers search for technical solutions and find none, have them save the search (PDF, web page, etc.).  If you work on several projects simultaneously then create a directory under each project where the developer will save the search.  Then have the developer document this information in your project management system.

In your project management system (JIRA, Redmine, etc) have a tag for SR&ED so that tasks can be tagged for SR&ED.  As the developer or project manager discovers SR&ED tasks you can tag these tasks so that computing the hours for SR&ED next year will be easy.

Train your project managers to look for SR&ED tasks.  Inevitably, if a developer has a task that expands for a technical reason then he will have to notify the project manager about the event.  That will be the best time for the project manager to recognize SR&ED tasks and update the project management system.

Summary

All companies should have SR&ED-trained people help them make their claim.  The number of companies making SR&ED claims has increased strongly in the last few years and the CRA has become stricter about which projects qualify.

Do not be afraid to claim your SR&ED tax credits; if you have technical challenges that involve experimental development then they are yours.  Also, keep in mind that the earlier you document your technical challenges, the easier (and cheaper) it will be to make your SR&ED claim.


Efficiency is for Losers

Focusing on efficiency and ignoring effectiveness is the root cause of most software project failures.

Effectiveness is producing the intended or expected result. Efficiency is the ability to accomplish a job with a minimum expenditure of time and effort.

Effective software projects deliver code that the end users need; efficient projects deliver that code with a minimum number of resources and time.

Sometimes, we become so obsessed with things we can measure, i.e. project end date, kLOC, that we somehow forget what we were building in the first place.  When you’re up to your hips in alligators, it’s hard to remember you were there to drain the swamp.

Efficiency only matters if you are being effective.

After 50 years, the top three end-user complaints about software are:

  1. It took too long
  2. It cost too much
  3. It doesn’t do what we need

Salaries are the biggest cost of most software projects; hence, if it takes too long then it will cost too much, so we can reduce the complaints to:

  1. It took too long
  2. It doesn’t do what we need

The first issue is a complaint about our efficiency and the second is a complaint about our effectiveness. Let’s make sure that we have common  definitions of these two issues before continuing to look at the interplay between efficiency and effectiveness.

Are We There Yet?

Are you late if you miss the project end date? 

That depends on your point of view; consider a well-specified project (i.e. good requirements) with a good work breakdown structure that is estimated by competent architects to take a competent team of 10 developers at least 15 months to build. Let’s consider 5 scenarios where this is true except as stated below:

Under which circumstances is a project late?

A. Senior management gives the team 6 months to build the software.
B. Senior management assigns a team of 5 competent developers instead of 10.
C. Senior management assigns a team of 10 untrained developers
D. You have the correct team, but, each developer needs to spend 20-35% of their time maintaining code on another legacy system
E. The project is staffed as expected

Here are the above scenarios in a table:

#   Team                       Resource Commitment   Months Given   Result
A   10 competent developers    100%                  6              Unrealistic estimate
B   5 competent developers     100%                  15             Under staffed
C   10 untrained developers    100%                  15             Untrained staff
D   10 competent developers    65-80%                15             Team under committed
E   10 competent developers    100%                  15             Late


Only the last project (E) is late because the estimation of the end date was consistent with the project resources available.

Other well known variations which are not late when the end date is missed:

  • Project end date is a SWAG or management declared
  • Project has poor requirements
  • You tell the end-user 10 months when the estimate is 15 months.

If any of the conditions of project E are missing then you have a problem in estimation.  You may still be late, but not relative to a project end date computed with bad assumptions.

Of course, being late may be acceptable if you deliver a subset of the expected system.

It Doesn’t Work



“It doesn’t do what we need” is a failure to deliver what the end user needs. How do we figure out what the end user needs?

The requirements for a system come from a variety of sources:

  1. End-users
  2. Sales and marketing (includes competitors)
  3. Product management
  4. Engineering

These initial requirements will rarely be consistent with each other. In fact, each of these constituents will have a different impression of the requirements. You would expect the raw requirements to be contradictory in places. Picture the beliefs as 4 overlapping circles; the intersection of everyone’s beliefs is the small common area.

The different sources of requirements do not agree because:

  • Everyone has a different point of view
  • Everyone has a different set of beliefs about what is being built
  • Everyone has a different capability of articulating their needs
  • Product managers have varying abilities to synthesize consistent requirements

It is the job of product management to synthesize the different viewpoints into a single set of consistent requirements. If engineering starts before requirements are consistent then you will end up with many fire-fighting meetings and lose time.

Many projects start before the requirements are consistent enough. We hope the initial requirements are a subset of what is required. In practice, we have both missed requirements and included requirements that are not needed (see the data from Capers Jones at the bottom of this post).

The yellow circle represents what we have captured, the black circle represents the real requirements.

We rarely have consistent requirements when we start a project; that is why so many variations of the same cartoon are lying around on the Internet.

If you don’t do all the following:

  • Interview all stakeholders for requirements
  • Get end-users to articulate their real needs by product management
  • Synthesize consistent requirements

Then you will fail to build the correct software, and if you skip any of this work you are guaranteed to get the response, “It doesn’t do what we need”.

Effectiveness vs. Efficiency

So, let’s repeat our user complaints:
  1. It took too long
  2. It doesn’t do what we need

It’s possible to deliver the correct software late.

It’s impossible to deliver on time if the software doesn’t work.

Focusing on effectiveness is more important than efficiency if a software project is to be delivered successfully.


Ineffectiveness Comes from Poor Requirements

Most organizations don’t test the validity or completeness of their requirements before starting a software project. The requirements get translated into a project plan and then the project manager will attempt to execute that plan. The project plan becomes the bible and everyone marches to it. As long as tasks are completed on time everyone assumes that you are effective, i.e. doing the right thing.

That is until virtually all the tasks are jammed at 95% complete and the project is nowhere near completion.

At some point someone will notice something and say, “I don’t think this feature should work this way”. This will provoke discussions between developers, QA, and product management on correct program behavior. This will spark a series of fire-fighting meetings to resolve the inconsistency, issue a defect, and fix the problem. All of the extra meetings will start causing tasks on the project plan to slip.

We discussed the root causes of fire-fighting in a  previous blog entry.

When fire-fighting starts productivity will grind to a halt. Developers will lose productivity because they will end up being pulled into the endless meetings. At this point the schedule starts slipping and we become focused on the project plan and deadline. Scope gets reduced to help make the project deadline; unfortunately, we tend to throw effectiveness out the window at this point.

With any luck the project and product manager can find a way to reduce scope enough to declare victory after missing the original deadline.

The interesting thing here is that the project failed before it started. The real cause of the failure was the inconsistent requirements. But in the chaos of fire-fighting and endless meetings, no one will remember that the requirements were the root cause of the problem.

What is the cost of poor requirements? Fortunately, WWMCCS (the World Wide Military Command and Control System) has an answer.  As a military organization they track everything in a detailed fashion and perform root cause analysis for each defect.

Their data shows what we know to be true: the longer a requirement problem takes to discover, the harder and more expensive it is to fix!  A requirement that would take 1 hour to fix up front will take 900 hours to fix if it slips to system testing.

Conclusion

It is much more important to focus on effectiveness during a project than efficiency. When it becomes clear that you will not make the project end date, you need to stay focused on building the correct software.

Are you tired of the cycle of:
  • Collecting inconsistent requirements?
  • Building a project plan based on the inconsistent requirements?
  • Estimating projects and having senior management disbelieve it?
  • Focusing on the project end date and not on end user needs?
  • Fire-fighting over inconsistent requirements?
  • Losing developer productivity from endless meetings?
  • Missing not only the end date but also failing to deliver what the end-users need?

The fact that organizations go through this cycle over and over while expecting successful projects is insanity – real world Dilbert cartoons.

How many times are you going to rinse and repeat this process until you try something different? If you want to break this cycle, then you need to start collecting consistent requirements.

Think about the impact to your career of the following scenarios:

  1. You miss the deadline but build a subset of what the end-user needs
  2. You miss the deadline and don’t have what the end-user needs

You can at least declare some kind of victory in scenario 1 and your resume will not take a big hit. It’s pretty hard to make up for scenario 2 no matter how you slice it.

Alternatively, you can save yourself wasted time by making sure the requirements are consistent before you start development. Inconsistent requirements will lead to fire-fighting later in the project.

As a developer, when you are handed the requirements, the entire team should make a point of going through them, looking for inconsistencies, and forcing product management to fix them before you start developing. It may sound like a waste of time but it will push the problem of poor requirements back into product management and save you from endless meetings. Cultivating the patience to hold out for good requirements will lower your blood pressure and help you sleep at night.

Of course, once you get good requirements then you should hold out for proper project estimates 🙂

Want to see other sacred cows get tipped? Check out:

Moo?

Courtesy of Capers Jones via LinkedIn on 6/22

Customers themselves are often not sure of their requirements.

For a large system of about 10,000 function points, here is what might be seen for the requirements.

This is from a paper on requirements problems – send an email to capers.jones3@gmail.com if you want a copy.

Requirements specification pages = 2,500
Requirements words = 1,125,000
Requirements diagrams = 300
Specific user requirements = 7,407
Missing requirements = 1,050
Incorrect requirements = 875
Superfluous requirements = 375
Toxic harmful requirements = 18

Initial requirements completeness = < 60%

Total requirements creep = 2,687 function points

Deferred requirements to meet schedule = 1,522

Complete and accurate requirements are only possible below about 1,000 function points. Above that, errors and missing requirements are endemic.
