Who needs Formal Measurement?

We all know the expression “You can’t manage what you can’t measure”, but do we really understand it?

Feedback after execution is an essential part of any process.  Just think about how difficult it would be to drive from home to work wearing a blindfold.  Without your sense of sight to give you feedback on the traffic signals and the locations of other cars, you would crash your car.  Yet we develop software systems without instituting formal measurement programs all the time, and then wonder why we succeed so rarely (for success rates see Understanding your chances of having a successful software project).

You can’t manage what you can’t measure

No measurement means no feedback, which means your chances of success are minimized. Success is possible without formal measurement but it is much easier with formal measurement.

Formal measurement raises productivity by 20.0% and quality by 30.0%

A best practice is one that increases your chance of succeeding; it does not guarantee it. Formal measurement has been established as a best practice, so why do so few people do it?

Measurement has a cost, and organizations are petrified of incurring costs without realizing benefits. After all, what if you institute a measurement program and things don’t improve?  Managers are correct in one sense: measurement programs cost money to develop, and unless measurement is executed correctly it will not yield any results.  But is there a downside to avoiding measurement?

Inadequate progress tracking reduces productivity by 16.0% and quality by 22.5%

Failure to estimate requirements changes reduces productivity by 14.6% and quality by 19.6%

Inadequate measurement of quality reduces productivity by 13.5% and quality by 18.5%

So there are costs to not measuring.  Measurement is not optional; it is a hygiene process, that is, something essential to any process, but especially to software development, where the main product is intangible.

A hygiene process is one that prevents very bad things from happening. Hygiene processes are rarely fun and they take time, e.g. taking a shower or brushing your teeth.  But history has shown that it is much more cost effective to execute a hygiene process than to take a chance on something very bad happening, e.g. disease or your teeth falling out.

There are hygiene practices that we use every day in software development without even thinking about it:

  • Version control
  • Defect tracking

Version control is not fun and tracking defects is not fun; virtually everyone complains about these tools.  But the alternative is complete chaos, and only the most broken organizations think that they can develop software systems without them.

Formal measurement is a best practice and a hygiene practice

The same way that developers understand that version control and defect tracking are necessary, an organization needs to learn that measurement is necessary.

Is Formality Necessary?

The reality is that informal measurement is not comprehensive enough to give consistent results. If measurement is informal, then when crunch time comes people will stop measuring, which is exactly when you need the data the most.

When you don’t have enough formality, processes take longer and by extension cost more.  When you have too much formality, you have process for process’s sake and things will also take a long time.  Any organization that implements too much formality is wasting its time, but so is any organization that does not implement enough.

When you suggest any formal process, people immediately imagine the most extreme form of that process, which would indeed be ridiculous if it were implemented that way. We have all been in organizations that implement processes that make no sense, but without measurement how do you get rid of those processes? For every formal process that makes sense, there is a spectrum of implementations. The goal is to find the minimum formality that reduces time and cost. When you find that minimum amount of formal measurement, you accelerate your development by giving yourself the feedback that you need to drive it.

What to Measure

It seems obvious, but incorrect measurement and/or poor execution leads to useless results.  For example, trying to measure productivity by the hours that developers sit at their machines is as useful as measuring productivity by the number of cups of coffee that they drink. Another useless measure is lines of code (LOC); in fact, Capers Jones believes that anyone using LOC as a measurement should be tried for professional malpractice! Measuring the three things mentioned above will improve productivity and quality because you remove their negative effect on your organization:

  • Progress tracking (productivity +16.0%, quality +22.5%)
  • Estimating requirements changes (productivity +14.6%, quality +19.6%)
  • Measurement of quality (productivity +13.5%, quality +18.5%)

Other things to measure are:

  • Activity-based productivity measures (productivity +18.0%, quality +6.7%)
  • Automated sizing tools (function points) (productivity +16.5%, quality +23.7%)
  • Measuring requirement changes (productivity +15.7%, quality +21.9%)

So to answer the question: who needs formal measurement?

We all need formal measurement

 


References

N.B. All productivity and quality percentages were derived from data on 15,000+ actual projects




Not planning is for Losers

Only the ignorant don’t plan their code pathways before they write them.  Unless you are implementing classes of nothing but getter and setter routines, code needs to be planned.

The Path Least Traveled

The total number of pathways through a software system grows so quickly that it is very hard to imagine their total number. If a function X() with 9 pathways calls function Y() which has 11 pathways, then the composition X() ° Y() will have up to 9 x 11 = 99 possible pathways. If function Y() calls function Z() with 7 pathways, then X() ° Y() ° Z() will have up to 9 x 11 x 7 = 693 pathways. The numbers multiply quickly: a call depth of 8 functions, each with 8 pathways, means roughly 16.8 million different paths (8^8); the number of possible pathways in a system grows exponentially with the depth of the call tree. Programs, even simple ones, have hundreds if not thousands (or millions) of pathways through them.
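
To make the arithmetic concrete, here is a minimal Java sketch (not from the original article; the pathway counts are simply the ones used above) that multiplies the pathway counts along a call chain to get the upper bound on end-to-end paths:

import java.util.List;

public class PathwayCount {
    // Multiply the pathway count of each function along a call chain.
    // The product is an upper bound on distinct end-to-end paths.
    static long totalPathways(List<Integer> pathwaysPerFunction) {
        long total = 1;
        for (int pathways : pathwaysPerFunction) {
            total *= pathways;
        }
        return total;
    }

    public static void main(String[] args) {
        // X() with 9 pathways calls Y() with 11, which calls Z() with 7.
        System.out.println(totalPathways(List.of(9, 11, 7)));                    // 693
        // A call depth of 8 functions, each with 8 pathways: 8^8.
        System.out.println(totalPathways(List.of(8, 8, 8, 8, 8, 8, 8, 8)));      // 16777216
    }
}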

Negative vs Positive Assurance

Quality assurance can only come from the developers, not the testing department.  Testing provides negative assurance, which is only a statement that “I don’t see anything wrong”; it doesn’t mean that everything is correct, just that the testers can’t find a problem.  Positive assurance is guaranteeing that the code will execute down the correct pathways, and only the developer can provide that.  Quality assurance comes from adopting solid practices to ensure that code pathways are laid down correctly the first time.

Any Line of Code can be Defective

If there are 10 pathways through a function, then there must be branching statements based on variable values to direct program flow down each of those pathways. Each pathway may compute variable values that are used in calculations or decisions downstream, so each downstream function can potentially have its behavior modified by any upstream calculation. When code is not planned, errors may cause execution to compute a wrong value.  If you are unlucky, that wrong value is used to make a decision which sends the program down the wrong pathway.  If you are really unlucky, you go very far down the wrong pathways before you even identify the problem.  If you are really, really, really unlucky, not only do you go down the wrong pathway but the data gets corrupted, and it takes you a long time to recognize the problem in the data. It takes less time to plan code and write it correctly than it takes to debug complex pathways.

Common Code Mistakes

Defects are generally caused by one of the following conditions:

  1. incorrect implementation of an algorithm
  2. missing pathways
  3. choosing the wrong pathway based on the variables

1) Incorrect implementation of an algorithm will compute a wrong value from the inputs.  The damage is localized if the value is computed inside a decision statement; however, if the value is stored in a variable then damage can happen everywhere that value is used.  Example: a bad decision at node 1 causes execution to flow down path 3 instead of 2.

2) Missing pathways are about missing conditions.  If you have 5 business conditions and only 4 pathways, then one of your business conditions will go down the wrong pathway and cause problems until you detect it.  Example: there were really 5 pathways at node 1, but you only coded 4.

3a) The base values are correct but you select the wrong pathway, which can lead to future values being computed incorrectly.  Example: at node 10 you correctly calculate that you should take pathway 11 but end up going down 12 instead.

3b) You might also select the wrong pathway because insufficient information existed at the time you needed to make the decision.  Example: insufficient information at node 1 causes execution to flow down path 3 instead of 4.

The last two issues (2 and 3) can be a failure of either development or requirements.  In both cases somebody failed to plan…
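
As a hedged illustration of mistake types 1 and 2 (the business rules, names, and numbers below are invented for this sketch, not taken from the article):

public class PathwayMistakes {

    // Mistake 1: incorrect algorithm. The "discounted total" is computed as the
    // discount amount instead of the total minus the discount, and the wrong value
    // then steers every downstream decision that uses it.
    static double discountedTotal(double orderTotal) {
        return orderTotal * 0.15;   // wrong: should be orderTotal * (1 - 0.15)
    }

    static boolean qualifiesForFreeShipping(double orderTotal) {
        // Wrong pathway chosen downstream because of the bad value computed upstream.
        return discountedTotal(orderTotal) >= 100.0;
    }

    // Mistake 2: missing pathway. Five business conditions exist but only four are
    // coded, so the fifth silently falls into the wrong branch.
    static String approvalQueue(String customerType) {
        switch (customerType) {
            case "RETAIL":    return "auto-approve";
            case "WHOLESALE": return "credit-check";
            case "PARTNER":   return "account-manager";
            case "INTERNAL":  return "no-approval";
            default:          return "auto-approve"; // "GOVERNMENT" was never coded and lands here
        }
    }
}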

What if it is too late to Plan

Whenever you are writing a new section of code you should take advantage of the ability to plan the code before you write it.  If you are dealing with code that has already been written, then you should take advantage of inspections to locate and remove defects.  Don’t wait for defects to surface; proactively inspect all code, especially in buggy modules, and fix all of the code pathways.

Code inspections can raise productivity by 20.8% and quality by 30.8%

Code Solutions

The Personal Software Process (PSP) has a specific focus on having every code section planned by the developer before it is implemented.  That means that you sit down and plan your code pathways on paper or a whiteboard before touching the keyboard.  Ideally you should spend the first part of your day planning with your colleagues how best to write your code pathways.  The time that you spend planning will pay you dividends in time saved.

PSP can raise productivity by 21.2% and quality by 31.2%

If you insist on writing code at the keyboard, then you can use pair programming to reduce errors.  With a second pair of eyes looking at your code, algorithmic mistakes are less likely and decisions about conditions are looked at by two people.  The problem is that pair programming is not cost effective overall.

Pair Programming can raise productivity by 2.7% and quality by 4.5%

Studies confirm that code sections of high cyclomatic complexity have more defects than other code sections.  At a minimum, any code section that will have a high cyclomatic complexity should be planned by two or more people.  If this is not possible, then reviewing sections of high cyclomatic complexity can reduce downstream defects.

Automated cyclomatic complexity analysis can raise productivity by 14.5% and quality by 19.5%

Design Solutions

All large software projects benefit from planning pathways at the macroscopic level.  The design or architectural planning is essential to making sure that the lower-level code pathways will work well.

Formal architecture for large applications can raise productivity by 15.7% and quality by 21.8%

Requirements Solutions

Most pathways are not invented in development.  If there is insufficient information to choose a proper pathway, or there are insufficient pathways indicated, then this is a failure of requirements.  Here are several techniques to make sure that the requirements are not the problem.

Joint application design (JAD) brings the end-users of the system together with the system architects to build the requirements.  By having end-users present you are unlikely to forget a pathway, and by having the architects present you can put technical constraints on the end-users’ wish list for things that can’t be built.  The resulting requirements should have all pathways properly identified along with their conditions.

Joint application design can raise productivity by 15.5% and quality by 21.4%

Requirements inspections are the best way to make sure that all necessary conditions are covered and that all decisions that the code will need to make are identified before development.  Not inspecting requirements is the surest way to discover that there is a missing pathway or calculation after testing.

Requirement inspections can raise productivity by 18.2% and quality by 27.0%

Making sure that all pathways have been identified during requirements planning is something that all organizations should do.  Formal requirements planning will help to identify all the code pathways and necessary conditions; however, it only works when the business analysts/product managers are skilled (which is rare 🙁 ).

Formal requirements analysis can raise productivity by 16.3% and quality by 23.2%




Defects are for Losers

A developer is responsible for using any and all techniques to make sure that he produces defect-free code.  The average developer does not take advantage of all of the following opportunities to prevent and eliminate defects:

  1. Before the code is written
  2. As the code is written
  3. Writing mechanisms for early detection
  4. Before the code is executed
  5. After the code is tested

The technique that is used most often is #5 above, and it needs little introduction.  It involves the following:

  1. Code is delivered to the test department
  2. The test department identifies defects and notifies development
  3. Developer’s fire up the debugger and try to chase down the defect

Like the ‘rinse and repeat’ instruction on a shampoo bottle, this process is repeated until the code is clean or until you run out of time and are forced to deliver.

The almost ubiquitous use of #5 leads CIOs and VPs of Engineering to assume that a ratio of one tester to two developers is a good thing.  Before assuming that #5 is ‘the way to go’, consider the other techniques and the statistical evidence of their effectiveness.

Before the Code is Written



A developer has the most options available to him before the code is written.  The developer has an opportunity to plan his code; however, many developers just ‘start coding’ on the assumption that they can fix it later.

How much of an effect can planning have?  Two methodologies that focus directly on planning at the personal and team level are the Personal Software Process (PSP) and the Team Software Process (TSP) invented by Watts Humphrey.

PSP can raise productivity by 21.2% and quality by 31.2%

TSP can raise productivity by 20.9% and quality by 30.9%

Not only does the PSP focus on code planning, it also makes developers aware of how many defects they actually create.  Here are two graphs that show the same group of developers and their defect injection rates before and after PSP training.


The other planning techniques are:

  • Decision tables
  • Proper use of exceptions

Both are covered in the article Debuggers are for Losers and will not be covered here.

As the Code is Written

Many developers today use advanced IDEs to prevent common syntax errors from occurring.  If you cannot use such an IDE, or the IDE does not provide that service, then some of the techniques in the PSP can be used to track your injection of syntax errors and reduce them.

Pair Programming

One technique that can be used while code is being written is pair programming, which is heavily used in eXtreme Programming (XP).  Pair programming not only allows code to be reviewed by a peer right away but also makes sure that there are two people who understand the code pathways through any section of code.

Pair programming is not cost effective overall (see Capers Jones).  For example, it makes little sense to pair program code that is mainly boilerplate, e.g. getter and setter classes. What does make sense is that during code planning it will become clear which routines are more involved and which ones are not.  If the cyclomatic complexity of a routine is high (>15), then it makes sense for pair programming to be used.

If used for all development, Pair Programming can raise productivity by 2.7% and quality by 4.5%

Test Driven Development




Test driven development (TDD) is advocated by Kent Beck, who stated in 2003 that TDD encourages simple designs and inspires confidence.  TDD fits into the category of automated unit testing.

Automated unit testing  can raise productivity by 16.5% and quality by 23.7%
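
For readers who have not seen TDD in practice, here is a minimal sketch (it assumes JUnit 5 on the classpath; the PriceCalculator class and its discount rule are invented for illustration): the test is written first, fails, and then the simplest production code is written to make it pass.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PriceCalculatorTest {
    // Written before PriceCalculator exists: it defines the expected behavior.
    @Test
    void appliesTenPercentDiscountAtOrAboveOneHundred() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(90.0, calc.finalPrice(100.0), 0.001);
        assertEquals(50.0, calc.finalPrice(50.0), 0.001);
    }
}

// The simplest production code that makes the test pass.
class PriceCalculator {
    double finalPrice(double amount) {
        return amount >= 100.0 ? amount * 0.9 : amount;
    }
}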

Writing Mechanisms for Early Detection

Defects are caused by programs either computing wrong values, going down the wrong pathway, or both.  The nature of defects is that they tend to cascade and get bigger the greater the distance, in time and space, between the source of the defect and its noticeable effects.

Design By Contract

One way to build checkpoints into code is to use Design by Contract (DbC), a technique that was pioneered by the Eiffel programming language.  It would be tedious and overkill to use DbC in every routine of a program; however, there are key points in every software program that get used very frequently.

Just as the roads we use have highways, secondary roads, and tertiary roads, DbC can be applied on the highways and secondary roads to catch incorrect conditions early and stop defects from surfacing far away from the source of the problem.

Clearly very few of us program in Eiffel.  If you have access to Aspect Oriented Programming (AOP) then you can implement DbC via AOP. Today there are AOP implementations as a language extension or as a library for many current languages (Java, .NET, C++, PHP, Perl, Python, Ruby, etc).
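
Here is a hedged sketch of the contract idea in plain Java (no Eiffel or AOP involved; the account example and its rules are invented): preconditions and postconditions are checked at the heavily used entry points so a bad value is caught at the source instead of far downstream.

public class Account {
    private long balanceCents;

    public Account(long openingBalanceCents) {
        if (openingBalanceCents < 0) {
            throw new IllegalArgumentException("precondition violated: opening balance must be >= 0");
        }
        this.balanceCents = openingBalanceCents;
    }

    public void withdraw(long amountCents) {
        // Preconditions: what the caller must guarantee.
        if (amountCents <= 0) {
            throw new IllegalArgumentException("precondition violated: amount must be > 0");
        }
        if (amountCents > balanceCents) {
            throw new IllegalStateException("precondition violated: insufficient funds");
        }

        long before = balanceCents;
        balanceCents -= amountCents;

        // Postcondition: what this method guarantees on exit
        // (asserts require running the JVM with -ea).
        assert balanceCents == before - amountCents && balanceCents >= 0
                : "postcondition violated: balance inconsistent after withdraw";
    }

    public long balanceCents() {
        return balanceCents;
    }
}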

Before the Code is Executed

Static Analysis

Most programming languages lend themselves to static analysis, and there are cost-effective static analysis tools for virtually every language.

Automated static analysis can raise productivity by 20.9% and quality by 30.9%
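
As a hedged example of what static analysis catches before the code is ever executed (the snippet is invented; mainstream Java analyzers such as SpotBugs or the checks built into modern IDEs flag patterns like these):

import java.util.Map;

public class StaticAnalysisBait {
    // A typical analyzer warns that 'user' may be null when dereferenced.
    static int nameLength(Map<String, String> users, String id) {
        String user = users.get(id);   // may return null
        return user.length();          // possible NullPointerException
    }

    // A typical analyzer warns that the result of an immutable-object method is ignored.
    static String shout(String s) {
        s.toUpperCase();               // return value discarded; 's' is unchanged
        return s;
    }
}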

Inspections


Of all the techniques mentioned above, the most potent pre-debugger technique is inspections. Inspections are not sexy and they are very low tech, but the results in organizations that do software inspections border on miraculous. The power of software inspections can be seen in these two articles:

Code inspections can raise productivity by 20.8% and quality by 30.8%

Design inspections can raise productivity by 16.9% and quality by 24.7%

From the Software Inspections book, p. 22:

In one large IBM project, one half million lines of networked operating system, there were 11 development stages (document types: logic, test, user documentation) being Inspected.  The normal expectation at IBM, at that time, was that they would be happy only to experience about 800 defects in trial site operation.  They did in fact experience only 8 field trial defects.

Evidence suggests that every 1 hour of code inspection will reduce testing time by 4 hours

Conclusion

Overworked developers rarely have time to do research, even though it is clear that there is a wealth of information available on how to prevent and eliminate defects. The bottom line is that if you are only using technique #5 from the initial list, then you are not using every technique available to you to go after defects. My opinion only, but:

A professional software developer uses every technique at his disposal to prevent and eliminate defects


References

Gilb, Tom, and Graham, Dorothy. Software Inspections.

Jones, Capers. Scoring and Evaluating Software Methods, Practices, and Results. 2008.

Radice, Ronald A. High Quality Low Cost Software Inspections.


NO Experience Necessary!!!

Did you know that we have never found a relationship between a developer’s years of experience and code quality or productivity?

The original study that found huge variations in individual programming productivity was conducted in the late 1960s by Sackman, Erikson, and Grant (1968).

This study has been repeated at least 8 times over 30 years and the results have not changed! (see below)

Sackman et al studied professional programmers with an average of 7 years’ experience and found that:

  • the ratio of initial coding times was about 20 to 1
  • the ratio of debugging times was over 25 to 1
  • the ratio of program execution speeds was about 10 to 1
  • the ratio of program sizes was about 5 to 1

They found no relationship between a programmer’s number of years of experience and code quality or productivity.  That is, there was NO correlation between experience and productivity (i.e. the ability to produce code), and NO correlation between experience and quality (i.e. minimizing defects).

Think about that for a minute…

That is, the worst programmers and the best programmers formed distinct groups, and each group contained people with both low and high experience levels.  Whether training helps developers is not indicated by these findings, only that years of experience do not matter.

Without considering legality, this means that it is simpler to get rid of expensive poor performers with many years of experience and hire good performers with few years of experience!

Results Have Been Confirmed for 30 Years!

There were flaws in the study; however, even after accounting for the flaws, the data still shows more than an order of magnitude difference between the best programmers and the worst, and that difference was not related to experience.  In the years since the original study, the general finding that “there are order-of-magnitude differences among programmers” has been confirmed by many other studies of professional programmers (full references at the end of the article):

  • Curtis 1981
  • Mills 1983
  • DeMarco and Lister 1985
  • Curtis et al. 1986
  • Card 1987
  • Boehm and Papaccio 1988
  • Valett and McGarry 1989
  • Boehm et al 2000

Technology is More Sophisticated, Developers are not

You might think that we know much more about software development today than we knew in 1968; after all, today:

  • we have better computer languages
  • we have more sophisticated technology
  • we have better research on effective software patterns
  • we have formal software degrees available in university

It turns out that all these things are true, but we still see order-of-magnitude differences among programmers, and the difference is not related to years of experience.  That means there is some other x-factor that drives productive developers; that x-factor is probably the ability to plan and make good decisions.

The bad news is that if you are not a productive developer writing quality code, then you will probably not get better simply through years of experience.

Developers face making decisions on how to structure their code every day.  There is always a choice when it comes to:

  • laying out code pathways
  • packaging functions into classes
  • packaging classes into packages/modules

Because developers face coding decisions, many of which are complex, the best developers will plan their work and make good decisions.  Bad developers just ‘jump in’; they assume that they can always rewrite code or make up for bad decisions later. Bad developers are not even aware that their decision processes are poor and that they can become much better by planning their work.

Solution might be PSP and TSP

Watts Humphrey tried to get developers to understand the value of estimating, planning development, and making decisions in the Personal Software Process (PSP) for individuals and the Team Software Process (TSP) for teams, but only a handful of organizations have embraced it.  Capers Jones has done analysis of over 18,000 projects and discovered that1:

PSP can raise productivity by 21.2% and quality by 31.2%
TSP can raise productivity by 20.9% and quality by 30.9%

All of these findings should have a profound effect on the way that we build our teams. Rather than having large teams of mediocre developers, it makes much more sense to have smaller teams of highly productive developers that know how to plan and make good decisions.  The PSP and TSP do suggest that the best way to rehabilitate a poor developer is to teach them how to make better decisions.

Be aware that there is a difference between knowledge of technologies, which is gained over time, and the ability to be productive and write quality code.

Conclusion

We inherently know this, we just don’t do it.  If the senior management of organizations only knew about these papers, we could make sure that the productive people get paid what they are worth and the non-productive people could seek employment in some other field.  This would not only reduce the cost of building software but also increase the quality of the software that is produced.

Unfortunately, we are doomed to religious battles where people debate methodologies, languages, and technologies for the foreseeable future.  The way that most organizations develop code makes voodoo look like a science!

Eventually we’ll put the ‘science’ back in Computer Science, I just don’t know if it will be in my lifetime…

Check out Stop It! No… Really stop it. to learn about 5 worst practices that need to be stopped right now to improve productivity and quality.

Bibliography

Boehm, Barry W., and Philip N. Papaccio. 1988. “Understanding and Controlling Software Costs.” IEEE Transactions on Software Engineering SE-14, no. 10 (October): 1462-77.

Boehm, Barry, et al, 2000. Software Cost Estimation with Cocomo II, Boston, Mass.: Addison Wesley, 2000.

Card, David N. 1987. “A Software Technology Evaluation Program.” Information and Software Technology 29, no. 6 (July/August): 291-300.

Curtis, Bill. 1981. “Substantiating Programmer Variability.” Proceedings of the IEEE 69, no. 7: 846.

Curtis, Bill, et al. 1986. “Software Psychology: The Need for an Interdisciplinary Program.” Proceedings of the IEEE 74, no. 8: 1092-1106.

DeMarco, Tom, and Timothy Lister. 1985. “Programmer Performance and the Effects of the Workplace.” Proceedings of the 8th International Conference on Software Engineering. Washington, D.C.: IEEE Computer Society Press, 268-72.

1Jones, Capers. Scoring and Evaluating Software Methods, Practices, and Results. 2008.

Mills, Harlan D. 1983. Software Productivity. Boston, Mass.: Little, Brown.

Valett, J., and F. E. McGarry. 1989. “A Summary of Software Measurement Experiences in the Software Engineering Laboratory.” Journal of Systems and Software 9, no. 2 (February): 137-48.


Stop It! No… really stop it.

Stop, this means you! There are 5 worst practices that, if stopped immediately, will improve your productivity by a minimum of 12% and your quality by a minimum of 15%.  These practices are so common that people assume they are normal.  They are not; they are silent killers wherever they are present.

We hear the term best practices often enough to know that we all have different definitions for it.  Even when we agree on best practices, we then disagree on how to implement and measure them. A best practice is one that increases the chance your project will succeed.

How often do we talk about worst practices?  More importantly, what about those worst practices in your organization that you don’t do anything about?


When it comes to a worst practice, just stop it.

If your company is practicing even one worst practice in the list below it will kill all your productivity and quality. It will leave you with suboptimal and defective software solutions and canceled projects.

To make matters worse, some of the worst practices will cause other worst practices to come into play.   Capers Jones has statistics on over 18,000 projects and has hard evidence on the worst practices1.  The worst practices and their effect on productivity and quality are as follows:

Worst Practice Productivity Quality
Friction/antagonism among team members -12.0% -15.0%
Friction/antagonism among management -13.5% -18.5%
Inadequate communications with stakeholders -13.5% -18.5%
Layoffs/loss of key personnel -15.7% -21.7%
Excessive schedule pressure -16.0% -22.5%

Excessive Schedule Pressure

Excessive schedule pressure is present whenever any of the following are practiced:

Excessive schedule pressure causes the following to happen:

This alone can create a Death March project and virtually guarantee project failure.

Effect of excessive schedule pressure is that productivity will be down 16.0% and quality will be down 22.5%

Not only is excessive schedule pressure one of the worst practices it tends to drive the other worst practices:

  • Friction amongst managers
  • Friction amongst team members
  • Increases the chance that key people leave the organization

If your organization has a habit of imposing excessive schedule pressure — leave!

Friction Between People

Software development is a team activity in which we transform our intangible thoughts into tangible working code.  Team spirit and collaboration are not optional if you want to succeed.  The only sports teams that win championships are those that are cohesive and play well together.

You don’t have to like everyone on your team and you don’t have to agree with all their decisions.  However, you must understand that the team is more important than any single individual and learn to work through your differences.

Teams only work well when they are hard on the problem, not each other

Friction among managers arises because of different perspectives on resource allocation, objectives, and requirements.  It is much more important for managers to come to a consensus than to fight for the sake of fighting. Not being able to come to a consensus will cave in projects and make ALL the managers look bad. Managers win together and lose together.

Effect of management friction is that productivity will be down 13.5% and quality will be down 18.5%

Friction among team members arises because of different perspectives on requirements, design, and priority.  It is also much more important for the team to come to a consensus than to fight for the sake of fighting.  Again, everyone wins together and loses together; you cannot win and have everyone else lose.

Effect of team friction is that productivity will be down 12% and quality will be down 15%

Any form of friction between managers or the team is deadly.

Inadequate Stakeholder Communication

Inadequate stakeholder communication comes in several forms:

  • Not getting enough information on business objectives
  • Not developing software in a transparent manner

If you have insufficient information on the business objectives of a project then you are unlikely to capture the correct requirements.  If you are not transparent in how you are developing the project then you can expect excessive schedule pressure from senior management.

Effect of inadequate stakeholder communication is that productivity will be down 13.5% and quality will be down 18.5%

Loss of Key Personnel

To add insult to injury, any of the other four worst practices above will lead to either:

  • Key personnel leaving your organization
  • Key personnel being laid off

Badly managed organizations and projects will cause the most competent people to leave the organization, simply because they can more easily get a job in another organization.

When organizations experience financial distress from late projects they will often cut key personnel because they are expensive.  The reality is that laying off key personnel will sandbag your ability to get back on track.  The correct thing to do is to find your least effective personnel and let them go.

Effect of layoffs/loss of key personnel is that productivity will be down 15.7% and quality will be down 21.7%

The loss of key personnel has a dramatic effect on team productivity and morale and a direct effect on product quality.

Conclusion

Any of the worst practices mentioned above will cause a project to be late and deliver defective code. Even worse, the worst practices tend to feed each other and cause a negative spiral. If you are in an organization that habitually practices any of these worst practices then your only real option is to quit.

The most deadly situation is when there is the following cascading of worst practices:

  • Excessive schedule pressure (leads to)
  • Management and team friction (leads to)
  • Loss of key personnel

If you are in senior management then none of these practices can be allowed if you want to avoid canceled projects or highly defective products.


1Jones, Capers. Scoring and Evaluating Software Methods, Practices, and Results. 2008.


Comments are for Losers

If software development is like driving a car then comments are road signs along the way.

Comments are purely informational and do NOT affect the final machine code. Imagine how much time you would waste driving in a city full of confusing or misleading road signs.

A good comment is one that reduces the development life cycle for the next developer that drives down the road.

A bad comment is one that increases the development life cycle for any developer unfortunate enough to have to drive down that road. Sometimes that next unfortunate driver will be you several years later!

Comments do not Necessarily Increase Development Speed

I was in university in 1985 (yup, I’m an old guy 🙂 ) and one of my professors presented a paper (which I have been unable to locate 🙁 ) of a study done in the 1970s. The study took a software program, introduced defects into it, and then asked several teams to find as many defects as they could. The interesting part of the study was that 50% of the teams had the comments completely removed from the source code. The result was that the teams without comments not only found more defects but also found them in less time.

So unfortunately, comments can serve as weapons of mass distraction

Bad comments

A bad comment is one that wastes your time and does not help you to drive your development faster.

Let’s go through the categories of really bad comments:

  • Too many comments
  • Excessive history comments
  • Emotional and humorous comments

Too many comments are a clear case of less being more. Some of us have worked on programs with so many comments that you could barely find the code!

History comments can make some sense, but then again isn’t that what the version control comment is for? History comments are questionable when you have to page down multiple times just to get to the beginning of the source code. If anything, history comments should be moved to the bottom of the file so that Ctrl-End actually takes you to the bottom of the modification history.


We have all run across comments that are not relevant. Some comments are purely about the  developer’s instantaneous emotional and intellectual state, some are about how clever they are, and some are simply attempts at humor (don’t quit your day job!).

Check out some of these gems (more can be found here):


// I am not sure if we need this, but too scared to delete.

//When I wrote this, only God and I understood what I was doing
//Now, God only knows

// I am not responsible of this code.
// They made me write it, against my will.

// I have to find a better job

try {

}
catch (SQLException ex) {
// Basically, without saying too much, you’re screwed. Royally and totally.
}
catch(Exception ex)
{
//If you thought you were screwed before, boy have I news for you!!!
}

// Catching exceptions is for communists

// If you’re reading this, that means you have been put in charge of my previous project.
// I am so, so sorry for you. God speed.

// if i ever see this again i’m going to start bringing guns to work

//You are not expected to understand this


Self-Documenting Code instead of Comments

We apply science to software by checking the functionality we desire (requirements model) against the behavior of the program (machine code model).

When observations of the final program disagree with the requirements model we have a defect which leads us to change our machine code model.

Of course we don’t alter the machine code model directly (at least most of us); we update the source code, which is the last easily modified model. Since comments are not compiled into the machine code, there is some logic to making sure that the source code model is self-documenting. It is the only model that really counts!

Self-documenting code requires that you choose good names for variables, classes, functions, and enumerated types. Self-documenting means that OTHER developers can understand what you have done. Good self-documenting code has the same characteristic as a good comment: it decreases the time it takes to do development.

Practically, your code is self-documenting when your peers say that it is, not when YOU say that it is. Peer reviewed comments and code is the only way to make sure that code will lead to faster development cycles.
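
Here is a small hedged before-and-after sketch (the invoice domain and names are invented) of what good names buy you:

public class InvoiceMath {
    // Before: the comment does the work the names should do.
    // "calculate amount owed after tax and discount"
    static double calc(double a, double b, double c) {
        return (a - b) * (1 + c);
    }

    // After: the names document the intent, so no comment is needed.
    static double amountOwed(double subtotal, double discount, double taxRate) {
        return (subtotal - discount) * (1 + taxRate);
    }
}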

Comments gone Wild

Even if all the comments in a program are good (i.e. they reduce the development life cycle), they are subject to drift over time. The speed of software development makes it difficult to keep comments in alignment with the source code. Comments that are allowed to drift become road signs that are no longer relevant to drivers.

Good comments go wild when the developer is so focused on getting a release out that he does not stop to maintain comments. Comments have gone wild when they become misaligned with the source code; you have to terminate them.

No animals (or comments) were harmed in the writing of this blog.

Commented Code

Code gets commented out during a software release as we experiment with different designs or to help with debugging. What is really not clear is why code remains commented out before the final check-in of a software release.

Over my career as a manager and trainer, I’ve asked developers why they leave commented-out code behind. The universal answer that I get is “just in case”. Just in case what? At the end of a software release you have already established that you are not going to use the commented-out code, so why are you dragging it around? People hang on to commented-out code as if it were a “Get Out of Jail Free” card; it isn’t.

The reality is that commented-out code is a big distraction. When you leave commented-out code in your source, you are leaving a land mine for the next developer who walks through it.

When the pressure is on to get defects fixed, developers will uncomment previously commented-out code to see if it will fix the problem. There is no substitute for understanding the code you are working on. You might get lucky when you reinstate commented-out code; in all likelihood it will blow up in your face.

Solutions

If your developers are not taking (or not being given) enough time to put in good comments, then they should not write ANY comments. You will get more productivity because they will not waste time putting in bad comments that slow everyone else down.

Time spent on writing self-documenting code will help you and your successors reduce development life cycles. It is absolutely false to believe that you do not have time to write self-documenting code.

If you are going to take on the hazards of writing comments then they need to be peer reviewed to make sure that OTHER developers understand the code. Unless the code reviewer(s) understands all the comments the code should not pass inspection.

If you don’t have a code review process then you are only commenting the code for yourself. The key principle when writing comments is Non Nobis Solum (not for ourselves alone).

When you run across a comment that sends you on a wild goose chase, fix it or delete it. If you are the new guy on the team and realize that the comments are wasting your time, get rid of them; your development speed will go up.



Uncertainty and Risk in Software Development (1 of 3)

To develop high-quality software consistently and reliably, you must learn to master complexity. You master complexity when you understand the different sources of uncertainty and the risk characteristics of each one. Uncertainties introduce delays in development as you attempt to resolve them. Resolving uncertainties always involves evaluating alternative designs and generally affects your base architecture. Poor architecture choices will increase code complexity and create more uncertainty, as future issues become harder to resolve in a consistent manner.

Confused? Let’s untangle this mess one issue at a time.

Uncertainty

The key principle here is that uncertainty introduces delays in development. Let’s look at the average speed of development. The Mythical Man-Month conjectures that an average developer can produce 10 lines of production code per day regardless of programming language. Let’s assume for the sake of argument that today’s developers can code 100 lines per day.

Development speed is limited by meetings, changed and confused requirements, and bug fixing. Suppose we print out all the source code of a working 200,000-line program. If we ask a programmer to type this code in again, they are likely to type at least 2,000 lines per day. So to develop the program from scratch (at 100 lines per day) would have taken 2,000 man-days, but to type it in again would take only 100 man-days.

The time difference has to do with uncertainty. The developer that develops the application from scratch faces uncertainty whereas the developer that types in the application faces no uncertainty.

If you have ever done mazes, you know that working the maze from the entry to the exit involves making decisions, and this introduces delays while you are thinking. However, try doing a maze from the exit back to the entry: you will find there are few decisions to make and it is much faster. Fewer decisions mean less uncertainty to resolve, which leads to fewer delays.

It is always faster to do something when you know the solution.

Sources of Uncertainty

The major sources of uncertainty are:

  • Untrained developers
  • Incomplete and inconsistent requirements
  • Technical challenges

We use the term “learning curve” to indicate that we will be slower when working with new technologies. The slope of the learning curve indicates how much time it will take to learn a new technology. If you don’t know the programming language, libraries/APIs, or IDE that you need to work with this will introduce uncertainty.

You will constantly make syntax and semantic errors as you learn a new language, but this should pass rather quickly. What will take longer is learning the base functionality provided by the libraries/APIs; in particular, you will probably end up creating routines only to discover that they are already in the API. Learning a new IDE can take a very long time and create serious frustration along the way!

Incomplete and inconsistent requirements are a big source of uncertainty.

Incomplete requirements occur when you discover new use cases as you create a system. They also occur when the details required to code are unavailable, i.e. valid input fields, GUI design, report structure, etc. In particular, you can end up iterating endlessly over GUI and report elements – things that should be resolved before development starts.

Inconsistent requirements occur because of multiple sources of requirements as well as poor team communication.

Technical challenges come in many forms and levels of difficulty. A partial list of technical challenges includes:

  • Poorly documented vendor APIs
  • Buggy vendor APIs
  • Interfacing incompatible technologies
  • Insufficient architecture
  • Performance problems

In all cases a technical challenge is resolved either by searching for a documented solution in publications or on the Internet, or by trial and error. Trial and error can be done formally or informally, but it involves investigating multiple avenues of development, possibly building prototypes, and then choosing a solution.

While you are resolving a technical challenge your software project will not advance.

A common source of uncertainty is insufficient architecture.

Insufficient architecture occurs when the development team is not aware of the end requirements of the final software system. This happens when only partial requirements are available and/or understood by the developers. The development team lays down the initial architecture for the software based on their understanding of the requirements of the final software system.

Subsequently, clarified requirements or new requirements make developers realize that there was a better way to implement the architecture. The developer and manager will have a conversation that is similar to:


Manager: We need to have feature X changed to allow Y, how soon can we do this?

(pause from the developer)

Developer: We had asked if feature X would ever need Y and we were told that it would never happen. We designed the architecture based on that. If we have to have behavior Y it will take 4 months to fix the architecture and we would have to rewrite 10% of the application.

Manager: That would take too long. Look I don’t want you to over engineer this, we need to get Y without taking too much of a hit on the schedule. What if we only need to have this for this screen?

(pause from the developer)

Developer: If we ONLY had to do it for this one screen then we can code a work around that will only take 2 weeks. But it would be 2 weeks for every screen where you need this. It would be much simpler in the long run to fix the architecture.

Manager: Let’s just code the work around for this screen. We don’t have time to fix the architecture.


The net effect of insufficient requirements is that you end up with poor architecture. Poor architecture will cause a technical challenge every time you need to implement a feature that the architecture won’t support.

You will end up wasting time every time you need to work around your own architecture.

Management will not endorse the proper solution, i.e. fixing the architecture, because they have a very poor understanding of how every work-around pushes the project closer and closer to failure. Eventually the software will have so many work-arounds that development will slow to a crawl. Interestingly, the project will probably fail, and yet soon enough the organization will attempt to build the same software using the same philosophy.

There is never enough time to get the project done properly, but there will always be enough time to do it again when the project fails.

http://www.geekherocomic.com/2009/06/03/clever-workaround/

Summary

  • Uncertainty comes from several sources
    • Untrained personnel (language, API, IDE)
    • Inconsistent and incomplete requirements
    • Technical challenges

Next part (2 of 3)

  • Defining and understanding risk
  • Matching uncertainties and risks

Why Adding Personnel to a late Software Project delays it more

The blog entry on Root Causes of ‘Fire-Fighting’ explains how poor requirements and insufficient team synchronization mechanisms can lead to constant fire-fighting. When faced with constant fire-fighting your project starts spinning out of control and code development will slow to a crawl. At this time, management’s first instinct is to throw more developers at the problem.

While adding resources to a late project seems like a logical thing to do, it generally makes the problem worse, i.e. it leads to more fire-fighting and reduced productivity. While it seems counter-intuitive, throwing people off the project is actually more likely to make it move faster.  Fred Brooks, author of The Mythical Man-Month, calls this principle Brooks’ Law.

Different Types of Team Activity

Before addressing why adding resources slows down late projects, let’s look at the different types of team activities and their inherent productivity characteristics. When teams of people perform tasks, the tasks fall into one of three categories: 1) additive, 2) disjunctive, and 3) conjunctive.

In an additive activity, the productivity of the group is determined by adding up the productivity of each of the individuals comprising the team, i.e. team productivity = Σ (individual productivity). One additive activity is tug-of-war, where the productive output of your team is equal to the sum of the pulling force of all the members of your team. Another additive activity would be a team of people painting a house.

Managers throw additional people into late projects on the assumption that coding is an additive activity; it isn’t. We’ll cover why in a second.

In a disjunctive activity, the productivity of the group is determined by the strongest member of the team, i.e. team productivity = max(individual₁, individual₂, …, individualₙ). A disjunctive activity would be playing Trivial Pursuit in large teams: the team gets the answer right when any team member gets it right.  In software projects, disjunctive activities occur when there is a very specific technical problem to solve; in a meeting, whoever solves the problem first solves it for the entire team.

In a conjunctive activity, the productivity of the group is determined by the weakest member of the team, i.e. team productivity = min(individual₁, individual₂, …, individualₙ). Conjunctive activities are equivalent to the weakest link in a chain. Security is a conjunctive activity: you are only as secure as the weakest part of your security architecture. Quality is a conjunctive activity too, which is why we say “quality is everyone’s job”. It only takes one poor-quality component to reduce the quality of an entire product.
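
A minimal sketch of the three formulas (the individual productivity numbers are invented and the units are arbitrary):

import java.util.stream.DoubleStream;

public class TeamProductivity {
    public static void main(String[] args) {
        double[] individual = { 3.0, 7.0, 1.5, 5.0 };

        // Additive: team output is the sum of individual outputs (e.g. tug-of-war).
        double additive = DoubleStream.of(individual).sum();

        // Disjunctive: team output is set by the strongest member (e.g. solving a puzzle).
        double disjunctive = DoubleStream.of(individual).max().orElse(0);

        // Conjunctive: team output is set by the weakest member (e.g. quality, security).
        double conjunctive = DoubleStream.of(individual).min().orElse(0);

        System.out.printf("additive=%.1f disjunctive=%.1f conjunctive=%.1f%n",
                additive, disjunctive, conjunctive);
    }
}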

When an organization is unaware of critical conjunctive activities, they are likely to have all kinds of execution problems.

Understanding Requirements is a Conjunctive Activity

Software projects get into fire-fighting mode because there is a poor understanding of the requirements from a team perspective. Whether the requirements were well written or not, if they are poorly understood by the team then you start playing the six blind men and the elephant.

This is where you discover that everyone in your project has a different perspective on what the system is supposed to do and how it is supposed to do it. The fire-fighting mode is nothing more than a set of meetings to resolve differences and solve problems caused by divergent beliefs on the project.

Understanding the requirements is a conjunctive activity. Your productivity is only as good as the weakest understanding in the team. The developer on the team with the weakest understanding of the requirements is probably generating the most defects. If QA does not understand the requirements (if they exist) then they will be generating all kinds of false positives when they are unsure the software is behaving properly.

With this perspective, it is easy to see how adding people to a late project will cause it to be later. The additional developers and QA being added to the project will have the poorest understanding of the requirements of all the team members. This means that they will almost certainly generate more defects in development and cause even more false positives in QA. This will increase the amount of fire-fighting that you do and cause the project to slow down even more.

Solution: Throw People off the Ship


So as counter-intuitive as it sounds, you need to throw people off the ship. Find the developers and QA personnel who don’t understand the requirements and remove them from the project. These are the guys creating much of the noise in the fire-fighting meetings.

Alternatively, get these people together with the business analysts and educate them about what the software is supposed to do and how it is supposed to do it. If you are going to add personnel to the team, then this is an ideal time to get them educated on the requirements BEFORE they start producing or testing code.

While they are not working directly on the project, have them put together the centralized requirements repository suggested in the last blog.  Once they are sufficiently familiar with the requirements, you can add them back to the software team.

Additional resource: The Mythical Man Month, by Fred Brooks


Root cause of ‘Fire-Fighting’ in Software Projects

Quite a few projects descend into ‘continual fire-fighting’ after the first usable version of the software is produced. Suddenly there is an endless set of meetings involving the business analysts [1], developers, QA, and managers. Even when these meetings are well run, you cut the productivity of your developers, who can barely get a few contiguous hours to write code between the meetings.

Ever wonder what causes this scenario to occur in so many projects? Below we look at the root causes of fire-fighting in a project. We also suggest meeting strategies to maximize productivity and minimize developer disruption.

The first thing to notice is the composition of the meetings when fire-fighting starts. One common denominator is that it is rarely just the developers getting together to solve a problem involving some technical constraint; often these are cross-functional meetings that involve business analysts and QA. In larger organizations, they will involve end-users and customers. Fundamentally, fire-fighting is the result of poor coordinating mechanisms between team members and confused communication.

Common Scenarios that Waste Time

Typically an issue gets raised in a bug triage meeting about some feature that QA claims is improperly implemented. Development will then go on to explain how they implemented it and where they got the specific requirements.  At this point, the business analyst chimes in about what was actually required. There are several basic scenarios that could be happening here:

  1. The requirements are complete and QA is pointing out that development has implemented the feature incorrectly.
  2. The requirements are loose and development has coded the feature correctly, but QA believes that the feature is incorrect.
  3. QA has insufficient requirements to know if the feature is implemented correctly or not.
  4. The requirements are loose and development and QA have different interpretations of what that means.

Scenario 1 is what you would expect to happen in a bug triage session. There is no wasted effort for this case as you would expect to need the business analyst, development, and QA to resolve this issue.

Scenario 2 is what happens when the requirements are not well written. Odds are the developer has made several phone calls and sent several emails to the business analyst to resolve the functionality of a particular feature. This information exists purely in the heads of the business analyst and the developer, is buried in their email exchanges, and does not make it back into the requirements. This scenario wastes the developer’s time.

Scenario 3 also happens when the requirements are not well written. Most competent QA personnel know how to write test plans and test cases. When the requirements are available to QA with enough time, they can generally determine if they have sufficient information to write the test cases for a given feature. If given the requirements with enough time, QA can resolve the ambiguity with the business analyst and make sure that the requirements are updated. When there is insufficient time, the problem surfaces in the bug triage meeting. This scenario wastes both QA and development’s time.

Scenario 4 occurs when you have requirements that can legitimately be implemented in many different ways. It is likely that QA did not get the requirements before coding started, if so they could have warned the business analyst to fix it. If development has implemented the feature incorrectly, then: 1) the business analyst needs to fix the requirement, 2) development needs to re-code the feature, and 3) QA needs to update their test cases. In this scenario everyone’s time is wasted.

If your scenarios are not 2, 3, or 4, then you are probably in fire-fighting mode because you have requirements that cannot be coded as specified due to unexpected technical constraints.  Explaining to the organization why something is technically infeasible can take up quite a few meetings.

As an example of unexpected technical constraints, at Way Systems (now Verifone) we were building a cell phone POS system.  Typically signal strength is shown as 5 bars; however, due to the 3rd-party libraries we were using, we could only display a number from 0-32 for the wireless signal strength.  There was no way to overcome the technical constraint because there were too many framework layers that we did not control.  Needless to say, there were quite a few (useless?) meetings while we informed everyone about the issue.

Strategies to Reduce Fire-Fighting

The best way to reduce fire-fighting is simply to have effective requirements when you start a project.  Once you are caught in fire-fighting the cure is the same – you need to fix the requirements and document them in a repository that everyone has access to.  By improving the synchronization mechanisms between the business analysts, development, and QA your fire fighting meetings will go away.  In particular, all those requirements discussions that the business analysts have had with QA and development need to be written down.

Centralize and Document Requirements

If you are using use cases, then the changes need to be made in the use case documents.  If you don’t have a centralized repository, then you need to create one.  You can use a formal collaboration tool such as SharePoint, an informal collaboration tool such as Google Sites, or simply a Wiki to host and document all requirements.

In your document repository you will want to keep all requirements organized by scenario.  If you are using use cases or user stories, then each of these is a scenario.  If you have more traditional requirements, then you will need to derive the scenario names from your requirements.  Scenarios will be of the form ‘verb noun phrase’, e.g. ‘create person’, ‘notify customer of delivery’, etc.

Once you have a central repository for putting your requirements then ALL incremental requirements should be put on this site, not in cumbersome email chains.  If you need to send an email to someone, then document the requirement to the central site and email a link to the party; do not allow requirements to become buried in your email server.

Run Effective Meetings

Managers are often tempted to call meetings with everyone present ‘just in case’.  There is no doubt that this will solve the occasional problem, but you are likely just to have a bunch of developers with ‘kill me now’ expressions on their faces from the beginning to the end of the meeting.

Structure the meeting by grouping issues by developer and make an agenda so that each developer knows the order in which they will be needed at the meeting.  No developer should have to go to a meeting that does not have an agenda! Next, use an IM tool from the conference room to let developers know when they are required to attend the meeting (not the 1st developer, obviously 🙂 ).  Issues generally run over time, so don’t call anyone into the meeting before they are really required.  Give yourself breathing room by having meetings finish 10 minutes before the next generally used slot (e.g. 10:20 am or 2:50 pm).


Issues that really need multiple developers present should be delayed until the end of the meeting.  When all other items are handled, use IM to call in all the developers for those issues.  By having the group issues at the end, you are unlikely to keep everyone around for a long time, since you will probably have to give up your conference room to someone else.

Conclusion

Not all fire-fighting involves bad requirements, but much of it does.  By producing better requirements at the start of a project and implementing a centralized mechanism for those requirements, you will reduce the fire-fighting later in your project. If you find yourself in fire-fighting mode, you can implement a centralized requirements mechanism to help fight your way out of the mess.


[1] Business analysts or product managers