Debuggers are Crutches

Defects are common, but they are not necessary.

Defects are only corrected by understanding code pathways, and debuggers are not the best way to do this.

Debuggers are commonly used by developers to understand a problem, but just because they are common does not make them the best way to find defects.  I’m not advocating a return to “the good old days,” but there was a time when we did not have debuggers and we still managed to debug programs.

Avoid Defects

The absolute best way to remove defects is simply not to create them in the first place. You can be skeptical, but practices like the Personal Software Process (PSP) have been used in practice to prevent 1 of every 2 defects from getting into your code.  Over thousands of projects:

The Personal Software Process increases productivity by 21% and increases code quality by 31%

A study conducted by NIST in 2002 reports that software bugs cost the U.S. economy $59.5 billion annually. This huge waste could be cut in half if all developers focused on not creating defects in the first place.

Not only does the PSP focus on code planning, it also makes developers aware of how many defects they actually create.  Here are two graphs that show the same group of developers and their defect injection rates before and after PSP training.

(Graphs: defect injection rates before PSP training and after PSP training)

Finding Defects

Using a debugger to understand the source of a defect is certainly one way.  But if it is the best way, then why do poor developers spend 25 times more time in the debugger than good developers? (see No Experience Required!)

That means that a poor developer spends a week in the debugger for every 2 hours that a good developer does.

No one is saying that debuggers do not have their uses.  However, a debugger is a tool and is only as good as the person using it.  Focusing on tools obscures a lack of skill (see Agile Tools do NOT make you Agile).

If you are only using a debugger to understand defects then you will remove at most about 85% of all defects, i.e. roughly 1 in 7 defects will always be present in your code.

Would it surprise you to learn that there are organizations that achieve 97% defect removal?  Software inspections take the approach of looking for all defects in code and getting rid of them.

Learn more about software inspections and why they work here:

Software inspections increase productivity by 21% and code quality by 31%

Even better, people trained in software inspections tend to inject fewer defects into code. When you become adept at parsing code for defects then you become much more aware of how defects get into code in the first place.

But interestingly enough, not only will developers inject fewer defects into code and achieve defect removal rates of up to 97%, in addition:

Every hour spent in code inspections reduces formal QA by 4 hours

Conclusion

As stated above, there are times where a skilled professional will use a debugger correctly.  However, if you are truly interested in being a software professional then:

  • You will learn how to plan and think through code before using the keyboard
  • You will learn and execute software inspections
  • You will learn techniques like PSP which lead to you injecting fewer defects into the code

You are using a debugger as a crutch if it is your primary tool to reduce and remove defects.

Related Articles

Want to see more sacred cows get tipped? Check out:

Make no mistake, I am the biggest “Loser” of them all.  I believe that I have made every mistake in the book at least once 🙂


The Programmer Productivity Paradox

Programmers seem to be fairly productive people.  You always see them typing at their desks; they chafe for meetings to finish so that they can go back to their desks and code. When asked, they will say that there is not enough time to produce the code, and the sooner they can start coding, the sooner they will be done.

So writing code must be the most important thing, correct?

The average programmer writes about 50 lines of production code a day, so a 50,000 line program would take 1,000 man days to produce.  Yet the finished 50,000 line listing could be typed in by a programmer at about 1,000 lines a day, or roughly 50 man days.

So what the heck are the developers doing for the other 950 days?

Before addressing that issue, let’s make a simple observation. Capers Jones has compared many methodologies (RUP, XP, Agile, Waterfall, etc.) and programming languages over thousands of projects and determined that programmers write between 325 and 750 lines of code (LOC) per month, which is less than the roughly 1,000 LOC per month (50 LOC per day over 20 working days) suggested above [1].  Even if programmers do not average 50 lines of code per day, the following is clear [2]:

  • Methodology does not explain the apparent productivity gap
  • No language accounts for the apparent productivity gap

The reality is that only a fraction of a developer’s time is actually spent writing production code. A developer who is typing in code all the time is really trying different combinations of code until finally finding one that works.  Or, more correctly, one that seems to match the requirements until either QA or the business analyst comes back and points out a problem.

That is why developers who plan their code before touching the keyboard tend to outperform other developers.  Few developers really plan out their code before coding, and years of experience do not teach developers to plan.  In fact, studies over 40 years show that developer productivity does not change with years of experience (see No Experience Required!).

Years of experience do not lead to higher productivity

Interestingly enough, there are methodologies that have been around for a long time that emphasize planning code.  Watts Humphrey is the creator of the Personal Software Process (PSP) [3].  Using PSP has been measured to:

PSP can raise productivity by 21.2% and quality by 31.2%

If you are interested there are many other proven methods of raising code quality that are not commonly used (see Not Planning is for Losers).

If your developers are at their keyboards rather than planning at a whiteboard, then odds are that your productivity is not as high as it could be.

Bibliography

[1] The Mythical Man-Month is even more pessimistic, suggesting that programmers produce 10 production lines of code per day.
[2] Jones, Capers and Bonsignour, Olivier.  The Economics of Software Quality.  Addison-Wesley.  2011.
[3] Humphrey, Watts.  Introduction to the Personal Software Process.  Addison-Wesley Longman.  1997.



When BA means B∪<<$#!t Artist

BA typically means Business Analyst, but what makes for a good BA?  When do you have a good BA and when don’t you? Many projects fail at the beginning due to incomplete, inconsistent, and overly verbose analysis produced by BAs.

Are your BAs any good?  The success of many projects depends on the ability of the BA to correctly identify the problem you are trying to solve.  If your BA is not competent then you are doomed before you start.

Business analysis consists of all facets of solving business problems. Business analysis is a role executed by project, product, and engineering managers; however, there are people who do business analysis as their main role, and we simply label them business analysts or product managers. We shall stick to the term business analyst for simplicity.

Business analysts perform a range of tasks including:

  • Gathering requirements
  • Writing business cases and project charters
  • Performing gap analysis between products and corporate processes

Ever suspect that the people responsible for performing business analysis for you are not up to the challenge?  Here are three simple questions; good business analysts will get all three correct and take no longer than 20 seconds.

  1. (Image: MullerLyerLines) Which of these two lines is longer?
  2. (Image: paris_spring_puzzle) What does this sign say?
  3. I have two products whose prices add up to $1.10 and one product is $1 more than the other. How much is the more expensive product?

The answers are:

  1. The lines are the same length
    1. Take a ruler if you are not convinced
  2. Paris in the the spring
    1. Notice that “the” occurs twice
  3. The more expensive product is $1.05
    1. If the more expensive one had been $1 then the cheaper one would be $0.10, but then the expensive one would only be $0.90 more expensive than the cheaper one (see the worked check below).
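
For the third question, writing the cheaper price as x makes the answer mechanical:

  x + (x + 1.00) = 1.10
  2x = 0.10
  x = 0.05

So the more expensive product is x + 1.00 = $1.05, and the intuitive answer of $1.00 would leave a gap of only $0.90.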

These three questions illustrate that the mind can jump to conclusions that are often wrong.  The mind can be tricked because it assesses things quickly, and only some people have the instinct to double-check their results.

Most BAs are well educated; however, the interesting thing is that studies show that educated people are less able to see their own biases and jump to conclusions more readily than other people [1].

A good business analyst realizes that during analysis there will be situations where the mind jumps to conclusions.  Many business analysts are asked to resolve conflicting requirements, recognize missing requirements, and deal with biases coming from many different sources.  Unless you have the reflex of checking facts for consistency and eliminating bias, you won’t make a good business analyst.

So what is the quality of your BAs?


Bibliography

[1] West, Richard F., Meserve, Russell J., and Stanovich, Keith E. Cognitive Sophistication Does Not Attenuate the Bias Blind Spot. Journal of Personality and Social Psychology. 2012.


No Business Case = Failed Project

A business case comes between a bright idea for a software project and the creation of that project. On the project timeline:

  • T0 – the idea for a project is born
  • Tcheck – a formal or informal business case is made
  • Tstart – the project is initiated
  • Tend – the project finishes successfully or is abandoned

Not all ideas for software projects make sense.  Between the idea being born (T0) and the project being initiated (Tstart), some due diligence on the project idea should occur.  This is where you do the business case, even if only informally on the back of a napkin.

The business case is where you pause and estimate whether the project is worth it, i.e. will this project leave you better off than if you did not do it?

For those who want a precise definition, the project should be NPV positive (+ve).  In layman’s terms, the project should leave the organization better off on its bottom line, or at least improve skill levels so that other projects are better off.

Projects that do not improve skills or the bottom line are a failure.
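
As a minimal sketch of the NPV test (the discount rate and cash flows below are invented for illustration, not from any real project):

    def npv(rate, cashflows):
        # cashflows[0] is the upfront cost (negative); later entries are yearly benefits
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    # hypothetical project: $100k upfront, three years of benefits, 10% discount rate
    print(npv(0.10, [-100_000, 40_000, 50_000, 60_000]))  # ~ +22,765, NPV positive

A positive result says the discounted benefits outweigh the cost, i.e. the project should leave you better off.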

Out of 10 software projects (see Understanding your chances):

  • 3 are successful
  • 4 are challenged, i.e. over budget, late, or delivering much less functionality than planned
  • 3 will fail, i.e. abandoned

This means that the base rate of success for any software project is only 3 out of 10.

Yet executives routinely execute projects assuming that they cannot fail, even when the project team knows that the project will be a failure from day 1.

Business cases give executives a chance to stop dubious projects before they start. (see Stupid is as Stupid Does)

Understanding how formal the business case needs to be comes down to uncertainty. There are three key uncertainties with every project:

  • Requirements uncertainty
  • Technical uncertainty
  • Skills uncertainty

When there is a moderate amount of uncertainty in any of these three areas then a formal business case with cash flows and risks needs to be prepared.

Requirements Uncertainty

Requirements uncertainty is what leads to scope shift (scope creep).  The probability of a project failing is proportional to the number of unknown requirements when the project starts (see Shift Happens).

Requirements uncertainty is only low for two kinds of projects: 1) re-engineering a system where the requirements do not change, and 2) the next minor version of a software product.

For all other software projects the requirements uncertainty is moderate and a formal business case should be prepared.

Projects new to you have high requirements uncertainty.

Technical Uncertainty

Technical uncertainty exists when it is not clear that all requirements can be implemented using the selected technologies at the level of performance required for the project.

Technical uncertainty is only low when you have a strong understanding of the requirements and the implementation technology. When there is only a moderate understanding of the requirements or the implementation technology then you will encounter the following problems:

  • Requirements that get clarified late in the project that the implementation technology will not support
  • Requirements that can not be implemented once you improve your understanding of the implementation technology

Therefore technical uncertainty is high when you are doing a project for the first time and requirements uncertainty is high.  Technical uncertainty is also high when you are using new technologies, i.e. shifting from Java to .NET or changing GUI technology.

Projects with new technologies have moderate to high uncertainty.

Skills Uncertainty

Skills uncertainty comes from using resources that are unfamiliar with the requirements or the implementation technology.  Skills uncertainty is a knowledge problem.

Skills uncertainty is only low when the resources understand the current requirements and implementation technology.

Resources unfamiliar with the requirements will often implement requirements in a suboptimal way when requirements are not well written.  This will involve rework; the worse the requirements are understood the more rework will be necessary.

Resources unfamiliar with the implementation technology will make mistakes choosing implementation methods due to lack of familiarity with the philosophies of the implementation libraries.  Often, after a project is complete, resources will understand that different implementation tactics should have been used.

Formal or Informal Business Cases?

An informal business case is possible only if the requirements, technical, and skills uncertainties are all low.  This only happens in a few situations:

  • Replacing a system where the requirements will be the same and the implementation technology is well understood by the team
  • The next minor version of a software system

Every other project requires a formal business case that quantifies the kind and degree of uncertainty in the project.  At a minimum, project managers facing moderate to high uncertainty should be motivated to push for a business case (see Stupid is as Stupid Does). Here is a list of projects that tend to be accepted without any kind of real business case that quantifies the uncertainties:

  • Change of implementation technology
    • Moving to object-oriented technology if you don’t use it
    • Moving from .NET to Java or vice versa
  • Software projects by non-software companies
  • Using generalists to implement technical solutions
  • Replacing systems with resources unfamiliar with the requirements
    • Often happens with outsourcing

Projects with moderate to high risks and no business case are doomed to fail.



Stupid is as stupid does

Senior management does not set out to have a failed project; however, when the failure rate is 7 out of 10 projects, you have to wonder what the problem is.

As Ma Gump said, “stupid is as stupid does”; that is, smart people sometimes do stupid things.  The collective IQ of a project is reduced by the number of managers involved in the project.

Now, from a human perspective, something very interesting is going on here.  For example, suppose you were given the following choices:

  1. An 80% chance of making $100
  2. A guaranteed $40

If people behaved according to expected value then everyone would choose the first option, which is worth $80 on average (0.8 × $100), double the guaranteed $40.  However, because humans are risk averse, almost everyone will choose the second alternative with the guaranteed money.

Now out of 10 software projects:

  • 3 will succeed (see Executives: Understanding your chances)
  • 4 will be challenged
  • 3 will outright fail

To put that in perspective, if you were watching people cross the street at an intersection:

  • 3 cross the street successfully
  • 4 get maimed
  • 3 get killed

How interested would you be in crossing that street?

You can Google “software project failure rates” to see that this has been demonstrated by multiple reliable institutions across every industry.  Challenged projects are generally projects that go over budget and under-deliver, producing a software solution that can only be declared a moral victory by management.

So if human beings are risk averse and the odds of project success are so low then:

Why does senior management ignore risks on software projects?

The only possible conclusion is that senior management can’t conceive of their projects failing. They must believe that every software project they initiate will be successful, that other people fail but that they are in the 3 out of 10 that succeed.

This inability to understand the base rate of failure in software development is systemic. Many software projects are started by senior management even though the technical team knows from the start that the chance of success is 0%.

Senior management is human and risk averse; you just need to find a way to remind them of this. One way to get senior management to think twice about a project is to make sure that there is a meeting before launching the project where management is asked the following question:

Assume that this project will fail, why would it have failed? What will the consequences be?

This exercise (if done seriously) may have the effect of causing senior management to realize that the project can indeed fail. With luck, the normal risk aversion that every human being is endowed with will kick in and the project may get re-evaluated.


Bibliography

Kahneman, Daniel. Thinking, Fast and Slow.


Size Matters

Some say that software development is challenging because of complexity. This might be true, but this definition does not help us find solutions to reduce complexity. We need a better way to explain complexity to non-technical people.

The reality is that when coding a project, size matters.  Size is measured by the number of pathways through a code base, not by the number of lines of code.  Size is proportional to the number of function points in a project.

There are many IT people who succeed with programs of a certain size and then fail miserably when taking on programs that are more sophisticated.  Complexity increases with size because the number of pathways increases exponentially in large programs.

Virtually anyone (even CEOs 🙂 ) can build a hello, world! application: an application with a single pathway through it, as simple as a program can get.  Some CEOs write the simple hello, world! program and incorrectly convince themselves that development is easy.

#include <stdio.h>
int main(void) {
    printf("hello, world\n");
    return 0;
}

If you have an executive who can’t even complete hello, world! then you should take away his computer 🙂

Complexity Defined

As programs get more sophisticated, the number of decisions that have to be made increases, and the depth of the call tree increases.  Every non-trivial routine will have multiple pathways through it.

If your average call depth is 10 with an average of 4 pathways through each routine, then this represents over 1 million pathways.  If the average call depth is 15, then it represents over 1 billion pathways.
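
A quick back-of-the-envelope check (the branching factor of 4 is the article’s assumption, not a measurement):

    # pathways grow as branches ** depth
    branches_per_routine = 4
    for depth in (10, 15):
        print(depth, branches_per_routine ** depth)
    # depth 10 -> 1,048,576 pathways; depth 15 -> 1,073,741,824 pathways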

Increasingly sophisticated programs have greater call depth than ever, and distributed applications increase the call depth even further because the call depths of cooperating systems are additive.  This is what we mean by complexity: it is impossible for us to test all of the different pathways in a black box fashion.

Now in reality every combination of pathways is not possible, but you only have to leave holes in a few routines and you will have hundreds, if not thousands, of pathways where calculations and decisions can go wrong.

In addition, incorrect calculations or decisions higher up in the call tree can lead to difficult to find defects that may blow up much further away from the source of the problem.

What are Defects?

Software defects occur for very simple reasons: an incorrect calculation is performed that causes an output value to be incorrect.  Sometimes there is no calculation at all because input data is not validated for consistency, and that data is either stored incorrectly or goes on to cause incorrect calculations to be performed.

We only recognize that we have a defect when we see an output value and recognize that it is incorrect. More likely, QA sees it and tells us that we are incorrect.

Basically, we follow a pathway that is correct through nodes 1, 2, 3, 4, and 5.  At node 6 we make a miscalculation, then we have incorrect values at nodes 7 and 8, and we finally discover the problem at node 9.

So once we have a miscalculation, we will either continue to make incorrect calculations or make incorrect decisions and go down the wrong pathways (where we will then make incorrect calculations).

Not all Defects are Equal

It is clear that the greater the distance between a miscalculation and its discovery, the harder the defect is to detect.  The greater the call depth, the greater the chance of a large distance between origin and detection. In other words:

Size Matters

Today we build sophisticated systems of many cooperating applications, and the number of pathways grows exponentially with the call depth of the system.  This is what we mean by complexity in software.

Reducing Complexity

Complexity is reduced for every function where:

  • You can identify when inconsistent parameters are passed to a function
  • All calculations inside of a function are done correctly
  • All decisions through the code are taken correctly

The best way to solve all 3 issues is through formal planning and development. Two methodologies that focus directly on planning at the personal and team level are the Personal Software Process (PSP) and the Team Software Process (TSP), invented by Watts Humphrey.

Identifying inconsistent parameters is easiest when you use Design by Contract (DbC), a technique pioneered by the Eiffel programming language. It is important to use DbC on all functions that are in the core pathways of an application.
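
As a minimal sketch of DbC-style contracts in Python (the function and its contract are hypothetical; Eiffel expresses these as require/ensure clauses built into the language):

    def withdraw(balance, amount):
        # preconditions: reject inconsistent parameters at the function boundary
        assert amount > 0, "precondition: amount must be positive"
        assert amount <= balance, "precondition: amount cannot exceed balance"
        new_balance = balance - amount
        # postcondition: the function must leave the account consistent
        assert new_balance >= 0, "postcondition: balance cannot go negative"
        return new_balance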

Using Test Driven Development is a sure way to make sure that all calculations inside of a function are done correctly, but only if you write tests for every pathway through the function.
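
A minimal sketch of what “a test for every pathway” means, using Python’s unittest (the discount function is a made-up example with exactly two pathways):

    import unittest

    def discounted_price(price, is_member):
        # two pathways: members get 10% off, everyone else pays full price
        if is_member:
            return price * 0.9
        return price

    class DiscountTests(unittest.TestCase):
        def test_member_pathway(self):
            self.assertAlmostEqual(discounted_price(100.0, True), 90.0)

        def test_non_member_pathway(self):
            self.assertAlmostEqual(discounted_price(100.0, False), 100.0)

    if __name__ == "__main__":
        unittest.main()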

Making sure that all calculations are done correctly inside a function, and that correct decisions are made through the code, is best done through code inspections (see Inspections are not Optional and Software Professionals do Inspections).

All techniques that can be used to reduce complexity and prove the correctness of your program are covered in Debuggers are for Losers.  N.B. Debuggers as the only formalism will only work well for systems with low call depth and low branching.

Conclusion

Therefore, complexity in software development is about making sure that all the code pathways are accounted for.  In increasingly sophisticated software systems the number of code pathways increases exponentially with the call depth. Using formal methods is the only way to account for all the pathways in a sophisticated program; otherwise the number of defects will multiply exponentially and cause your project to fail.

Only projects with low complexity (i.e. small call depth) can afford to be informal and only use debuggers to get control of the system pathways. As a system gets larger only the use of formal mechanisms can reduce complexity and develop sophisticated systems. Those formal mechanisms include:

  • Personal Software Process and Team Software Process
  • Design by Contract (via Aspect Oriented Programming)
  • Test Driven Development
  • Code and Design Inspections

Are we there yet?

We associate “are we there yet?” with kids asking incessantly if a long trip is almost over.  It is generally funny; it is less funny on a project that should be complete.

Projects follow distinct phases:

  1. Basic requirements are collected
  2. Project plan and end date are established
  3. Development starts
  4. Projects track to the project plan

Often, tasks start off well until they all level off at 90-95% complete and get stuck.  Management was satisfied with progress until progress stalled; now they see frantic activity picking up and wonder, “are we there yet?”

Like a family vacation, this trip is sometimes not even close to being finished.  You would expect that if 9,000 hours have been spent on a project estimated at 10,000 hours, you would be 90% done.  Surprisingly, this is not the case for 7 projects out of 10.  Have you ever worked on a project where a 90% complete project plan meant the project was 90% done?

Using a project plan to estimate completion is useful only if there is a direct correlation between the project goal and the project plan, and you often discover that the goal and the plan differ late in a project. Project plans and results differ because:

  • Requirements and tasks are missing
  • The project is incorrectly estimated
  • Work is performed on tasks that do not advance the project

Requirements and Tasks are Missing

Clearly missing requirements mean that more work will be necessary to get to the goal, but the time for this work is rarely added to the project deadline.

A relative of missing requirements is missing tasks; this occurs when the work breakdown structure is incomplete and more subtasks are necessary to complete a task than estimated.

In both cases, if there are 2,000 hours of missing requirements and tasks, then a project initially forecast at 10,000 hours should move its deadline to 12,000 hours.  Therefore if 9,000 hours are done then you are only 75% complete (9,000 of 12,000 hours).

Unfortunately, weak IT leadership, internal politics, and embarrassment over poor estimates mean the deadline does not move, and teams will have pressure put on them by overbearing senior executives to hit the original deadline even though that is not possible.

The Project is Incorrectly Estimated

There is much literature about how accurate estimates are both possible and necessary for successful projects.  Weak and uninformed IT leadership will cave in to senior management demands for project deadlines without formal estimates.  A typical interaction looks as follows:

CEO: We need feature X. How much will it cost and how long will it take to build?

VP Engineering: Well, we need to define feature X properly, see how it will be implemented, determine if we have the necessary skill sets, and see what the impact on our other operations will be.  It will take time to do this work.

CEO: We don’t have time for formal estimates.  How hard can it be to add feature X?  But next Friday, I will need a ballpark estimate for time and cost.

VP Engineering: I’ll see what I can come up with for next week.

Weak IT executives allow themselves to be bullied all the time by other executives who have no idea what is involved in IT projects.  The end result is an underspecified project that will be underestimated in time and cost (see Why Executive Declared Deadlines lead to Disaster).

The more inaccurate the requirements the more extra work there is to do to get to the target.  That is why short requirements processes lead to strongly shifting requirements and cancelled projects (see Shift Happens).  In fact, the degree of requirements shift is equal to the chance of a project being cancelled.

Work is Performed that Does Not Advance the Project

Even if a project is correctly specified, there are several activities that will be performed that will not advance the project:

  • Some requirements can not be implemented as specified and time will be spent researching and implementing work arounds
  • Some requirements will be ambiguously specified and be implemented incorrectly and need to be redone
  • Some requirements will be inconsistent and require time and analysis to establish consistent requirements
  • Infrastructure might need to be refactored when you discover that it will not support the code created later

Work executed on these activities will not advance your project and should not be counted in the total of completed hours.  So if 2,000 of the 9,000 hours spent on a 10,000 hour project went to activities that do not advance the project, then you have really done only 7,000 hours of the 10,000 hour project and you are only 70% done.
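
Both adjustments fit in one small formula; here is a sketch using the article’s numbers:

    def true_completion(done_hours, planned_hours, missing_hours=0, wasted_hours=0):
        # missing work grows the denominator; wasted work shrinks the numerator
        return (done_hours - wasted_hours) / (planned_hours + missing_hours)

    print(true_completion(9000, 10000, missing_hours=2000))  # 0.75 -> 75% done
    print(true_completion(9000, 10000, wasted_hours=2000))   # 0.70 -> 70% done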

Not Changing the Deadline Leads to Bad Practices

Whether a project is off course because of missing activities or non-productive activities, not changing the project end date will lead to schedule pressure as the project advances and people slowly start to realize that you are not going to make it.

Hope is not a strategy.

When it becomes clear that a project can’t make its original deadline, many organizations fall back on common but deadly practices.  Excessive schedule pressure often leads to the following bad practices (see Stop It! No… Really stop it.):

  • Friction within the team
  • Friction amongst the managers
  • Inadequate communication with  stakeholders
  • Layoffs of key personnel

Solutions

There are only a few cures for the ‘Are we there yet?‘ problem:

  1. IT management with the intestinal fortitude to hold out for the creation of proper work breakdown structures and formal estimates
  2. Proper requirement processes that yield complete, consistent, and concise requirements
  3. Proper change management processes to alter the project deadline when missing tasks and non-productive activities are encountered

Failure to comply with these 3 principles means that you will continue to be subject to chaotic environments where 7 out of 10 projects fail (see Executives: Understanding your Chances of a Successful Project)


Let the Machines Take Over

Whether you believe that the machines will take over or not will always be up for debate (until they actually take over :-)).

What is not up for debate is that many processes for software creation have been automated and can help improve productivity and quality in every phase of a software project.

Here are some statistics about how useful automation can be; they should give you some ammunition when you confront management about the need for these tools.

These results were generated by Capers Jones, who has been collecting data on over 15,000 projects for about 40 years.  Full results are shown in the paper in the references.

Requirements Analysis

Tools for tracing requirements have been available for quite some time. Generally, only companies that are forced to provide requirements traceability use them, but tracing makes sense on all large projects:

Automated requirements tracing can raise productivity by 12.89% and quality by 17.8%

There are tools that can help size programs in terms of function points, which then allows you to get relatively accurate estimates of a project’s cost and time before the project is executed.

Automated sizing in function points can raise productivity by 16.5% and quality by 23.7%

Development

There are several tools that are useful during development, but by far the biggest bang for your buck comes in the form of automated static analysis.

Automated static analysis can raise productivity by 20.9% and quality by 30.9%

There are now static analysis tools available for virtually all current languages that are cost effective and can help to locate those hard to find occasional defects.  You can find a list here of automated tools for C/C++, Java, JavaScript, Objective-C, Perl, PHP, and Python.

Getting cyclomatic complexity counts for your code can also be done in an automated fashion.  Studies have shown that files with a count of 74+ were 98% likely to have defects.

Automated cyclomatic complexity analysis can raise productivity by 14.5% and quality by 19.5%
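
As an illustration, the open-source radon package reports cyclomatic complexity for Python code (assuming radon is installed and module.py is the file you want to measure):

    # pip install radon
    from radon.complexity import cc_visit

    with open("module.py") as f:
        source = f.read()

    # one entry per function/method, with its cyclomatic complexity count
    for block in cc_visit(source):
        print(f"{block.name}: complexity {block.complexity}")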

Performance is a tricky thing: we talk about it often, but rarely do we actually do something about it. Automated performance analysis allows you to determine if there will be production problems before deploying new code.

Automated performance analysis can raise productivity by 12.5% and quality by 19.5%

There are tools for restructuring code automatically, and many of these are built directly into today’s IDEs.  There are also many standalone tools for automated restructuring; some are shown here.

Automated restructuring can raise productivity by 8.0% and quality by 11.7%

Automated unit testing is the easiest way to detect defects early.  If your developers are using Test Driven Development (TDD) and any continuous integration tool (e.g. Hudson or Jenkins) then you can find out with each build if a defect has crept into the code.

Automated unit testing can raise productivity by 16.5% and quality by 23.8%

Testing

Believe it or not, last year I was working with a team that was not using version control or defect tracking. So there are clearly still some organizations not using defect tracking.

Automated defect tracking can raise productivity by 17.5% and quality by 25.9%

Measuring the percentage of the application pathways that are actually exercised by your tests is critical.  Most of your defects will lie in the pathways that you cannot test.

Automated test coverage analysis can raise productivity by 15.5% and quality by 21.2%

Test cases can be generated automatically in some situations.

Automated test case generation can raise productivity by 15.0% and quality by 20.0%

Deployment

The next two items are not about technology that you acquire; rather, they are about building automated tools to support critical deployment functions.  When the pressure is on, it is tough to think about creating applications to manage configuration in an automated way.  When management tells you that it would be a luxury to build these tools, show them these results.

Some organizations rely on manually updating configurations, but this can be error prone if there are many values to update.

Automated configuration control can raise productivity by 16.4% and quality by 23.5%
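
A minimal sketch of what automated configuration control can look like (the file name, keys, and types are hypothetical):

    import json

    REQUIRED = {"db_host": str, "db_port": int, "timeout_seconds": int}

    def check_config(path):
        # fail fast at deploy time instead of discovering a bad value in production
        with open(path) as f:
            cfg = json.load(f)
        for key, expected_type in REQUIRED.items():
            assert key in cfg, f"missing configuration key: {key}"
            assert isinstance(cfg[key], expected_type), f"{key} must be {expected_type.__name__}"
        return cfg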

Also, when automated configuration tools don’t exist, you often don’t have automated deployment tools either.

Automated deployment support can raise productivity by 14.6% and quality by 19.6%

Conclusion

One of the most cost effective ways to improve productivity and quality in your software development is to automate any of the areas above.  In most cases you can find a vendor that will provide support; for deployment you can make the case that building automated tools is the way to go.


References

N.B. All productivity and quality percentages were derived over 15,000+ actual projects




To be, or not to be… Formal

What the heck is a formal process? Is it in the category of you know it when you see it?

A formal process is a methodical way of tackling any repetitive process so that it can be handled in a standardized way. The idea of formality is to reduce variation in output and minimize the chance of forgetting something. Formal processes include the ideas of studying, planning, measurement, and process refinement; they cost more and take longer than just jumping in and getting things done.

Imagine if they tried to make cars in an informal way… would you drive them?

Informal (i.e. not formal) processes are faster and cheaper because you get rid of the overhead of a formal process. It makes sense to be informal when the variation of the process output is not very sensitive to studying, planning, measurement, or refinement.

We are all familiar with attempts to use formality where it makes no sense; you simply end up with process for process’s sake and waste time and money.

In software, formality has become synonymous with documentation, and frankly we all hate documentation. We don’t like to produce it and we don’t like to read it. I’ve instructed hundreds of engineers and only a few people bother to read the requirements brick. The way education is going right now I’m not even sure that university graduates can read anyways, but I digress… 🙂

For example, documentation for requirements, design, and testing is the end result of a formal process; it is not the formal process itself. It is smart not to produce more documentation than you need, but that does not mean that the formal processes that create the documentation are optional.

Formal processes in software development cluster around two very simple ideas:

  • getting the correct requirements
  • elimination of defects

The problem is that formal processes only work when you do enough of them to make a difference; doing too little of a formal process is just as bad as doing too much. For example, UML class diagrams will not only help you to plan code but also educate new developers on the team.

Producing UML diagrams is a formal practice; however, if you insist on producing UML diagrams for everything then you will end up spending a significant amount of time creating the diagrams to get a diminishing benefit. Not having any UML diagrams will lead to code being developed more than once simply because no one has a good overview of the system.

Selective UML diagrams can really accelerate your development

Visualizing Formality

As you increase any formal practice, the effect of that practice will increase; for example, the more formal you are with UML diagrams, the fewer defects you are likely to have:


However, as you increase formality, the costs of keeping track of all your artifacts go up:


When you put the two graphs together you get this effect:


This shows that there is a cost to not having enough formality: no UML diagrams means that more code will get developed, and less reuse means more defects. However, producing too many UML diagrams also has a cost: creating the diagrams and keeping them updated. The reality is that for any formal practice there is a sweet spot where a minimal amount of formality gives the lowest cost.

What that means is that productivity will be maximized in the sweet spot.
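
The trade-off can be sketched with a simple (purely illustrative) cost model:

  total_cost(f) = defect_cost(f) + overhead_cost(f)

where f is the degree of formality, defect_cost falls as f rises, and overhead_cost climbs with f. The sweet spot f* sits where the marginal defect savings equal the marginal overhead, i.e. defect_cost'(f*) = -overhead_cost'(f*); to either side of f*, total cost rises.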


Remember that most formal practices are hygiene practices: things that no one wants to do but that are really necessary to be highly productive and produce high quality code (see here for more on hygiene practices).

Statistics Behind Formal / Informal Practices

Formal practices can be applied to many things, but Capers Jones has measured that the following formal practices (when properly executed) have a positive effect on a project (15,000+ projects):

Formal risk management can raise productivity by 17.8% and quality by 26.4%

Formal measurement programs can raise productivity by 20.0% and quality by 30.0%

Formal test plans can raise productivity by 16.6% and quality by 24.1%

Formal requirements analysis can raise productivity by 16.3% and quality by 23.2%

Formal SQA teams can raise productivity by 15.2% and quality by 20.5%

Formal scope management can raise productivity by 13.5% and quality by 18.5%

Formal project office can raise productivity by 11.8% and quality by 16.8%

A special category of formal practices are inspections:

Formal code inspections can raise productivity by 20.8% and quality by 30.9%

Formal requirements inspections can raise productivity by 18.2% and quality by 27.0%

Formal design inspections can raise productivity by 16.9% and quality by 24.7%

Formal inspection of test materials can raise productivity by 15.1% and quality by 20.2%

More information on inspections can be found here:

Not being formal enough has its costs, and it is rarely neutral. Here are some of the costs of not being formal enough:

Informal progress tracking can lower productivity by 0.5% and quality by 1.0%

Informal requirements gathering can lower productivity by 5.7% and quality by 8.4%

Inadequate risk analysis can lower productivity by 12.5% and quality by 17.5%

Inadequate testing can lower productivity by 13.1% and quality by 18.1%

Inadequate measurement of quality can lower productivity by 13.5% and quality by 18.5%

Inadequate defect tracking can lower productivity by 15.3% and quality by 20.9%

Inadequate inspections can lower productivity by 16.0% and quality by 22.1%

Inadequate progress tracking can lower productivity by 16.0% and quality by 22.5%

N.B. progress tracking is on the list twice. Informal progress tracking is fairly neutral, but inadequate progress tracking is deadly.

Conclusion

In development practices it is not a matter of formal vs. informal. It is a matter of seeing what kind of variation you are exposed to and recognizing where formality can make a difference. Then it is a matter of adding only as much formality as you need to get maximum productivity.

Formal practices must be monitored, you must have some way of testing if the formal practice is having an effect on your development. You want to make sure that you are executing all formal practices properly and not just going through the motions of a formal practice with no effect. You must monitor the effects of formality to understand when you are not doing enough and when you are doing too much!




Agility is not Informality

Agility is about following the Manifesto for Agile Software Development, not the sloppy and crazy implementations of Scrum, XP, or other home-grown schemes out there. Sometimes I feel like people are using Agility as an excuse to be informal. Agility is about discipline and being formal about key things.

Two of the key principles of the Agile Manifesto are:

  • Working software over comprehensive documentation
  • Responding to change over following a plan

Somehow some developers have interpreted this as meaning that there are no formal processes to be followed. But just because you put the priority on working software does not mean that there can be no documentation; just because you are responsive to change does not mean that there is no plan.

Agile development when properly implemented can accomplish great things.  When it is just an excuse to be undisciplined and cover-up cowboy development then a so called ‘Agile’ process will work very poorly and the organization will just spin its wheels.

Agile development in general raises productivity by 18.2% and quality by 27.1%

Extreme programming (XP) specifically raises productivity by 12.8% and quality by 17.8%

Agile software development does have formality and it requires quite a bit of discipline to stick with it. A few formal practices for Scrum include:

  • Stand-up meetings
  • Work burn down charts
  • Sprint demos
  • Simplicity
  • Improvement through reflection

Stand-up Meetings

Stand-up meetings are an integral part of the Scrum process and a key discipline of Scrum when they are done properly. The meeting is a coordinating mechanism of a self-organizing team where each member states:

  • What have they finished
  • What are they working on
  • What external issues do they need help on

The scrum daily meeting raises productivity by 16.4% and quality by 23.5%

Believe it or not, there are some Scrum teams that sit down for the stand-up meeting. What the heck is this? The purpose of standing up is to be slightly uncomfortable and keep in mind that the meeting is supposed to move quickly. The meeting will be short if you do it standing up.

You state what features that you have finished so that team mates waiting on your deliverables know where you are at. Finished means that you have unit tested your code (i.e. TDD) and that black box testing can start if there is a tester in your Scrum team.

Automated unit testing raises productivity by 16.5% and quality by 23.8%

You state what you are working on so that other team mates know what you are doing and can organize their work accordingly.  Don’t bother to state how you solved problems, that was an issue for planning with team mates before you started coding.

You state what you need help on that is external to the Scrum team so that the Scrum master can resolve the issue.  This is not a design meeting, if there are issues that you need to resolve with team mates then you can resolve this issue right after the meeting.

Burn Down Charts

The work burn down chart is a formal discipline and essential to success (see Who Needs Formal Measurement?); it is the heart of accountability in Scrum.  Getting the chart’s hours down to 0 on a regular basis relies on three principles: 1) commitment, 2) execution, and 3) estimation.

The point of a self-organizing team is that it commits to work each sprint to get the work done.  If it turns out that they over committed to a sprint then they must do whatever it takes to get the job done.  That may mean extra hours or it may mean being extra creative, but once you commit to work then you have got to get it done.  That means execution, it means doing what you say that you will do.

You may have a sprint or two where the work burn down chart does not go to 0. If this is happening on a regular basis then you have either an estimation problem or a team commitment problem.  If engineers are regularly underestimating their tasks then it means that you really don’t know how long things take.  You should consider some of the techniques in the Personal Software Process (PSP) to improve your ability to estimate.

PSP development raises productivity by 21.3% and quality by 31.3%

Another cause of work burn down charts not going to 0 is that the team is not really self-organizing.  The team needs to commit to the work to be done; it will never work if the organization assigns the work to be done in the sprint.

If you lose discipline during the sprint and don’t keep the work burn down charts up to date, then you are not tracking progress correctly.  Remember, you can’t manage what you don’t measure.

Inadequate progress tracking reduces productivity by 16.0% and quality by 22.5%
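
A minimal sketch of the bookkeeping behind a burn down chart (the sprint length and hour counts are hypothetical):

    def ideal_burn_down(total_hours, sprint_days):
        # straight line from the committed hours down to zero
        return [total_hours * (1 - day / sprint_days) for day in range(sprint_days + 1)]

    # hypothetical 10-day sprint with 120 committed hours
    actual_remaining = [120, 112, 101, 95, 88, 80, 74, 70, 66, 61, 55]
    for day, (ideal, actual) in enumerate(zip(ideal_burn_down(120, 10), actual_remaining)):
        flag = "  <-- behind" if actual > ideal else ""
        print(f"day {day}: ideal {ideal:5.1f}h, actual {actual}h{flag}")

A chart that consistently ends the sprint well above zero, like this one, signals an estimation or commitment problem.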

Sprint Demos

The sprint demo is not optional; it is a mandatory part of the process.  The sprint demo exists to show near-production code in a working system.  The team needs to put pressure on itself to have the code complete by the end of the sprint; this forces a self-organizing team to think through the amount of work that it can commit to. No demo = no team commitment.

Remember the whole point of Agile development is to put the emphasis on working software.  If you are not producing working software every sprint, then what are you doing? Whatever you are doing it is not Agile development.

The Agile Manifesto puts emphasis on working software.  If your sprints are not producing working software then you are not doing agile development.

Simplicity

The best designs are the simplest designs because they are easier to understand and maintain.  In a previous article, No Experience Required!, I discuss how years of experience do not make for better programmers.  You might expect that the best developers have more years of experience, but 8 studies over 30 years show that this is not the case (you might even want to get rid of poor performers with many years of experience, but see the article 🙂 ).

In fact, two persistent facts about the differences between the best and worst developers are:

  • program execution speed: about 10 to 1
  • program size: 5 to 1

Much of this results from simplicity. The best developers plan their code and write simple, straightforward routines; they are rewarded with smaller programs that execute quickly. The worst developers do not plan their code and shoot from the hip; they write convoluted code that is much larger and has much worse execution time.

Agility is about simplicity. It is spending 80% of your time thinking and 20% of the time writing simple and elegant code. It is not about spending 100% of your time writing complex and convoluted code that even you can’t understand.

Simplicity does not just happen; it is planned, and it takes tremendous effort to write simple code. The best developers plan their code (see PSP above) and continually refactor their code to make designs simpler to understand and maintain.

Automated restructuring raises productivity by 8.0% and quality by 11.7%

Code refactoring raises productivity by 4.3% and quality by 6.7%

Improvement through Reflection

A key discipline in Agile development is to reflect on what you can do better as individuals and as a team. There is always room for improvement, there is often a better way to do things.  Professional developers are keen to keep learning and improving their craft, and the best way to do this is by reflecting on the sub-optimal choices that we have made in the past and trying to do better in the future.

Conclusion

Agile development is about following the Agile Manifesto, it is not about informality.  True agility is developed by having the discipline to be formal in many things:

  • Team synchronization
  • Learning to estimate
  • Tracking progress through burn down charts
  • Committing to create production code
  • Planning simple code
  • Improving by not making the same mistakes

Agility is about being disciplined and formal in processes that count


References

N.B. All productivity and quality percentages were derived over 15,000+ actual projects


