What do you do if the customer is not right?

We often hear that “the customer is always right!”, but is this really true?  Haven’t we all been in situations where the customer is asking for something unreasonable, or is simply downright wrong?

The “customer is always right” strategy reflects the fact that it is roughly five times more expensive to acquire a new customer than to retain an existing one.  So even when the customer is wrong, accommodating their idiosyncrasies is worth losing a battle to win the profit war, because acquiring customers is so expensive.

Today we take a more nuanced position.

If the customer is unreasonable and unprofitable, then it makes no sense to adopt the motto that “the customer is always right!”.  Unprofitable accounts are worth retaining only if they are strategic, i.e. they either 1) become profitable, or 2) draw enough profitable accounts into the company to make up for the loss. This strategy is employed by start-ups to get their first customers.

Recognizing bad customers is usually not difficult. Transactional customers are often bad customers, especially those that want the lowest price and act as though every product is a commodity; they try to play vendors off against each other in spite of their own quality requirements.

It is often useful to let transactional customers interested only in the lowest price purchase from competitors, and let the lack of a quality solution come back to haunt them. Reducing quality to meet customer price objectives will leave you with customer complaints when product and service quality turn out to be substandard.

Transactional buyers get courted when sales executives chase every opportunity hoping for a sale, and that is how you get pulled into pricing concessions by demanding customers. The problem is that demanding transactional buyers won’t just ask for the best price; they will also ask for product changes.

There is no doubt that customer requirements need to be a driver for product management; they are an early indication of changing markets. But accepting all customization requests is impossible and would cripple your product and brand.

Constant unreasonable requests leave internal resources believing that sales people have extremely low IQs and morals.

Good sales people understand these principles and don’t chase bad customers.  But, there are not enough good sales people to go around, so virtually every company has a less-than-excellent sales person making trouble for product management and engineering.

To make matters worse, sales people are very good at making the case that all customers are strategic, arguing that a short-term loss will eventually turn into a long-term gain. This behavior is normal and expected because sales is driven by commissions, which are often revenue based. Hopefully, your sales process is robust enough to catch these attempts, which would otherwise saddle you with bad customers.

If you find yourself in a position where you have acquired one or more bad customers (you know who they are..) then your best course of action is to find some way to send them to your competitors. This will increase your profitability and reduce the stress of unreasonable requests flooding into product management and engineering.

The customer is not always right, but with due diligence and account reviews you can determine the customers that should be retained and those that should be let go.

Don’t be afraid to let unprofitable and non-strategic customers go. You will feel less stressed and be better off in the long run.


Ready, Fire, Aim: Why correct requirements are rarely gathered

Connecting with your customers and delivering value depends on understanding your customers’ requirements and selling the correct product or solution to solve their problems.

Commonly, the requirements gathering process is done hastily or not at all in the rush to get the sale. After all, the faster you can make the sales process go, the faster the money is in the bank — correct?

The more product customization or new development a sale requires, the more important it is to understand the customer’s actual problem.  The more time required to build and implement a solution, the more likely a misunderstanding will lead not only to a failed sale but also to a disappointed customer and a loss of future revenue.

If you build software, which has long lead times, it is even more important to make sure that what you are building will satisfy the customer’s requirements, as changing the final software solution will be nearly impossible and very costly.

Why do we continually misunderstand our customers, sell the wrong solutions, and build the wrong software?

Human Nature

The time pressure to make a sale stresses us, and that stress leads to quick decisions about whether a product or solution can be sold to a customer.  We listen to the customer but interpret everything he says in terms of the products and solutions that we already have.

Often the customer will use words that seem to match exactly the products we have.  But after the sale, once we have to implement, we often find either that we deliver a poor solution to the customer or that the solution is infeasible and we must refund the customer’s money.

However, behind every word that the customer uses there is an implied usage, and understanding that implied usage is where requirements gathering usually fails.

For example, suppose the customer says “I need a car”, and suppose that you sell used cars:

  • the customer asked for a car
  • you sell cars

Ergo problem solved!

  • What if the customer needs an SUV but you don’t have any?
  • What if the customer really needs a truck?
  • What if the customer needs a car with many modifications?
  • What if the car the customer needs has never been built?

We hear the word car and we think that we know what the customer means.  The order-taker sales person will spring into action and sell what he thinks the customer needs. Behind the word car is an implied usage and unless you can ferret out the meaning that the customer has in mind, you are unlikely to sell the correct solution.

If you don’t understand how the customer will use your solution, then you don’t understand the problem.

Comedy of Errors

For products that require customization, the sale will get transferred to professional services that will dig deeper into the customer’s requirements.  At this point you discover that the needs of the customer cannot be met.  This leads to sales people putting pressure on professional services and product management to ‘find a solution’, after all, losing the sale is not an option.

Sometimes heroic actions by the product management, professional services, and software development teams lead to a successful implementation, but usually not until there has been severe pain for the customer and plenty of midnight oil burned in your company.  You may eventually succeed, but that customer will never buy from you again.

Consequences

As William Ralph Inge said, “There are no rewards or punishments — only consequences.”

The consequences of selling the wrong solution to a customer are:

  • Whether you lose the sale or not, the customer loses faith in you
  • The sales person is perceived as incompetent by the rest of the organization, especially by professional services and software development
  • Your cost of sales and implementation is much higher than expected
  • Your reputation is damaged

Conclusion

Selling the correct solution to the customer requires that you understand the customer’s problem before you sell the solution.

When customization is required, good sales people engage resources that can capture the customer’s requirements accurately and assess that you can deliver a solution to the customer.

Slowing down to understand the customer’s requirements and how you will solve his problems is the key.  By understanding the customer’s requirements and producing the correct solution, you become a trusted adviser to the customer.

 


Not using UML on Projects is Fatal

The Unified Modeling Language (UML) was adopted as a standard by the OMG in 1997, almost 20 years ago.  But despite its longevity, I’m continually surprised at how few organizations actually use it.

Code is the ultimate model for software, but it is like the trees of a forest.  You can see a couple of trees, but only a few people can see the entire forest just by looking at the code.  For the rest of us, diagrams are the way to see the forest, and UML is the standard for diagrams.

They say “a picture is worth a thousand words”, and this is true for code; even on a large monitor you can only see so many lines of code.  Every other engineering discipline has diagrams for complex systems, e.g. design diagrams for airplanes, blueprints for buildings.  In fact, the diagrams need to be created and approved BEFORE the airplane or building is built.

Contrast that with software, where UML diagrams are rarely produced, or if they are, they are produced as an afterthought.  The irony is that the people pushing to build the architecture quickly say that there is no time to make diagrams, but they are the first to complain when the architecture sucks.  UML is key to planning (see Not planning is for losers).

I think this happens because developers, like all people, are focused on what they can see and touch right now.  It is easier to try to code a GUI interaction or tackle database update problems than it is to work at an abstract level through the interactions that are taking place from GUI to database.

Yet this is where all the architecture is.  Good architecture makes all the difference in medium and large systems.  Architecture is the glue that holds the software components in place and defines communication through the structure.  If you don’t plan the layers and modules of the system then you will continually be making compromises later on.

In particular, medium to large projects (>10,000 function points) are at very high risk of failure if you don’t consider the architectural issues.  Considering that only 3 out of 10 software projects are successful, only a fool would skip planning the architecture (see Failed? You get what you deserve!).

Good diagrams, in particular UML, allow you to abstract away all the low level details of an implementation and let you focus on planning the architecture.  This higher level planning leads to better architecture and therefore better extensibility and maintainability of software.

If you are a good coder, then you will make a quantum leap in your ability to tackle large problems by working through abstractions at a higher level.  How often do we find ourselves unable to implement simple features simply because the architecture doesn’t support them?

Well the architecture doesn’t support it because we spend very little time developing the blueprint for the architecture of the system.

UML diagrams need to be produced at two levels:

  • the analysis or ‘what’ level
  • the design or ‘how’ level

Analysis UML diagrams (class, sequence, collaboration) should be produced early in the project and support all the requirements.  Ideally you use a requirements methodology that allows you to trace easily from the requirements onto the diagrams.

Analysis diagrams do not have implementation classes on them, i.e. no vendor specific classes.  The goal is to identify how the high level concepts (user, warehouse, product, etc) relate to each other.

These analysis level UML diagrams will help you to identify gaps in the requirements before moving to design.  This way you can send your BAs and product managers back to collect missing requirements when you identify missing elements before you get too far down the road.

Once the analysis diagrams validate that the requirements are relatively complete and consistent, you can create design diagrams with the implementation classes.  In general, analysis diagrams map one-to-many onto design diagrams.

Since you have validated the architecture at the analysis level, you can now do the design level without worrying about compromising the architectural integrity.  Once the design level is complete you can code without compromising the design level.

When well done the analysis UML, design UML, and code are all in sync.  Good software is properly planned and executed from the top down.  It is mentally tougher to create software this way, but the alternative is continuous patches and never ending bug-fix cycles.

So remember the following example from Covey’s The 7 Habits of Highly Effective People:

You enter a clearing where a man is furiously sawing at a large log, but he is not making any progress.  You notice that the saw is dull and is unable to cut the wood, so you say, “Hey, if you sharpen the saw then you will saw the log faster”.  To which the man replies, “I don’t have time, I’m too busy sawing the log”.

Don’t be the guy sawing with a dull saw.

UML is the tool to sharpen the saw; it does take time to learn and apply, but you will save yourself much more time and be much more successful.



Don’t be a Slave to Your Tools

Developers attach quickly to tools because tools are concrete and have well-defined behavior.  It is easier to learn a tool than to learn good practices or methodology.

Tools only assist in solving problems, they can’t solve the problem by themselves. A developer who understands the problem can use tools to increase productivity and quality.

Poor developers don’t invest the time or effort to understand how to code properly and avoid defects.  They spend their time learning how to use tools without understanding the purpose of the tool or how to use it effectively.

To some degree, this is partially the fault of the tool vendors, who perceive an opportunity to make $$$$$ by providing support for common problems, such as:

  • defect trackers to help you manage defects
  • version control systems to manage source code changes
  • tools to support Agile development (Version One, JIRA)
  • debuggers to help you find defects

There are many tools out there, but let’s just go through this list and point out where developers and organizations get challenged.  Note, all statistics below are derived from over 15,000 projects over 40 years.1

Defect Trackers

Believe it or not, some organizations still don’t have defect tracking software. I’ve run into a couple of these companies and you would not believe why…

Inadequate defect tracking methods: productivity -15%, quality -21%

So we are pretty much all in agreement that we need to have defect tracking; we all know that managing more than a handful of defects is impossible without some kind of system.

Automated defect tracking tools: productivity +18%, quality +26%

The problem is that developers fight over which is the best defect tracking system. The real problem is that almost every defect tracking system is poorly set up, leading to poor results. Virtually every defect tracking system, when configured properly, will yield tremendous benefits. The most common pitfalls are:

  • Introducing irrelevant attributes into the defect lifecycle, e.g. creating statuses like deferred, won’t fix, or functions as designed
  • Not being able to figure out if something is fixed or not
  • Not understanding who is responsible for addressing a defect

The tool vendors are happy to continue to provide new versions of defect trackers. However, using a defect tracker effectively has more to do with how the tool is used rather than which tool is selected.

One of the most fundamental issues that organizations wrestle with is what is a defect?  A defect only exists if the code does not behave according to specifications. But what if there are no specifications or the specifications are bad?  See It’s not a bug, it’s… for more information.

Smart organizations understand that the way in which the defect tracker is used will make the biggest difference.  Discover how to get more out of your defect tracking system in Bug Tracker Hell and How to Get Out.

Another common problem is that organizations try to manage enhancements and requirements in the defect tracking system.  After all, whether it is a requirement or a defect, it will lead to a code change, so why not put all the information into the defect tracker?  Learn why managing requirements and enhancements in the defect tracking system is foolish in Don’t manage enhancements in the bug tracker.

Version Control Systems

Like defect tracking systems, most developers have learned that version control is a necessary hygiene procedure.  If you don’t have one, then you are likely to catch a pretty serious disease (and at the least convenient time).

Inadequate change control: productivity -11%, quality -16%

Virtually all developers dislike version control systems and are quite vocal about what they can’t do with them.  If you are the unfortunate person who made the final decision on which version control system is used, just understand that there are hordes of developers out there cursing you behind your back.

Version control is simply chapter 1 of the story.  Understanding how to chunk code effectively, integrating with continuous build technology, and making sure that the defects in the defect tracker refer to the correct version are just as important as the choice of version control system.

Tools to support Agile

Sorry Version One and JIRA, the simple truth is that using an Agile tool does not make you agile, see this.

These tools are most effective when you actually understand Agile development. Enough said.

Debuggers

I have written extensively about why debuggers are not the best tools to track down defects.  So I’ll try a different approach here.

One of the most enduring ratios in software engineering is 1:10:100.  That is, if the cost of tracking down a defect pre-test (i.e. before QA) is 1, then it will cost 10x if the defect is found by QA, and 100x if the defect is discovered in deployment by your customers.  In concrete terms, a defect that costs $100 to find and fix before testing will cost about $1,000 once QA finds it and about $10,000 once a customer does.

Most debuggers are invoked when the cost function is in the 10x or 100x part of the process.  As stated before, it is not that I do not believe in debuggers — I simply believe in using pre-test defect removal strategies because they cost less and lead to higher code quality.

Pre-test defect removal strategies include:

  • Planning code, i.e. PSP
  • Test driven development, TDD
  • Design by Contract (DbC)
  • Code inspections
  • Pair programming for complex sections of code

You can find more information about this in:

Seldom Used Tools

Tools that can make a big difference but many developers don’t use them:

Automated static analysis: productivity +21%, quality +31%

Automated unit testing: productivity +17%, quality +24%

Automated unit testing generally involves using test driven development (TDD) or data driven development together with continuous build technology.
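
To make this concrete, here is a minimal sketch of an automated unit test that a continuous build can run; the add() function is a hypothetical stand-in for real production code, and the process exit code is what tells the build server whether the tests passed:

#include <stdio.h>

static int add(int a, int b) { return a + b; }

static int failures = 0;

static void expectEqual(int actual, int expected, const char *name) {
    if (actual != expected) {
        printf("FAIL: %s (got %d, expected %d)\n", name, actual, expected);
        failures++;
    }
}

int main(void) {
    expectEqual(add(2, 2), 4, "adds positives");
    expectEqual(add(-2, 2), 0, "adds a negative");
    return failures;  /* nonzero exit code fails the automated build */
}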

Automated sizing in function points: productivity +17%, quality +24%

Automated quality and risk prediction: productivity +16%, quality +23%

Automated test coverage analysis: productivity +15%, quality +21%

Automated deployment support: productivity +15%, quality +20%

Automated cyclomatic complexity computation: productivity +15%, quality +20%

Important Techniques with No Tools

There are a number of techniques available in software development that tool vendors have not found a way to monetize. These techniques tend to be overlooked by most developers, even though they can make a huge difference in productivity and quality.

The Personal Software Process and Team Software Process were developed by Watts Humphrey, one of the pioneers of building quality software.

Personal software process: productivity +21%, quality +31%2

Team software process: productivity +21%, quality +31%3

The importance of inspections is covered in:

Code inspections: productivity +21%, quality +31%4

Requirement inspections: productivity +18%, quality +27%4

Formal test plans: productivity +17%, quality +24%

Function point analysis (IFPUG): productivity +16%, quality +22%

Conclusion

There is definitely a large set of developers that assume that using a tool makes them competent.

The reality is that learning a tool without learning the principles that underlie the problem you are solving is like assuming you can beat Michael Jordan at basketball just because you have great running shoes.

Learning tools is not a substitute for learning how to do something competently. Competent developers are continually learning about techniques that lead to higher productivity and quality, whether or not that technique is supported by a tool.

References


Debuggers are Crutches

Defects are common, but they are not necessary.  However they find their way into code, defects are only corrected by understanding code pathways, and debuggers are not the best way to do this.

Debuggers are commonly used by developers to understand a problem, but just because they are common does not make them the best way to find defects.  I’m not advocating a return to “the good old days”, but there was a time when we did not have debuggers and we still managed to debug programs.

Avoid Defects

The absolute best way to remove defects is simply not to create them in the first place. You can be skeptical, but the Personal Software Process (PSP) has been used in practice to prevent 1 of every 2 defects from getting into your code.  Over thousands of projects:

The Personal Software Process increases productivity by 21% and increases code quality by 31%

A study conducted by NIST in 2002 reports that software bugs cost the U.S. economy $59.5 billion annually. This huge waste could be cut in half if all developers focused on not creating defects in the first place.

Not only does the PSP focus on code planning, it also makes developers aware of how many defects they actually create.  Here are two graphs that show the same group of developers and their defect injection rates before and after PSP training.

[Two charts: the same group’s defect injection rates before and after PSP training]

Finding Defects

Using a debugger to understand the source of a defect is definitely one way.  But if it is the best way, then why do poor developers spend 25 times more time in the debugger than good developers? (see No Experience Required!)

That means that a poor developer spends a week in the debugger for every 2 hours that a good developer does.

No one is saying that debuggers do not have their uses.  However, a debugger is a tool and is only as good as the person using it.  Focus on tools obscures lack of skill (see Agile Tools do NOT make you Agile).

If you are only using a debugger to understand defects then you will be able to remove a maximum of about 85% of all defects, i.e. 1 in 7 defects will always be present in your code.

Would it surprise you to learn that there are organizations that achieve 97% defect removal?  Software inspections take the approach of looking for all defects in code and getting rid of them.

Learn more about software inspections and why they work here:

Software inspections increase productivity by 21% and increases code quality by 31%

Even better, people trained in software inspections tend to inject fewer defects into code. When you become adept at parsing code for defects then you become much more aware of how defects get into code in the first place.

But interestingly enough, not only will developers inject fewer defects into code and achieve defect removal rates of up to 97%, in addition:

Every hour spent in code inspections reduces formal QA by 4 hours

Conclusion

As stated above, there are times where a skilled professional will use a debugger correctly.  However, if you are truly interested in being a software professional then:

  • You will learn how to plan and think through code before using the keyboard
  • You will learn and execute software inspections
  • You will learn techniques like PSP which lead to you injecting fewer defects into the code

You are using a debugger as a crutch if it is your primary tool to reduce and remove defects.

Related Articles

Want to see more sacred cows get tipped? Check out:

Make no mistake, I am the biggest “Loser” of them all.  I believe that I have made every mistake in the book at least once 🙂



When BA means B∪<<$#!t Artist

BA typically means Business Analyst, but what makes for a good BA?  When do you have a good BA and when don’t you? Many projects fail at the outset due to incomplete, inconsistent, and overly verbose analysis produced by BAs.

Are your BAs any good?  The success of many projects depends on the ability of the BA to correctly identify the problem you are trying to solve.  If your BA is not competent, then you are doomed before you start.

Business analysis consists of all facets of solving business problems. Business analysis is a role executed by project, product, and engineering managers.  However, there are people that do business analysis as their main role, and we simply label them as business analysts or product managers. We shall stick to the term business analyst for simplicity.

Business analysts perform a range of tasks including:

  • Gathering requirements
  • Writing business cases and project charters
  • Performing gap analysis between products and corporate processes

Ever suspect that the people responsible for performing business analysis for you are not up to the challenge?  Here are three simple questions; good business analysts will get all three correct and take no longer than 20 seconds.

  • Question #1 (the Müller-Lyer arrows): Which of these two lines is longer?
  • Question #2 (the “Paris in the spring” sign): What does this sign say?
  • Question #3: I have two products whose prices add up to $1.10 and one product is $1 more than the other. How much is the more expensive product?

The answers are:

  1. The lines are the same length
    1. Take a ruler if you are not convinced
  2. Paris in the the spring
    1. Notice that “the” occurs twice
  3. The more expensive product is $1.05
    1. Let the cheaper product be x; then x + (x + $1) = $1.10, so x = $0.05 and the expensive one is $1.05. If the more expensive one had been $1, then the cheaper one would be $0.10, but then the expensive one would only be $0.90 more expensive than the cheaper one.

These three questions illustrate that the mind can jump to conclusions that are often wrong.  The mind can be tricked because it assesses things quickly, and only some people have the instinct to double-check their results.

Most BAs are well educated; however, the interesting thing is that studies show that educated people are less able to see their own biases and jump to conclusions more readily than other people.1

A good business analyst realizes that during analysis there will be situations where the mind jumps to conclusions.  Many business analysts are asked to resolve conflicting requirements, recognize missing requirements, and deal with biases coming from many different sources.  Unless you have the reflex of checking facts for consistency and eliminating bias, you won’t make a good business analyst.

So what is the quality of your BAs?

Other articles

Bibliography

West, Richard F. and Meserve, Russell J. “Cognitive Sophistication Does Not Attenuate the Bias Blind Spot.” Journal of Personality and Social Psychology.


No Business Case = Failed Project

A business case comes between a bright idea for a software project and the creation of that project. [Figure: project timeline showing T0, Tcheck, Tstart, Tend]

  • T0 – the idea for the project is born
  • Tcheck – a formal or informal business case is made
  • Tstart – the project is initiated
  • Tend – the project finishes successfully or is abandoned

Not all ideas for software projects make sense.  In the yellow zone above, between idea and project being initiated, some due diligence on the project idea should occur.  This is where you do the business case, even if only informally on the back of a napkin.

The business case is where you pause and estimate whether the project is worth it, i.e. will this project leave you better off than if you did not do it?

For those who want precise definitions, the project should be NPV-positive.  In layman’s terms, the project should improve the organization’s bottom line, or at least improve skill levels so that other projects are better off.
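
For readers who want to see the calculation, NPV discounts each year’s cash flow back to today: NPV = Σ CFt / (1 + r)^t.  A quick sketch with made-up numbers (a $100,000 initial investment, three years of returns, a 10% discount rate):

#include <stdio.h>
#include <math.h>

int main(void) {
    double rate = 0.10;                                    /* 10% discount rate */
    double cashFlow[] = { -100000, 40000, 50000, 60000 };  /* years 0..3 */
    double npv = 0.0;
    for (int t = 0; t < 4; t++)
        npv += cashFlow[t] / pow(1.0 + rate, t);
    printf("NPV = %.2f\n", npv);   /* positive => the project is NPV +ve */
    return 0;
}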

Projects that do not improve skills or the bottom line are a failure.

Out of 10 software projects (see Understanding your chances):

  • 3 are successful
  • 4 are challenged, i.e. over budget, late, or delivering much less functionality than planned
  • 3 will fail, i.e. abandoned

This means that the base rate of success for any software project is only 3 out of 10.

Yet executives routinely execute projects assuming that they cannot fail, even though the project team knows that the project will be a failure from day 1.

Business cases give executives a chance to stop dubious projects before they start. (see Stupid is as Stupid Does)

Understanding how formal the business case needs to be comes down to uncertainty. There are three key uncertainties with every project:

  • Requirements uncertainty
  • Technical uncertainty
  • Skills uncertainty

When there is a moderate amount of uncertainty in any of these three areas then a formal business case with cash flows and risks needs to be prepared.

Requirements Uncertainty

Requirements uncertainty is what leads to scope shift (scope creep).  The probability of a project failing is proportional to the number of unknown requirements when the project starts (see Shift Happens).

Requirements uncertainty is only low for two kinds of projects: 1) re-engineering a system where the requirements do not change, and 2) the next minor version of an existing software product.

For all other software projects the requirements uncertainty is moderate and a formal business case should be prepared.

Projects new to you have high requirements uncertainty.

Technical Uncertainty

Technical uncertainty exists when it is not clear that all requirements can be implemented using the selected technologies at the level of performance required for the project.

Technical uncertainty is only low when you have a strong understanding of the requirements and the implementation technology. When there is only a moderate understanding of the requirements or the implementation technology then you will encounter the following problems:

  • Requirements that get clarified late in the project that the implementation technology will not support
  • Requirements that can not be implemented once you improve your understanding of the implementation technology

Therefore technical uncertainty is high when you are doing a project for the first time and requirements uncertainty is high.  Technical uncertainty is also high when you are using new technologies, e.g. shifting from Java to .NET or changing GUI technology.

Projects with new technologies have moderate to high uncertainty.

Skills Uncertainty

Skills uncertainty comes from using resources that are unfamiliar with the requirements or the implementation technology.  Skills uncertainty is a knowledge problem.

Skills uncertainty is only low when the resources understand the current requirements and implementation technology.

Resources unfamiliar with the requirements will often implement requirements in a suboptimal way when requirements are not well written.  This will involve rework; the worse the requirements are understood the more rework will be necessary.

Resources unfamiliar with the implementation technology will make mistakes choosing implementation methods due to lack of familiarity with the philosophies of the implementation libraries.  Often, only after a project is complete will resources understand that different implementation tactics should have been used.

Formal or Informal Business Cases?

An informal business case is possible only if the requirements, technical, and skills uncertainties are all low.  This only happens in a few situations:

  • Replacing a system where the requirements will be the same and the implementation technology is well understood by the team
  • The next minor version of a software system

Every other project requires a formal business case that will quantify what kind of uncertainty and what degree of uncertainty exists in the project.  At a minimum project managers facing moderate to high uncertainty should be motivated to push for a business case (see Stupid is as Stupid Does). Here is a list of projects that tend to be accepted without any kind of real business case that quantifies the uncertainties:

  • Change of implementation technology
    • Moving to object-oriented technology if you don’t use it
    • Moving from .NET to Java or vice versa
  • Software projects by non-software companies
  • Using generalists to implement technical solutions
  • Replacing systems with resources unfamiliar with the requirements
    • Often happens with outsourcing

Projects with moderate to high risks and no business case are doomed to fail.

Related articles


It’s not a bug, it’s…

When does a bug become a bug?
Who decides that it is a bug?

How many legs does a lamb have if I say the tail is a leg?  The answer is 4; just because I say the tail is a leg does not make it a leg!

Bugs should be obvious, but we say “It’s not a bug, it’s a feature” because often it isn’t obvious.  Watts Humphrey felt that we should use the term defect rather than bug because most people don’t take bugs seriously, so let’s use the term defect instead.

So when does a defect become a defect?

  • When quality assurance tells you that you have a defect?
  • When product management says that it is a defect?
  • When the customer says that it is a defect?

The answer is: none of the above.

Now it might turn out that there is a problem and that code needs to change, but a defect only exists if:

code behaves differently than the requirements specification

This is important because most systems are under-specified (if they are specified at all), and so when code misbehaves it is only a defect if the code behavior differs from the specification.  We call defects undocumented features because we know the real problem is that the requirements were never written.

Incomplete and Inconsistent Requirements

Many organizations do not create sufficiently complete requirements before starting development, either because they don’t know how to capture requirements properly or because they don’t have resources capable of capturing complete requirements. Incomplete (and inconsistent) requirements and unrealistic deadlines often force developers into making decisions about how to implement features.  The end result is that developers are regularly told that they have defects in their code.

While this process is common, it is destructive.  When requirements are under-specified and inconsistent, developers end up having to perform serious rework. The rework can require dramatic changes that impact the architecture of the code.

The time required to find a workaround (if one is possible) is rarely included in the project plan. Complicating matters, the organizations that are reluctant to spend time creating requirements also tend to underestimate their projects.  This puts tremendous pressure on the engineering department to deliver, which promotes the 5 worst practices in software development (see Stop It! No… really stop it.).

Only 54% of Issues are Resolved by Engineers

The attitude that all defects must be resolved by the engineering department is severely misguided.  Analysis by Capers Jones of 18,000+ projects shows that only about 54% of all defects can be resolved by the engineers (the architecture, code, and database rows below):

Defect Category                 | Frequency | Role
Requirements defect             |     9.58% | BA/Product Management
Architecture or design defect   |    14.58% | Architect
Code defect                     |    16.67% | Developer
Testing defect                  |    15.42% | Quality Assurance
Documentation defect            |     6.25% | Technical Writer
Database defect                 |    22.92% | Database Administrator
Website defect                  |    14.58% | Operations/Webmaster

This means that precious time is wasted assigning issues to developers who cannot resolve them.  The time necessary to redirect the issue to the correct person is a major contributing factor to fire-fighting.

Getting Control of the Defect Process

For most organizations, fixing the defect process involves understanding and categorizing defects correctly.  Organizations that are not tracking the different sources of defects probably have a bug tracker that has gone to hell.  Here is how you can fix that problem; see Bug Tracker Hell and How To Get Out!

At a minimum you need to implement the requirements defect category; once you identify issues that are caused by poor requirements, it will shine the white-hot light of shame onto the resources capturing your requirements.  Once you realize how many requirements defects exist in your system, you can begin to inform senior management about the requirements problem.

Reducing Fire-Fighting

The best way to reduce fire-fighting is to start writing better requirements (or writing requirements at all 🙂 ).  To do so you need to figure out which of the following are broken:

  1. Not enough time is allocated to the requirements phase
  2. Unskilled people are capturing your requirements

In all likelihood, both of these issues need to be fixed in your organization.  When requirements are incomplete and inconsistent, you will have endless fire-fighting meetings involving everyone (see Root cause of ‘Fire-Fighting’ in Software Projects).

Stand your ground if someone tells you that you have coded a defect when there is no documentation for the requirement.


Size Matters

Some say that software development is challenging because of complexity. This might be true, but the statement does not help us find ways to reduce complexity. We need a better way to explain complexity to non-technical people.

The reality is that when coding a project, size matters.  Size is measured by the number of pathways through a code base, not by the number of lines of code; size is proportional to the number of function points in a project.

There are many IT people who succeed with programs of a certain size and then fail miserably when taking on programs that are more sophisticated.  Complexity increases with size because the number of pathways increases exponentially in large programs.

Virtually anyone (even CEOs 🙂 ) can build a hello, world! application: an application with a single pathway through it, as simple as a program can get.  Some CEOs write this simple program and incorrectly convince themselves that development is easy.

#include <stdio.h>

int main(void) {
    printf("hello, world\n");
    return 0;
}

If you have an executive that can’t even complete hello, world! then you should take away his computer 🙂

Complexity Defined

As programs get more sophisticated, the number of decisions that have to be made increases and the depth of the call tree increases.  Every non-trivial routine will have multiple pathways through it.

If your average call depth is 10, with an average of 4 pathways through each routine, then the total is 4^10, over 1 million pathways.  If the average call depth is 15, then it is 4^15, over 1 billion pathways.
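
The arithmetic is easy to check.  A throwaway calculation (a sketch, assuming an average branching factor of 4) shows how quickly pathways explode with call depth:

#include <stdio.h>
#include <math.h>

int main(void) {
    /* pathways ~= branching factor ^ call depth */
    printf("depth 10: %.0f pathways\n", pow(4, 10));  /* 1,048,576 */
    printf("depth 15: %.0f pathways\n", pow(4, 15));  /* 1,073,741,824 */
    return 0;
}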

Increasingly sophisticated programs have greater call depth than ever, and distributed applications increase the call depth even more because the call depths of cooperating systems are additive. This is what we mean by complexity: it is impossible for us to test all of the different pathways in a black box fashion.

Now in reality, not every combination of pathways is possible, but you only have to leave holes in a few routines to end up with hundreds, if not thousands, of pathways where calculations and decisions can go wrong.

In addition, incorrect calculations or decisions higher up in the call tree can lead to difficult-to-find defects that blow up far away from the source of the problem.

What are Defects?

Software defects occur for very simple reasons: an incorrect calculation is performed that causes an output value to be incorrect.  Sometimes there is no calculation at all: input data is not validated for consistency, and that data is either stored incorrectly or goes on to cause incorrect calculations downstream.

We only recognize that we have a defect when we see an output value and realize that it is incorrect. More likely, QA sees it and tells us that we are incorrect. [Figure: call tree showing the pathway from defect origin to discovery]

Basically we follow a pathway that is correct through nodes 1, 2, 3, 4, and 5.  At node 6 we make a miscalculation, then we have incorrect values at nodes 7 and 8, and we finally discover the problem at node 9.

So once we make a miscalculation, we will either continue to make incorrect calculations, or make incorrect decisions and go down the wrong pathways (where we will then make more incorrect calculations).

Not all Defects are Equal

It is clear that the greater the distance between a miscalculation and its discovery, the harder the defect is to detect.  The greater the call depth, the greater the chance of a large distance between origin and detection; in other words:

Size Matters

Today we build sophisticated systems of many cooperating applications and the call depth is exponential with the size of the system.  This is what we mean by complexity in software.

Reducing Complexity

Complexity is reduced for every function where:

  • You can identify when inconsistent parameters are passed to a function
  • All calculations inside of a function are done correctly
  • All decisions through the code are taken correctly

The best way to solve all 3 issues is through formal planning and development. Two methodologies that focus directly on planning at the personal and team level are the Personal Software Process (PSP) and the Team Software Process (TSP), invented by Watts Humphrey.

Identifying inconsistent parameters is easiest when you use Design by Contract (DbC), a technique pioneered by the Eiffel programming language. It is important to use DbC on all functions on the core pathways of an application.
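
Here is a minimal sketch of DbC in C using assert(); the withdraw() function and its rules are hypothetical, but they show how preconditions reject inconsistent parameters at the function boundary instead of letting them propagate down the call tree:

#include <assert.h>

/* Contract: balance must be non-negative, amount must be positive
   and not exceed the balance; the result must remain consistent. */
double withdraw(double balance, double amount) {
    /* preconditions: catch inconsistent parameters immediately */
    assert(balance >= 0.0);
    assert(amount > 0.0 && amount <= balance);

    double newBalance = balance - amount;

    /* postcondition: an incorrect calculation fails here, at the source */
    assert(newBalance >= 0.0 && newBalance <= balance);
    return newBalance;
}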

Using Test Driven Development (TDD) is a sure way to verify that all calculations inside a function are done correctly, but only if you write tests for every pathway through the function.
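
And a sketch of what a test for every pathway means, using a hypothetical shippingCost() function with exactly two pathways and one test per pathway:

#include <assert.h>

double shippingCost(double orderTotal) {
    if (orderTotal >= 100.0)
        return 0.0;    /* pathway 1: free shipping on large orders */
    return 9.95;       /* pathway 2: flat rate */
}

int main(void) {
    assert(shippingCost(150.0) == 0.0);    /* exercises pathway 1 */
    assert(shippingCost(50.0) == 9.95);    /* exercises pathway 2 */
    return 0;
}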

Making sure that all calculations are done correctly inside a function, and that correct decisions are made through the code, is best done through code inspections (see Inspections are not Optional and Software Professionals do Inspections).

All techniques that can be used to reduce complexity and prove the correctness of your program are covered in Debuggers are for Losers.  N.B. Debuggers as the only formalism will only work well for systems with low call depth and low branching.

Conclusion

Therefore, complexity in software development is about making sure that all the code pathways are accounted for.  In increasingly sophisticated software systems, the number of code pathways increases exponentially with the call depth. Using formal methods is the only way to account for all the pathways in a sophisticated program; otherwise the number of defects will multiply exponentially and cause your project to fail.

Only projects with low complexity (i.e. small call depth) can afford to be informal and only use debuggers to get control of the system pathways. As a system gets larger only the use of formal mechanisms can reduce complexity and develop sophisticated systems. Those formal mechanisms include:

  • Personal Software Process and Team Software Process
  • Design by Contract (via Aspect Oriented Programming)
  • Test Driven Development
  • Code and Design Inspections

Let the Machines Take Over

Whether you believe that the machines will take over or not will always be up for debate (until they actually take over :-)).

What is not up for debate is that many processes for software creation have been automated and can help improve productivity and quality in every phase of a software project.

Here are some statistics about how useful automation can be and give you some ammunition when you confront management about the need for these tools.

These results have been generated by Capers Jones, who has been collecting data on over 15,000 projects for about 40 years.  Full results are shown in the paper in the references.

Requirements Analysis

There have been tools available for tracing requirements for quite some time. Generally, only companies that are forced to provide requirements traceability use them, but tracing makes sense on all large projects:

Automated requirements tracing can raise productivity by 12.89% and quality by 17.8%

There are tools that can help size programs in terms of function points, which then allows you to get relatively accurate estimates of a project’s cost and time before the project is executed.

Automated sizing in function points can raise productivity by 16.5% and quality by 23.7%

Development

There are several tools that are useful during development, but by far the biggest bang for your buck comes in the form of automated static analysis.

Automated static analysis can raise productivity by 20.9% and quality by 30.9%

There are now static analysis tools available for virtually all current languages that are cost effective and can help to locate those hard to find occasional defects.  You can find a list here of automated tools for C/C++, Java, JavaScript, Objective-C, Perl, PHP, and Python.

Getting cyclomatic complexity counts for your code can also be done in an automated fashion.  Studies have shown that files with a count of 74+ were 98% likely to have defects.

Automated cyclomatic complexity analysis can raise productivity by 14.5% and quality by 19.5%
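
For a single function, you can do the count by hand: decision points + 1.  This hypothetical function has 3 decision points (the loop and two branches), so a cyclomatic complexity of 4, meaning at least 4 test cases are needed to cover its branches:

/* cyclomatic complexity = 3 decision points + 1 = 4 */
int classify(const int *values, int n) {
    int score = 0;
    for (int i = 0; i < n; i++) {   /* decision 1: loop condition */
        if (values[i] > 0)          /* decision 2 */
            score++;
        else if (values[i] < 0)     /* decision 3 */
            score--;
    }
    return score;
}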

Performance is a tricky thing: we talk about it often, but rarely do we actually do something about it. Automated performance analysis allows you to determine whether there will be production problems before deploying new code.

Automated performance analysis can raise productivity by 12.5% and quality by 19.5%

There are tools for restructuring code automatically; many are built directly into today’s IDEs, and there are many standalone tools for automated restructuring, some of which are shown here.

Automated restructuring can raise productivity by 8.0% and quality by 11.7%

Automated unit testing is the easiest way to detect defects early.  If your developers are using Test Driven Development (TDD) and any continuous integration tool (e.g. Hudson or Jenkins), then you can find out with each build whether a defect has crept into the code.

Automated unit testing can raise productivity by 16.5% and quality by 23.8%

Testing

Believe it or not, I was working with a team last year that was not using version control or defect tracking. So there are clearly still organizations not using defect tracking.

Automated defect tracking can raise productivity by 17.5% and quality by 25.9%

Measuring the percentage of the application pathways that are actually being tested is critical.  Most of your defects will lie in the pathways that you cannot test.

Automated test coverage analysis can raise productivity by 15.5% and quality by 21.2%

Test cases can be generated automatically in some situations.

Automated test case generation can raise productivity by 15.0% and quality by 20.0%

Deployment

The next two tools are not technology that you acquire; rather, they are automated tools that you build to support critical deployment functions.  When the pressure is on, it is tough to think about creating applications to manage configuration in an automated way.  When management tells you that building these tools would be a luxury, show them these results.

Some organizations rely on manually updating configurations, but this can be error prone when there are many values to update.

Automated configuration control can raise productivity by 16.4% and quality by 23.5%
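
As a sketch of what automated configuration control can mean at its simplest, here is a hypothetical pre-deployment check that verifies required keys exist in a key=value configuration file (the file name and key names are made up for illustration):

#include <stdio.h>
#include <string.h>

int main(void) {
    const char *required[] = { "db_host", "db_port", "api_key" };
    enum { NUM_KEYS = 3 };
    int found[NUM_KEYS] = { 0 };
    char line[256];

    FILE *f = fopen("app.conf", "r");
    if (!f) { printf("missing app.conf\n"); return 1; }

    while (fgets(line, sizeof line, f))
        for (int i = 0; i < NUM_KEYS; i++)
            if (strncmp(line, required[i], strlen(required[i])) == 0)
                found[i] = 1;   /* line starts with a required key */
    fclose(f);

    int missing = 0;
    for (int i = 0; i < NUM_KEYS; i++)
        if (!found[i]) { printf("missing key: %s\n", required[i]); missing++; }
    return missing;   /* nonzero exit blocks the deployment */
}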

Also, when automated configuration tools don’t exist, you often don’t have automated deployment tools either.

Automated deployment support can raise productivity by 14.6% and quality by 19.6%

Conclusion

One of the most cost-effective ways to improve productivity and quality in your software development is to automate any of the areas above.  In most cases you can find a vendor that will provide support; for deployment, you can make the case that building automated tools is the way to go.


References

1. N.B. All productivity and quality percentages were derived from 15,000+ actual projects


Articles in the “Loser” series

Want to see sacred cows get tipped? Check out:

Moo?

Make no mistake, I am the biggest “Loser” of them all.  I believe that I have made every mistake in the book at least once 🙂
