What do you do if the customer is not right?

We often hear that “the customer is always right!”, but is this really true? Haven’t we all been in situations where the customer was asking for something unreasonable, or was simply dead wrong?

The motto reflects the fact that acquiring a new customer is roughly five times more expensive than retaining an existing one. So even when the customer is wrong, accommodating their idiosyncrasies can be worth losing a battle to win the profit war, simply because acquiring customers is so expensive.

Today we take a more nuanced position.

If a customer is unreasonable and unprofitable then it makes no sense to adopt the motto that “the customer is always right!”. Unprofitable accounts are worth retaining only if they are strategic and will either: 1) become profitable, or 2) draw enough profitable accounts into the company to make up for the loss. This strategy is commonly employed by start-ups to land their first customers.

Recognizing bad customers is usually not difficult. Transactional customers are often bad customers, especially those that want the lowest price and act as though every product is a commodity; they will try to play vendors off against each other in spite of their own quality requirements.

It is often useful to let transactional customers who care only about the lowest price buy from competitors on price, and let the lack of a quality solution come back to haunt them. Reducing quality to meet a customer’s price objectives will only leave you fielding complaints when product and service quality is substandard.

Transactional buyers get courted when sales executives chase every opportunity hoping for a sale, and that is when you get pulled into pricing concessions by demanding customers. The problem is that demanding transactional buyers won’t just ask for the best price; they will also ask for product changes.

There is no doubt that customer requirements need to be a driver for product management; they are an early indication of changing markets. But accepting all customization requests is impossible and would cripple your product and brand.

Constant unreasonable requests leave internal resources believing that sales people have extremely low IQs and morals.

Good sales people understand these principles and don’t chase bad customers.  But, there are not enough good sales people to go around, so virtually every company has a less-than-excellent sales person making trouble for product management and engineering.

To make matters worse, sales people are very good at making the case that all customers are strategic, arguing that a short-term loss will eventually turn into a long-term gain. This behavior is normal and expected because sales is driven by commissions, which are often revenue-based. Hopefully your sales process is robust enough to catch these attempts, which would otherwise saddle you with bad customers.

If you find yourself in a position where you have acquired one or more bad customers (you know who they are..) then your best course of action is to find some way to send them to your competitors. This will increase your profitability and reduce the stress of unreasonable requests flooding into product management and engineering.

The customer is not always right, but with due diligence and account reviews you can determine the customers that should be retained and those that should be let go.

Don’t be afraid to let unprofitable and non-strategic customers go. You will feel less stressed and be better off in the long run.


Do Project Managers need Domain Experience?

Opinions vary on whether a project manager needs to have domain experience.  Certainly project managers that do not have domain experience will be the first to say that domain experience is not necessary as long as they have access to excellent subject matter experts.

I would advocate a more nuanced position: a project manager does not need domain experience IF his subject matter experts understand the risks and dependencies that are inherent in the domain.

Let’s go through a couple of projects I have personally been involved with where the project manager did not have domain experience.

Telco Project

I am currently involved in a LAN/WAN/WiFi upgrade that a large telecommunications company is performing for a large customer. The project manager does not have domain expertise in networks and is counting on the subject matter experts to provide enough input for him to execute the project.

The subject matter experts are so advanced in their knowledge of networks that they no longer understand what beginners (i.e. the project manager) do not know. They assume that when they point something out to the project manager, he understands what they mean and will take appropriate action.

The project manager continually runs into situations where he does not understand the implications of certain risks and dependencies. This has caused a certain amount of rework and delay.

Fortunately, this is not a project with tremendous amounts of risk or dependencies so the project will be late but will succeed.

Mobile Handset Project

In the distant past, I was part of a team building a mobile POS terminal that worked over cellular networks (GSM, CDMA). The project manager did not have domain experience and was counting on the subject matter experts. In this case, the subject matter experts were very good at general design, but were not experts in building cellular devices.

Because the subject matter experts were generalists rather than specialists, they knew most of the key principles of designing mobile handsets but did not understand all the nuances of handset design. Several key issues required by practical handset manufacturing were overlooked, creating such a large cost over-run that the start-up went out of business.

Summary

In the first project, the subject matter experts were extremely good; however, the project manager failed to understand the implications of some of their statements, and this introduced large delays into the project.

In the second project, the subject matter experts were generalists and did not understand all the risks and dependencies of the project.  The project manager (and start-up) were doomed to fail because “you don’t know what you don’t know”.

Both projects show that a project can be delayed, or fail outright, when the project manager does not have domain experience.

Conclusion

So if a project does not have many uncertainties and dependencies, then it is likely that the project manager does not require domain experience and can rely to some degree on his subject matter experts.

However, if the project has complex uncertainties and/or dependencies, then a good project manager without domain experience is likely to find himself in several positions where the consequences of not understanding the uncertainties and dependencies will either introduce serious rework or torpedo the project.


Infeasible Projects: Executive Ignorance or IT Impotence?

Infeasible software projects are launched all the time and teams are continually caught up in them, but what is the real source of the problem?

There are projects that will actually take 2 years for which the executives set a 6-month deadline. Such a project is guaranteed to fail, but is this due to executive ignorance or IT impotence?

There is no schedule risk in an infeasible project, because the deadline will certainly be missed. Schedule risk only exists in the presence of uncertainty (see Schedule Risk is a Red Herring!!!).

As you might expect, executives and IT managers share responsibility for infeasible projects that turn into death marches. Learn about the nasty side effects in Death March Calculus.

The primary causes for infeasible projects are:

  • Rejection of formal estimates
  • No estimation or improper estimation methods are used

Rejecting Formal Estimates


This situation occurs frequently; an example would be the Denver Baggage Handling System (see Case Study).

The project was automatically estimated (correctly) to take 2 years; however, executives declared that IT would have only 1 year to deliver.

Of course, they failed1.

The estimate was rejected by the executives because it did not fit their desires. They cannot have enjoyed the subsequent software disaster and bad press.

When executives ignore formal estimates they get what they deserve. Formal estimates are ignored because executives believe that, through sheer force of will, they can simply set deadlines.

If IT managed to get the organization to pay for formal estimating tools, then it is not IT’s problem when the executives refuse to go along with the results.

Improper Estimation Methods

The next situation that occurs frequently is using estimation processes that have low validity.  Estimation has been extensively studied and documented by Tom DeMarco, Capers Jones, Ed Yourdon, and others.

Improper estimation methods will underestimate a software project every time. Fast estimates are based only on what you can think of; unfortunately, software is not tangible, so what you are aware of is just the tip of the iceberg.

None of this prevents executives from demanding fast estimates from development. Even worse, development managers cave in to these ridiculous demands and actually give fast estimates.

Poor estimates are guaranteed to lead to infeasible projects (see Who needs Formal Measurement?).

Poor estimates are delivered by IT managers that:

  • Can’t convince executives to use formal tools
  • Give in to extreme pressure for fast estimates

Infeasible projects that result from poor estimates are a matter of IT impotence.

Conclusion

Both executive ignorance and IT impotence lead to infeasible projects on a regular basis, through poor estimates and rejected estimates; so there is no surprise here.

However, infeasible projects are a failure of executives and IT equally, because we are all on the same team. It is not possible for one part of the organization to succeed while the other fails.

Possibly the greater share of the problem lies with IT management. After all, whose responsibility is a bad decision: the guys that know what the issues are, or the ones that don’t?

If a child wants ice cream before they eat dinner then whose fault is it if you cave in and give them the ice cream?

Unfortunately, even after 60 years of developing software projects, IT managers are either as ignorant as the executives or simply have no intestinal fortitude.

Even when IT managers convince executives of the importance of estimating tools, the estimates are routinely discarded because they do not meet executive expectations.

Rejection of automated estimates: productivity -16%, quality -22%

Until we get a generation of IT managers who are prepared to educate executives on the necessity of proper estimation, and to be stubborn about holding to those estimates, we are likely to continue to see an estimated $3 trillion in software project failures every year.


End Notes

1 For inquiring minds: good automated estimation systems have been shown to be within 5% on time and cost on a regular basis. Contact me for additional information.


Seriously. The Devil Made me do It!

Just as eternal as the cosmic struggle between good and evil is the struggle between our two natures. Religion aside, we have two natures, the part of us that:

  • thinks things through and makes good or ethical decisions, a.k.a. our angelic nature
  • reacts immediately and makes quick but often wrong decisions, a.k.a. our devil nature

Guess God left a bug in our brains, one that emphasizes fast decisions over good or ethical ones.

Quite often we make sub-optimal or ethically ambiguous decisions under pressure.

You decide…


Situation: Your manager comes to you and says that something urgent needs to be fixed right away. It turns out the steaming pile of @#$%$ that you inherited from Bob is malfunctioning again.

Of course Bob created the mess and then conveniently left the company; in fact, the code is so bad that the work-arounds have work-arounds.

Bite the bullet and start refactoring the program when things go wrong. It will take more time up front, but over time the program will become stable.

Find another fast workaround and defer the problem to the future.  Find a good reason why the junior member of the team should inherit this problem.


Situation: You’ve got a challenging section of code to write and not much time to write it.

Get away from the computer and think things through. Get input from your peers; maybe they have seen this problem before. Then plan the pathways out and write the code once, cleanly. Taking time to plan seems counter-intuitive, but it will save time.

Naw, just sit at the keyboard and bang it out already.  How difficult can it be?


Situation: The project is late and you know that your piece is behind schedule. However, you also know that several other pieces are late as well.

Admit that you are late and that the project can’t finish by the deadline.  Give the project manager and senior managers a chance to make a course correction.

Say that you are on schedule but you are not sure that other people (be vague here) will have their pieces ready on time and it could cause you to become late.


Situation: You have been asked to estimate how long a critical project will take. You have only been given a short time to come up with the estimate.

Tell the project manager that producing a proper estimate takes longer than a few hours. Without a proper estimate the project is likely to be severely underestimated, and this will come back to bite you and the project manager in the @$$.

Tell the project manager exactly the date that senior management wants the project to be finished by.  You know this is what they want to hear, why deal with the problem now? This will become the project manager’s problem when the project is late.


The statistics show that we don’t listen to our better (angelic?) natures very often. So when push comes to shove and you have to make a sub-optimal or less-than-ethical decision, just remember:

The devil made you do it!

If you run into other common situations, email me.


Don’t be a Slave to Your Tools

Developers attach quickly to tools because tools are concrete and have well-defined behavior. It is easier to learn a tool than to learn good practices or methodology.

Tools only assist in solving problems; they can’t solve a problem by themselves. A developer who understands the problem can use tools to increase productivity and quality.

Poor developers don’t invest the time or effort to understand how to code properly and avoid defects.  They spend their time learning how to use tools without understanding the purpose of the tool or how to use it effectively.

To some degree, this is the fault of the tool vendors, who perceive an opportunity to make $$$$$ by providing support for common problems, such as:

  • defect trackers to help you manage defects
  • version control systems to manage source code changes
  • tools to support Agile development (Version One, JIRA)
  • debuggers to help you find defects

There are many tools out there, but let’s just go through this list and point out where developers and organizations get challenged. Note: all statistics below are derived from over 15,000 projects spanning 40 years.1

Defect Trackers

Believe it or not, some organizations still don’t have defect tracking software. I’ve run into a couple of these companies and you would not believe why…

Inadequate defect tracking methods: productivity -15%, quality -21%

So we are pretty much all in agreement that we need defect tracking; we all know that managing more than a handful of defects is impossible without some kind of system.

Automated defect tracking tools: productivity +18%, quality +26%

The problem is that developers fight over which defect tracking system is best. The real problem is that almost every defect tracking system is poorly set up, leading to poor results. Virtually any defect tracking system, when configured properly, will yield tremendous benefits. The most common pitfalls are:

  • Introducing irrelevant attributes into the defect lifecycle status, i.e. creation of statuses like deferred, won’t fix, or functions as designed
  • Not being able to figure out if something is fixed or not
  • Not understanding who is responsible for addressing a defect
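
To make the pitfalls concrete, a clean lifecycle can be modeled as a small state machine, with resolution reasons (won’t fix, functions as designed, deferred) kept as a separate field rather than as statuses. A minimal sketch; the status names are assumptions, not taken from any particular tool:

```python
# Minimal defect-lifecycle sketch. Reasons like "won't fix" belong in a
# separate resolution field, not in this status set.
ALLOWED_TRANSITIONS = {
    "open": {"in_progress"},
    "in_progress": {"fixed"},
    "fixed": {"verified", "open"},  # QA reopens it if it isn't really fixed
    "verified": set(),              # terminal: we know it is fixed
}

def advance(status: str, new_status: str) -> str:
    """Move a defect to new_status, rejecting illegal lifecycle jumps."""
    if new_status not in ALLOWED_TRANSITIONS[status]:
        raise ValueError(f"illegal transition: {status} -> {new_status}")
    return new_status
```

With a lifecycle this small, the two questions above always have answers: anything not verified is not fixed, and whoever owns the current status owns the defect.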

The tool vendors are happy to continue to provide new versions of defect trackers. However, using a defect tracker effectively has more to do with how the tool is used rather than which tool is selected.

One of the most fundamental issues that organizations wrestle with is what is a defect?  A defect only exists if the code does not behave according to specifications. But what if there are no specifications or the specifications are bad?  See It’s not a bug, it’s… for more information.

Smart organizations understand that the way in which the defect tracker is used will make the biggest difference. Discover how to get more out of your defect tracking system in Bug Tracker Hell and How to Get Out.

Another common problem is that organizations try to manage enhancements and requirements in the defect tracking system. After all, whether it is a requirement or a defect, it will lead to a code change, so why not put all the information into the defect tracker? Learn why managing requirements and enhancements in the defect tracking system is foolish in Don’t manage enhancements in the bug tracker.

Version Control Systems

As with defect tracking systems, most developers have learned that version control is a necessary hygiene procedure. If you don’t have one then you are likely to catch a pretty serious disease (and at the least convenient time).

Inadequate change control: productivity -11%, quality -16%

Virtually all developers dislike version control systems and are quite vocal about what they can’t do with their version control system. If you are the unfortunate person who made the final decision on which version control system is used, just understand that there are hordes of developers out there cursing you behind your back.

Version control is simply chapter 1 of the story. Understanding how to chunk code effectively, integrating with continuous build technology, and making sure that the defects in the defect tracker refer to the correct versions are just as important as the choice of version control system.

Tools to support Agile

Sorry Version One and JIRA, the simple truth is that using an Agile tool does not make you agile, see this.

These tools are most effective when you actually understand Agile development. Enough said.

Debuggers

I have written extensively about why debuggers are not the best tools to track down defects.  So I’ll try a different approach here.

One of the most enduring sets of ratios in software engineering has been 1:10:100.  That is, if the cost of tracking down a defect pre-test (i.e. before QA) is 1, then it will cost 10x if the defect is found by QA, and 100x if the defect is discovered in deployment by your customers.

Most debuggers are invoked when the cost function is in the 10x or 100x part of the process. As stated before, it is not that I do not believe in debuggers; I simply believe in using pre-test defect removal strategies because they cost less and lead to higher code quality.
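
The 1:10:100 ratio is easy to turn into dollars. A sketch, assuming a $100 cost to fix one defect pre-test (the dollar figure is purely illustrative):

```python
PRE_TEST_COST = 100  # assumed dollars to fix one defect before QA

def defect_cost(phase: str) -> int:
    """Cost to fix one defect, by the phase in which it is discovered,
    per the 1:10:100 rule."""
    multipliers = {"pre-test": 1, "qa": 10, "production": 100}
    return PRE_TEST_COST * multipliers[phase]

# The same 50 defects, caught early versus escaping to customers:
caught_early = 50 * defect_cost("pre-test")   # $5,000
caught_late = 50 * defect_cost("production")  # $500,000
```

Whatever the true base cost is in your shop, the two orders of magnitude between the first and last line are the point.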

Pre-test defect removal strategies include:

  • Planning code, i.e. PSP
  • Test driven development, TDD
  • Design by Contract (DbC)
  • Code inspections
  • Pair programming for complex sections of code


Seldom Used Tools

Tools that can make a big difference, but that many developers don’t use:

Automated static analysis: productivity +21%, quality +31%

Automated unit testing: productivity +17%, quality +24%

Automated unit testing generally involves using test driven development (TDD) or data driven development together with continual build technology.
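
A minimal sketch of the TDD side of this: the test is written first and pins down the behavior, then the code is written to make it pass, and a continual build runs the suite on every commit. The slugify function here is invented for illustration:

```python
import unittest

def slugify(title: str) -> str:
    """Implementation written to satisfy the test below (TDD style)."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    # In TDD this test exists before slugify does; it specifies the behavior.
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# A continual build server would run `python -m unittest` on every commit,
# so a regression is caught at the 1x point of the cost curve, not the 10x one.
```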

Automated sizing in function points: productivity +17%, quality +24%

Automated quality and risk prediction: productivity +16%, quality +23%

Automated test coverage analysis: productivity +15%, quality +21%

Automated deployment support: productivity +15%, quality +20%

Automated cyclomatic complexity computation: productivity +15%, quality +20%

Important Techniques with No Tools

There are a number of techniques in software development that tool vendors have not found a way to monetize. These techniques tend to be overlooked by most developers, even though they can make a huge difference in productivity and quality.

The Personal Software Process and Team Software Process were developed by Watts Humphrey, one of the pioneers of building quality software.

Personal software process: productivity +21%, quality +31%2

Team software process: productivity +21%, quality +31%3

The importance of inspections is covered in:

Code inspections: productivity +21%, quality +31%4

Requirement inspections: productivity +18%, quality +27%4

Formal test plans: productivity +17%, quality +24%

Function point analysis (IFPUG): productivity +16%, quality +22%

Conclusion

There is definitely a large set of developers who assume that using a tool makes them competent.

The reality is that learning a tool without learning the principles that underlie the problem you are solving is like assuming you can beat Michael Jordan at basketball just because you have great running shoes.

Learning tools is not a substitute for learning how to do something competently. Competent developers are continually learning about techniques that lead to higher productivity and quality, whether or not the technique is supported by a tool.


No Business Case = Failed Project

A business case comes between a bright idea for a software project and the creation of that project.

  • T0 – the idea to have a project is born
  • Tcheck – a formal or informal business case is made
  • Tstart – the project is initiated
  • Tend – the project finishes successfully or is abandoned

Not all ideas for software projects make sense. In the yellow zone above, between the idea and the project being initiated, some due diligence on the project idea should occur. This is where you do the business case, even if only informally on the back of a napkin.

The business case is where you pause and estimate whether the project is worth it, i.e. will this project leave you better off than if you did not do it?

For those who want precise definitions, the project should be NPV +ve, i.e. have a positive net present value. In layman’s terms, the project should improve the organization’s bottom line, or at least improve skill levels so that other projects are better off.
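
For readers who want the arithmetic, NPV discounts each period’s cash flow back to the present; the project is worth doing when the result is positive. A sketch with hypothetical numbers:

```python
def npv(rate: float, cash_flows: list) -> float:
    """Net present value: one cash flow per period, starting at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: $100k cost now, $40k of benefit per year for
# 4 years, discounted at an assumed 10% rate.
flows = [-100_000, 40_000, 40_000, 40_000, 40_000]
worth_doing = npv(0.10, flows) > 0  # NPV is about +$26.8k, so NPV +ve
```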

Projects that do not improve skills or the bottom line are a failure.

Out of 10 software projects (see Understanding your chances):

  • 3 are successful
  • 4 are challenged, i.e. over budget, over schedule, or delivering much less functionality than planned
  • 3 will fail, i.e. abandoned

This means that the base rate of success for any software project is only 3 out of 10.

Yet executives routinely launch projects assuming that they cannot fail, even when the project team knows from day 1 that the project will be a failure.

Business cases give executives a chance to stop dubious projects before they start (see Stupid is as Stupid Does).

Understanding how formal the business case needs to be comes down to uncertainty. There are three key uncertainties with every project:

  • Requirements uncertainty
  • Technical uncertainty
  • Skills uncertainty

When there is a moderate amount of uncertainty in any of these three areas then a formal business case with cash flows and risks needs to be prepared.

Requirements Uncertainty

Requirements uncertainty is what leads to scope shift (scope creep).  The probability of a project failing is proportional to the number of unknown requirements when the project starts (see Shift Happens).

Requirements uncertainty is only low for two particular kinds of project: 1) re-engineering a system where the requirements do not change, and 2) the next minor version of a software product.

For all other software projects the requirements uncertainty is moderate and a formal business case should be prepared.

Projects new to you have high requirements uncertainty.

Technical Uncertainty

Technical uncertainty exists when it is not clear that all requirements can be implemented using the selected technologies at the level of performance required for the project.

Technical uncertainty is only low when you have a strong understanding of the requirements and the implementation technology. When there is only a moderate understanding of the requirements or the implementation technology then you will encounter the following problems:

  • Requirements that get clarified late in the project that the implementation technology will not support
  • Requirements that can not be implemented once you improve your understanding of the implementation technology

Therefore technical uncertainty is high when you are doing a project for the first time and requirements uncertainty is high. Technical uncertainty is also high when you are using new technologies, e.g. shifting from Java to .NET or changing GUI technology.

Projects with new technologies have moderate to high uncertainty.

Skills Uncertainty

Skills uncertainty comes from using resources that are unfamiliar with the requirements or the implementation technology.  Skills uncertainty is a knowledge problem.

Skills uncertainty is only low when the resources understand the current requirements and implementation technology.

Resources unfamiliar with the requirements will often implement them in a suboptimal way when the requirements are not well written. This leads to rework; the worse the requirements are understood, the more rework will be necessary.

Resources unfamiliar with the implementation technology will make mistakes in choosing implementation methods, due to a lack of familiarity with the philosophies of the implementation libraries. Often it is only after the project is complete that resources understand which implementation tactics they should have used.

Formal or Informal Business Cases?

An informal business case is possible only if the requirements, technical, and skills uncertainties are all low. This only happens in a few situations:

  • Replacing a system where the requirements will be the same and the implementation technology is well understood by the team
  • The next minor version of a software system

Every other project requires a formal business case that will quantify what kind of uncertainty and what degree of uncertainty exists in the project.  At a minimum project managers facing moderate to high uncertainty should be motivated to push for a business case (see Stupid is as Stupid Does). Here is a list of projects that tend to be accepted without any kind of real business case that quantifies the uncertainties:

  • Change of implementation technology
    • Moving to object-oriented technology if you don’t use it
    • Moving from .NET to Java or vice versa
  • Software projects by non-software companies
  • Using generalists to implement technical solutions
  • Replacing systems with resources unfamiliar with the requirements
    • Often happens with outsourcing

Projects with moderate to high risks and no business case are doomed to fail.


Stupid is as stupid does

Senior management does not set out to have a failed project; however, when the failure rate is 7 out of 10 projects, you have to wonder what the problem is.

As Ma Gump stated, “stupid is as stupid does“, that is, smart people sometimes do stupid things.  The collective IQ of a project is reduced by the number of managers involved in the project.

From a human perspective, something very interesting is going on here. For example, suppose you were given the following choices:

  1. An 80% chance of making $100
  2. A guaranteed $40

If people behaved according to expected value then everyone would take the first choice. However, because humans are risk averse, almost everyone chooses the second alternative with the guaranteed money.
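
The arithmetic is plain expected value, sketched below; the gamble is worth twice the sure thing, and people still take the $40:

```python
def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

gamble = expected_value([(0.8, 100), (0.2, 0)])  # 80.0
sure_thing = expected_value([(1.0, 40)])         # 40.0
# Risk aversion: most people prefer the guaranteed 40.0 over the 80.0.
```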

Now out of 10 software projects:

  • 3 will succeed
  • 4 will be challenged
  • 3 will outright fail

To put that in perspective, if you were watching people cross the street at an intersection:

  • 3 cross the street successfully
  • 4 get maimed
  • 3 get killed

How interested would you be in crossing that street?

You can Google “software project failure rates” to see that this has been demonstrated by multiple reliable institutions across every industry. Challenged projects are generally projects that go over budget and under-deliver a software solution that can, at best, be declared a moral victory by management.

So if human beings are risk averse and the odds of project success are so low then:

Why does senior management ignore risks on software projects?

The only possible conclusion is that senior management can’t conceive of their projects failing. They must believe that every software project they initiate will be successful; other people fail, but they are among the 3 out of 10 that succeed.

This inability to understand the base rate of failure in software development is systemic. Many software projects are started by senior management where the technical team knows from the start that the chance of success is 0%.

Senior management is human and risk averse; you just need to find a way to remind them of this. One way to get senior management to think twice about a project is to make sure that, before launching it, there is a meeting where management is asked the following question:

Assume that this project will fail. Why would it have failed, and what will the consequences be?

This exercise (if done seriously) may have the effect of causing senior management to realize that the project can indeed fail. With luck, the normal risk aversion that every human being is endowed with will kick in and the project may get re-evaluated.

Bibliography

Kahneman, Daniel. Thinking, Fast and Slow.


Uncertainty trumps Risk in Software Development

Successful software development involves understanding uncertainty, and uncertainty comes from only a few sources in a software project. The uncertainties of a software project increase with the size of the project and with the team’s inexperience in the domain and technologies. The focus of this article is on uncertainty, not on risk. In part 1 we discussed uncertainty and in part 2 we discussed risk, so it should be clear that:

All risks are uncertain, however, not all uncertainties are risks.

For example, scope creep is not a risk (see Shift Happens) because it is certain to happen in any non-trivial project. Since a risk must be uncertain, a risk related to scope creep might be that the scope shifts so much that the project is canceled. However, this is a useless risk to track, because by the time it triggers it is much too late for anything proactive to be done.

It is important to understand the uncertainties behind any software development and then to extract the relevant risks to monitor.  The key uncertainties of a software project are around:

  • requirements
  • technology
  • resources
  • estimating the project deadline

Uncertainty in Requirements
There are several methodologies for capturing requirements:

  • Business requirements document (BRD) or software requirement specification (SRS)
  • Contract-style requirement lists
  • Use cases
  • User stories

Regardless of the methodology used, your initial requirements will split up into several categories:

The blue area above represents what the final requirements will be once your project is completed, i.e. the System To Build.  The initial requirements that you capture are in the yellow area called Initial Requirements.

With a perfect requirements process the Initial Requirements area would be the same as the System To Build area.  When they don’t overlap we get the following requirement categories:

  1. Superfluous requirements
  2. Missing requirements

Superfluous initial requirements tend to occur in very large projects or projects where the initial requirements process is incomplete.  Due to scope shift, the Missing requirements category always has something in it (see Shift Happens).  If either of these two categories contains a requirement that affects your core architecture negatively, then you increase your chance of failure by at least one order of magnitude.
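
These two categories can be sketched as simple set arithmetic over the Venn diagram above. This is a toy illustration; the requirement names are invented for the example:

```python
# Toy illustration: requirement categories as set differences.
# The requirement names here are invented, not taken from a real project.
initial_requirements = {"login", "search", "export", "legacy-sync"}
system_to_build = {"login", "search", "export", "scalability"}

# Captured but not actually needed in the final system.
superfluous = initial_requirements - system_to_build

# Needed in the final system but never captured.
missing = system_to_build - initial_requirements

print(sorted(superfluous))  # ['legacy-sync']
print(sorted(missing))      # ['scalability']
```

With a perfect requirements process both sets would be empty; in practice the goal of requirements QA is to shrink them before coding starts.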

For example, a superfluous requirement that causes the architecture to be too flexible will put the developers through a high learning curve and lead to slow development.

If scalability is a requirement of the architecture but it is missed when the initial architecture is laid down, then you will discover that it is difficult and costly to add later.


The physical equivalent would be the apartment building here on the right.  The foundation was insufficient to the needs of the building and it is slowly collapsing on one side.  Imagine the cost of trying to fix the foundation now that the building is complete.

I’ve been in start-ups that did not plan for sufficient scalability in the initial architecture; subsequently, the necessary scalability was only added with serious development effort and a large cost in hardware. Believe me, throwing expensive hardware at a software problem late in the life cycle is not fun or a practical solution :-(.


The overlapping box, Inconsistent Requirements, categorizes known and missing requirements that turn out to conflict with other requirements.  This can happen when requirements are gathered from multiple user roles, all of whom have a different view of what the system will deliver.

It is much easier and faster to test your requirements and discover inconsistencies before you start developing the code rather than discover the inconsistencies in the middle of development.  When inconsistencies are discovered by developers and QA personnel then your project will descend into fire-fighting (see Root cause of ‘Fire-fighting’ in Software Projects).

The physical equivalent here is to have a balcony specified to one set of contractors but forget to notify another set that you need a sliding door (see right).  When the construction people stumble on the inconsistency you may have already started planning for one alternative only to discover that rework is necessary to get to the other requirement.

Note, if you consistently develop software releases and projects with less than 90% of your requirements documented before coding starts then you get what you deserve (N.B. I did not say before the project starts 🙂 ).   One of the biggest reasons that Agile software development stalls or fails is that the requirements in the backlog are not properly documented; if you Google “poor agile backlogs” you will get > 20M hits.

Requirements Risks
Some risks associated with requirements are:

  • Risk of a missing requirement that affects the core architecture
  • Risk that inconsistent requirements cause the critical path to slip

Uncertainty in Technology
Technical uncertainty comes from correctly using technology but failing to accomplish the goals outlined by your requirements; lack of knowledge and/or skills is handled in the next section (Uncertainty Concerning Resources).  A team that lacks experience with a technology (a poorly documented API, language, IDE, etc.) does not constitute a technical risk; it is a resource risk (i.e. lack of knowledge).

Technical uncertainty comes from only a few sources:

  • Defective APIs
  • Inability to develop algorithms


Unforeseen defects in APIs will impact one or more requirements and delay development.  If there is an alternative API with the same characteristics then there may be little or no delay in changing APIs, e.g. there are multiple choices for XML parsing in Java with essentially the same API.

However, much of the time changing to another API will cause delays because the new API will be implemented differently than the defective one. There are also no guarantees that the new API will be bug free.

Mature organizations use production APIs, but even that does not protect you against defects.  The best-known example has to be the Intel Pentium bug discovered in 1994.  Although that bug did not seem to cause much real damage, any time you have an intermittent problem the source might be a subtle defect in one of the APIs that you are using.

Organizations that use non-production (alpha or beta) APIs for development run an extremely high risk of finding a defect in an API.  This generally only happens in poorly funded start-ups where the technical team might have excessive decisional control in the choice of technologies.

The other source of technical uncertainty is the team's inability to develop algorithms to accomplish the software's goals.  These algorithms typically relate to the limitations of system resources such as CPU, memory, batteries, or bandwidth, i.e.:

  • Performance
  • Memory management
  • Power management
  • Volume of data concerns

Every technical uncertainty is associated with one or more requirements.  The inability to produce an algorithm to satisfy a requirement may have a work-around with acceptable behavior or might be infeasible.

If an infeasible requirement prevents a core goal from being accomplished then the project will be canceled.  If the affected requirements have technical work-arounds then the project will be delayed while the work-arounds are developed.

Technical Risks
Some risks associated with technology are:

  • Risk that a defective API will cause us to look for another API
  • Risk that we will be unable to find a feasible solution for a core project requirement

Uncertainty Concerning Resources
When using the same team to produce the next version of a software product there is little to no resource uncertainty.  Resource uncertainty exists if any of the following are present:

  • Any team member is unfamiliar with the technology you are using
  • Any member of the team is unfamiliar with the subject domain
  • You need to develop new algorithms to handle a technical issue (see previous section)
  • Any team member is not committed to the project because they maintain another system
  • Turnover robs you of a key individual

Resource uncertainty revolves around knowledge and skills, commonly this includes: 1) language, 2) APIs, 3) interactive development environments (IDEs), and 4) methodology (Agile, RUP, Spiral, etc).  If your team is less knowledgeable than required then you will underestimate some if not all tasks in the project plan.

When team members are unfamiliar with the subject domain then any misunderstandings that they have will cause delays in the project.  In particular, if the domain is new and the requirements are not well documented then you will probably end up with the wrong architecture, even if you have skilled architects.

The degree to which you end up with a bad architecture and a canceled project depends on how unfamiliar you are with the subject domain and technologies being used.  In addition, the size of your project will magnify all resource uncertainties above.

The majority of stand-alone applications are between 1,000 and 10,000 function points.  As you would expect, the fraction of the system that any one person can understand drops precipitously between 1,000 and 10,000 function points.  The number of canceled projects goes up as our understanding drops, because all uncertainties increase and issues fall between the cracks.

When team members are committed to maintaining legacy systems, their productivity will be uncertain.  Unless your legacy system behaves in a completely predictable fashion, those resources will be pulled away to solve problems on an unpredictable basis.  They will not be able to commit to the team, and multi-tasking will lower their and the team's productivity (see Multi-tasking Leads to Lower Productivity).

Resource Risks
Some risks associated with resources are:

  • The team is unable to build an appropriate architecture foundation for the project
  • A key resource leaves the project before the core architecture is complete

Uncertainty in Estimation
When project end dates are estimated formally you will have 3 dates: 1) earliest finish, 2) expected finish, and 3) latest finish.  This makes sense because each task in the project plan can finish in a range of time, i.e. earliest finish to latest finish.  When a project talks about only a single end date, it is almost always the earliest possible finish, so there is a 99.9% chance that you will miss it.  Risk in estimation makes the most sense if:

  • Formal methods are used to estimate the project
  • Senior staff accepts the estimate
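
One common way to produce an earliest/expected/latest range is a three-point (PERT-style) estimate per task, where expected = (optimistic + 4 × likely + pessimistic) / 6. A minimal sketch; the task names and durations are invented for the example:

```python
# Three-point (PERT-style) estimate per task: (optimistic, likely, pessimistic) in days.
# Task names and durations are invented for the example.
tasks = {
    "design":  (5, 8, 15),
    "coding":  (20, 30, 55),
    "testing": (10, 15, 30),
}

def pert_expected(optimistic, likely, pessimistic):
    # Standard PERT weighted average of the three points.
    return (optimistic + 4 * likely + pessimistic) / 6

earliest = sum(o for o, m, p in tasks.values())
expected = sum(pert_expected(o, m, p) for o, m, p in tasks.values())
latest = sum(p for o, m, p in tasks.values())

print(earliest, round(expected, 1), latest)  # 35 57.8 100
```

Note how far apart the earliest and latest finishes are even for three tasks; quoting only the earliest date (35 days here) is exactly the single-date trap described above.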

There are numerous cost-estimating tools that can do a capable job.  Capers Jones lists those tools, but also comments on how many companies don't produce formal estimates, and how those that do often don't trust them:

Although cost estimating is difficult there are a number of commercial software cost estimating tools that do a capable job:  COCOMO II, KnowledgePlan, True Price, SEER, SLIM, SoftCost, CostXpert, and Software Risk Master are examples.

However just because an accurate estimate can be produced using a commercial estimating tool that does not mean that clients or executives will accept it.  In fact from information presented during litigation, about half of the cases did not produce accurate estimates at all and did not use estimating tools.  Manual estimates tend towards optimism, or predicting shorter schedules and lower costs than actually occur.

Somewhat surprisingly, the other half of the cases had accurate estimates, but they were rejected and replaced by forced estimates based on business needs rather than team abilities.

Essentially, senior staff have a tendency to ignore formal estimates and declare the project end date.  When this happens the project is usually doomed to end in disaster (see Why Senior Management Declared Deadlines lead to Disaster).

So estimation is guaranteed to be uncertain.  Let's combine the requirements categories from before with the categories of technical uncertainty to see where our uncertainty is coming from.  Knowing the different categories of requirements uncertainty gives us strategies to minimize or eliminate that uncertainty.

Starting with the Initial Requirements, we can see that there are two categories of uncertainty that can be addressed before a project even starts:

  1. Superfluous initial requirements
  2. Inconsistent requirements

Both of these categories will waste time if they get into the development process, where they will cause a great deal of confusion inside the team.  At best these requirements will cause the team to waste time; at worst they will deceive the team into building the architecture incorrectly.  A quality assurance process on your initial requirements can ensure that both of these categories are empty.

The next categories of uncertainty that can be addressed before the project starts are:

  1. Requirements with Technical Risk
  2. Requirements Technically Infeasible

Technical uncertainty is usually relatively straightforward to find when a project starts.  It will generally involve non-functional requirements such as scalability, availability, extensibility, compatibility, portability, maintainability, and reliability.  Other technical uncertainties will be concerned with:

  1. algorithms to deal with limited resources, i.e. memory, CPU, battery power
  2. volume of data concerns, i.e. large files or network bandwidth
  3. strong security models
  4. improving compression algorithms

Any use cases that are called frequently, and any reports that tie up your major tables, are sources of technical uncertainty.  If there will be significant technical uncertainty in your project then you are better off splitting these technical uncertainties into a smaller project that the architects handle before starting the main project.  That way, if there are too many technically infeasible issues, you can at least cancel the project early.

However, the greatest source of uncertainty comes from the Missing Requirements section.  The larger the number of missing requirements the greater the risk that the project gets canceled.  If we look at the graph we presented above:

You can see that the chance of a project being canceled is highly correlated with the % of scope creep. Companies that routinely start projects with only a fraction of the requirements identified are virtually guaranteed to have projects canceled.

In addition, even if you use formal methods for estimation, your project end date will not take into account the Missing Requirements.  If you have a significant number of missing requirements then your estimates will be way off.

Estimation Risks

The most talked-about estimation risk is schedule risk.  Since most companies don't use formal methods, and the estimates of those that do are often ignored, it makes very little sense to talk of schedule risk.

When people say "schedule risk", they are making a statement that the project will miss the deadline.  But given that improper estimation is used in most projects, it is certain that the project will miss its deadline; the only useful question is "by how much?"

Schedule risk can only exist when formal methods are used and there is an earliest finish/latest finish range for the project.  Schedule risk then applies to any task that takes longer than its latest finish and compromises the critical path of the project.  The project manager needs to try to crash current and future tasks to see if he can get the project back on track.  If you don’t use formal methods then this paragraph probably makes no sense 🙂
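
The critical path mentioned above is simply the longest-duration chain through the task dependency graph; a slip on that chain delays the whole project, which is why the project manager crashes those tasks first. A minimal sketch; the task graph, names, and durations are invented for the example:

```python
# Earliest-finish / critical-path sketch over a tiny invented task graph.
# Durations are in days; depends_on maps each task to its prerequisites.
durations = {"spec": 10, "arch": 15, "ui": 20, "backend": 30, "qa": 12}
depends_on = {"arch": ["spec"], "ui": ["arch"], "backend": ["arch"], "qa": ["ui", "backend"]}

def earliest_finish(task):
    # A task can start only after all of its prerequisites finish.
    start = max((earliest_finish(dep) for dep in depends_on.get(task, [])), default=0)
    return start + durations[task]

project_end = max(earliest_finish(t) for t in durations)
print(project_end)  # 67: the path spec -> arch -> backend -> qa
```

Shortening "ui" does nothing for the end date here; only crashing tasks on the spec → arch → backend → qa chain moves it, which is the whole point of identifying the critical path.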

Conclusion

The main sources of uncertainty in software development are:

  • requirements
  • technology
  • resources
  • estimates

Successful software projects look for areas of uncertainty and minimize them before the project starts. Some uncertainties can be qualified as risks and should be managed aggressively by the project manager during the project.

Uncertainty in requirements, technology, and resources will cause delays in your project.  If you are using formal methods then you need to pay attention to delays caused by uncertainties not accounted for in your model.  If you don't use formal methods, then every time you hit a delay caused by an uncertainty, that delay needs to be tacked on to the project end-date (of course, it won't be 🙂 ).

If your project does not have strong architectural requirements and is not too big (i.e. < 1,000 function points) then you should be able to use Agile software development to set-up a process that grapples with uncertainty in an incremental fashion.  Smaller projects with strong architectural requirements should set up a traditional project to settle the technical uncertainties before launching into Agile development.

Projects that use more traditional methodologies need to add a quality assurance process to their requirements to ensure a level of completeness and consistency before starting development.  One way of doing this is to put requirements gathering into its own project.  Once you capture the requirements, if you establish that you have strong architectural concerns, then you can create a project to build out the technical architecture.  Finally you would do the project itself.  Breaking projects into 2 or 3 stages gives you the ability to cancel before too much effort is sunk into a project with too much uncertainty.

Regardless of your project methodology, being aware of the completeness of your requirements, as well as the technical uncertainty of your non-functional requirements, will help you reduce the chance of project cancellation by at least one order of magnitude.

It is much more important to understand uncertainty than it is to understand risk in software development.


Appendix: Traditional Software Risks
This list of software risks is courtesy of Capers Jones.  Risks are listed in descending order of importance.

  • Risk of missing toxic requirements that should be avoided
  • Risk of inadequate progress tracking
  • Risk of development tasks interfering with maintenance
  • Risk of maintenance tasks interfering with development
  • Risk that designs are not kept updated after release
  • Risk of unstable user requirements
  • Risk that requirements are not kept updated after release
  • Risk of clients forcing arbitrary schedules on team
  • Risk of omitting formal architecture for large systems
  • Risk of inadequate change control
  • Risk of executives forcing arbitrary schedules on team
  • Risk of not using a project office for large applications
  • Risk of missing requirements from legacy applications
  • Risk of slow application response times
  • Risk of inadequate maintenance tools and workbenches
  • Risk of application performance problems
  • Risk of poor support by open-source providers
  • Risk of reusing code without test cases or related materials
  • Risk of excessive feature “bloat”
  • Risk of inadequate development tools
  • Risk of poor help screens and poor user manuals
  • Risk of slow customer support
  • Risk of inadequate functionality

Uncertainty and Risk in Software Development (2 of 3)

[Part 1 of 3 is here.]

Defining Risk and its Components

There are future events whose impact can have a negative outcome or consequence to our project. A future event can only be risky if the event is uncertain. If an event is certain then it is no longer a risk even if the entire team does not perceive the certainty of the event, e.g. individuals know that the project is late even though the project manager and senior staff do not.

Risks always apply to a measurable goal that we are trying to achieve; if there is no goal there can be no risk, i.e. a project can’t have schedule risk if it has no deadline.

Once a goal has been impacted by a risk we say that the risk has triggered. The severity of the outcome depends on how far it displaces us from our goal. Once triggered, there should be a mitigation process to reduce the severity of the possible outcomes.

Before looking at software project risks tied to these goals, let’s make sure that we all understand the components of risk by going through an example.

Risk Example: Auto Collision

Let’s talk about risk using a physical example to make things concrete. The primary goal of driving a car is to get from point A to point B. Some secondary goals are:

  • Get to the destination in a reasonable time
  • Make sure all passengers arrive in good health.
  • Make sure that the car arrives in the same condition it departs in.

There is a risk of collision every time you drive your car:

  • The event of a collision is uncertain
  • The outcome is the damage cost and possible personal injury
  • The severity is proportional to the amount of damage and personal injury sustained if there is an accident
    • If there is loss of life then the severity is catastrophic

A collision will affect one or more of the above goals. Risk management with respect  to auto collisions involves:

  • Reducing the probability of a collision
  • Minimizing the effects of a collision

There are actions that can reduce or increase the likelihood of a collision:

  • Things that reduce the chance of collision
    • Understanding safe driving techniques
    • Driving when there are fewer drivers on the road
    • Using proper turn signals
  • Things that increase the chance of collision
    • Speeding or driving aggressively
    • Driving while tired or distracted

By taking the actions that reduce the chance of a collision while avoiding the actions that increase it, we can reduce the probability or likelihood of a collision.

Reducing the likelihood of a collision does not change the severity of the event if it occurs. The likelihood of an event and its consequence are independent, even though some actions (i.e. driving slowly) will reduce both the likelihood and the consequences of an event.

If an auto collision happens then a mitigation strategy would attempt to minimize the effect of the impact. Mitigation strategies with respect to auto collision are:

  • Wear a seat belt
  • Drive a car with excellent safety features
  • Have insurance for collisions
  • Have the ability to communicate at all times (i.e. cell phone, etc)

Having a mitigation strategy will not reduce the chance of a collision, it will only lessen the severity.
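
This independence can be expressed numerically: expected loss is likelihood times severity, and prevention and mitigation act on different factors. The numbers below are invented purely for illustration:

```python
# Expected loss = likelihood * severity.  All numbers are invented for illustration.
likelihood = 0.02    # assumed chance of a collision per year
severity = 10_000    # assumed average cost of a collision

expected_loss = likelihood * severity
print(expected_loss)  # 200.0

# Prevention (safe driving) halves the likelihood but leaves severity unchanged.
after_prevention = (likelihood / 2) * severity   # 100.0

# Mitigation (seat belts, insurance) halves the severity but leaves likelihood unchanged.
after_mitigation = likelihood * (severity / 2)   # 100.0
```

Both levers reduce the expected loss, but only mitigation helps you once the collision has actually happened.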

Goals of a Software Project

The primary goals of a software project are:

  • Building the correct software system
  • Building the system so that its benefits exceed its costs (i.e. NPV positive)
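
The "benefits exceed costs" condition is the standard net-present-value test. A minimal sketch; the cash flows and the 10% discount rate are invented for the example:

```python
# NPV of a project: an up-front cost now, discounted benefits in later years.
# Cash flows and the 10% discount rate are invented for the example.
cash_flows = [-500_000, 200_000, 250_000, 250_000]  # year 0 cost, years 1-3 benefits
rate = 0.10

npv = sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))
print(round(npv))  # 76258 -- positive, so the project adds value
```

If the project slips and the benefit years shift outward, the same cash flows discount to a smaller NPV, which is one way schedule overruns turn a viable project into a financial failure.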

Building the Correct Software System

What is the correct software system? Cartoons similar to this one are easily found on the Internet:

The correct system is shown in the last frame of the cartoon; so let’s define the correct system as what the customer actually needs. To build the correct system we will need to have correct requirements.

How Long Will The Project Take?

Let’s assume we have complete and consistent requirements for a correct system. How long will it take to build this system? One approach is to take a competent team and have them build out the system without imposing a deadline. Once the system is built we would have the actual time to build the system (Tbuild).

Tbuild is theoretical because unless you are using an Agile methodology you will want to estimate (Testimate) how long it takes to produce the system before you start. Nonetheless, given your resources and requirements Tbuild does exist and is a finite number; as one of my colleagues used to say, “the software will take the time that it takes“.

Most executives want to know how long a project is going to take before the project starts. To do this we take the requirements and form an estimate (Testimate) of how long the system will take to build. The key point to note here is that the actual time to build, Tbuild, and the estimated time to build, Testimate, will be different. Keep in mind that Testimate is only valid to the extent that you use a valid estimation methodology.

Building the System so that its Benefits Exceed its Costs

Building a system so that its benefits exceed its costs is equivalent to saying that the project puts money on the organization’s bottom line. We hope that an organization will do the following:

  • Define the system correctly (project scope)
  • Assess the financial viability of the project (capital budgeting)
  • Establish a viable project plan

Financial viability implies that the available resources can produce the desired system before a specific date (call it Tviable). If the system takes longer than that to build (i.e. Tbuild > Tviable) then the organization will have a financial failure.

The problem is that we don’t know Tbuild in advance, yet we need a reasonable expectation that the project is viable BEFORE we build it out. Therefore we use a proxy: we estimate the time (Testimate) it will take to build the software from our project plan.

Once we have a time estimate then we can go forward on the project if Testimate < Tviable. Establishing Testimate for a project can be done in multiple ways:

  1. Using formal estimation techniques
  2. Executives declare the end date based on business needs
  3. Clients force the end date on the team

Software Project Risks

There are several primary risks for a software project:

  • Schedule risk
  • Estimation risk
  • Requirements risk
  • Technical risk

We often confuse schedule risk and estimation risk. Schedule risk is the risk that the tasks on the critical path have been underestimated and the project will miss the end date (i.e. Testimate). A project that takes longer than the estimate is not necessarily a failure unless it finishes after Tviable (i.e. Tbuild > Tviable).

You can only talk meaningfully about schedule risk in projects where:

  • formal estimation techniques are used
  • proper task dependency analysis is done
  • project critical path is identified

Most of us do not work for organizations that are CMM level 4+ (or equivalent), so you are unlikely to be using formal methods. When the project end date is arbitrary (i.e. method 2 or 3 above) it is not meaningful to talk about schedule risk, especially since history shows that we underestimate how long it will take to build the system, i.e. Testimate << Tbuild. When formal methods are not used, the real issue is estimation risk, not schedule risk.

The real tragedy is when an IT department attempts to meet an unrealistic date set by management when a realistic date would still yield a viable project. Unfortunately, unrealistic deadlines cause developers to take shortcuts, which usually cripple the architecture. By the time management grants additional time after their date slips, the damage to the architecture is terminal and you can no longer achieve the initial objective.

Requirements risk is the risk that we do not have the correct requirements and are unable to get to a subset of the requirements that enables us to build the correct system prior to the project end date. There are many reasons for having incorrect requirements when a project starts:

  • The customer cannot articulate what he needs
  • Requirements are not gathered from all stakeholders for the project
  • Requirements are incomplete
  • Requirements are inconsistent

Technical risk is the risk that some feature of the correct system can not be implemented due to a technical reason. If a technical issue has no work around and is critical to the correct system then the project will need to be abandoned.

If the technical issue has a work-around then:

  • If the work-around prevents the correct system from being built then we have requirements risk
  • If the work-around takes too long it can trigger schedule risk

Last part (3 of 3):

  • Discuss other risks and how they roll up into one of the 4 risks outlined above
  • Discuss how risk probability and severity combines to form acceptable or unacceptable risks
  • Discuss risk mitigation strategies
  • Discuss how to form a risk table/database
  • Discuss how to redefine victory for informal projects

Uncertainty and Risk in Software Development (1 of 3)

To develop high quality software consistently and reliably is to learn how to master complexity. You master complexity when you understand the different sources of uncertainty and the different risk characteristics of each uncertainty. Uncertainties introduce delays in development as you attempt to resolve them. Resolving uncertainties always involves alternative designs and generally affects your base architecture. Poor architecture choices will increase code complexity and create uncertainty as future issues become harder to resolve in a consistent manner.

Confused? Let’s untangle this mess one issue at a time.

Uncertainty

The key principle here is that uncertainty introduces delays in development. Let’s look at the average speed of development. The Mythical Man-Month conjectures that an average developer can produce 10 lines of production code per day regardless of programming language. Let’s assume for the sake of argument that today’s developers can code 100 lines per day.

Development speed is limited by meetings, changed and confused requirements, and bug fixing. Suppose we print out all the source code of a working 200,000-line program. If we ask a programmer to type this code in again, they are likely to type at least 2,000 lines of code per day. So developing the program from scratch would have taken 2,000 man-days, but typing it in again would take only 100 man-days.

The time difference has to do with uncertainty. The developer that develops the application from scratch faces uncertainty whereas the developer that types in the application faces no uncertainty.
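
The arithmetic above can be checked directly, using the rates assumed earlier (100 lines/day developed from scratch vs 2,000 lines/day retyped):

```python
# Developing vs retyping a 200,000-line program at the rates assumed in the text.
total_lines = 200_000
develop_rate = 100    # lines of new code per developer per day (assumed)
retype_rate = 2_000   # lines retyped per developer per day (assumed)

develop_days = total_lines / develop_rate  # 2000.0 man-days
retype_days = total_lines / retype_rate    # 100.0 man-days
print(develop_days / retype_days)          # 20.0 -- a 20x difference, all of it uncertainty
```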

If you have ever done mazes, you know that solving a maze from the entry to the exit involves making decisions, and this introduces delays while you are thinking. However, try doing a maze from the exit back to the entry: you will find there are few decisions to make and it is much faster. Fewer decisions mean less uncertainty to resolve, and therefore fewer delays.

It is always faster to do something when you know the solution.

Sources of Uncertainty

The major sources of uncertainty are:

  • Untrained developers
  • Incomplete and inconsistent requirements
  • Technical challenges

We use the term “learning curve” to indicate that we will be slower when working with new technologies. The slope of the learning curve indicates how much time it will take to learn a new technology. If you don’t know the programming language, libraries/APIs, or IDE that you need to work with this will introduce uncertainty.

You will constantly make syntax and semantic errors as you learn a new language, but this passes rather quickly. What takes longer is learning the base functionality provided by the libraries/APIs; in particular, you will probably end up creating routines only to discover that they already exist in the API. Learning a new IDE can take a very long time and create serious frustration along the way!

Incomplete and inconsistent requirements are a big source of uncertainty.

Incomplete requirements occur when you discover new use cases as you create a system. They also occur when the details required to code are unavailable, i.e. valid input fields, GUI design, report structure, etc. In particular, you can end up iterating endlessly over GUI and report elements – things that should be resolved before development starts.

Inconsistent requirements occur because of multiple sources of requirements as well as poor team communication.

Technical challenges come in many forms and levels of difficulty. A partial list of technical challenges includes:

  • Poorly documented vendor APIs
  • Buggy vendor APIs
  • Interfacing incompatible technologies
  • Insufficient architecture
  • Performance problems

In all cases a technical challenge is resolved either by searching for a documented solution in publications or on the Internet, or by trial and error. Trial and error can be done formally or informally, but involves investigating multiple avenues of development, possibly building prototypes, and then choosing a solution.

While you are resolving a technical challenge your software project will not advance. A common source of uncertainty is insufficient architecture.

Insufficient architecture occurs when the development team is not aware of the end requirements of the final software system. This happens when only partial requirements are available and/or understood by the developers. The development team lays down the initial architecture for the software based on their understanding of the requirements of the final software system.

Subsequently, clarified or new requirements make the developers realize that there was a better way to implement the architecture. The developer and manager will then have a conversation similar to this:


Manager: We need to have feature X changed to allow Y, how soon can we do this?

(pause from the developer)

Developer: We asked if feature X would ever need Y and we were told that it would never happen. We designed the architecture based on that answer. If we must have behavior Y, it will take 4 months to fix the architecture and we will have to rewrite 10% of the application.

Manager: That would take too long. Look, I don’t want you to over-engineer this; we need to get Y without taking too much of a hit on the schedule. What if we only needed this for this one screen?

(pause from the developer)

Developer: If we ONLY had to do it for this one screen, then we could code a workaround that would take only 2 weeks. But it would be 2 weeks for every screen where you need this. In the long run it would be much simpler to fix the architecture.

Manager: Let’s just code the workaround for this screen. We don’t have time to fix the architecture.


The net effect of incomplete requirements is that you end up with poor architecture. Poor architecture will cause a technical challenge every time you need to implement a feature that the architecture won’t support.

You will end up wasting time every time you need to work around your own architecture.

Management will not endorse the proper solution, i.e. fixing the architecture, because they have a very poor understanding that every workaround pushes the project closer and closer to failure. Eventually the software will have so many workarounds that development will slow to a crawl. Interestingly, the project will probably fail, yet soon enough the organization will attempt to build the same software using the same philosophy.

There is never enough time to get the project done properly, but there will always be enough time to do it again when the project fails.

http://www.geekherocomic.com/2009/06/03/clever-workaround/

Summary

  • Uncertainty comes from several sources
    • Untrained personnel (language, API, IDE)
    • Inconsistent and incomplete requirements
    • Technical challenges

Next part (2 of 3)

  • Defining and understanding risk
  • Matching uncertainties and risks