Comments are for Losers

If software development is like driving a car then comments are road signs along the way.

Comments are purely informational and do NOT affect the final machine code. Imagine how much time you would waste driving in a city where the road signs were cluttered, outdated, or just plain wrong.

A good comment is one that shortens the development life cycle for the next developer who drives down that road.

A bad comment is one that lengthens the development life cycle for any developer unfortunate enough to drive down that road. Sometimes that next unfortunate driver will be you, several years later!

Comments do not Necessarily Increase Development Speed

I was in university in 1985 (yup, I’m an old guy 🙂 ) and one of my professors presented a paper (which I have been unable to locate 🙁 ) of a study done in the 1970s. The study took a software program, introduced defects into it, and then asked several teams to find as many defects as they could. The interesting part of the study was that 50% of the teams had the comments completely
removed from the source code. The result was that the teams without comments not only found more defects but also found them in less time.

So, unfortunately, comments can serve as weapons of mass distraction.

Bad comments
A bad comment is one that wastes your time and does not help you to drive your development faster.

Let’s go through the categories of really bad comments:

  • Too many comments
  • Excessive history comments
  • Emotional and humorous comments

Too many comments are a clear case of less being more. Some of us have worked on programs with so many comments that you could barely find the code!
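To make the point concrete, here is a minimal, hypothetical sketch (the class and names are invented for illustration): every comment below merely restates the code, and together they bury anything a reader might actually need to know.

public class RetryPolicy {

    private static final int MAX_RETRIES = 3;   // the maximum number of retries is 3
    private int retryCount;                     // the retry count

    // This method records a failure
    public void recordFailure() {
        // Increment the retry counter by one
        retryCount++;
        // Check if the retry counter is greater than the maximum
        if (retryCount > MAX_RETRIES) {
            // Throw an exception because there were too many retries
            throw new IllegalStateException("Retry limit exceeded");
        }
    }
}

Delete every one of those comments and the method loses nothing.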

History comments can make some sense, but then again isn't that what version control commit messages are for? History comments become questionable when you have to page down multiple times just to reach the first line of actual source code. If anything, history comments should be moved to the bottom of the file so that Ctrl-End actually takes you to the bottom of the modification history.


We have all run across comments that are not relevant. Some comments are purely about the developer's emotional and intellectual state at that instant, some are about how clever they are, and some are simply attempts at humor (don't quit your day job!).

Check out some of these gems (plenty more can be found online):


// I am not sure if we need this, but too scared to delete.

//When I wrote this, only God and I understood what I was doing
//Now, God only knows

// I am not responsible of this code.
// They made me write it, against my will.

// I have to find a better job

try {

}
catch (SQLException ex) {
// Basically, without saying too much, you’re screwed. Royally and totally.
}
catch(Exception ex)
{
//If you thought you were screwed before, boy have I news for you!!!
}

// Catching exceptions is for communists

// If you’re reading this, that means you have been put in charge of my previous project.
// I am so, so sorry for you. God speed.

// if i ever see this again i’m going to start bringing guns to work

//You are not expected to understand this


Self-Documenting Code instead of Comments

We apply science to software by checking the functionality we desire (requirements model) against the behavior of the program (machine code model).

When observations of the final program disagree with the requirements model we have a defect which leads us to change our machine code model.

Of course we don't alter the machine code model directly (at least most of us don't); we update the source code, which is the last easily modified model. Since comments are not compiled into the machine code, there is some logic to making sure that the source code model is self-documenting. It is the only model that really counts!

Self-documenting code requires that you choose good names for variables, classes, functions, and enumerated types. Self-documenting means that OTHER developers can understand what you have done. Good self-documenting code has the same characteristic as good comments: it decreases the time it takes to do development.
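As a small, hypothetical illustration (the payroll rule and names are invented, not taken from any real project), compare a comment-dependent fragment with a self-documenting one:

public class PayrollRules {

    static final int MAX_REGULAR_HOURS = 40;

    // Comment-dependent version: the reader has to trust the comment
    // "chk if emp is elig for OT"
    static boolean chk(int h, int t) {
        return h > 40 && t != 2;        // what is type 2?
    }

    // Self-documenting version: the names carry the meaning, no comment needed
    static boolean isEligibleForOvertime(int hoursWorked, boolean isSalaried) {
        return hoursWorked > MAX_REGULAR_HOURS && !isSalaried;
    }
}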

Practically, your code is self-documenting when your peers say that it is, not when YOU say that it is. Peer-reviewed comments and code are the only way to make sure that the code will lead to faster development cycles.

Comments gone Wild

Even if all the comments in a program are good (i.e. they reduce the development life cycle), they are subject to drift over time. The speed of software development makes it difficult to keep comments aligned with the source code. Comments that are allowed to drift become road signs that are no longer relevant to drivers.

Good comments go wild when the developer is so focused on getting a release out that he does not stop to maintain them. Comments have gone wild when they become misaligned with the source code; at that point you have to terminate them.

No animals (or comments) were harmed in the writing of this blog.

Commented Code
Code gets commented out during a software release as we experiment with different designs or debug a problem. What is really not clear is why that code is still commented out at the final check-in of the release.

Over my career as a manager and trainer, I've asked developers why they leave commented-out code behind. The universal answer I get is "just in case". Just in case what? By the end of a software release you have already established that you are not going to use that code, so why are you dragging it around? People hang on to commented-out code as if it were a "Get Out of Jail Free" card; it isn't.

The reality is that commented-out code is a big distraction. When you leave commented-out code in your source you are leaving a land mine for the next developer who walks through it.
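A minimal sketch of the land mine (the pricing logic and names are invented): the dead block explains neither why it was disabled nor whether it is safe to revive, while your version control system already remembers it. With git, for example, git log -S "legacyDiscountRate" will find every commit that added or removed that code, so deleting it loses nothing.

public class PricingService {

    // The commented-out block below is the "just in case" land mine:
    // nobody remembers why it was disabled or whether it is safe to revive.
    public double priceFor(double subtotal, double currentDiscountRate) {
        // double discount = legacyDiscountRate(subtotal);   // just in case?
        // if (discount > 0.5) {
        //     discount = 0.5;
        // }
        return subtotal * (1.0 - currentDiscountRate);
    }
}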

When the pressure is on to get defects fixed, developers will uncomment previously commented code to see if it fixes the problem. There is no substitute for understanding the code you are working on; you might get lucky when you reinstate commented-out code, but in all likelihood it will blow up in your face.

Solutions

If your developers are not taking (or being given) enough time to write good comments then they should not write ANY comments. You will get more productivity because they will not waste time writing bad comments that slow everyone else down.

Time spent on writing self-documenting code will help you and your successors reduce development life cycles. It is absolutely false to believe that you do not have time to write self-documenting code.

If you are going to take on the hazards of writing comments then they need to be peer reviewed to make sure that OTHER developers understand the code. Unless the code reviewer(s) understands all the comments the code should not pass inspection.

If you don’t have a code review process then you are only commenting the code for yourself. The key principle when writing comments is Non Nobis Solum (not for ourselves alone).

When you run across a comment that sends you on a wild goose chase, fix it or delete it. If you are the new guy on the team and realize that the comments are wasting your time, get rid of them; your development speed will go up.



Uncertainty trumps Risk in Software Development

Successful software development involves understanding uncertainty, and uncertainty only comes from a few sources in a software project.  The uncertainties of a software project increase with the size of the project and the team's inexperience with the domain and technologies.  The focus of this article is on uncertainty, not on risk.  In part 1 we discussed uncertainty and in part 2 we discussed risk, so it should be clear that:

All risks are uncertain, however, not all uncertainties are risks.

For example, scope creep is not a risk (see Shift Happens) because it is certain to happen in any non-trivial project.  Since risk is uncertain, a risk related to scope creep might be that the scope shifts so much that the project is canceled.  However, this is a useless risk to track because by the time it has triggered it is much too late for anything proactive to be done.

It is important to understand the uncertainties behind any software development and then to extract the relevant risks to monitor.  The key uncertainties of a software project are around:

  • requirements
  • technology
  • resources
  • estimating the project deadline

Uncertainty in Requirements
There are several methodologies for capturing requirements:

  • Business requirements document (BRD) or software requirement specification (SRS)
  • Contract-style requirement lists
  • Use cases
  • User stories

Regardless of the methodology used, your initial requirements will split up into several categories:

The blue area above represents what the final requirements will be once your project is completed, i.e. the System To Build.  The initial requirements that you capture are in the yellow area called Initial Requirements.

With a perfect requirements process the Initial Requirements area would be the same as the System To Build area.  When they don’t overlap we get the following requirement categories:

  1.  Superfluous requirements
  2. Missing requirements

Superfluous initial requirements tend to occur in very large projects or in projects where the initial requirements process is incomplete.  Due to scope shift, the Missing requirements category always has something in it (see Shift Happens).  If either of these two categories contains a requirement that affects your core architecture negatively then you will increase your chance of failure by at least one order of magnitude.

For example, a superfluous requirement that causes the architecture to be too flexible will put the developers through a high learning curve and lead to slow development.

If scalability is a requirement but it is missing when the initial architecture is laid down, then you will discover that it is difficult and costly to add later.


The physical equivalent would be an apartment building whose foundation was insufficient for the building's needs, so that it is slowly collapsing on one side.  Imagine the cost of trying to fix the foundation now that the building is complete.

I’ve been in start-ups that did not plan for sufficient scalability in the initial architecture; subsequently, the necessary scalability was only added with serious development effort and a large cost in hardware. Believe me, throwing expensive hardware at a software problem late in the life cycle is not fun or a practical solution :-(.


The overlapping box, Inconsistent Requirements, categorizes requirements (known or missing) that turn out to be in conflict with other requirements.  This can happen when requirements are gathered from multiple user roles, all of whom have a different view of what the system will deliver.

It is much easier and faster to test your requirements and discover inconsistencies before you start developing the code rather than discover the inconsistencies in the middle of development.  When inconsistencies are discovered by developers and QA personnel then your project will descend into fire-fighting (see Root cause of ‘Fire-fighting’ in Software Projects).

The physical equivalent here is to have a balcony specified to one set of contractors but forget to notify another set that you need a sliding door (see right).  When the construction people stumble on the inconsistency you may have already started planning for one alternative only to discover that rework is necessary to get to the other requirement.

Note, if you consistently develop software releases and projects with less than 90% of your requirements documented before coding starts then you get what you deserve (N.B. I did not say before the project starts 🙂 ).   One of the biggest reasons that Agile software development stalls or fails is that the requirements in the backlog are not properly documented; if you Google “poor agile backlogs” you will get > 20M hits.

Requirements Risks
Some risks associated with requirements are:

  • Risk of a missing requirement that affects the core architecture
  • Risk that inconsistent requirements cause the critical path to slip

Uncertainty in Technology
Technical uncertainty comes from using technology correctly but failing to accomplish the goals outlined by your requirements; lack of knowledge and/or skills is handled in the next section (Uncertainty Concerning Resources).  Team members who lack experience with a technology (a poorly documented API, language, IDE, etc.) do not constitute a technical risk; that is a resource risk (i.e. lack of knowledge).

Technical uncertainty comes from only a few sources:

  • Defective APIs
  • Inability to develop algorithms


Unforeseen defects in APIs will impact one or more requirements and delay development.  If there is an alternative API with the same characteristics then there may be little or no delay in changing APIs; e.g. there are multiple XML parsers for Java that share the same standard API.

However, much of the time changing to another API will cause delays because the new API will be implemented differently than the defective one. There are also no guarantees that the new API will be bug free.
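As a hedged sketch of the best case mentioned above: Java's standard JAXP interface hides the concrete XML parser, so the code below would not change if the underlying implementation (the JDK's built-in parser, Apache Xerces, etc.) were swapped out, e.g. via the javax.xml.parsers.DocumentBuilderFactory system property. The file name orders.xml is just a placeholder.

import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class ParserSwapExample {
    public static void main(String[] args) throws Exception {
        // JAXP resolves the concrete parser implementation at runtime;
        // application code depends only on the standard API.
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        DocumentBuilder builder = factory.newDocumentBuilder();
        Document doc = builder.parse(new File("orders.xml"));
        System.out.println("Root element: " + doc.getDocumentElement().getNodeName());
    }
}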

Mature organizations use production APIs, but even that does not protect you against defects.  The best-known example has to be the Pentium bug from Intel discovered in 1994.  Although that bug did not seem to cause much real damage, any time you have an intermittent problem the source might be a subtle defect in one of the APIs that you are using.

Organizations that use non-production (alpha or beta) APIs for development run an extremely high risk of finding a defect in an API.  This generally only happens in poorly funded start-ups where the technical team might have excessive decisional control in the choice of technologies.

The other source of technical uncertainty is the team's inability to develop algorithms to accomplish the software's goals.  These algorithms typically relate to the limits of system resources such as CPU, memory, battery, or bandwidth, e.g.:

  • Performance
  • Memory management
  • Power management
  • Volume of data concerns

Every technical uncertainty is associated with one or more requirements.  The inability to produce an algorithm to satisfy a requirement may have a work-around with acceptable behavior or might be infeasible.

If an infeasible requirement prevents a core goal from being accomplished then the project will get canceled.  If the affected requirements have technical work-arounds then the project will be delayed while the work-arounds are developed.

Technical Risks
Some risks associated with technology are:

  • Risk that a defective API will cause us to look for another API
  • Risk that we will be unable to find a feasible solution for a core project requirement

Uncertainty Concerning Resources
When using the same team to produce the next version of a software product there is little to no resource uncertainty.  Resource uncertainty exists if any of the following are present:

  • Any team member is unfamiliar with the technology you are using
  • Any member of the team is unfamiliar with the subject domain
  • You need to develop new algorithms to handle a technical issue (see previous section)
  • Any team member is not committed to the project because they maintain another system
  • Turnover robs you of a key individual

Resource uncertainty revolves around knowledge and skills; commonly this includes: 1) language, 2) APIs, 3) interactive development environments (IDEs), and 4) methodology (Agile, RUP, Spiral, etc.).  If your team is less knowledgeable than required then you will underestimate some, if not all, of the tasks in the project plan.

When team members are unfamiliar with the subject domain then any misunderstandings that they have will cause delays in the project.  In particular, if the domain is new and the requirements are not well documented then you will probably end up with the wrong architecture, even if you have skilled architects.

The degree to which you end up with a bad architecture and a canceled project depends on how unfamiliar you are with the subject domain and technologies being used.  In addition, the size of your project will magnify all resource uncertainties above.

The majority of stand-alone applications are between 1,000 and 10,000 function points.  As you would expect, the percentage of the system that any one person can understand drops precipitously between 1,000 and 10,000 function points.  The number of canceled projects goes up as understanding drops, because all uncertainties increase and issues fall between the cracks.

When team members are also committed to maintaining legacy systems then their productivity will be uncertain.  Unless your legacy system behaves in a completely predictable fashion, those resources will be pulled away to solve problems on an unpredictable basis.  They will not be able to commit to the team, and multi-tasking will lower both their productivity and the team's (see Multi-tasking Leads to Lower Productivity).

Resource Risks
Some risks associated with resources are:

  • The team is unable to build an appropriate architecture foundation for the project
  • A key resource leaves the project before the core architecture is complete

Uncertainty in Estimation
When project end dates are estimated formally you will have three dates: 1) earliest finish, 2) expected finish, and 3) latest finish.  This makes sense because each task in the project plan can finish in a range of time, i.e. earliest finish to latest finish.  When a project only quotes a single end date, it is almost always the earliest possible finish, so there is a 99.9% chance that you will miss it (a small worked example of a formal three-point estimate follows the list below).  Risk in estimation makes the most sense if:

  • Formal methods are used to estimate the project
  • Senior staff accepts the estimate
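As one concrete illustration of the earliest/expected/latest range, here is a PERT-style three-point calculation; this is only one of several formal techniques, not necessarily the one the author has in mind, and the task numbers are hypothetical.

public class ThreePointEstimate {

    // PERT expected duration: E = (optimistic + 4 * mostLikely + pessimistic) / 6
    static double expectedDuration(double optimistic, double mostLikely, double pessimistic) {
        return (optimistic + 4 * mostLikely + pessimistic) / 6.0;
    }

    public static void main(String[] args) {
        // Hypothetical task: earliest finish 10 days, most likely 14 days, latest 24 days
        double expected = expectedDuration(10, 14, 24);
        System.out.printf("Expected duration: %.1f days%n", expected);   // prints 15.0 days
    }
}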

There are numerous cost estimating tools that can do a capable job.  Capers Jones lists several of them, but he also comments on how many companies don't use formal estimates, and how those that do often don't trust them:

Although cost estimating is difficult there are a number of commercial software cost estimating tools that do a capable job:  COCOMO II, KnowledgePlan, True Price, SEER, SLIM, SoftCost, CostXpert, and Software Risk Master are examples.

However just because an accurate estimate can be produced using a commercial estimating tool that does not mean that clients or executives will accept it.  In fact from information presented during litigation, about half of the cases did not produce accurate estimates at all and did not use estimating tools.  Manual estimates tend towards optimism or predicting shorter schedules and lower costs than actually occur.   

Somewhat surprisingly, the other half of the cases had accurate estimates, but they were rejected and replaced by forced estimates based on business needs rather than team abilities.

Essentially, senior staff have a tendency to ignore formal estimates and declare the project end date.  When this happens the project is usually doomed to end in disaster (see Why Senior Management Declared Deadlines lead to Disaster).

So estimation is guaranteed to be uncertain.  Let's combine the requirements categories from before with the categories of technical uncertainty to see where our uncertainty is coming from.  Knowing the different categories of requirements uncertainty gives us strategies to minimize or eliminate that uncertainty.

Starting with the Initial Requirements, we can see that there are two categories of uncertainty that can be addressed before a project even starts:
  1. Superfluous initial requirements
  2. Inconsistent requirements

Both of these categories will waste time if they get into the development process, where they will cause a great deal of confusion inside the team.  At best these requirements will cause the team to waste time; at worst they will deceive the team into building the architecture incorrectly.  A quality assurance process on your initial requirements can ensure that both of these categories are empty.

The next categories of uncertainty that can be addressed before the project starts are:

  1. Requirements with Technical Risk
  2. Requirements Technically Infeasible

Technical uncertainty is usually relatively straightforward to find when a project starts.  It will generally involve non-functional requirements such as scalability, availability, extensibility, compatibility, portability, maintainability, and reliability.  Other technical uncertainties will be concerned with:

  1. algorithms to deal with limited resources, i.e. memory, CPU, battery power
  2. volume of data concerns, i.e. large files or network bandwidth
  3. strong security models
  4. improving compression algorithms

Any use case that is called frequently, and any report that ties up your major tables, is a source of technical uncertainty.  If there will be significant technical uncertainty in your project then you are better off splitting those technical uncertainties into a smaller project that the architects handle before starting the main project.  That way, if there are too many technically infeasible issues, you can at least cancel the project early.

However, the greatest source of uncertainty comes from the Missing Requirements section.  The larger the number of missing requirements the greater the risk that the project gets canceled.  If we look at the graph we presented above:

You can see that the chance of a project being canceled is highly correlated with the % of scope creep. Companies that routinely start projects with a fraction of the requirements identified are virtually guaranteed to have a canceled project.

In addition, even if you use formal methods for estimation, your project end date will not take into account the Missing Requirements.  If you have a significant number of missing requirements then your estimates will be way off.

Estimation Risks

The most talked about estimation risk is schedule risk.  Since most companies don't use formal methods, and the ones that do often have their estimates ignored, it makes very little sense to talk of schedule risk.

When people say "schedule risk", they are really stating that the project will miss the deadline.  But given how estimation is done on most projects, it is certain that the project will miss its deadline; the only useful question is "by how much?".

Schedule risk can only exist when formal methods are used and there is an earliest finish/latest finish range for the project.  Schedule risk then applies to any task that takes longer than its latest finish and compromises the critical path of the project.  The project manager needs to try to crash current and future tasks to see if he can get the project back on track.  If you don’t use formal methods then this paragraph probably makes no sense 🙂

Conclusion

The main sources of uncertainty in software development come from:

  • requirements
  • technology
  • resources
  • estimates

Successful software projects look for areas of uncertainty and minimize them before the project starts. Some uncertainties can be qualified as risks and should be managed aggressively by the project manager during the project.

Uncertainty in requirements, technology, and resources will cause delays in your project.  If you are using formal methods then you need to pay attention to delays caused by uncertainties not accounted for in your model.  If you don't use formal methods then every time you hit a delay caused by an uncertainty, that delay needs to be tacked on to the project end-date (of course, it won't be 🙂 ).

If your project does not have strong architectural requirements and is not too big (i.e. < 1,000 function points) then you should be able to use Agile software development to set-up a process that grapples with uncertainty in an incremental fashion.  Smaller projects with strong architectural requirements should set up a traditional project to settle the technical uncertainties before launching into Agile development.

Projects that use more traditional methodologies need to add a quality assurance process to their requirements to ensure a level of completeness and consistency before starting development.  One way of doing this is to put requirements gathering into its own project.  Once you capture the requirements, if you establish that you have strong architectural concerns, then you can create a project to build out the technical architecture before the main project.  Finally you would do the project itself.  Breaking a project into 2 or 3 stages like this gives you the ability to cancel it before too much effort is sunk into something with too much uncertainty.

Regardless of your project methodology, being aware of the completeness of your requirements as well as the technical uncertainty of your non-functional requirements will help you reduce the chance of project cancellation by at least one order of magnitude.

It is much more important to understand uncertainty than it is to understand risk in software development.


Appendix: Traditional Software Risks
This list of software risks is courtesy of Capers Jones; the risks are listed in descending order of importance.

  • Risk of missing toxic requirements that should be avoided
  • Risk of inadequate progress tracking
  • Risk of development tasks interfering with maintenance
  • Risk of maintenance tasks interfering with development
  • Risk that designs are not kept updated after release
  • Risk of unstable user requirements
  • Risk that requirements are not kept updated after release
  • Risk of clients forcing arbitrary schedules on team
  • Risk of omitting formal architecture for large systems
  • Risk of inadequate change control
  • Risk of executives forcing arbitrary schedules on team
  • Risk of not using a project office for large applications
  • Risk of missing requirements from legacy applications
  • Risk of slow application response times
  • Risk of inadequate maintenance tools and workbenches
  • Risk of application performance problems
  • Risk of poor support by open-source providers
  • Risk of reusing code without test cases or related materials
  • Risk of excessive feature “bloat”
  • Risk of inadequate development tools
  • Risk of poor help screens and poor user manuals
  • Risk of slow customer support
  • Risk of inadequate functionality

Uncertainty and Risk in Software Development (2 of 3)

[Part 1 of 3 is here.]

Defining Risk and its Components

There are future events whose impact can have a negative outcome or consequence for our project. A future event can only be risky if the event is uncertain. If an event is certain then it is no longer a risk, even if the entire team does not perceive the certainty of the event, e.g. individual developers may know that the project is late even though the project manager and senior staff do not.

Risks always apply to a measurable goal that we are trying to achieve; if there is no goal there can be no risk, i.e. a project can’t have schedule risk if it has no deadline.

Once a goal has been impacted by a risk we say that the risk has triggered. The severity of the outcome depends on how
far it displaces us from our goal. Once triggered, there should be a mitigation process to reduce the severity of the possible outcomes.

Before looking at software project risks tied to these goals, let’s make sure that we all understand the components of risk by going through an example.

Risk Example: Auto Collision

Let’s talk about risk using a physical example to make things concrete. The primary goal of driving a car is to get from point A to point B. Some secondary goals are:

  • Get to the destination in a reasonable time
  • Make sure all passengers arrive in good health.
  • Make sure that the car arrives in the same condition it departs in.

There is a risk of collision every time you drive your car:

  • The event of a collision is uncertain
  • The outcome is the damage cost and possible personal injury
  • The severity is proportional to the amount of damage and personal injury sustained if there is an accident
    • If there is loss of life then the severity is catastrophic

A collision will affect one or more of the above goals. Risk management with respect  to auto collisions involves:

  • Reducing the probability of a collision
  • Minimizing the effects of a collision

There are actions that can reduce or increase the likelihood of a collision:

  • Things that reduce the chance of collision
    • Understanding safe driving techniques
    • Driving when there are fewer drivers on the road
    • Using proper turn signals
  • Things that increase the chance of collision

By taking the actions that reduce the chance of a collision while avoiding the actions that increase it, we can reduce the probability or likelihood of a collision.

Reducing the likelihood of a collision does not change the severity of the event if it occurs. The likelihood of an event and its consequences are independent, even though some actions (e.g. driving slowly) will reduce both.

If an auto collision happens then a mitigation strategy would attempt to minimize the effect of the impact. Mitigation strategies with respect to auto collision are:

  • Wear a seat belt
  • Drive a car with excellent safety features
  • Have insurance for collisions
  • Have the ability to communicate at all times (i.e. cell phone, etc)

Having a mitigation strategy will not reduce the chance of a collision, it will only lessen the severity.

Goals of a Software Project

The primary goals of a software project are:

  • Building the correct software system
  • Building the system so that its benefits exceed its costs (i.e. NPV positive)

Building the Correct Software System

What is the correct software system? Cartoons similar to this one are easily found on the Internet:

The correct system is shown in the last frame of the cartoon; so let’s define the correct system as what the customer actually needs. To build the correct system we will need to have correct requirements.

How Long Will The Project Take?

Let's assume we have complete and consistent requirements for a correct system. How long will it take to build this system? One approach is to take a competent team and have them build out the system without imposing a deadline. Once the system is built we would have the actual time to build the system (Tbuild).

Tbuild is theoretical because unless you are using an Agile methodology you will want to estimate (Testimate) how long it takes to produce the system before you start. Nonetheless, given your resources and requirements Tbuild does exist and is a finite number; as one of my colleagues used to say, “the software will take the time that it takes“.

Most executives want to know how long a project is going to take before the project starts. To do this we take the requirements and form an estimate (Testimate) of how long the system will take to build. The key point to note is that the actual time to build, Tbuild, and the estimated time to build, Testimate, will be different, and that Testimate is only valid to the extent that you use a valid estimation methodology.

Building the System so that its Benefits Exceed its Costs

Building a system so that its benefits exceed its costs is equivalent to saying that the project puts money on the organization’s bottom line. We hope that an organization will do the following:

  • Define the system correctly (project scope)
  • Assess the financial viability of the project (capital budgeting)
  • Establish a viable project plan

Financial viability implies that the available resources will be able to produce the desired system before a specific date (Tviable). If the actual build time exceeds that date (Tbuild > Tviable) then the organization will have a financial failure.

The problem is that we don't know Tbuild in advance, so we can't tell directly whether the project will be a financial failure. We need to have a reasonable expectation that the project is viable BEFORE we build it out. Therefore we use a proxy by estimating the time (Testimate) it will take to build the software from our project plan.

Once we have a time estimate then we can go forward on the project if Testimate < Tviable (a small sketch of this check follows the list below). Establishing Testimate for a project can be done in multiple ways:

  1. Formal estimation based on the requirements and a project plan
  2. Senior management declares the end date
  3. The end date is dictated by an external market or customer event
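A minimal sketch of that go/no-go check using the article's notation; the numbers and units are invented for illustration.

public class ViabilityCheck {

    // Go forward only if the estimated build time fits inside the financially viable window.
    static boolean goForward(double tEstimateMonths, double tViableMonths) {
        return tEstimateMonths < tViableMonths;
    }

    public static void main(String[] args) {
        double tEstimate = 9.0;    // Testimate: estimated months to build (hypothetical)
        double tViable = 12.0;     // Tviable: months before the project stops being worth doing (hypothetical)
        System.out.println(goForward(tEstimate, tViable) ? "Fund the project" : "Do not start");
    }
}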

Software Project Risks

There are several primary risks for a software project:

  • Schedule risk
  • Estimation risk
  • Requirements risk
  • Technical risk

We often confuse schedule risk and estimation risk. Schedule risk is the risk that the tasks on the critical path have been underestimated and the project will miss the estimated end date (Testimate). A project that takes longer than the estimate is not necessarily a failure unless it finishes after Tviable.

You can only talk meaningfully about schedule risk in projects where:

  • formal estimation techniques are used
  • proper task dependency analysis is done
  • project critical path is identified

Most of us do not work for organizations that are CMM level 4+ (or equivalent), so you are unlikely to be using formal methods. When the project end date is arbitrary (i.e. method 2 or 3 above) it is not meaningful to talk about schedule risk, especially since history shows that we grossly underestimate how long it will take to build the system, i.e. Testimate << Tbuild. When formal methods are not used, the real issue is estimation risk, not schedule risk.

The real tragedy is when an IT department attempts to meet an unrealistic date set by management when a realistic date would still have yielded a viable project. Unfortunately, unrealistic deadlines cause developers to take shortcuts that usually cripple the architecture, so that by the time management grants additional time after the original date is missed, the damage to the architecture is terminal and you can no longer achieve the initial objective.

Requirements risk is the risk that we do not have the correct requirements and are unable to get to a subset of the requirements that enables us to build the correct system prior to the project end date. There are many reasons for having incorrect requirements when a project starts:

  • The customer can not articulate what he needs
  • Requirements are not gathered from all stakeholders for the project
  • Requirements are incomplete
  • Requirements are inconsistent

Technical risk is the risk that some feature of the correct system can not be implemented due to a technical reason. If a technical issue has no work around and is critical to the correct system then the project will need to be abandoned.

If the technical issue has a work-around then:

  • If the work-around prevents the correct system from being built then we have requirements risk
  • If the work-around takes too long to develop then it can trigger schedule risk

Last part (3 of 3):

  • Discuss other risks and how they roll up into one of the 4 risks outlined above
  • Discuss how risk probability and severity combines to form acceptable or unacceptable risks
  • Discuss risk mitigation strategies
  • Discuss how to form a risk table/database
  • Discuss how to redefine victory for informal projects

Uncertainty and Risk in Software Development (1 of 3)

To develop high quality software consistently and reliably is to learn how to master complexity. You master complexity when you understand the different sources of uncertainty and the different risk characteristics of each uncertainty. Uncertainties introduce delays in development as you attempt to resolve them. Resolving uncertainties always involves alternative designs and generally affects your base architecture. Poor architecture choices will increase code complexity and create uncertainty as future issues become harder to resolve in a consistent manner.

Confused? Let’s untangle this mess one issue at a time.

Uncertainty

The key principle here is that uncertainty will introduce delays in development. Let's look at the average speed of development. The Mythical Man-Month conjectures that an average developer can produce 10 lines of production code per day regardless of programming language. Let's assume for the sake of argument that today's developers can code 100 lines of code per day.

Development speed is limited because of meetings, changed and confused requirements, and bug fixing. Suppose we print out all the source code of a working 200,000 line program. If we ask a programmer to type this code in again, they are likely to be typing at least 2,000 lines of code per day. So to develop the program from scratch would have taken 2,000 man days, but to type it in again would only take 100 man days.

The time difference has to do with uncertainty. The developer that develops the application from scratch faces uncertainty whereas the developer that types in the application faces no uncertainty.

If you have ever done mazes, you know that working from the entry to the exit involves making decisions, and those decisions introduce delays while you are thinking. However, try doing a maze from the exit back to the entry; you will find there are few decisions to make and it is much faster. Fewer decisions to resolve means fewer delays.

It is always faster to do something when you know the solution.

Sources of Uncertainty

The major sources of uncertainty are:

  • Untrained developers
  • Incomplete and inconsistent requirements
  • Technical challenges

We use the term “learning curve” to indicate that we will be slower when working with new technologies. The slope of the learning curve indicates how much time it will take to learn a new technology. If you don’t know the programming language, libraries/APIs, or IDE that you need to work with this will introduce uncertainty.

You will constantly make syntax and semantic errors as you learn a new language, but this should pass rather quickly. What will take longer is learning the base functionality provided by the libraries/APIs; in particular, you will probably end up creating routines only to discover that they already exist in the API. Learning a new IDE can take a very long time and create serious frustration along the way!

Incomplete and inconsistent requirements are a big source of uncertainty.

Incomplete requirements occur when you discover new use cases as you create a system. They also occur when the details required to code are unavailable, i.e. valid input fields, GUI design, report structure, etc. In particular, you can end up iterating endlessly over GUI and report elements – things that should be resolved before development starts.

Inconsistent requirements occur because of multiple sources of requirements as well as poor team communication.

Technical challenges come in many forms and levels of difficulty. A partial list of technical challenges includes:

  • Poorly documented vendor APIs
  • Buggy vendor APIs
  • Interfacing incompatible technologies
  • Insufficient architecture
  • Performance problems

In all cases a technical challenge is resolved either by searching for a documented solution (in publications or on the Internet) or by trial and error. Trial and error can be done formally or informally, but it involves investigating multiple avenues of development, possibly building prototypes, and then choosing a solution.

While you are resolving a technical challenge your software project will not advance. A common source of uncertainty is insufficient architecture.

Insufficient architecture occurs when the development team is not aware of the end requirements of the final software system. This happens when only partial requirements are available and/or understood by the developers. The development team lays down the initial architecture for the software based on their understanding of the requirements of the final software system.

Subsequently, clarified requirements or new requirements make developers realize that there was a better way to implement the architecture. The developer and manager will have a conversation that is similar to:


Manager: We need to have feature X changed to allow Y, how soon can we do this?

(pause from the developer)

Developer: We had asked if feature X would ever need Y and we were told that it would never happen. We designed the architecture based on that. If we have to have behavior Y it will take 4 months to fix the architecture and we would have to rewrite 10% of the application.

Manager: That would take too long. Look I don’t want you to over engineer this, we need to get Y without taking too much of a hit on the schedule. What if we only need to have this for this screen?

(pause from the developer)

Developer: If we ONLY had to do it for this one screen then we can code a work around that will only take 2 weeks. But it would be 2 weeks for every screen where you need this. It would be much simpler in the long run to fix the architecture.

Manager: Let’s just code the work around for this screen. We don’t have time to fix the architecture.


The net effect of insufficient requirements is that you end up with poor architecture. Poor architecture will cause a technical challenge every time you need to implement a feature that the architecture won’t support.

You will end up wasting time every time you need to work around your own architecture.

Management will not endorse the proper solution, i.e. fixing the architecture, because they have a very poor understanding of how each work-around pushes the project closer and closer to failure. Eventually the software will have so many work-arounds that development will slow to a crawl. It is interesting that the project will probably fail, and yet, soon enough, the organization will attempt to build the same software using the same philosophy.

There is never enough time to get the project done properly, but there will always be enough time to do it again when the project fails.

http://www.geekherocomic.com/2009/06/03/clever-workaround/

Summary

  • Uncertainty comes from several sources
    • Untrained personnel (language, API, IDE)
    • Inconsistent and incomplete requirements
    • Technical challenges

Next part (2 of 3)

  • Defining and understanding risk
  • Matching uncertainties and risks

Why Adding Personnel to a late Software Project delays it more

The blog entry on Root Causes of ‘Fire-Fighting’ explains how poor requirements and insufficient team synchronization mechanisms can lead to constant fire-fighting. When faced with constant fire-fighting your project starts spinning out of control and code development will slow to a crawl. At this time, management’s first instinct is to throw more developers at the problem.

While adding resources to a late project seems like a logical thing to do, it generally makes the problem worse, i.e. it leads to more fire-fighting and reduced productivity. As counter-intuitive as it seems, throwing people off the project is more likely to make it move faster.  Fred Brooks, author of The Mythical Man-Month, calls this principle Brooks' law.

Different Types of Team Activity

Before addressing why adding resources slow down late projects,  let’s look at the different types of team activities and their inherent productivity characteristics. When teams of people perform tasks they fall into one of three different categories: 1) additive, 2) disjunctive, and 3) conjunctive.

In an additive activity, the productivity of the group is determined by adding up the productivity of each of the individuals comprising the team, i.e. team productivity = Σ (individual productivity) . One additive activity is tug-of-war where the productive output of your team is equal to the sum of the pulling force of all the members of your team. Another additive activity would be a team of people painting a house.

Managers throw additional people into late projects on the assumption that coding is an additive activity, it isn’t; we’ll cover why in a second.

In a disjunctive activity, the productivity of the group is determined by the strongest member of the team, i.e. team productivity = max(individual 1, individual 2, …, individual n). A disjunctive activity would be playing Trivial Pursuit in large teams: the team gets the answer right when any team member gets it right.  In software projects, disjunctive activities occur when there is a very specific technical problem to solve; whoever solves the problem first solves it for the entire team.

In a conjunctive activity, the productivity of the group is determined by the weakest member of the team, i.e. team productivity = min(individual 1, individual 2, …, individual n). Conjunctive activities are equivalent to the weakest link in a chain. Security is a conjunctive activity: you are only as secure as the weakest part of your security architecture. Quality is a conjunctive activity, which is why we say "quality is everyone's job". It only takes one poor quality component to reduce the quality of an entire product.
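A minimal sketch of the three aggregation rules with purely illustrative productivity numbers; the only point is how the team total is computed in each case.

import java.util.Arrays;

public class TeamProductivity {
    public static void main(String[] args) {
        double[] individual = {8.0, 5.0, 2.0};   // hypothetical individual productivities

        // Additive: team output is the sum (tug-of-war, painting a house)
        double additive = Arrays.stream(individual).sum();

        // Disjunctive: team output is the strongest member (solving a specific technical problem)
        double disjunctive = Arrays.stream(individual).max().getAsDouble();

        // Conjunctive: team output is the weakest member (security, quality)
        double conjunctive = Arrays.stream(individual).min().getAsDouble();

        System.out.println(additive + " " + disjunctive + " " + conjunctive);   // 15.0 8.0 2.0
    }
}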

When an organization is unaware of critical conjunctive activities, they are likely to have all kinds of execution problems.

Understanding Requirements is a Conjunctive Activity

Software projects get into fire-fighting mode because there is a poor understanding of the requirements from a team perspective. Whether or not the requirements were well written, if they are poorly understood by the team then you start playing the six blind men and the elephant.

This is where you discover that everyone in your project has a different perspective on what the system is supposed to do and how it is supposed to do it. The fire-fighting mode is nothing more than a set of meetings to resolve differences and solve problems caused by divergent beliefs on the project.

Understanding the requirements is a conjunctive activity. Your productivity is only as good as the weakest understanding in the team. The developer on the team with the weakest understanding of the requirements is probably generating the most defects. If QA does not understand the requirements (if they exist) then they will be generating all kinds of false positives when they are unsure the software is behaving properly.

With this perspective, it is easy to see how adding people to a late project will cause it to be later. The additional developers and QA being added to the project will have the poorest understanding of the requirements of all the team members. This means that they will almost certainly generate more defects in development and cause even more false positives in QA. This will increase the amount of fire-fighting that you do and cause the project to slow down even more.

Solution: Throw People off the Ship

Walk the plank

So as counter-intuitive as it sounds, you need to throw people off the ship. Find the developers and QA personnel who don’t understand the requirements and remove them from the project. These are the guys creating much of the noise in the fire-fighting meetings.

Otherwise get these people together with the business analysts and educate them about what the software is supposed to do and how it is supposed to be done. If you are going to add personnel to the team then this becomes an ideal time to get them educated on the requirements BEFORE they start producing or testing code.

While they are not working directly on the project have them put together the centralized requirements repository suggested in the last blog.  If they become sufficiently familiar with the requirements then you can add them back to the software team.

Additional resource: The Mythical Man Month, by Fred Brooks


Why Senior Management Declared Deadlines lead to Disaster



Senior management, in the face of market pressure, will sometimes initiate a project to address a business need that has a management declared deadline.

Management will do this when they feel market pressures to create a new software product or version because of some impending business event in the market.
These market driven deadlines occur for a variety of reasons:

  • Competitor is releasing a new product/version that will eat your market share
  • Laws have changed that require compliance by your software system
  • Merger with another organization requires software systems to be integrated
  • You are a start-up and need to build version 1.0 to obtain financing
  • Sales has sold vapor-ware and engineering now needs to produce it


Management will be adamant that the deadline is fixed and must be met at all costs. Some management groups are more dysfunctional than others.

The most dysfunctional management will expect all existing projects not to be impacted by the addition of a high-priority project. They will say things like 'You will just have to multi-task', or something equally ridiculous.



Anyone who has worked in multitasking mode is familiar with Little's law and the productivity drop-off that comes from trying to do multiple things at the same time.
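As a hedged aside on what Little's law actually says (the numbers here are hypothetical): average work in progress = throughput × cycle time, so with a fixed throughput every extra concurrent project stretches the cycle time of all of them, before counting any context-switching overhead.

public class LittlesLaw {
    public static void main(String[] args) {
        // Hypothetical throughput: the developer completes 0.5 project-sized chunks of work per week
        double throughputPerWeek = 0.5;

        // Little's law: WIP = throughput * cycleTime, so cycleTime = WIP / throughput
        for (int wip = 1; wip <= 3; wip++) {
            double cycleTimeWeeks = wip / throughputPerWeek;
            System.out.println(wip + " concurrent project(s): about " + cycleTimeWeeks + " weeks each");
        }
        // Prints: 1 -> 2.0 weeks, 2 -> 4.0 weeks, 3 -> 6.0 weeks
    }
}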

Less dysfunctional management will realize that the high priority project will cause other projects to be late, but they will continue to expect that everything progresses even though resources will be committed to multiple projects.



Project deadlines typically get set in the short term, which means that the resources for the project will be fixed. Even if funding is available to hire, the new resources will not be familiar enough with the corporate context to be of much use. The scope and quality of the project will be dictated by the market pressure that generated the project.

This project will have fixed scope, quality, time, and resources which will lead to a feasible project only once in a million projects.

Those familiar with project management know that you can only fix two of time, resources, and scope; the third parameter must be allowed to vary if you want a feasible project. If you specify the resources and scope then the time must be allowed to vary; if you specify resources and time, then the scope must be allowed to vary. When management fixes all three, the only practical variable left to stretch is time, through resources working overtime.



If an organization is well run and a management-declared project occurs rarely, then the extra stress of working long hours can be absorbed. However, there are senior management teams that declare deadline-driven projects on a regular basis. Such teams are unaware that they are increasing risk by an order of magnitude and are always mystified when they are unable to achieve the results that they want.

Unless the senior IT manager has the courage to point out the problems that such a project will cause, the IT department is doomed to become a pressure cooker. Even if the IT manager has the courage, he needs the discipline to resist the pressure from the other senior managers until a feasible project is defined.

Unfortunately, most senior IT managers have neither the courage of their convictions nor the discipline to weather the storm.



The end result will be a pressure cooker in engineering, where the engineers are asked to perform superhuman efforts for which they will get very little in return.

If prolonged, this kind of environment will lead to your better engineers quitting, which will cause all the rest of your projects to get even later. With the loss of productivity that comes from trying to multi-task, the code that is produced is guaranteed to have more bugs than normal, which pushes the problem into QA.  Even more insanity follows when QA time is reduced or eliminated; after all, who needs QA?

Needless to say, when faced with such time sensitive projects the work done on the requirements is minimal and so the end product (even if delivered) will generally not satisfy the market pressure the project was designed to alleviate. In short, when senior management responds to market pressures by declaring a project and its deadline it is virtually guaranteed to fail.

How should senior managers deal with market pressures?


When faced with market pressure, senior management must be absolutely certain that the pressure is real. There are many situations where managers declare that a project needs to be finished by a given time, put too much pressure on the project, and then the project is late or canceled.

Some managers make everything an emergency in order to get their way, they often say ‘We have no choice‘.

In these situations the manager will push for a ridiculous date that will be missed.  They will then turn around and accept a much later date for the project; if this is the case then the deadline was not real and management was not responding to a real pressure. This happens regularly in environments where politics is more important than getting things done.



If the project must be done (e.g. a compliance project) then it is critical to gather complete and consistent requirements for the project. The project managers then need to estimate those requirements against the deadline and allow the resources to vary.

If the resources exist in the organization then they need to be dedicated to the high priority project and all other work that they were doing needs to be put on hold.

If the required resources exceed the available resources, then the project scope needs to be trimmed if possible and the quality targets of the features relaxed until a feasible project is defined. If trimming the scope and bringing quality to a minimum will still not result in a feasible project then you need to consider what it means for the project to be late (because it WILL be late).



Any project which increases the overtime by the IT department must be done on a sustainable basis. The more time you demand from the IT department the less sustainable it is, i.e. don’t expect your IT department resources to do 80 hour weeks for 6 months.

With high priority projects make sure that the engineers get some kind of bonus on a regular basis (free lunches, dinners) as well as a financial bonus on completion of the project. If no bonuses are available don’t be surprised to see IT resources jump ship at or before the project completion date.


Root cause of ‘Fire-Fighting’ in Software Projects

Quite a few projects descend into 'continual fire-fighting' after the first usable version of the software is produced. Suddenly there is an endless series of meetings involving the business analysts [1], developers, QA, and managers. Even when these meetings are well run, you cut the productivity of your developers, who can barely get a few contiguous hours to write code between the meetings.

Ever wonder what causes this scenario to occur in so many projects? Below we look at the root causes of fire-fighting in a project. We also suggest meeting strategies to maximize productivity and minimize developer disruption.

The first thing to notice is the composition of the meetings when fire-fighting starts. One common denominator is that it is rarely just the developers getting together to solve a problem involving some technical constraint; often these are cross-functional meetings that involve business analysts and QA. In larger organizations, they will involve end-users and customers. Fundamentally, fire-fighting is the result of poor coordinating mechanisms between team members and confused communication.

Common Scenarios that Waste Time

Typically an issue gets raised in a bug triage meeting about some feature that QA claims is improperly implemented. Development will then go on to explain how they implemented it and where they got the specific requirements.  At this point, the business analyst chimes in about what was actually required. There are several basic scenarios that could be happening here:

  1. The requirements are complete and QA is pointing out that development has implemented the feature incorrectly.
  2. The requirements are loose and development has coded the feature correctly, but QA believes that the feature is incorrect.
  3. QA has insufficient requirements to know if the feature is implemented correctly or not.
  4. The requirements are loose and development and QA have different interpretations of what that means.

Scenario 1 is what you would expect to happen in a bug triage session. There is no wasted effort for this case as you would expect to need the business analyst, development, and QA to resolve this issue.

Scenario 2 is what happens when the requirements are not well written. Odds are the developer has made several phone calls and sent emails to the business analyst to resolve the functionality of a particular feature. That information exists purely in the heads of the business analyst and the developer, is buried in their email exchanges, and never makes it back into the requirements. This scenario wastes the developer’s time.

Scenario 3 also happens when the requirements are not well written. Most competent QA personnel know how to write test plans and test cases. When the requirements reach QA early enough, they can generally determine whether they have sufficient information to write the test cases for a given feature, resolve any ambiguity with the business analyst, and make sure that the requirements are updated. When there is insufficient time, the problem surfaces in the bug-triage meeting. This scenario wastes both QA’s and development’s time.

Scenario 4 occurs when you have requirements that can legitimately be implemented in many different ways. It is likely that QA did not get the requirements before coding started; if they had, they could have warned the business analyst to fix the ambiguity. If development has implemented the feature incorrectly, then: 1) the business analyst needs to fix the requirement, 2) development needs to re-code the feature, and 3) QA needs to update their test cases. In this scenario everyone’s time is wasted.

If your situation does not match scenarios 2, 3, or 4, then you are probably in fire-fighting mode because you have requirements that cannot be coded as specified due to unexpected technical constraints. Explaining to the organization why something is technically infeasible can take up quite a few meetings.

As an example of an unexpected technical constraint: at Way Systems (now Verifone) we were building a cell-phone POS system. Signal strength is typically shown as 5 bars; however, due to the 3rd-party libraries we were using, we could only display a number from 0 to 32 for the wireless signal strength. There was no way to overcome the constraint because there were too many framework layers that we did not control. Needless to say, there were quite a few (useless?) meetings while we informed everyone about the issue.

Strategies to Reduce Fire-Fighting

The best way to reduce fire-fighting is simply to have effective requirements when you start a project. Once you are caught in fire-fighting, the cure is the same: fix the requirements and document them in a repository that everyone has access to. By improving the synchronization between the business analysts, development, and QA, your fire-fighting meetings will go away. In particular, all those requirements discussions that the business analysts have had with QA and development need to be written down.

Centralize and Document Requirements

If you are using use cases then the changes need to be made in the use case documents. If you don’t have a centralized repository then you need to create one. You can use a formal collaboration tool such as SharePoint, an informal collaboration tool such as Google Sites, or simply a Wiki to host and document all requirements.

In your document repository you will want to keep all requirements organized by scenario. If you are using use cases or user stories then each of these is a scenario. If you have more traditional requirements then you will need to derive the scenario names from your requirements. Scenarios are of the form ‘verb noun phrase’, e.g. ‘create person’ or ‘notify customer of delivery’.

Once you have a central repository for your requirements, ALL incremental requirements should go on that site, not into cumbersome email chains. If you need to send an email to someone, add the requirement to the central site and email a link to the relevant party; do not allow requirements to become buried in your email server.
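As a minimal sketch of what a scenario-keyed repository entry could look like (the field names and the ‘verb noun phrase’ check below are my assumptions, not a prescribed schema):

# Hypothetical sketch of a scenario-keyed requirements index; field names and
# the naming check are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Requirement:
    scenario: str          # e.g. 'create person', 'notify customer of delivery'
    description: str       # the agreed behaviour, written down rather than emailed
    source: str            # who clarified it (business analyst, customer, ...)
    link: str              # URL of the wiki / SharePoint / Google Sites page
    updated: date = field(default_factory=date.today)

def looks_like_scenario(name: str) -> bool:
    """Rough check that a scenario name reads as 'verb noun phrase'."""
    return len(name.split()) >= 2

repository: dict[str, list[Requirement]] = {}

def add_requirement(req: Requirement) -> str:
    """Store the requirement under its scenario and return the link to email."""
    assert looks_like_scenario(req.scenario), "use a 'verb noun phrase' name"
    repository.setdefault(req.scenario, []).append(req)
    return req.link   # email the link, not the requirement text itself

The tooling does not matter; the discipline does: every clarification lands in one searchable place, and the email only carries a link.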

Run Effective Meetings

Managers are often tempted to call meetings with everyone present ‘just in case‘. There is no doubt that this will solve the occasional problem, but you are more likely to end up with a bunch of developers wearing ‘kill me now‘ expressions from the beginning of the meeting to the end.

Structure the meeting by grouping issues by developer and publish an agenda so that each developer knows the order in which they will be needed. No developer should have to attend a meeting that does not have an agenda! Next, use an IM tool from the conference room to let developers know when they are required (not the first developer, obviously 🙂 ). Issues generally run over time, so don’t call anyone in before they are really needed. Give yourself breathing room by having meetings finish 10 minutes before the next commonly used slot (e.g. 10:20 am or 2:50 pm).


Issues that really need multiple developers present should be deferred to the end of the meeting. When all other items are handled, use IM to call in the developers for those issues. By handling group issues at the end, you are unlikely to keep everyone around for long, since you will probably have to give up the conference room to someone else.
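Here is a small sketch of that agenda-building idea (the issue IDs and developer names are made up): single-developer issues are grouped per developer, and anything needing several developers is pushed to the final slot.

# Hypothetical sketch: build a triage agenda grouped by developer, deferring
# issues that need several developers to the end of the meeting.
from collections import defaultdict

issues = [
    {"id": "BUG-101", "devs": ["alice"]},
    {"id": "BUG-102", "devs": ["bob"]},
    {"id": "BUG-103", "devs": ["alice"]},
    {"id": "BUG-104", "devs": ["alice", "bob"]},   # group issue -> last slot
]

single, group = defaultdict(list), []
for issue in issues:
    (single[issue["devs"][0]] if len(issue["devs"]) == 1 else group).append(issue)

agenda = list(single.items()) + [("everyone", group)]
for slot, (who, bugs) in enumerate(agenda, start=1):
    # In practice, IM each developer shortly before their slot comes up.
    print(f"Slot {slot}: {who} -> {[b['id'] for b in bugs]}")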

Conclusion

Not all fire-fighting is caused by bad requirements, but much of it is. By producing better requirements at the start of a project and putting centralized mechanisms in place for those requirements, you will reduce the fire-fighting later in your project. If you find yourself in fire-fighting mode, you can implement a centralized requirements repository to help fight your way out of the mess.


[1] Business analysts or product managers

The Art of War: How it Applies to Software

Sun Tzu wrote:

War is of vital importance to the state; hence it is a subject of inquiry which can on no account be neglected.

In our modern world, software is of vital importance to your organization. If you can build software consistently and reliably you will gain a tremendous advantage over your competition. If developing software is like waging war then who is the enemy?

The enemy is complexity. Complexity comes from having to make dozens of decisions correctly for your software project to even have a chance of succeeding. For everything that you know at the outset of the project, there will be at least ten things that you don’t know. Complexity is compounded by having to make those decisions on a deadline, especially if that deadline is not well chosen.

So when you consider launching a software project you must understand that you are looking at the tip of the iceberg. Your ability to handle uncertainty will dictate how successful you will be.

Unlike a conventional enemy, complexity does not sleep, has no obvious weaknesses, and cannot be deceived. Once you engage the enemy, your success will depend on correctly estimating the magnitude of the complexity and making excellent decisions.

Sources of Uncertainty

If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.

Attack by Stratagem, p. 18

Uncertainty comes from a few main sources:

  • Team resources unfamiliar with the technologies to be used
  • Requirements that are incomplete or inconsistent
  • Technical requirements that turn out to be infeasible
  • Inability to understand project dependencies
  • Inability to formulate a correct and understandable plan

Unlike fighting a conventional enemy, complexity will find all your weaknesses. The only way to defeat complexity is through understanding the requirements, having well trained teams, building a solid project plan, and executing well.

Unfortunately, the way most organizations develop software resembles the Charge of the Light Brigade (poem).

In that battle, roughly 600 cavalrymen charged 20 battalions of infantry supported by 50 artillery pieces.

Needless to say it was a slaughter.

The Art of War (online)

Several principles outlined by Sun Tzu apply to software development as well:

  • Deception
  • Leading to advantage
  • Energy
  • Using Spies
  • Strengths and weaknesses
  • Winning whole

Deception


Since the enemy is complexity, we cannot deceive it. Rather, the problem is that we allow ourselves to be deceived by complexity. How often do you hear developers say that they can code anything over a weekend and a case of Red Bull? People generally do not launch software projects because they want to fail; but when only 3 out of 10 projects succeed, it shows that people have been deceived about the complexity of building software. This statistic has haunted us for 50 years. Senior management should really pause and make sure all their ducks are in a row before embarking on a software project. Unfortunately, senior management still underestimates complexity and sends teams to face virtually impossible projects.

Leading to advantage

Thus it is that in war the victorious strategist seeks battle after the victory has been won, whereas he who is destined to defeat first fights and afterwards looks for victory in the midst of the fight.

Tactical Dispositions, p. 15

Complexity emerges from the sources of uncertainty mentioned above. Successful organizations plan to remove known uncertainties and have a plan to handle the ones that will emerge.  Getting into a project before understanding the magnitude of the uncertainty is a recipe for failure.

Energy

That the impact of your army may be like a grindstone dashed against an egg—this is effected by the science of weak points and strong

Energy, p. 4

The team’s energy needs to be directed against uncertainty with appropriate force at the appropriate time. When this happens uncertainty will be minimized and the chances for success will increase.

It is imperative to create a table of all risks that might affect your software project. If you aggressively minimize the probability of risks triggering you will reduce uncertainty and increase your chances of success.
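A common way to keep such a table actionable (my assumption of a scoring scheme, not something the original prescribes) is to score each risk by probability and impact and sort by exposure, so the team directs its energy at the largest uncertainties first:

# Hypothetical risk register sketch: rank risks by exposure = probability x impact.
# The entries below are illustrative examples drawn from the sources of uncertainty.
risks = [
    {"risk": "Team unfamiliar with the chosen technologies", "probability": 0.6, "impact": 8},
    {"risk": "Requirements incomplete or inconsistent",      "probability": 0.7, "impact": 9},
    {"risk": "A key technical requirement is infeasible",    "probability": 0.2, "impact": 10},
]

for r in risks:
    r["exposure"] = r["probability"] * r["impact"]

# Highest exposure first: attack those risks with appropriate force, at the right time.
for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{r["exposure"]:4.1f}  {r["risk"]}')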

Software projects succeed when there is a rhythm to the attack on complexity. High intensity problem solving needs to be followed by lower intensity stability building.  The team must move at a sustainable pace or risk burning out.  Software teams do not succeed when they are working 10+ hours a day; they become like dull swords — unable to do anything.

Using Spies

Foreknowledge cannot be elicited from ghosts and spirits; it cannot be inferred from comparison of previous events, or from the calculations of the heavens, but must be obtained from people who have knowledge of the enemy’s situation.

                                                                                                     Using Spies, p. 5–6

The enemy is complexity and is intangible, i.e. invisible, odorless, and untouchable. Your spies are your business analysts, architects, and project managers.

Your business analysts will work with the business to define the scope of the complexity.  Ask for anything you want, but remember that you must commit to building everything you ask for!

Remember that all large complex systems that work are built from small simple systems that work, so aim to build the smallest usable product initially.  Asking for too much and providing insufficient resources and/or time will lead to a failed project.

The architects provide checks and balances on the business analysts to make sure that the project is feasible. The architects will provide key dependency information to the project managers who make sure that a proper execution plan is created and followed.

Each of these spies sees a different aspect of complexity that is not visible to the others. Unless the reports of all three types of spies are combined effectively, you risk not knowing the extent of the software that you are trying to build.  If you go into battle without proper intelligence, you are back to the scenario of the Charge of the Light Brigade.

Strengths and weaknesses

Military tactics are like unto water; for water in its natural course runs away from high places and hastens downwards. So in war, the way is to avoid what is strong and to strike at what is weak.

                                     Weak Points and Strong, p. 29–30

Water exhibits ordered flexibility. It is ordered because it seeks to flow downhill; however, it is flexible and will go around rocks and other obstacles. A software project needs to make continual progress without getting sandbagged with obstacles. Methodologies like RUP or Agile software development can make sure that you exhibit ordered flexibility.

Winning whole

Winning whole in a software project means delivering the software on time and on budget without destroying the health and reputations of your team (including management). Failed projects extend their effects to every member of the team and everyone’s resume.

When you engage in actual fighting, if victory is long in coming, then men’s weapons will grow dull and their ardor will be damped.

                                                               Waging War, p. 3

When organizations bite off more than they can chew they exert tremendous pressures on the team resources to work extended hours to make deadlines that are often unrealistic. In the pressure cooker you can expect key personnel to defect and put you into a worse position. How many times have you found yourself on a Death March?

Conclusion

Both waging war and software development are serious topics that involve important struggles. If software development is a war against ignorance, uncertainty, and complexity then many of the strategies and tactics outlined in The Art of War give us direction on how to execute a successful project. 


Understanding your chances of having a successful software project

We have been building software systems for over 50 years, and yet success rates remain extremely low; see Dan Galorath for some information.  Different reports put success rates at different levels, but successful projects rarely exceed 30–40%:

Report                 | Year | Successful | Challenged | Failure
Standish Chaos Reports | 2009 | 32%        | 44%        | 24%
Sauer & Cuthbertson    | 2003 | 16%        | 74%        | 10%
Tata Consultancy       | 2007 | 38%        | 62%        |

It might seem strange, but we don’t all have the same definition of success for software projects.  Success is when a project delivers the expected benefits within 10% of cost and schedule.  For me, a project is NOT successful when:

  • It does not deliver what was promised
  • It has cost or time overruns of more than 10%
  • It has its scope dramatically reduced so that victory can be claimed
  • It does not have a positive net present value, i.e. it never breaks even
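Those criteria reduce to simple arithmetic. A rough sketch with made-up numbers, using the 10% tolerance from the definition above and a standard net present value calculation:

# Hypothetical sketch of the success test above: cost and schedule within 10%
# of plan, and a positive net present value. All numbers are illustrative.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value of yearly cash flows, with cash_flows[0] at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def successful(planned_cost, actual_cost, planned_weeks, actual_weeks,
               rate, cash_flows, tolerance=0.10) -> bool:
    within_cost     = actual_cost  <= planned_cost  * (1 + tolerance)
    within_schedule = actual_weeks <= planned_weeks * (1 + tolerance)
    breaks_even     = npv(rate, cash_flows) > 0
    return within_cost and within_schedule and breaks_even

# Example: a 15% cost overrun fails the test even though the NPV is positive.
print(successful(planned_cost=500_000, actual_cost=575_000,
                 planned_weeks=40, actual_weeks=42,
                 rate=0.08, cash_flows=[-575_000, 200_000, 250_000, 300_000]))  # False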

Under those conditions I’m guessing that there are even fewer successful projects out there.  Let’s make software success extremely concrete.  Imagine that you are on a street corner watching people cross to the other corner, and that out of every 10 people trying to cross the street only 3 make it; the other 7 get maimed or killed.

How interested would you be in crossing the street?

Sun Tzu wrote:

War is of vital importance to the state; hence it is a subject of inquiry which can on no account be neglected.

In our modern world, software is of vital importance to your organization.   If you can solve your business issues by building software consistently and reliably you will gain a tremendous advantage over your competition.

One misconception is that software projects fail because “I am surrounded by idiots!”  Just because we get frustrated at being unable to get software built does not make this statement true.  In fact, the exact opposite is true: the average IQ of Computer System Analysts is 111.3 [1], and any IQ above 110 is considered to be Superior Intelligence [2].

Don’t get me wrong; you might have a bunch of developers from the shallow end of the gene pool, but that does not explain how thousands of organizations fail to build quality software.  The point is that software does not fail because there are not enough smart people looking at the problem.

There are plenty of consultants, a.k.a. snake oil salesmen, that are willing to sell you “silver bullet” solutions that will solve your every problem.  Have you ever seen any of these really work?  Each of these solutions will generally solve one aspect of your problem and leave you with a larger one to fix later; they will also leave big holes in your budget. Unfortunately, we often succumb to “silver bullet” solutions because they tell us what we want to hear.

To really fix your software development problems requires better understanding of basic principles; after all, there are software projects that succeed out there.  Not surprisingly, the companies that have figured out how to develop software consistently and reliably tend to have the fewest failures.  Learning how to develop software consistently and reliably requires that you learn how the following 5 elements intersect and affect each other:

  1. Requirements
  2. Project Management
  3. Principles
  4. Developers
  5. Executives

Each of these elements will intersect with all of the others.  You will continually stumble through software development until you get the minimum level of execution and synchronicity between these 5 rings.

Organizations that do not understand these 5 rings will create organizational structures and processes that are doomed to fail.  Poor organizational structure and processes will create systemic problems that will lead to the following problems:

  • Constant fire fighting (blog)
  • Inflexible software (blog)
  • Poor architecture (blog)

Problems from poor organizational structure and processes will lead to failed software projects because of the SYSTEM, not the PEOPLE.  However, people always assume that someone is to blame; they rarely look for problems inside the system.  This will lead to severe morale problems and the loss of competent personnel.

 Fixing your software development is a matter of understanding the principles of good organizational and process design.  Once you understand how to balance the 5 elements you will begin to experience success in building software.

 Appendix: Modern IQ Ranges for Various Occupations

According to modern IQ ranges, computer system analysts have one of the highest intelligence quotients of all professions.


[1] Average IQ by occupation (estimated from wordsum scores), January 22, 2011.  Available from http://anepigone.blogspot.ca/2011/01/average-iq-by-occupation.html

[2] What Different IQ Scores Mean, April 12, 2004.   Available from http://wilderdom.com/intelligence/IQWhatScoresMean.html
