What do you do if the customer is not right?

We often hear that "the customer is always right!", but is this really true? Haven’t we all been in situations where the customer is asking for something unreasonable, or is simply dead wrong?

The "customer is always right" strategy reflects the fact that it is roughly five times more expensive to acquire a new customer than to retain an existing one.  So even when the customer is wrong, accommodating their idiosyncrasies means losing a battle to win the profit war, because acquiring customers is so expensive.

Today we take a more nuanced position.

If the customer is unreasonable and unprofitable then it makes no sense to adopt the motto that "the customer is always right".  Unprofitable accounts should only be retained if they are strategic, that is, they will either 1) become profitable, or 2) draw enough profitable accounts into the company to make up for the loss. This strategy is often employed by start-ups to land their first customers.

Recognizing bad customers is usually not difficult. Transactional customers are often bad customers, especially those that want the lowest price and act as though every product is a commodity; they try to play vendors off against each other regardless of their quality requirements.

It is often better to let transactional customers chase the lowest price with your competitors and let the lack of a quality solution come back to haunt them. Reducing quality to meet a customer’s price objectives will only leave you fielding complaints when product and service quality turn out to be substandard.

Transactional buyers get courted when sales executives chase every opportunity hoping for a sale, and that is when you get pulled into pricing concessions by demanding customers. The problem is that demanding transactional buyers won’t just ask for the best price; they will also ask for product changes.

There is no doubt that customer requirements need to be a driver for product management; they are an early indication of changing markets. But accepting every customization request is impossible and would cripple your product and brand.

Constant unreasonable requests leave internal resources believing that sales people have extremely low IQs and morals.

Good sales people understand these principles and don’t chase bad customers.  But, there are not enough good sales people to go around, so virtually every company has a less-than-excellent sales person making trouble for product management and engineering.

To make matters worse, sales people are very good at making a case that all customers are strategic, arguing that a short-term loss will eventually turn into a long-term gain. This behavior is normal and expected because sales is driven by commissions, which are often revenue-based. Hopefully, your sales process is robust enough to catch these attempts before they saddle you with bad customers.

If you find yourself in a position where you have acquired one or more bad customers (you know who they are...) then your best course of action is to find some way to send them to your competitors. This will increase your profitability and reduce the stress of unreasonable requests flooding into product management and engineering.

The customer is not always right, but with due diligence and account reviews you can determine which customers should be retained and which should be let go.

Don’t be afraid to let unprofitable and non-strategic customers go. You will feel less stressed and be better off in the long run.


Do Project Managers need Domain Experience?

Opinions vary on whether a project manager needs to have domain experience.  Certainly project managers that do not have domain experience will be the first to say that domain experience is not necessary as long as they have access to excellent subject matter experts.

I would advocate a more nuanced position; that is, a project manager does not need domain experience IF his subject matter experts understand the risks and dependencies that are inherent to the domain.

Let’s go through a couple of projects that I have personally been involved with where the project manager did not have domain experience.

Telco Project

I am currently involved in a project at a large telecommunications company that involves a LAN/WAN/WiFi upgrade for a large customer.  The project manager does not have domain expertise in networks and is counting on the subject matter experts to provide him with sufficient input to execute the project.

The subject matter experts are so advanced in their knowledge of networks that they no longer understand what beginners (i.e. the project manager) do not know.  They assume that when they point something out to the project manager, he understands what they mean and will take the appropriate actions.

The project manager has continually run into situations where he did not understand the implications of certain risks and dependencies.  This has caused a certain amount of rework and delay.

Fortunately, this is not a project with tremendous amounts of risk or dependencies, so the project will be late but it will succeed.

Mobile Handset Project

In the distant past, I was part of a team that was building a mobile POS terminal that worked over cellular networks (GSM, CDMA).  The project manager in this situation did not have domain experience and was counting on the subject matter experts.  In this case, the subject matter experts were very good at general design, but not experts in building cellular devices.

Because the subject matter experts were not specialists, they knew most of the key principles of designing mobile handsets but did not understand all the nuances of handset design.  Several key issues of practical handset manufacturing were overlooked by the generalists, which created such a severe cost over-run that the start-up went out of business.

Summary

In the first project, the subject matter experts were extremely good; however, the project manager failed to understand the implications of some of their statements, and this introduced large delays into the project.

In the second project, the subject matter experts were generalists and did not understand all the risks and dependencies of the project.  The project manager (and start-up) were doomed to fail because “you don’t know what you don’t know”.

Both these projects show that a project can be delayed or fail because a project manager does not have domain experience.

Conclusion

So if a project does not have many uncertainties and dependencies then it is extremely likely that the project manager does not require domain experience and can rely to some degree on his subject matter experts.

However, if the project has complex uncertainties and/or dependencies then a good project manager without domain experience is likely to find himself in several positions where the consequences of not understanding the uncertainties and dependencies will either introduce serious rework or torpedo the project.


Not using UML on Projects is Fatal

The Unified Modeling Language (UML) was adopted as a standard by the OMG in 1997, almost 20 years ago.  But despite its longevity, I’m continually surprised at how few organizations actually use it.

Code is the ultimate model for software, but it is like the trees of a forest.  You can see a couple of them, but only a few people can see the entire forest by just looking at the code.  For the rest of us, diagrams are the way to see the forest, and UML is the standard for diagrams.

They say, “A picture is worth a thousand words“, and this is true for code; even on a large monitor you can only see so many lines of code.  Every other engineering discipline has diagrams for complex systems, e.g. design diagrams for airplanes, blueprints for buildings.  In fact, the diagrams need to be created and approved  BEFORE the airplane or building is created.

Contrast that with software, where UML diagrams are rarely produced, or if they are produced, they are produced as an afterthought.  The irony is that the people pushing to build the architecture quickly say that there is no time to make diagrams, but they are the first people to complain when the architecture sucks.  UML is key to planning (see Not planning is for losers).

I think this happens because developers, like all people, are focused on what they can see and touch right now.  It is easier to try to code a GUI interaction or tackle database update problems than it is to work at an abstract level through the interactions that are taking place from GUI to database.

Yet this is where all the architecture is.  Good architecture makes all the difference in medium and large systems.  Architecture is the glue that holds the software components in place and defines communication through the structure.  If you don’t plan the layers and modules of the system then you will continually be making compromises later on.

In particular, medium to large projects (>10,000 function points) are at a very high risk of failure if you don’t consider the architectural issues.  Considering that only 3 out of 10 software projects are successful, only a fool would skip planning the architecture (see Failed? You get what you deserve!).

Good diagrams, in particular UML, allow you to abstract away all the low level details of an implementation and let you focus on planning the architecture.  This higher level planning leads to better architecture and therefore better extensibility and maintainability of software.

If you are a good coder then you will make a quantum leap in your ability to tackle large problems by being able to work through abstractions at a higher level.  How often do we find ourselves unable to implement simple features simply because the architecture doesn’t support them?

Well, the architecture doesn’t support them because we spend very little time developing the blueprint for the architecture of the system.

UML diagrams need to be produced at two levels:

  • the analysis or ‘what’ level
  • the design or ‘how’ level

Analysis UML diagrams (class, sequence, collaboration) should be produced early in the project and support all the requirements.  Ideally you use a requirements methodology that allows you to trace easily from the requirements onto the diagrams.

Analysis diagrams do not have implementation classes on them, i.e. no vendor specific classes.  The goal is to identify how the high level concepts (user, warehouse, product, etc) relate to each other.

These analysis-level UML diagrams will help you to identify gaps in the requirements before moving to design.  This way you can send your BAs and product managers back to collect the missing requirements before you get too far down the road.

Once the analysis diagrams validate that the requirements are relatively complete and consistent, you can create design diagrams with the implementation classes.  In general, analysis diagrams map one-to-many to design diagrams.
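To make the analysis/design distinction concrete, here is a minimal sketch in Python (the class and method names are hypothetical, not taken from any real project): the analysis-level classes only capture how the concepts relate, while the design-level class introduces implementation details such as a vendor-specific database client.

```python
from dataclasses import dataclass, field
from typing import List


# Analysis level: pure domain concepts and their relationships (the "what"),
# with no vendor- or framework-specific classes.
@dataclass
class Product:
    sku: str
    name: str


@dataclass
class Warehouse:
    name: str
    products: List[Product] = field(default_factory=list)

    def stock(self, product: Product) -> None:
        """A warehouse holds products."""
        self.products.append(product)


# Design level: the same concept realized with implementation choices (the "how").
class WarehouseRepository:
    """Persists warehouses using a vendor-specific client (hypothetical)."""

    def __init__(self, db_client) -> None:
        self.db_client = db_client  # e.g. a specific database driver

    def save(self, warehouse: Warehouse) -> None:
        self.db_client.insert(
            "warehouses",
            {"name": warehouse.name, "skus": [p.sku for p in warehouse.products]},
        )
```

A single analysis class like Warehouse typically maps to several design classes (the domain object, a repository, perhaps a caching wrapper), which is the one-to-many relationship mentioned above.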

Since you have validated the architecture at the analysis level, you can now do the design level without worrying about compromising the architectural integrity.  Once the design level is complete you can code without compromising the design level.

When this is done well, the analysis UML, design UML, and code are all in sync.  Good software is properly planned and executed from the top down.  It is mentally tougher to create software this way, but the alternative is continuous patches and never-ending bug-fix cycles.

So remember the following example from Covey’s The 7 Habits of Highly Effective People:

You enter a clearing where a man is furiously sawing at a large log, but he is not making any progress.  You notice that the saw is dull and is unable to cut the wood, so you say, “Hey, if you sharpen the saw then you will saw the log faster”.  To which the man replies, “I don’t have time, I’m too busy sawing the log”.

Don’t be the guy sawing with a dull saw.

UML is the tool that sharpens the saw. It does take time to learn and apply, but you will save yourself much more time and be much more successful.



Pair Programming for Team Building

Extreme programming (XP) introduced most people to pair programming.

The theory was that the sooner code is reviewed, the more effective the review — so how much more effective can the review be if you do it right away?

Pair programming increases productivity by 3% and quality by 5%

The reason it isn’t a better practice is that two people are being used to produce a single result and so it is not very efficient.  For more information about the marginal productivity see Capers Jones1.

However, as a team building tool, pair programming can be extremely effective when used in specific situations where high productivity is maintained:

  • Training new team members in coding conventions
  • Sharing individual productivity techniques
  • Working through complex sections of code

New Team Members

The first issue is self-explanatory: pair programming allows you to explain your coding conventions while working on actual projects.

It also gives you a fairly good glimpse into how that team member will work with the group.

The key here is that the new member should pair program with different people every day until they have worked with the entire team.  This will speed up the integration of new members and get everyone familiar with each other.

Sharing Productivity Practices

One of the key benefits of pair programming is that it is an ideal time to share productivity practices.  Surprisingly, it isn’t just the less experienced programmers that learn from the more experienced ones.  Often, more experienced programmers get surprised by newer programmers that point out habits that they are not even aware of.

Working with newer programmers can expose you to information on IDEs and new productivity tools that you are not aware of.  As much as we do keep up, there is continually new stuff coming out and the newer programmers are aware of it.  In addition, there are sub-optimal habits that we all pick up and no longer notice because we do them all the time.

Working Through Complex Code

Once you have planned a complex section of code, it can be very helpful to build that section of code as a pair.

For information on planning complex code see:

Planning is half the work; making sure that you implement that plan can often require two people so that all loose ends (exceptions, boundary cases, etc.) are taken care of.  In particular, these are the sections of code that you want two pairs of eyes on, as you are much more likely to recognize a missed alternative or work through weird conditions.

Summary

Used appropriately, pair programming can be a great tool for integrating new members into a team, sharing productivity techniques, and reducing defects while improving the quality of difficult sections of code.

References

  1. Jones, Capers and Bonsignour, Olivier. The Economics of Software Quality. Addison Wesley. 2011.
  2. Jones, Capers. Scoring and Evaluating Software Methods, Practices, and Results. 2008.

Team Conflict is for Losers

It is a guarantee that there is someone on your development team that you don’t like, someone with behaviors or habits that you find objectionable.

But as irritating as you find your co-workers, odds are:

You do something that they find annoying…

Annoyances and poor communication can lead to conflicts that range from avoidance to all-out war where people get drawn into taking sides.  But consider the cost of team conflict:

Issue                      Productivity    Software Quality
Internal team conflict     -10%            -15%
Management conflict        -14%            -19%

The table above only shows the average cost of conflict; some of us have been in situations that got much, much worse.

Software development is not a popularity contest; you don’t have to like everyone that you work with.  However, if you allow your feelings of annoyance to escalate into conflict then there is a real cost to your project and ultimately to your stress levels.

All conflicts start with disagreements.  The Communication Catalyst2 talks about the following cycle:

  • Disagree
  • Defend
  • Destroy

When you disagree with your coworkers, they don’t feel listened to.  They will then defend their position by digging in their heels; then you dig in your heels, and the road to destruction starts. If there are any annoying habits present then the conflict will escalate quickly.

If things get out of hand then people start taking sides and productivity takes a major hit. In the worst conflicts this leads to loss of key personnel, which has been measured to be:

Loss of key personnel: productivity -16%, quality -22%

Losing key personnel, who have comprehensive knowledge of business rules and organizational practices tied up in their heads, often causes projects to falter and come to a standstill.

You may feel justified in starting a conflict or escalating one; however, as clever as you think you are, conflict hurts everyone — yourself included.  Just remember:

It is virtually impossible to start a conflict that doesn’t boomerang back and bite you in the @ss!

4 Ways to Avoid or Reduce Conflicts

Things to consider to avoid conflict:

  • Don’t disagree first, signal that the other person has been heard
    • You will rarely agree with everything that someone else says, but start by agreeing with the part that you do agree with.1 This will at least signal that you have heard them and reduce their anxiety that you are not listening to them.
    • Even mechanically echoing everything that they just said is a way to signal that you heard what was said.
    • Once this is done, then talk about what you don’t agree with.
  • Don’t interrupt people.
    • When you are excited and thoughts are springing to mind then you may be tempted to do all the talking and stop listening; get this under control, take a breath, and let others talk.
    • People generally consider it rude when you interrupt and will assume arrogance on your part.  If you are not trying to be arrogant and someone tells you this then wake up — you need to listen.
  • Don’t be frustrated when people don’t understand you
    • If you really know something that others don’t then simply restating your point of view will not improve their understanding.
    • If your friend is lost in a new shopping mall then describing your own location will not help him find you.  You need to find out where he is and walk him through the steps of getting to your location.
    • Be open to the idea that there might be something that you are not seeing.  With additional information you might revise your point of view.
  • Don’t automatically assume that someone is insulting you
    • In virtually every case where someone feels insulted, it is a knee-jerk reaction to a misunderstanding where no insult was intended.
    • Jumping to conclusions is not good under any circumstance, but is lethal in social interactions.

Managers should be on the lookout for the signs of conflict and clear them up while they are still small.  Most conflicts arise from simple misunderstandings.

You will notice that most organizations promote people based on their ability to work with others and resolve conflicts rather than on raw competence.

Learning how to resolve conflicts is often your ticket to an overdue promotion…

References

  1. Carnegie, Dale. How to Win Friends and Influence People. 1998.
  2. Connolly, Mickey and Rianoshek, Richard. The Communication Catalyst. 2002.
  3. Jones, Capers and Bonsignour, Olivier. The Economics of Software Quality. Addison Wesley. 2011.
  4. Kahneman, Daniel. Thinking, Fast and Slow. 2011.

Side Note

My best friend also works in the tech sector, and despite being friends for almost 25 years we have very few beliefs or habits in common.  There are subjects that we agree on, but then we don’t agree on how they should be handled.  We virtually never take the same action under the same conditions.

Even though we are very different people this has never stood in the way of us being able to do things together.  If you look around you will see radically different people that manage to cooperate and even thrive.

The key to all working relationships, especially when the other person is very different from you, is respect.


Schedule Risk is a Red Herring!!!

We often hear the term schedule risk; however, it is generally a red herring. Stating that the schedule might stretch is about as useful as saying that eating can cause you to gain weight.

You may be correct but it gives you no leverage to solve the problem

Schedules slip as a result of problems; if you want to solve a problem then you must identify its root cause.  Any problem will result in a task taking longer than expected and potentially affecting the schedule.

Risk and uncertainty are two sides of the same coin.  Without uncertainty there is no risk.

No Uncertainty = No Risk

A risk is a contingent liability: an uncertain future event that has consequences.

The key words are future and uncertain.

If 6 months of work remain and the deadline is in 2 months then there is no schedule risk, because there is no uncertainty: the project is late.

Being 6 months of work away means that the earliest the critical path items can finish is in 6 months. Just because the project has not yet hit the deadline, and senior staff don’t understand that the project is late, does not entitle the team to talk as if the outcome were uncertain.

It is disingenuous and cowardly to suggest to senior staff that a deadline is possible when you know that it is not.

When the team knows that they are late, they often talk about tasks as being risky simply because they hope that miracles can happen1.

Hope is not a strategy

In fact, Kahneman points out that all of us are wired to bet (pray?) on unlikely outcomes when faced with certain losses, i.e. we double down when faced with a loss.  Team members know about the negative consequences of failure and make projects seem possible simply because they want to delay the pain. Even worse, as the situation gets more desperate, people will take bigger and bigger risks.

Using the term schedule risk when a project is not feasible essentially robs the managers of the chance to make a course correction until the point where very little can be done.

At a minimum, money can be saved by winding the project down. Few people have the intestinal fortitude to speak out when they know that a project is late.  Unfortunately, cowardice is very common.

If you take a paycheck then you have an obligation to your organization to tell them when a project is late.

So it makes no sense to talk about schedule risk when:

  • The project is late and you know it
  • The project is not late but you see schedule items slipping

In the latter case you are much better off talking about why things are slipping rather than using the term schedule risk.  Talking about the root cause of the slippage, especially early in a project, can lead to you either solving the problem or adjusting the project deadline.  Either way, you will have a greater chance of ending up with a feasible project.

Related Articles

References


Project failed? You get what you deserve!

3 of 10 software projects fail, 3 succeed, and 4 are ‘challenged’1.  When projects fail because you cut corners and exceed your capabilities, you get what you deserve.  You don’t deserve pity when you do it to yourself.

We estimate that between $3 trillion and $6 trillion is wasted every year in IT.  Most of this is wasted by organizations that are unskilled and unaware that they are ignorant.

Warning: this article is long!

However, there are organizations that succeed regularly because they understand development, implement best practices, and avoid worst practices (see Understanding Your Chances).

In fact, McKinsey and Company in 2012 stated:

A study of 5,400 large-scale IT projects finds that the well-known problems with IT project management are persisting. Among the key findings quoted from the report:

    • 17 percent of large IT projects go so badly that they can threaten the very existence of the company
    • On average, large IT projects run 45 percent over budget and 7 percent over time, while delivering 56 percent less value than predicted

Projects fail consistently because organizations choose bad practices, avoid best practices, and then wonder why success is elusive (see Stop It! No… really stop it. to understand the 5 most common worst practices).

What is amazing is that failures do not prompt the incompetent to learn why they failed.

Even worse, after the post-fail finger-pointing ceremony, people just dust themselves off and rinse and repeat.

The reality is that we have 60 years of experience in building software systems.  Pioneers like Watts Humphrey, M. E. Fagan, Capers Jones, Tom DeMarco, Ed Yourdon, and institutions like the Software Engineering Institute (SEI) have demonstrated that software complexity can be tamed and that projects can be successful2.

The worst developers are not even aware that there is clear evidence about what works or what doesn’t in software projects.  Of course, let’s not let the evidence get in the way of their opinions.

Ingredients of a Successful Project

Successful software projects generally have all the following characteristics:

  • Proper business case justification and good capital budgeting
  • Very good core requirements for primary functionality
  • Effective sizing techniques used before executing the project
  • Project management appropriate to the size of the project and to the philosophy of the organization
  • Properly trained personnel
  • Focus on pre-test defect removal

Every missing characteristic reduces your chance of success by an order of magnitude.  If you know that one or more of these characteristics are missing then you get what you deserve!

Missing some of these elements doesn’t guarantee failure, but it severely decreases your chance at success.

Let’s go through these elements in order.

This article is very long, so this is a good place to bail if you don’t have time.

Proper Business Case

This is the step that many failed projects skip over: the hard work of determining whether a project is viable or not.

Organizations take the Field of Dreams approach, i.e. “If you build it, they will come...” and skip this step due to ignorance, often resulting from executives who do not understand software (see No Business Case == Project Failure). These are executives that do not have experience with software projects and assume that their force of personality can will software projects to success.

Some organizations claim to build business cases, but these documents are worthless.  I even know of public companies that write the business case AFTER the project has started, simply to satisfy Sarbanes-Oxley requirements.

A proper business case attempts to quantify the requirements and technical uncertainty of a software project.  It does due diligence into what problem is being solved and who the problem is being solved for.  It at least verifies, with a little effort, that the cash flows resulting from the project will be NPV positive.
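As a rough illustration of that NPV check (the discount rate and cash flows below are invented for the example), the arithmetic is just the discounted sum of the project's expected cash flows:

```python
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value: cash_flows[0] is the up-front cost (negative),
    later entries are the expected yearly net cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))


# Hypothetical project: $500K up front, $150K/year of benefit for 5 years, 10% discount rate.
flows = [-500_000, 150_000, 150_000, 150_000, 150_000, 150_000]
print(round(npv(0.10, flows)))  # roughly 68,618 > 0, so the project clears the NPV hurdle
```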

Business cases are generally difficult to write because they involve getting partial information.  This can be very difficult if your analysts are substandard (see When BA means B∪ll$#!t Artist).

Very Good Core Requirements

Once a project has a proper business case then you need to capture the skeleton of the core requirements.  This is a phase where you determine the primary actors of the system and work out major use case names.

Why expand requirements before starting the project?

Executives have a business to run and need to know when software will be available.  If you don’t know how big your project is then you can’t create an effective project plan.  You don’t want to capture all the requirements up front, so good core requirements (i.e. a good skeleton) help you to size the project without having to gather the detailed requirements.

This is why executives like the waterfall methodology. On the surface, this methodology seems to have a predictable timeline — which is what they need to synchronize other parts of the business.  The problem is that the waterfall methodology DOES NOT WORK (see last page).

The only way for managers to get a viable estimate of a software project is to expand the business case into requirements that allow you to determine the project’s size before you start it.

This process is just like determining the cost of a house from its square footage and quality, i.e. 2,500 sq. ft. at normal quality (~$200 per sq. ft.) would be approximately $500K, even without detailed blueprints.  Very accurate estimates can be derived by sizing a project using function points.

Effective Estimation

Now that you have core requirements, you can determine the size of the project and get an approximate cost.  You are fooling yourself if you think that you can size large projects without formal estimates (see Who needs formal measurement?).

Just like you can determine the approximate cost of a house if you know the square footage and the quality, you can estimate a software project pretty accurately if you know how many function points it contains (i.e. its square footage) and the quality requirements of the project3.

There is a great deal of literature available on how to effectively size projects, so do yourself a favor and look it up.  N.B. There are quite a few reliable tools for producing an accurate estimate of a software project, e.g. COCOMO II, SLIM, SEER-SEM.  See also Namcook Analytics.
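As a back-of-envelope sketch only (the heuristics below are rough rules of thumb popularized by Capers Jones, not a replacement for a calibrated tool such as COCOMO II or SEER-SEM), a function point count lets you rough out a schedule before the detailed requirements exist:

```python
def rough_estimate(function_points: int) -> dict:
    """Very rough sizing heuristics; real projects should use calibrated
    estimation tools (COCOMO II, SLIM, SEER-SEM) instead."""
    schedule_months = function_points ** 0.40   # approximate calendar months
    staff = function_points / 150               # approximate full-time staff
    effort_person_months = schedule_months * staff
    return {
        "schedule_months": round(schedule_months, 1),
        "staff": round(staff, 1),
        "effort_person_months": round(effort_person_months, 1),
    }


print(rough_estimate(1_000))  # ~15.8 months, ~6.7 people, ~106 person-months
```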

If you don’t size a project then your project plan predicts nothing

Of course, you could always try a management declared deadline which is guaranteed to fail (see Why Senior Management Declared Deadlines lead to Disaster)

Appropriate Project Management

You must select a project methodology appropriate to the organization.  Many developers are trying to push their organizations towards Agile software development, although many developers are actually quite clueless about what Agile development is.

Agile software development needs buy-in from the top of the organization.  Agile software development will probably do very little for you if you are not doing business cases and gathering core requirements before a project.

Discover how developers who claim that they are ‘Agile’ have fooled themselves into thinking that they are doing Agile development. (see Does Agile hide Development Sins?)

Trained Personnel

Management often confuses seniority with competence.  After all, if someone has been with the company for 10 years they must be competent, no?  The reality is that most people with 10 years of experience only have 1 year repeated 10 times.  They are no more skilled than someone with 1 year under their belt.

Learn why in general it may be useful to get rid of older developers that are not productive (see No Experience Required!).  Also when it comes to development, you are definitely better off with people that do not rush to write code (see Productive Developers are Smart and Lazy)

Focus on Pre-Test Defect Removal

I’ve written extensively on pre-test defect removal, see Are Debuggers Crutches? for more information.

Conclusion

It is likely that you already know the ingredients that make for a successful project; you’ve just assumed that you can’t fail even when some of these characteristics are missing.

Quite often projects fail under the leadership of confident people who are incompetent and don’t even know that they are incompetent.  If you want to know why intelligent people often do unintelligent things, see Are You Surrounded by Idiots?  Unfortunately, You Might be the Idiot.

There probably are projects out there that fail because of circumstances beyond their control (e.g. natural disasters), but in most failed projects you get what you deserve!



Fallacy of the Waterfall Methodology

The waterfall methodology is widely attributed to Winston W. Royce.

The irony is that the paper he published actually concludes that:

In my experience, however, the simpler method (i.e. waterfall) has never worked on large software development efforts and the costs to recover far exceeded those required to finance the five-step process listed.

That is, Mr. Royce said that the waterfall process would never work.  So much for the geniuses that only read the first 2 pages of the paper, proceeded to create the “waterfall method”, and cost organizations trillions of dollars in failed projects each year.

The waterfall methodology was pushed down our throats by ignorant managers who saw that the waterfall seemed to mimic factory processes.  Because this was the process they understood, they incorrectly assumed that this was the right way to develop software.

If any of these guys had bothered to read more than 2 pages from the Royce paper they would have realized that they were making a colossal blunder.



1 Challenged means that the project goes significantly over time or budget. In my estimation, ‘challenged’ simply means politically declaring victory on a project that has really failed.

2 This applies to projects that are 10,000 function points or less. We still have problems with projects that are larger than this, but the vast majority of projects are under this threshold.

3 Quality requirements depend on how reliable the project must be. If the risk is that someone might die because of a software malfunction, the quality, and therefore the cost, must be much higher than if software failures only constitute an annoyance.


Don’t be a Slave to Your Tools

Developers attach quickly to tools because they are concrete and have well-defined behavior.  It is easier to learn a tool than to learn good practices or methodology.

Tools only assist in solving problems; they can’t solve the problem by themselves. A developer who understands the problem can use tools to increase productivity and quality.

Poor developers don’t invest the time or effort to understand how to code properly and avoid defects.  They spend their time learning how to use tools without understanding the purpose of the tool or how to use it effectively.

To some degree, this is partially the fault of the tool vendors.  The tool vendors perceive an opportunity to make $$$$$ by providing support for common problems, such as:

  • defect trackers to help you manage defects
  • version control systems to manage source code changes
  • tools to support Agile development (Version One, JIRA)
  • debuggers to help you find defects

There are many tools out there, but let’s just go through this list and point out where developers and organizations get challenged.  Note, all statistics below are derived from over 15,000 projects over 40 years.1

Defect Trackers

Believe it or not, some organizations still don’t have defect tracking software. I’ve run into a couple of these companies and you would not believe why…

Inadequate defect tracking methods: productivity -15%, quality -21%

So we are pretty much all in agreement that we need defect tracking; we all know that managing more than a handful of defects is impossible without some kind of system.

Automated defect tracking tools: productivity +18%, quality +26%

The problem is that developers fight over which defect tracking system is best. The real problem is that almost every defect tracking system is poorly set up, leading to poor results. Virtually every defect tracking system, when configured properly, will yield tremendous benefits. The most common pitfalls are:

  • Introducing irrelevant attributes into the defect lifecycle status, i.e. creation of statuses like deferred, won’t fix, or functions as designed
  • Not being able to figure out if something is fixed or not
  • Not understanding who is responsible for addressing a defect

The tool vendors are happy to continue to provide new versions of defect trackers. However, using a defect tracker effectively has more to do with how the tool is used rather than which tool is selected.

One of the most fundamental issues that organizations wrestle with is: what is a defect?  A defect only exists if the code does not behave according to the specifications. But what if there are no specifications, or the specifications are bad?  See It’s not a bug, it’s… for more information.

Smart organizations understand that the way in which the defect tracker is used will make the biggest difference.  Discover how to get more out of your defect tracking system in Bug Tracker Hell and How to Get Out.

Another common problem is that organizations try to manage enhancements and requirements in the defect tracking system.  After all whether it is a requirement or a defect it will lead to a code change, so why not put all the information into the defect tracker?  Learn why managing requirements and enhancements in the defect tracking system is foolish in Don’t manage enhancements in the bug tracker.

Version Control Systems

Like defect tracking systems, most developers have learned that version control is a necessary hygiene procedure.  If you don’t have one then you are likely to catch a pretty serious disease (and at the least convenient time).

Inadequate change control: productivity -11%, quality -16%

Virtually all developers dislike version control systems and are quite vocal about what they can’t do with their version control system.  If you are the unfortunate person who made the final decision on which version control system is used, just understand that there are hordes of developers out there cursing you behind your back.

Version control is simply chapter 1 of the story.  Understanding how to chunk code effectively, integrating with continuous build technology, and making sure that the defects in the defect tracker refer to the correct version are just as important as the choice of version control system.

Tools to support Agile

Sorry, Version One and JIRA: the simple truth is that using an Agile tool does not make you agile (see this).

These tools are most effective when you actually understand Agile development. Enough said.

Debuggers

I have written extensively about why debuggers are not the best tools to track down defects.  So I’ll try a different approach here.

One of the most enduring sets of ratios in software engineering has been 1:10:100.  That is, if the cost of tracking down a defect pre-test (i.e. before QA) is 1, then it will cost 10x if the defect is found by QA, and 100x if the defect is discovered in deployment by your customers.

Most debuggers are invoked when the cost function is in the 10x or 100x part of the process.  As stated before, it is not that I do not believe in debuggers — I simply believe in using pre-test defect removal strategies because they cost less and lead to higher code quality.

Pre-test defect removal strategies include the following (a small sketch of two of them follows the list):

  • Planning code, i.e. PSP
  • Test driven development, TDD
  • Design by Contract (DbC)
  • Code inspections
  • Pair programming for complex sections of code
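As a small illustration of two of these practices (the function, its contract assertions, and the tests below are hypothetical, written only to show the shape of test-first development combined with Design by Contract):

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Design by Contract style: make the function's assumptions explicit."""
    assert price >= 0, "precondition: price must be non-negative"
    assert 0 <= percent <= 100, "precondition: percent must be between 0 and 100"
    result = price * (1 - percent / 100)
    assert 0 <= result <= price, "postcondition: a discount never raises the price"
    return result


class ApplyDiscountTest(unittest.TestCase):
    """TDD style: these tests are written before (or alongside) the implementation."""

    def test_normal_discount(self):
        self.assertAlmostEqual(apply_discount(100.0, 25), 75.0)

    def test_boundary_cases(self):
        self.assertAlmostEqual(apply_discount(100.0, 0), 100.0)
        self.assertAlmostEqual(apply_discount(100.0, 100), 0.0)

    def test_rejects_invalid_input(self):
        with self.assertRaises(AssertionError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main()
```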

You can find more information about this in:

Seldom Used Tools

Tools that can make a big difference, but that many developers don’t use:

Automated static analysis: productivity +21%, quality +31%

Automated unit testing: productivity +17%, quality +24%

Automated unit testing generally involves using test-driven development (TDD) or data-driven development together with continuous build technology.

Automated sizing in function points: productivity +17%, quality +24%

Automated quality and risk prediction: productivity +16%, quality +23%

Automated test coverage analysis: productivity +15%, quality +21%

Automated deployment support: productivity +15%, quality +20%

Automated cyclomatic complexity computation: productivity +15%, quality +20%

Important Techniques with No Tools

There are a number of techniques available in software development that tool vendors have not found a way to monetize. These techniques tend to be overlooked by most developers, even though they can make a huge difference in productivity and quality.

The Personal Software Process and Team Software Process were developed by Watts Humphrey, one of the pioneers of building quality software.

Personal software process: productivity +21%, quality +31%2

Team software process: productivity +21%, quality +31%3

The importance of inspections is covered in:

Code inspections: productivity +21%, quality +31%4

Requirement inspections: productivity +18%, quality +27%4

Formal test plans: productivity +17%, quality +24%

Function point analysis (IFPUG): productivity +16%, quality +22%

Conclusion

There is definitely a large set of developers that assume that using a tool makes them competent.

The reality is that learning a tool without learning the principles that underlie the problem you are solving is like assuming you can beat Michael Jordan at basketball just because you have great running shoes.

Learning tools is not a substitute for learning how to do something competently. Competent developers are continually learning about techniques that lead to higher productivity and quality, whether or not that technique is supported by a tool.

References


What the Heck are Non-Functional Requirements?

What the Heck are Non-Functional Requirements?  Simply put, if functional requirements create code that will address the needs of the end-users (customers), then non-functional requirements address the needs of the people who install, operate, and configure the code.

Those people are the operations personnel and help desk personnel in whatever organization that uses your software.  Every developer needs to be aware of what those non-functional requirements are and why operations personnel and help desk personnel are customers that are just as important as the end-users.

Functional Requirements

Functional requirements are baked into the code that developers deliver (interpreted or compiled).   Events from input devices (network, keyboard, devices) trigger functions that convert input into output — all functions have the form: input → function → output.

This is true whether you use an object-oriented language or not.  Non-functional requirements involve everything that surrounds a functional code unit.  Non-functional requirements concern things that involve time, memory, access, and location:

  • Performance
  • Availability
  • Capacity
  • Continuity
  • Security

Non-functional requirements are slightly different between desktop applications and services; this article is focused on non-functional requirements for services.

If you have any knowledge of ITIL you will recognize that the last 4 items deal with the warranty of a service.  In fact, the functional requirements involve the utility of a service, while the non-functional requirements involve the warranty of a service.

Availability

Availability is about making sure that a service is available when it is supposed to be available. Availability is about a Configuration Item (CI) in the environment of the operations center that specifies how the code is accessed.  Availability is decided independently of the code and is at best part of the Service Design Package (SDP) that is delivered to the operations department, at worst it is simply code dumped on the operations personnel.

Developers need to be aware of the difficulty of creating the CI for the operations personnel.  If a CI is created manually then there will always be a potential for error when the service is installed or updated.  The requirement to create a CI is a non-functional requirement, and the ability to minimize errors is another non-functional requirement.

Developers need to be aware of single points of failure (i.e. services hard-coded to a specific IP), which cause fits in operations centers that are not running virtual machines (VMs) with virtual IPs.  The requirement to create code that is not reliant on static IPs or specific machines is a non-functional requirement.  Availability is simplified in operations if the code is resilient enough to be easily moved (or replicated) among servers.
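For example, a minimal sketch of the idea (the environment variable names here are invented): reading the bind address and dependencies from the environment, or from the CI, instead of hard-coding an IP lets operations move or replicate the service without a code change.

```python
import os


def load_service_config() -> dict:
    """Read deployment-specific settings from the environment (or the CI)
    rather than hard-coding them, so the service can move between servers."""
    return {
        # Hypothetical variable names; the defaults are only for local development.
        "bind_host": os.environ.get("SERVICE_BIND_HOST", "0.0.0.0"),
        "bind_port": int(os.environ.get("SERVICE_PORT", "8080")),
        "db_url": os.environ.get("DATABASE_URL", "postgresql://localhost/dev"),
    }


config = load_service_config()
print(f"listening on {config['bind_host']}:{config['bind_port']}")
```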

Availability non-functional requirements include:

  • Ability to easily make the CI
  • Automatic installation of CI or mechanisms
  • Ability to detect and prevent manual errors for a CI
  • Ability to easily move code between servers

Capacity

Capacity is about delivering enough functionality when required.  If you ask a web service to supply 1,000 requests a second when that server is only capable of 100 requests a second then some requests will get dropped.  This may look like an availability issue, but it is caused because you can’t handle the capacity requested.

Internet services almost always can’t provide enough capacity with a single machine and operations personnel need to be able to run multiple servers with the same software to meet capacity requirements.  The ability to run multiple servers without conflicts is a non-functional requirement. The ability to take a failing node and restart it on another machine or VM is a non-functional requirement.

Capacity non-functional requirements include:

  • Ability to run multiple instances of code easily
  • Ability to easily move a running code instance to another server

Continuity

Continuity involves being robust against major interruptions to a service; these include power outages, floods or fires in an operations center, or any other disaster that can disrupt the network or physical machines.

Where availability and capacity often involve redundancy inside a single operations center, continuity involves geographic and network redundancy.  Continuity at best involves having multiple servers that can work in geographically distributed operations centers.  At worst, you need a master-slave fail-over model with the ability to journal transactions and eventually bring the master back up.

Security

Security non-functional requirements concern who has access to which functions and protecting the integrity of data from corruption.

Where access is concerned, how difficult will it be for operations personnel or help desks to set up security for users?  Developers build different levels of access into their applications without considering how difficult it will be for a 3rd party (help desk or operations) to set up end users.  The ease of setting up security is a non-functional requirement.

Data integrity is another non-functional requirement.  Developers need to consider how their applications will behave if the program encounters corrupted data due to machine or network failures.  This is less of an issue in environments using RAID or redundant databases.

What Happens When You Forget Non-Functional Requirements

Commonly, start-ups are so busy setting up their services that they put non-functional requirements on the back burner.  The problem is that some non-functional requirements need to be designed into the architecture when the software is created.

For example, it is easy to be fooled into building software that is tied to a single machine; however, this will not scale in operations and will cause problems later on.  One of the start-ups I was with built a server for processing credit/debit card transactions without considering the non-functional requirements (capacity, continuity).  It cost more to add the non-functional requirements later than it had cost to develop the software!

Every non-functional requirement that is not thought through at the inception of a project will often represent significant work to add later on.  Every such retrofit is a zero function point project that comes at a decidedly non-zero cost!

Generally, availability, capacity, and continuity are not a problem for services developed with cloud computing in mind.  However, there are thousands of legacy services that were developed before cloud computing was even possible.

If you are developing a new service then make sure it is cloud enabled!

Operations People are People Too

Make no mistake, operations and help desk personnel are fairly resourceful and have learned how to manage software where non-functional requirements are not handled by the code. Hardware and OS solutions exist for making up for poorly written software that assumes single machines or does not take into account the environment that the code is running in, but that can come at a fairly steep cost in infrastructure.

The world has moved to services, and it is no longer possible for developers to ignore the non-functional requirements of the code that they are developing.  Developers who think through the non-functional requirements can dramatically reduce bottom-line costs and improve the quality of the service being delivered.

The people who run operations centers and help desks are customers that are only slightly less important than the end-users.  Early consideration of the non-functional requirements makes their lives easier and makes it much easier to sell your software and services.

It is no longer possible for competent developers to be unaware of non-functional requirements.

Other articles:

  • No Experience Necessary
    • Counter-intuitive evidence why years of experience does not make developers more productive
  • Shift Happens
    • Why scope shift on development projects is inevitable and why not capturing requirements at the start of a project can doom it to failure.
  • Inspections are not Optional
    • Software inspections are intensive but evidence shows that for each hour of inspection you can reduce QA by 4 hours!

Enhancements don’t belong in the bug tracker

As development progresses we inevitably run into functionality gaps that end up being deemed enhancements.

These issues often get captured by QA in the bug tracker and assigned to a developer.

Enhancements should not be managed from the bug tracker

The life-cycle of a defect and the life-cycle of an enhancement are two entirely different things.  A defect is a difference between a stated requirement and the code. If there is no documentation, there is no code defect (see It’s not a bug, it’s…) — in fact, most enhancements will eventually be coded by some developer; they just should not be managed from the bug tracker.

Defect Life-cycle

The defect life-cycle is well known:

  • Defect is identified as a departure from the requirements
  • Defect is assigned to a developer
  • Defect is corrected
  • Correction is verified
    • If not corrected re-open and re-assign to developer
  • The defect is closed

This is the incorrect way to manage enhancements.  When a functionality gap is identified by QA and it is not covered by the requirements, then we have an issue.  It is rarely the case that the issue can be resolved by the developer.

Enhancement Life-cycle

If enhancements are assigned to a developer then they are likely to try to resolve the issue. The problem is that “enhancements” determined at the QA level may be phantom problems caused by either:

  1. Insufficient requirements
  2. Correct requirements but incorrect test plans

Enhancements may or may not become code changes.  Even when enhancements turn into code change requests they will generally not be implemented as the developer or QA think they should be implemented.

Enhancements are really requirement defects. Enhancements should be logged as such in the bug tracker and assigned to the person in charge of requirements (business analyst or product manager).  Those individuals should be responsible for tracking down how these issues should be handled.

If the requirements are correct and the test plans are defective then it should be logged as a test defect.  This is tricky because QA often controls the bug tracker and will not log errors that they have made.

At a minimum, implementing requirement and test defect categories can do several positive things for you:

  1. It removes the responsibility to find a solution from development.
  2. It makes it clear how many defects are in the requirements or test plans.
  3. It reduces stress; no developer wants to be blamed for an issue that is not his.
  4. Many enhancements call for updated project plans and pushing back the deadline.

Put Responsibility Where it Belongs

The creation of requirement and test defects in the bug tracker goes a long way to cleaning it up.  In fact, requirement and test defects represent about 25% of the defects in most systems (see Bug Tracker Hell and How to Get Out!).  The percentages break down as follows:

  • Requirements defects: 9.58%
  • Testing defects: 15.42%

The creation of requirement and test defects in the bug tracker alleviates pressure on the engineering department and redirects it to either the product manager or QA. Eventually enough data will accumulate in the bug tracker to get management’s attention.

At a minimum, these categories should help reduce the amount of fire-fighting in late projects (see Root cause of ‘Fire-Fighting’ in Software Projects).
