User stories or use cases?

Alistair Cockburn says, “A user story is to a use case as a gazelle is to a gazebo”.

If you are familiar with both techniques then this makes sense. If you are not, suffice it to say that you can put a gazelle in a gazebo but not vice versa.

A user story is of the form As a <type of user>, I want <some goal> so that <some reason> (see here).  

A use case for withdraw money has many more components to it, and a skeleton of such a use case can be found here. I develop this skeleton use case in a four-part tutorial:

  • Part 1: Use case as a dialog
  • Part 2: What is an actor?
  • Part 3: Adding interface details
  • Part 4: Adding application context

The above user story is only a part of the entire use case.  For an ATM, a user story might be As a user, I want to withdraw money so that I can have physical cash.  This user story is only the summary of the withdraw money use case without the details.
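
For comparison, here is a minimal sketch of a Cockburn-style use case skeleton for withdraw money. The field names follow his template; the step wording is illustrative and not taken from the tutorial above:

    Use Case:          Withdraw Money
    Primary Actor:     Bank customer
    Scope:             ATM
    Level:             User goal
    Precondition:      The customer has an active account and a bank card
    Success Guarantee: Cash is dispensed and the account is debited
    Main Success Scenario:
      1. Customer inserts card and enters PIN
      2. System validates the PIN
      3. Customer selects withdrawal and enters an amount
      4. System debits the account and dispenses the cash
    Extensions:
      2a. Invalid PIN: system asks for the PIN again (up to 3 attempts)
      4a. Insufficient funds: system shows the balance and cancels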

User stories come from the Extreme Programming methodology, where the assumption is that there will be a high degree of interaction between the developers and the end customer, and that QA will largely be done through test-driven development.  It is important to realize that Extreme Programming does not scale.

Once your development team gets large, i.e. you have 3+ agile teams, and your code base gets large enough to warrant a formal testing environment, you will outgrow user stories as your only method of capturing requirements.  You can still use user stories for new modules developed with an end customer to take advantage of their lightweight and rapid nature, but at some point those user stories should transition to use cases.

Unfortunately, there are quite a few ways to build use cases and not all of them are effective.  Alistair Cockburn’s method (found here) is an excellent way to capture use cases for more sophisticated systems.  There is no doubt that a full use case is heavier weight than a user story, but consider this:

  • Use cases can more easily be turned into test cases by QA (see the sketch after this list)
  • With use cases you can more easily prove that you have all the requirements
  • Use cases annotated with screens and reports can be used to collaborate with remote, off-site customers
  • Well-written use cases can easily be broken up into a sprint backlog
  • Use cases will work even if you are not using Agile development methods
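
As a sketch of the first point, consider how the main success scenario and one extension of the withdraw money use case map almost mechanically onto test cases. This is Python with pytest; the atm module, the Atm class, and its methods are invented purely for illustration:

    # test_withdraw.py: test cases derived from the use case's numbered steps.
    # The atm module, Atm class, and InsufficientFunds are hypothetical.
    import pytest
    from atm import Atm, InsufficientFunds

    def test_main_success_scenario():
        atm = Atm(balance=500.00)
        atm.insert_card_and_enter_pin("1234")   # steps 1-2
        cash = atm.withdraw(100.00)             # steps 3-4
        assert cash == 100.00
        assert atm.balance == 400.00

    def test_extension_4a_insufficient_funds():
        atm = Atm(balance=50.00)
        atm.insert_card_and_enter_pin("1234")
        with pytest.raises(InsufficientFunds):  # extension 4a
            atm.withdraw(100.00)

Each numbered step and each extension in the use case gives QA a concrete behavior to assert, which is exactly what a user story’s one-sentence summary cannot do.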

So don’t forgo the speed and dynamic nature of user stories; just recognize that there are limits to user stories and that you will need to transition to use cases when your project team or application grows.


Ready, Fire, Aim: Why correct requirements are rarely gathered

Connecting with your customers and delivering value depends on understanding your customer’s requirements and selling the correct product or solution that solves your customer’s problems.

Commonly, the requirements gathering process is done hastily or not at all in the rush to get the sale. After all, the faster you can make the sales process go, the faster the money is in the bank — correct?

The more product customization or creation is required, the more important it is to understand the customer’s actual problem.  The more time required to build and implement a solution, the more a misunderstood requirement will lead not only to a failed sale but also to a disappointed customer and a loss of future revenue.

If you build software, which has long lead times, it is even more important to make sure that what you are building will satisfy the customer’s requirements, because changing the final software solution will be nearly impossible and very costly.

Why do we continually misunderstand the customer, sell the wrong solutions, and build the wrong software?

Human Nature

Time pressure to make the sale stresses us, and that stress leads to quick decisions about whether a product or solution can be sold to a customer.  We listen to the customer but interpret everything he says in terms of the products and solutions that we have.

Often the customer will use words that seem to match exactly the products we have.  But after the sale, once we have to implement, we often find that either we deliver a poor solution to the customer or the solution is infeasible and we must refund the customer’s money.

However, behind every word that the customer uses there is an implied usage, and failing to understand that implied usage is where requirements gathering breaks down.

For example, suppose the customer says I need a car. Suppose that you sell used cars:

  • the customer asked for a car
  • you sell cars

Ergo problem solved!

  • What if the customer needs an SUV but you don’t have any?
  • What if the customer really needs a truck?
  • What if the customer needs a car with many modifications?
  • What if the car the customer needs has never been built?

We hear the word car and we think that we know what the customer means.  The order-taker sales person will spring into action and sell what he thinks the customer needs. Behind the word car is an implied usage and unless you can ferret out the meaning that the customer has in mind, you are unlikely to sell the correct solution.

If you don’t understand how the customer will use your solution then you don’t understand the problem.

Comedy of Errors

For products that require customization, the sale will get transferred to professional services that will dig deeper into the customer’s requirements.  At this point you discover that the needs of the customer cannot be met.  This leads to sales people putting pressure on professional services and product management to ‘find a solution’, after all, losing the sale is not an option.

Sometimes heroic actions by the product management, professional services, and software development teams lead to a successful implementation, but usually not until there has been severe pain for the customer and midnight oil burned in your company.  You can eventually be successful, but that customer will never buy from you again.

Consequences

As William Ralph Inge said, “There are no rewards or punishments — only consequences.”

The consequences of selling the wrong solution to a customer are:

  • Whether you lose the sale or not, the customer loses faith in you
  • The sales person is perceived as incompetent in the rest of the organization, especially by professional services and software development
  • Your cost of sales and implementation is much higher than expected
  • Your reputation is damaged

Conclusion

Selling the correct solution to the customer requires that you understand the customer’s problem before you sell the solution.

When customization is required, good sales people engage resources that can capture the customer’s requirements accurately and assess that you can deliver a solution to the customer.

Slowing down to understand the customer’s requirements and how you will solve his problems is the key.  By understanding the customer’s requirements and producing the correct solution you become a trusted adviser to the customer.

 


Not using UML on Projects is Fatal

The Unified Modeling Language (UML) was adopted as a standard by the OMG in 1997, almost 20 years ago.  But despite its longevity, I’m continually surprised at how few organizations actually use it.

Code is the ultimate model for software, but it is like the trees of a forest.  You can see a couple of them, but few people can see the entire forest by just looking at the code.  For the rest of us, diagrams are the way to see the forest, and UML is the standard for diagrams.

They say, “A picture is worth a thousand words”, and this is true for code; even on a large monitor you can only see so many lines of code.  Every other engineering discipline has diagrams for complex systems, e.g. design diagrams for airplanes, blueprints for buildings.  In fact, the diagrams need to be created and approved BEFORE the airplane or building is created.

Contrast that with software, where UML diagrams are rarely produced, or if they are produced, they are produced as an afterthought.  The irony is that the people pushing to build the architecture quickly say that there is no time to make diagrams, but they are the first people to complain when the architecture sucks.  UML is key to planning (see Not planning is for losers).

I think this happens because developers, like all people, are focused on what they can see and touch right now.  It is easier to try to code a GUI interaction or tackle database update problems than it is to work at an abstract level through the interactions that are taking place from GUI to database.

Yet this is where all the architecture is.  Good architecture makes all the difference in medium and large systems.  Architecture is the glue that holds the software components in place and defines communication through the structure.  If you don’t plan the layers and modules of the system then you will continually be making compromises later on.

In particular, medium to large projects (>10,000 function points) are at a very high risk of failure if you don’t consider the architectural issues.  Considering that only 3 out of 10 software projects are successful, only a fool would skip planning the architecture (see Failed? You get what you deserve!).

Good diagrams, in particular UML, allow you to abstract away all the low level details of an implementation and let you focus on planning the architecture.  This higher level planning leads to better architecture and therefore better extensibility and maintainability of software.

If you are a good coder then you will make a quantum leap in your ability to tackle large problems by being able to work through abstractions at a higher level.  How often do we find ourselves unable to implement simple features simply because the architecture doesn’t support them?

Well, the architecture doesn’t support them because we spend very little time developing the blueprint for the architecture of the system.

UML diagrams need to be produced at two levels:

  • the analysis or ‘what’ level
  • the design or ‘how’ level

Analysis UML diagrams (class, sequence, collaboration) should be produced early in the project and support all the requirements.  Ideally you use a requirements methodology that allows you to trace easily from the requirements onto the diagrams.

Analysis diagrams do not have implementation classes on them, i.e. no vendor specific classes.  The goal is to identify how the high level concepts (user, warehouse, product, etc) relate to each other.
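
To see the distinction in code terms, here is a minimal sketch (Python; the domain and the repository class are invented). The analysis level records only that a warehouse holds products; the design level commits to implementation choices:

    # Analysis level: pure domain concepts and their relationships, no vendors.
    class Product:
        def __init__(self, name: str):
            self.name = name

    class Warehouse:
        def __init__(self):
            self.products: list[Product] = []   # "a warehouse holds products"

    # Design level: the same relationship realized with implementation detail
    # (an invented SQL-backed repository standing in for vendor classes).
    class SqlProductRepository:
        def __init__(self, connection_string: str):
            self.connection_string = connection_string

        def products_in(self, warehouse_name: str) -> list[Product]:
            raise NotImplementedError   # vendor-specific query would go here

An analysis class diagram would show only Product and Warehouse and their association; SqlProductRepository belongs on a design diagram.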

These analysis level UML diagrams will help you to identify gaps in the requirements before moving to design.  This way you can send your BAs and product managers back to collect missing requirements when you identify missing elements before you get too far down the road.

Once the analysis diagrams validate that the requirements are relatively complete and consistent, then you can create design diagrams with the implementation classes.  In general, analysis diagrams map one-to-many onto design diagrams.

Since you have validated the architecture at the analysis level, you can now do the design level without worrying about compromising the architectural integrity.  Once the design level is complete you can code without compromising the design level.

When done well, the analysis UML, design UML, and code are all in sync.  Good software is properly planned and executed from the top down.  It is mentally tougher to create software this way, but the alternative is continuous patches and never-ending bug-fix cycles.

So remember the following example from Stephen Covey’s The 7 Habits of Highly Effective People:

You enter a clearing where a man is furiously sawing at a large log, but he is not making any progress.  You notice that the saw is dull and is unable to cut the wood, so you say, “Hey, if you sharpen the saw then you will saw the log faster”.  To which the man replies, “I don’t have time, I’m too busy sawing the log”.

Don’t be the guy sawing with a dull saw.

UML is the tool to sharpen the saw.  It does take time to learn and apply, but you will save yourself much more time and be much more successful.


Why User Stories Rarely Work

All tools are useful when used appropriately, and user stories are no different.

User stories are fantastic when used in small teams on small projects where the team is co-located and has easy access to customers.

User stories can quickly fall apart under any of the following situations:

  • the team is not small
  • the project is not small
  • the team is not in a single location
  • customers cannot be accessed in a timely fashion
  • project end date must be computed

User stories were introduced as a core part of Extreme Programming (XP).  Extreme Programming assumes that none of the above situations are happening; relax any of these constraints and you can end up with a process out of control.  XP, and hence user stories, works in high-intensity environments where there are strong feedback loops.

User stories need intense communication 

User stories are a lightweight methodology that facilitates intense interactions between customers and developers and puts the emphasis on the creation of code, not documentation.  Their simplicity makes it easy for customers to help write them, but they must be complemented with timely interactions so that issues can be clarified.

Large teams make intense interactions between each pair of developers difficult; intense interactions keep everyone on the same page.  Most organizations break teams into smaller groups where communication is through email or managers — this kills communication and interaction.

Larger projects have non-trivial architectures.  Building a non-trivial architecture by only looking at the end-user requirements is impossible.  This is like having all the leaves of a tree and thinking you can figure out what the branches and the trunk must be.  Good luck.

User stories don’t work with teams where intense interaction is not possible.  Teams distributed over multiple locations or time zones do not allow intense interaction.  You are delusional if you think regular conference calls constitute intense interaction.

When emphasis is on the writing of code then it is critical that customers can be accessed in a timely fashion.  If your customers are indirectly accessible through product managers or account representatives every few days then you will end up with tremendous latency.

 

Live weekly demos with customers are necessary to flush out misunderstandings quickly and keep you on the same page.

User stories are virtually impossible to estimate. Often, we use user stories because there is a high degree of requirements uncertainty either because the requirements are unknown or it is difficult to get consistent requirements from customers.

Since user stories are difficult to estimate, especially since you don’t know all the requirements, project end dates are impossible to predict with accuracy.

To summarize, intense interactions between customers and developers are critical for user stories to be effective because this does several things:

  • it keeps all the customers and developers on the same page
  • it flushes out misunderstandings as quickly as possible

Diluted

All of the issues listed initially dilute the intensity of communication either between the team members or the developers and customers.  Each issue that increases latency of communication will increase misunderstandings and increase the time it takes to find and remove defects.

So if you have any of the following:

  • Large or distributed teams
  • Projects with non-trivial architecture
  • Difficult access to customers
  • Projects in new domains
  • Projects where knowing the end date is necessary

then user stories are probably not your best choice of requirements methodology.  At best you may be able to complement your user stories with storyboards; at worst you may need some form of use case.

Project failed? You get what you deserve!

3 of 10 software projects fail, 3 succeed, and 4 are ‘challenged’.1  When projects fail because you cut corners and exceed your capabilities, you get what you deserve.  You don’t deserve pity when you do it to yourself.

We estimate that between $3 trillion and $6 trillion is wasted every year in IT.  Most of this is wasted by organizations that are unskilled and unaware that they are ignorant.

Warning: this article is long!

However, there are organizations that succeed regularly because they understand development, implement best practices, and avoid worst practices (see Understanding Your Chances).

In fact, McKinsey and Company in 2012 stated:

A study of 5,400 large-scale IT projects finds that the well-known problems with IT project management are persisting. Among the key findings quoted from the report:

    • 17 percent of large IT projects go so badly that they can threaten the very existence of the company
    • On average, large IT projects run 45 percent over budget and 7 percent over time, while delivering 56 percent less value than predicted

Projects fail consistently because organizations choose bad practices, avoid best practices, and wonder why success is elusive (see Stop It! No… really stop it. to understand the 5 most common worst practices).

What is amazing is that failures do not prompt the incompetent to learn why they failed.

Even worse, after the post-fail finger-pointing ceremony, people just dust themselves off and rinse and repeat.

The reality is that we have 60 years of experience in building software systems.  Pioneers like Watts Humphrey, M. E. Fagan, Capers Jones, Tom DeMarco, Ed Yourdon, and institutions like the Software Engineering Institute (SEI) have demonstrated that software complexity can be tamed and that projects can be successful.2

The worst developers are not even aware that there is clear evidence about what works or what doesn’t in software projects.  Of course, let’s not let the evidence get in the way of their opinions.

Ingredients of a Successful Project

Successful software projects generally have all the following characteristics:

  • Proper business case justification and good capital budgeting
  • Very good core requirements for primary functionality
  • Effective sizing techniques used before executing the project
  • Project management appropriate to the size of the project and to the philosophy of the organization
  • Properly trained personnel
  • Focus on pre-test defect removal

Every missing characteristic reduces your chance of success by an order of magnitude.  If you know that one or more of these characteristics are missing then you get what you deserve!

Missing some of these elements doesn’t guarantee failure, but it severely decreases your chance at success.

Let’s go through these elements in order.

This article is very long, so this is a good place to bail if you don’t have time.

Proper Business Case

This is the step that many failed projects skip over: the hard work behind determining whether a project is viable or not.

Organizations take the Field of Dreams approach, i.e. “If you build it, they will come...” and skip this step due to ignorance, often resulting from executives who do not understand software (see No Business Case == Project Failure). These are executives that do not have experience with software projects and assume that their force of personality can will software projects to success.

Some organizations claim to build business cases, but these documents are worthless.  I even know of public companies that write the business case AFTER the project has started, simply to satisfy Sarbanes-Oxley requirements.

A proper business case attempts to quantify the requirements and technical uncertainty of a software project.  It does due diligence into what problem is being solved and who it is being solved for.  At a minimum, it verifies that the cash flows resulting from the project will be NPV-positive.
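
As a minimal sketch of that NPV check (Python; the cash flows and discount rate are invented numbers, not benchmarks):

    # Net present value of projected cash flows; year 0 is the investment.
    def npv(rate: float, cash_flows: list[float]) -> float:
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    # Invented example: a $500K project followed by three years of returns.
    project = [-500_000, 200_000, 250_000, 250_000]
    print(f"NPV at 10%: ${npv(0.10, project):,.0f}")   # positive means viable

If the honest cash flow projections cannot be made NPV-positive at a reasonable discount rate, the project should not start.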

Business cases are generally difficult to write because they involve getting partial information.  This can be very difficult if your analysts are substandard (see When BA means B∪ll$#!t Artist).

Very Good Core Requirements

Once a project has a proper business case then you need to capture the skeleton of the core requirements.  This is a phase where you determine the primary actors of the system and work out the major use case names.

Why expand requirements before starting the project?

Executives have a business to run and need to know when software will be available.  If you don’t know how big your project is then you can’t create an effective project plan.  You don’t want to capture all the requirements up front, so core requirements (i.e. a good skeleton) help you to size the project without having to get the detailed requirements.

This is why executives like the waterfall methodology. On the surface, this methodology seems to have a predictable timeline — which is what they need to synchronize other parts of the business.  The problem is that the waterfall methodology DOES NOT WORK (see last page).

The only way for managers to get a viable estimate of a software project is to expand the business case into requirements that allow you to determine the project’s size before you start it.

This process is just like determining the cost of a house by the square footage and the quality, i.e. 2500 sq. ft. at normal quality (~$200 per sq. ft.) would be approximately $500K, even without detailed blueprints.  Very accurate estimates can be derived by sizing a project using function points.

Effective Estimation

Now that you have core requirements, you can determine the size of the project and get an approximate cost.  You are fooling yourself if you think that you can size large projects without formal estimates (see Who needs formal measurement?).

Just like you can determine the approximate cost of a house if you know the square footage and the quality, you can estimate a software project pretty accurately if you know how many function points it has (i.e. the square footage) and the quality requirements of the project.3

There is plenty of literature available on how to size projects effectively, so do yourself a favor and look it up.  N.B. there are quite a few reliable tools for producing an accurate estimate of a software project, e.g. COCOMO II, SLIM, SEER-SEM.  See also Namcook Analytics.
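
To make “formal estimate” concrete, here is a hedged sketch in the spirit of Boehm’s basic COCOMO (Python). The effort and schedule coefficients are the published organic-mode values; the 50 LOC-per-function-point backfiring ratio is a rough industry approximation, and any real estimate should be calibrated against your own historical data:

    # Basic COCOMO, organic mode: effort = 2.4 * KLOC^1.05 person-months,
    # schedule = 2.5 * effort^0.38 calendar months. Real tools (COCOMO II,
    # SLIM, SEER-SEM) refine this with many more cost drivers.
    def estimate(function_points: int, loc_per_fp: int = 50) -> None:
        kloc = function_points * loc_per_fp / 1000.0   # rough FP -> KLOC backfire
        effort = 2.4 * kloc ** 1.05                    # person-months
        schedule = 2.5 * effort ** 0.38                # calendar months
        print(f"{function_points} FP is roughly {kloc:.0f} KLOC: "
              f"~{effort:.0f} person-months over ~{schedule:.0f} months")

    estimate(1000)   # a mid-sized project of 1,000 function points

Even this crude model beats a declared deadline, because it ties the plan to the measured size of the system.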

If you don’t size a project then your project plan predicts nothing

Of course, you could always try a management declared deadline which is guaranteed to fail (see Why Senior Management Declared Deadlines lead to Disaster)

Appropriate Project Management

You must select a project methodology appropriate to the organization.  Many developers are trying to push their organizations towards Agile software development, although many developers are actually quite clueless about what Agile development is.

Agile software development needs buy-in from the top of the organization.  Agile software development will probably do very little for you if you are not doing business cases and gathering core requirements before a project.

Discover how developers who claim that they are ‘Agile’ have fooled themselves into thinking that they are doing Agile development (see Does Agile hide Development Sins?).

Trained Personnel

Management often confuses seniority with competence.  After all, if someone has been with the company for 10 years they must be competent, no?  The reality is that most people with 10 years of experience only have 1 year repeated 10 times.  They are no more skilled than someone with 1 year under their belt.

Learn why, in general, it may be useful to get rid of older developers that are not productive (see No Experience Required!).  Also, when it comes to development, you are definitely better off with people that do not rush to write code (see Productive Developers are Smart and Lazy).

Focus on Pre-Test Defect Removal

I’ve written extensively on pre-test defect removal, see Are Debuggers Crutches? for more information.

Conclusion

It is likely that you already know all the ingredients that make for a successful project; you’ve just assumed that even though some of these characteristics are missing, you can’t fail.

Quite often projects fail under the leadership of confident people who are incompetent and don’t even know that they are incompetent.  If you want to know why intelligent people often do unintelligent things, see Are You Surrounded by Idiots? Unfortunately, You Might be the Idiot.

There probably are projects that fail because of circumstances beyond their control (e.g. natural disasters), but in most failed projects you get what you deserve!



Fallacy of the Waterfall Methodology

The waterfall methodology is widely attributed to Winston W. Royce.

The irony is that the paper he published actually concludes that:

In my experience, however, the simpler method (i.e. waterfall) has never worked on large software development efforts and the costs to recover far exceeded those required to finance the five-step process listed.

That is, Mr. Royce said that the waterfall process would never work.  So much for the geniuses that only read the first 2 pages of the paper, proceeded to create the “waterfall method”, and cost organizations trillions of dollars in failed projects each year.

The waterfall methodology was pushed down our throats by ignorant managers who saw that the waterfall seemed to mimic factory processes.  Because this was the process they understood, they incorrectly assumed that this was the right way to develop software.

If any of these guys had bothered to read more than 2 pages from the Royce paper they would have realized that they were making a colossal blunder.



1 Challenged means that the project goes significantly over time or budget. In my estimation, ‘challenged’ simply means politically declaring victory on a project that has really failed.

2 This applies to projects that are 10,000 function points or less. We still have problems with projects that are larger than this, but the vast majority of projects are under this threshold.

3 Quality requirements depend on how reliable the project must be. If the risk is that someone might die because of a software malfunction, the quality, and therefore the cost, must be much higher than if software failures only constitute an annoyance.


Don’t be a Slave to Your Tools

Developers attach quickly to tools because tools are concrete and have well-defined behavior.  It is easier to learn a tool than to learn good practices or methodology.

Tools only assist in solving problems; they can’t solve the problem by themselves. A developer who understands the problem can use tools to increase productivity and quality.

Poor developers don’t invest the time or effort to understand how to code properly and avoid defects.  They spend their time learning how to use tools without understanding the purpose of the tool or how to use it effectively.

To some degree, this is partially the fault of the tool vendors.  The tool vendors perceive an opportunity to make $$$$$ by providing support for common problems, such as:

  • defect trackers to help you manage defects
  • version control systems to manage source code changes
  • tools to support Agile development (Version One, JIRA)
  • debuggers to help you find defects

There are many tools out there, but let’s just go through this list and point out where developers and organizations get challenged.  Note: all statistics below are derived from over 15,000 projects spanning 40 years.1

Defect Trackers

Believe it or not, some organizations still don’t have defect tracking software. I’ve run into a couple of these companies and you would not believe why…

Inadequate defect tracking methods: productivity -15%, quality -21%

So we are pretty much all in agreement that we need defect tracking; we all know that managing more than a handful of defects is impossible without some kind of system.

Automated defect tracking tools: productivity +18%, quality +26%

The problem is that developers fight over which is the best defect tracking system. The real problem is that almost every defect tracking system is poorly set up, leading to poor results. Virtually any defect tracking system, when configured properly, will yield tremendous benefits. The most common pitfalls are:

  • Introducing irrelevant attributes into the defect lifecycle status, i.e. creation of statuses like deferred, won’t fix, or functions as designed
  • Not being able to figure out if something is fixed or not
  • Not understanding who is responsible for addressing a defect
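
To make the lifecycle point concrete, here is a minimal sketch (Python) of a defect workflow that enforces a small, relevant status set. The status names and transitions are illustrative assumptions, not any vendor’s model; note that statuses like deferred or functions as designed simply do not exist here:

    # A defect lifecycle as an explicit state machine: every status and every
    # legal transition is spelled out, so irrelevant statuses cannot creep in.
    ALLOWED = {
        "new":      {"assigned"},
        "assigned": {"fixed"},
        "fixed":    {"verified", "reopened"},
        "reopened": {"assigned"},
        "verified": {"closed"},
        "closed":   set(),
    }

    def transition(status: str, new_status: str) -> str:
        if new_status not in ALLOWED.get(status, set()):
            raise ValueError(f"illegal transition: {status} -> {new_status}")
        return new_status

    status = "new"
    for step in ("assigned", "fixed", "verified", "closed"):
        status = transition(status, step)   # the happy path through the lifecycle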

The tool vendors are happy to continue to provide new versions of defect trackers. However, using a defect tracker effectively has more to do with how the tool is used rather than which tool is selected.

One of the most fundamental issues that organizations wrestle with is: what is a defect?  A defect only exists if the code does not behave according to the specifications. But what if there are no specifications, or the specifications are bad?  See It’s not a bug, it’s… for more information.

Smart organizations understand that the way in which the defect tracker is used will make the biggest difference.  Discover how to get more out of your defect tracking system in Bug Tracker Hell and How to Get Out.

Another common problem is that organizations try to manage enhancements and requirements in the defect tracking system.  After all, whether it is a requirement or a defect it will lead to a code change, so why not put all the information into the defect tracker?  Learn why managing requirements and enhancements in the defect tracking system is foolish in Don’t manage enhancements in the bug tracker.

Version Control Systems

Like defect tracking systems, version control is a necessary hygiene procedure that most developers have learned to adopt.  If you don’t have one then you are likely to catch a pretty serious disease (and at the least convenient time).

Inadequate change control: productivity -11%, quality -16%

Virtually all developers dislike version control systems and are quite vocal about what they can’t do with their version control system.  If you are the unfortunate person who made the final decision on which version control system is used, just understand that there are hordes of developers out there cursing you behind your back.

Version control is simply chapter 1 of the story.  Understanding how to chunk code effectively, integrating with continuous build technology, and making sure that the defects in the defect tracker refer to the correct version are just as important as the choice of version control system.

Tools to support Agile

Sorry Version One and JIRA: the simple truth is that using an Agile tool does not make you agile (see this).

These tools are most effective when you actually understand Agile development. Enough said.

Debuggers

I have written extensively about why debuggers are not the best tools to track down defects.  So I’ll try a different approach here.

One of the most enduring sets of ratios in software engineering has been 1:10:100.  That is, if the cost of tracking down a defect pre-test (i.e. before QA) is 1, then it will cost 10x if the defect is found by QA, and 100x if the defect is discovered in deployment by your customers.
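
Put in dollar terms with an illustrative number: if a code inspection catches a defect for $50 pre-test, expect that same defect to cost about $500 if QA finds it and about $5,000 if a customer does.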

Most debuggers are invoked when the cost function is in the 10x or 100x part of the process.  As stated before, it is not that I do not believe in debuggers — I simply believe in using pre-test defect removal strategies because they cost less and lead to higher code quality.

Pre-test defect removal strategies include:

  • Planning code, i.e. PSP
  • Test driven development, TDD
  • Design by Contract (DbC)
  • Code inspections
  • Pair programming for complex sections of code


Seldom Used Tools

Tools that can make a big difference but many developers don’t use them:

Automated static analysis: productivity +21%, quality +31%

Automated unit testing: productivity +17%, quality +24%

Automated unit testing generally involves using test-driven development (TDD) or data-driven development together with continuous build technology.
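
As a minimal sketch of what that looks like in practice (Python with pytest; the discount module and function under test are invented examples), the tests are written first and then run by the build server on every commit:

    # test_discount.py: TDD-style unit tests, written before the code exists.
    # The discount module and apply_discount function are hypothetical.
    import pytest
    from discount import apply_discount

    def test_ten_percent_discount():
        assert apply_discount(100.00, 0.10) == pytest.approx(90.00)

    def test_negative_rate_rejected():
        with pytest.raises(ValueError):
            apply_discount(100.00, -0.10)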

Automated sizing in function points: productivity +17%, quality +24%

Automated quality and risk prediction: productivity +16%, quality +23%

Automated test coverage analysis: productivity +15%, quality +21%

Automated deployment support: productivity +15%, quality +20%

Automated cyclomatic complexity computation: productivity +15%, quality +20%

Important Techniques with No Tools

There are a number of techniques available in software development that tool vendors have not found a way to monetize. These techniques tend to be overlooked by most developers, even though they can make a huge difference in productivity and quality.

The Personal Software Process and Team Software Process were developed by Watts Humphrey, one of the pioneers of building quality software.

Personal software process: productivity +21%, quality +31%2

Team software process: productivity +21%, quality +31%3

The importance of inspections is covered in:

Code inspections: productivity +21%, quality +31%4

Requirement inspections: productivity +18%, quality +27%4

Formal test plans: productivity +17%, quality +24%

Function point analysis (IFPUG): productivity +16%, quality +22%

Conclusion

There is definitely a large set of developers that assume that using a tool makes them competent.

The reality is that learning a tool without learning the principles that underlie the problem you are solving is like assuming you can beat Michael Jordan at basketball just because you have great running shoes.

Learning tools is not a substitute for learning how to do something competently. Competent developers are continually learning about techniques that lead to higher productivity and quality, whether or not that technique is supported by a tool.



When BA means B∪<<$#!t Artist

BA typically means Business Analyst, but what makes for a good BA?  When do you have a good BA and when don’t you? Many projects fail at the beginning due to incomplete, inconsistent, and overly verbose analysis produced by BAs.

Are your BAs any good?  The success of many projects depends on the ability of the BA to correctly identify the problem you are trying to solve.  If your BA is not competent then you are doomed before you start.

Business analysis consists of all facets of solving business problems. Business analysis is a role executed by project, product, and engineering managers.  However, there are people that do business analysis as their main role and we simply label them as business analysts or product managers. We shall stick to the term business analyst for simplicity.

Business analysts perform a range of tasks including:

  • Gathering requirements
  • Writing business cases and project charters
  • Performing gap analysis between products and corporate processes

Ever suspect that the people responsible for performing business analysis for you are not up for the challenge?  Here are three simple questions; good business analysts will get all three correct and not take longer than 20 seconds.

  • #1 (the Müller-Lyer figure): Which of these two lines is longer?
  • #2 (the “Paris in the spring” sign): What does this sign say?
  • #3: I have two products whose prices add up to $1.10 and one product is $1 more than the other. How much is the more expensive product?

The answers are:

  1. The lines are the same length
    • Take a ruler if you are not convinced
  2. “Paris in the the spring”
    • Notice that “the” occurs twice
  3. The more expensive product is $1.05 (see the worked algebra below)
    • If the more expensive one had been $1.00 then the cheaper one would be $0.10, but then the expensive one would only be $0.90 more expensive than the cheaper one.
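
For question 3, a line of algebra makes the check explicit; let x be the price of the cheaper product:

    x + (x + 1.00) = 1.10
    2x = 0.10
    x = 0.05

so the more expensive product costs x + 1.00 = $1.05, not $1.00.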

These three questions illustrate that the mind can jump to conclusions that are often wrong.  The mind can be tricked because it assesses things quickly, and only some people have the instinct to double-check their results.

Most BAs are well educated; however, the interesting thing is that studies show that educated people are less able to see their biases and they jump to conclusions more readily than other people.1

A good business analyst realizes that during analysis there will be situations where the mind jumps to conclusions.  Many business analysts are asked to resolve conflicting requirements, recognize missing requirements, and deal with biases coming from many different sources.  Unless you have the reflex of checking facts for consistency and eliminating bias, you won’t make a good business analyst.

So what is the quality of your BAs?


Bibliography

West, Richard F., Meserve, Russell J., and Stanovich, Keith E. “Cognitive Sophistication Does Not Attenuate the Bias Blind Spot.” Journal of Personality and Social Psychology, 2012.


What the Heck are Non-Functional Requirements?

What the heck are non-functional requirements?  Simply put, if functional requirements produce code that addresses the needs of the end-users (customers), then non-functional requirements address the needs of the people who install, operate, and configure the code.

Those people are the operations personnel and help desk personnel in whatever organization uses your software.  Every developer needs to be aware of what those non-functional requirements are and why operations personnel and help desk personnel are customers just as important as the end-users.

Functional Requirements

Functional requirements are baked into the code that developers deliver (interpreted or compiled).   Events from input devices (network, keyboard, devices) trigger functions to convert input into output — all functions have the form: output = f(input).

This is true whether you use an object-oriented language or not.  Non-functional requirements involve everything that surrounds a functional code unit.  Non-functional requirements concern things that involve time, memory, access, and location:

  • Performance
  • Availability
  • Capacity
  • Continuity
  • Security

Non-functional requirements are slightly different between desktop applications and services; this article is focused on non-functional requirements for services.

If you have any knowledge of ITIL you will recognize that the last 4 items deal with the warranty of a service.  In fact, the functional requirements involve the utility of a service; the non-functional requirements involve the warranty of a service.

Availability

Availability is about making sure that a service is available when it is supposed to be available. Availability is about a Configuration Item (CI) in the environment of the operations center that specifies how the code is accessed.  Availability is decided independently of the code and is, at best, part of the Service Design Package (SDP) that is delivered to the operations department; at worst, it is simply code dumped on the operations personnel.

Developers need to be aware of the difficulty of creating the CI for the operations personnel.  If a CI is manually created then there will always be a potential for error when the service is installed or updated.  The requirement to create a CI is a non-functional requirement, and the ability to minimize errors is another non-functional requirement.

Developers need to be aware of single points of failure (e.g. services hard-coded to a specific IP), which cause fits in operations centers that are not running virtual machines (VMs) with virtual IPs.  The requirement to create code that is not reliant on static IPs or specific machines is a non-functional requirement.  Availability is simplified in operations if the code is resilient enough to allow itself to easily move (or be replicated) among servers.

Availability non-functional requirements include:

  • Ability to easily make the CI
  • Automatic installation of CI or mechanisms
  • Ability to detect and prevent manual errors for a CI
  • Ability to easily move code between servers
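
As a minimal sketch of the “no hard-coded addresses” point above (Python; the SERVICE_HOST and SERVICE_PORT names are illustrative conventions, not a standard), the service reads its bind address from the environment so operations can move or replicate it without a code change:

    import os

    # Read the bind address from the environment instead of hard-coding an IP,
    # so the same build can be moved or replicated across servers by operations.
    HOST = os.environ.get("SERVICE_HOST", "0.0.0.0")   # default: all interfaces
    PORT = int(os.environ.get("SERVICE_PORT", "8080"))

    print(f"binding service to {HOST}:{PORT}")

The same idea applies to database addresses, file paths, and anything else that ties the code to one specific machine.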

Capacity

Capacity is about delivering enough functionality when required.  If you ask a web service to supply 1,000 requests a second when that server is only capable of 100 requests a second then some requests will get dropped.  This may look like an availability issue, but it is caused because you can’t handle the capacity requested.

Internet services almost always can’t provide enough capacity with a single machine and operations personnel need to be able to run multiple servers with the same software to meet capacity requirements.  The ability to run multiple servers without conflicts is a non-functional requirement. The ability to take a failing node and restart it on another machine or VM is a non-functional requirement.

Capacity non-functional requirements include:

  • Ability to run multiple instances of code easily
  • Ability to easily move a running code instance to another server
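
A quick sizing sketch shows why this matters, using the illustrative numbers from above:

    required load    = 1,000 requests/s
    one instance     =   100 requests/s
    instances needed = 1,000 / 100 = 10, plus headroom for peaks and failures

If the code cannot run as multiple instances, no amount of operational heroics will close that 10x gap.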

Continuity

Continuity involves being able to be robust against major interruptions to a service, these include power outages, floods or fires in an operational center, or any other disaster that can disrupt the network or physical machines.

Where availability and capacity often involve redundancy inside a single operations center, continuity involves geographic and network redundancy.  Continuity at best involves having multiple servers that can work in geographically distributed operations centers.  At worst, you need a master-slave fail-over model with the ability to journal transactions and eventually bring the master back up.

Security

Security non-functional requirements concern who has access to which functions, and protecting the integrity of data from corruption.

Where access is concerned, how difficult will it be for operations personnel or help desks to set up security for users?  Developers build different levels of access into their applications without considering how difficult it will be for a 3rd party (help desk or operations) to set up end users.  The ease of setting up security is a non-functional requirement.

Data integrity is another non-functional requirement.  Developers need to consider how their applications will behave if the program encounters data corrupted by machine or network failures.  This is less of an issue in environments using RAID or redundant databases.

What Happens When You Forget Non-Functional Requirements

Commonly, start-ups are so busy setting up their services that they put non-functional requirements on the back burner.  The problem is that some non-functional requirements need to be designed into the architecture when the software is created.

For example, it is easy to be fooled into building software that is tied to a single machine; however, this will not scale in operations and will cause problems later on.  One of the start-ups I was with built a server for processing credit/debit card transactions without considering the non-functional requirements (capacity, continuity).  It cost more to add the non-functional requirements later than it had cost to develop the software!

Every non-functional requirement that is not thought through at the inception of a project will often represent significant work to add later on.  Every such project is a 0 function point project that will require non-zero cost!

Generally, availability, capacity, and continuity are not a problem for services developed with cloud computing in mind.  However, there are thousands of legacy services that were developed before cloud computing was even possible.

If you are developing a new service then make sure it is cloud enabled!

Operations People are People Too

Make no mistake, operations and help desk personnel are fairly resourceful and have learned how to manage software where non-functional requirements are not handled by the code. Hardware and OS solutions exist for making up for poorly written software that assumes single machines or does not take into account the environment that the code is running in, but that can come at a fairly steep cost in infrastructure.

The world has moved to services and it is no longer possible for developers to ignore the non-functional requirements involved with the code that they are developing.  Developers who think through the non-functional requirements can dramatically reduce costs on the bottom line and improve the quality of service being delivered.

The guys that run operations centers and help desks are customers that are only slightly less important than the end-user.  Early consideration of the non-functional requirements makes their lives easier and makes it much easier to sell your software/services.

It is no longer possible for competent developers to be unaware of non-functional requirements.

Other articles:

  • No Experience Necessary
    • Counter-intuitive evidence why years of experience does not make developers more productive
  • Shift Happens
    • Why scope shift on development projects is inevitable and why not capturing requirements at the start of a project can doom it to failure.
  • Inspections are not Optional
    • Software inspections are intensive but evidence shows that for each hour of inspection you can reduce QA by 4 hours!

Enhancements don’t belong in the bug tracker

As development progresses we inevitably run into functionality gaps that get classified as enhancements.

These issues often get captured by QA in the bug tracker and assigned to a developer.

Enhancements should not be managed from the bug tracker

The life-cycle of a defect and the life-cycle of an enhancement are two entirely different things.  A defect is a difference between a stated requirement and the code. If there is no documentation there is no code defect (see It’s not a bug, it’s…) — in fact, most enhancements will eventually be coded by some developer; they just should not be managed from the bug tracker.

Defect Life-cycle

The defect life-cycle is well known:

  • Defect is identified as a departure from the requirements
  • Defect is assigned to a developer
  • Defect is corrected
  • Correction is verified
    • If not corrected, re-open and re-assign to the developer
  • The defect is closed

This defect life-cycle is the incorrect way to manage enhancements.  When a functionality gap is identified by QA and is not covered by the requirements, we have an issue.  It is rarely the case that the issue can be resolved by the developer.

Enhancement Life-cycle

If enhancements are assigned to a developer then they are likely to try to resolve the issue. The problem is that “enhancements” determined at the QA level may be phantom problems caused by either:

  1. Insufficient requirements
  2. Correct requirements but incorrect test plans

Enhancements may or may not become code changes.  Even when enhancements turn into code change requests they will generally not be implemented as the developer or QA think they should be implemented.

Enhancements are really requirement defects. Enhancements should be logged as such in the bug tracker and assigned to the person in charge of requirements (business analyst or product manager).  Those individuals should be responsible to track down how these issues should be handled.

If the requirements are correct and the test plans are defective then it should be logged as a test defect.  This is tricky because QA often controls the bug tracker and will not log errors that they have made.

At a minimum, the introduction of requirements and test defect categories can do several positive things for you:

  1. It removes the responsibility to find a solution from development.
  2. It makes it clear how many defects are in the requirements or test plans.
  3. It reduces stress; no developer wants to be blamed for an issue that is not his.
  4. Many enhancements call for updated project plans and pushing back the deadline.

Put Responsibility Where it Belongs

The creation of requirements and test defects in the bug tracker goes a long way to cleaning up the bug tracker.  In fact, requirements and test defects represent about 25% of defects in most systems (see Bug Tracker Hell and How to Get Out!).  The percentages break down as follows:

  • Requirements defects: 9.58%
  • Testing defects: 15.42%

The creation of requirement and test defects in the bug tracker alleviates pressure on the engineering department and redirects it to either the product manager or QA. Eventually enough data will accumulate in the bug tracker to get management’s attention.

At a minimum, these categories should help reduce the amount of fire-fighting in late projects (Root cause of ‘Fire-Fighting’ in Software Projects)


It’s not a bug, it’s…

When does a bug become a bug?
Who decides that it is a bug?

How many legs does a lamb have if I say the tail is a leg?  The answer is 4; just because I say the tail is a leg does not make it a leg!

Bugs should be obvious, but we say It’s not a bug, it’s a feature because often it isn’t obvious.  Watts Humphrey felt that we should use the term defect and not bug because most people don’t take bugs seriously, so let’s use the term defect instead.

So when does a defect become a defect?

  • When quality assurance tells you that you have a defect?
  • When product management says that it is a defect?
  • When the customer says that it is a defect?

The answer is: none of the above.

Now it might turn out that there is a problem and that code needs to change, but a defect only exists if:

code behaves differently than the requirements specification

This is important because most systems are under-specified (if they are specified at all), and so when code misbehaves it is only a defect if the code behavior differs from the specification.  We call defects undocumented features because we know that the real problem is that the requirements were never written.

Incomplete and Inconsistent Requirements

Many organizations do not create sufficiently complete requirements before starting development, either because they don’t know how to capture requirements properly or because they don’t have resources capable of capturing complete requirements. Incomplete (and inconsistent) requirements and unrealistic deadlines often force developers into making decisions about how to implement features.  The end result is that developers are regularly told that they have defects in their code.

While this process is common, it is destructive.  When requirements are under-specified and inconsistent, developers end up needing to perform serious rework. The rework can require dramatic changes that will impact the architecture of the code.

The time required to find a workaround (if one is possible) is rarely included in the project plan. Complicating matters is that the organizations that are reluctant to spend time creating requirements also tend to underestimate their projects.  This puts tremendous pressure on the engineering department to deliver; this promotes the 5 worst practices in software development (see Stop It! No… really stop it.)

Only 54% of Issues are Resolved by Engineers

The attitude that all defects must be resolved by the engineering department is severely misguided.  Analysis by Capers Jones of over 18,000 projects shows that only about 54% of all defects can be resolved by the engineers (only the 3 rows marked with * below)!

    Defect Category                   Frequency   Role
    Requirements defect               9.58%       BA/Product Management
    Architecture or design defect *   14.58%      Architect
    Code defect *                     16.67%      Developer
    Testing defect                    15.42%      Quality Assurance
    Documentation defect              6.25%       Technical Writer
    Database defect *                 22.92%      Database administrator
    Website defect                    14.58%      Operations/Webmaster

This means that precious time will be wasted assigning issues to developers that they cannot resolve.  The time necessary to redirect the issue to the correct person is a major contributing factor to fire-fighting.

Getting Control of the Defect Process

For most organizations fixing the defect process involves understanding and categorizing defects correctly.  Organizations that are not tracking the different sources of defects probably have a bug tracker that has gone to hell.  Here is how you can fix that problem, see Bug Tracker Hell and How To Get Out!

At a minimum you need to implement the requirements defect category.  Once you identify issues that are caused by poor requirements, it will shine the white-hot light of shame onto the resources that are capturing your requirements.  Once you realize how many requirements defects exist in your system, you can begin to inform senior management about the requirements problem.

Reducing Fire-Fighting

The best way to reduce fire-fighting is to start writing better requirements (or writing requirements at all 🙂).  To do so you need to figure out which of the following are broken:

  1. Not enough time is allocated to the requirements phase
  2. Unskilled people are capturing your requirements

In all likelihood both of these issues need to be fixed in your organization.  When requirements are incomplete and inconsistent you will have endless fire-fighting meetings involving everyone (see Root cause of ‘Fire-Fighting’ in Software Projects).

Stand your ground if someone tells you that you have coded a defect when there is no documentation for the requirement.
