I understand that this is a very strong statement, and I’m definitely not trying to insult anyone, so apologies in advance. I’m trying to challenge the belief that testing is mandatory and that there should be one testing resource for every two developers. Quality development is about quality assurance and zero defects, not about having testing departments.
One testing resource for every two developers is not a good solution
Quality Assurance (QA) is about making a positive statement about product quality. QA is about positive assurance: stating, “We are certain that there are few, if any, defects in this product.” Airplanes are a product with quality assured; the manufacturers will stand by their quality, and the statistics back them up. Contrast this with the often-quoted articles that ask what would happen if Microsoft made airplanes. Would you fly in them?
The reality is that most testing departments simply discover defects and forward them back to the engineering department to fix. By the time the software product gets released we are basically saying, “We got rid of as many defects as we could find before management forced us to release this product, however, we really have no idea how many other defects are in the code”. This is not assuring quality; at best you get negative assurance out of this.
Everyone understands that buggy software kills sales (and start-ups :-)); however, testing is often an afterthought in many organizations. When software products take longer than expected, they are forwarded to the testing department, which is then expected to test and bless the code in less time than was allocated.
To compound problems, many testing departments don’t even receive proper requirements against which to test the code and/or sufficient tools to work with. Large testing departments and/or large amounts of manual testing are not healthy or efficient.
Watts Humphrey was emphatic that calling defects “bugs” trivializes the issue and downplays the negative impact that defects have on a development organization.
Calling defects “bugs” trivializes an important issue
Defects are not introduced into software by goblins and elves. Defects are injected into the code by developers who:
- don’t understand the requirements or architecture
- misunderstand how to use their peer’s components
- misunderstand 3rd party libraries
- are having a bad day because of troubles at home or in the work environment
- are careless because someone else will test their code
Defects are injected by the team
No one is more aware of how code can break down than the developer who writes it. Any line of code that is written without concentration and planning becomes a potential defect. It is impossible for testers to understand every pathway through the code and make sure that every possible combination of variables is properly taken care of.
There are many techniques that can increase code quality and dramatically reduce the amount of testing that is necessary:
- test driven development (TDD)
- database driven testing
- design by contract (DbC)
- pair programming
- minimizing cyclomatic complexity
- using static and dynamic code tools
- proper code planning techniques
Test Driven Development
Properly written tests require a developer not only to think about what a code section is supposed to do but also plan how the code will be structured. If you know that there are five pathways through the code then you will write five tests ahead of time. A common problem is that you have coded n paths through the code when there are n+1 conditions.
TDD is white-box testing and can reach every pathway that the developer codes. TDD is proactive and can test pathways from end to end; it does not have to be used only for unit testing. When TDD is hooked up to a continuous integration engine, defects are located and fixed before they ever reach testing.
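A minimal sketch of the idea in Python (the discount tiers and values are invented for illustration): each planned pathway gets a test before the code is written, and the “n+1” condition, a negative total that nobody planned for, surfaced as a fourth test that forced a guard clause into the implementation.

```python
import unittest

def discount(total):
    """Tiered discount with three planned pathways plus one guard."""
    if total < 0:
        # The "n+1" condition: writing the tests first exposed that
        # negative totals were never planned for.
        raise ValueError("total must be non-negative")
    if total >= 500:
        return total * 0.90
    if total >= 100:
        return total * 0.95
    return total

class DiscountTests(unittest.TestCase):
    # One test per planned pathway, written before the implementation.
    def test_small_order_no_discount(self):
        self.assertEqual(discount(50), 50)

    def test_mid_order_five_percent(self):
        self.assertEqual(discount(200), 190.0)

    def test_large_order_ten_percent(self):
        self.assertEqual(discount(600), 540.0)

    # The test that uncovered the missing pathway.
    def test_negative_total_rejected(self):
        with self.assertRaises(ValueError):
            discount(-1)
```

Run with `python -m unittest`; wired into a continuous integration engine, these tests fail the build the moment a pathway regresses.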
Database Driven Testing
Using actual test data to test existing routines during development is an excellent way to make sure that there are fewer production problems. The test data needs to be a copy (or subset) of production data.
Database driven testing can also be hooked up to a continuous integration engine and prevent defects from getting to testing.
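As a sketch of the idea using an in-memory SQLite database (the table, columns, and rows here are invented stand-ins for a production snapshot):

```python
import sqlite3

def setup_test_db():
    """Load a small, production-like subset into an in-memory database.
    In practice these rows would be copied from production data."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER, status TEXT, total REAL)")
    db.executemany(
        "INSERT INTO orders VALUES (?, ?, ?)",
        [(1, "open", 120.0), (2, "shipped", 45.5), (3, "open", 0.0)],
    )
    return db

def open_order_total(db):
    """Routine under test: sum of totals for open orders."""
    (total,) = db.execute(
        "SELECT COALESCE(SUM(total), 0) FROM orders WHERE status = 'open'"
    ).fetchone()
    return total

db = setup_test_db()
print(open_order_total(db))  # 120.0
```

Because the test data mirrors production, edge cases such as zero-value orders are exercised during development rather than discovered after release.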
Design By Contract
The Eiffel programming language introduced design by contract (DbC). DbC is orthogonal to TDD because its goal is to ensure that the contract defined by the preconditions and postconditions of each function call is not violated. DbC can be used in virtually any language for which there is an Aspect-Oriented Programming (AOP) solution.
During development, the minute a developer violates the expected contract of any function (his own or a peer’s), he will get feedback to fix the problem before it gets to testing.
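Outside Eiffel, the contract idea can be sketched with a plain decorator. This is an illustrative toy, not a full DbC or AOP tool, and all of the names are invented:

```python
import functools

def contract(pre=None, post=None):
    """Minimal design-by-contract decorator (a sketch, not a DbC framework)."""
    def wrap(fn):
        @functools.wraps(fn)
        def checked(*args, **kwargs):
            if pre is not None:
                # Fail fast, at the call site, during development.
                assert pre(*args, **kwargs), f"precondition violated: {fn.__name__}"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result), f"postcondition violated: {fn.__name__}"
            return result
        return checked
    return wrap

@contract(pre=lambda balance, amount: 0 < amount <= balance,
          post=lambda new_balance: new_balance >= 0)
def withdraw(balance, amount):
    return balance - amount

print(withdraw(100, 30))  # 70
# withdraw(100, 200) would fail immediately with "precondition violated: withdraw"
```

The violation is reported the moment the caller breaks the contract, rather than surfacing later as a mysterious wrong answer in testing.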
Inspections
Since the 1970s we have had statistical evidence that one of the best ways to eliminate defects from code is through inspections. Inspections can be applied to the requirements, design, and code artifacts, and projects that use inspections can eliminate 99% of the defects injected into the code. See Inspections are not Optional and Software Professionals do Inspections.
Each hour of inspections will save you 4 hours of testing
Pair Programming
Pair programming can be selectively used to prevent and eliminate defects from code. When developers work in pairs they not only review code as quickly as possible but also learn productivity techniques from each other. Pair programming should only be done on complex sections of code.
Pair programming not only eliminates defects but allows developers to get enough feedback that they can prevent defects in the future.
Minimizing Cyclomatic Complexity
There is evidence that routines with high cyclomatic complexity have more latent defects than other routines. This makes sense: the number of code pathways goes up dramatically as cyclomatic complexity increases, which raises the chance that the developer has not handled all of them. In most cases, testing departments cannot reproduce all of the pathways in routines of high cyclomatic complexity.
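As an illustration (the regions and rates are invented), a branch-heavy routine can often be rewritten as a table lookup, collapsing a chain of decision points into a single one:

```python
# A branch-heavy routine: cyclomatic complexity grows with every elif,
# and so does the number of pathways a tester would have to reproduce.
def shipping_v1(region):
    if region == "US":
        return 5.0
    elif region == "EU":
        return 7.5
    elif region == "APAC":
        return 9.0
    elif region == "LATAM":
        return 8.0
    else:
        raise KeyError(region)

# An equivalent table-driven version with one decision point.
RATES = {"US": 5.0, "EU": 7.5, "APAC": 9.0, "LATAM": 8.0}

def shipping_v2(region):
    return RATES[region]  # KeyError for unknown regions, same as v1
```

The second version has fewer pathways to test and makes the policy visible as data rather than control flow.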
Use Dynamic and Static Code Checking
Many code problems are caused by careless use of pointers and other powerful language constructs. Many of these problems can be detected by having the development team use dynamic and static code-checking tools.
Proper Code Planning Techniques
Some developers try to write code at the keyboard without planning, which is neither efficient nor effective. It is like having errands to run in five locations and driving to them in random order: you might get your errands done, but the odds are it won’t be efficient.
Watts Humphrey speaks directly to the idea of planning in the Personal Software Process. In addition, techniques like diagramming with UML or using decision tables can go a long way toward thinking through code structure before it is implemented.
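For example, a decision table forces every combination of conditions to be enumerated before any code is written. Here is a hypothetical loan-screening rule (all names and decisions invented) expressed as data:

```python
# Decision table worked out on paper first: every combination of the two
# conditions gets an explicit outcome, so no pathway is forgotten.
DECISIONS = {
    # (good_credit, has_collateral) -> decision
    (True,  True):  "approve",
    (True,  False): "approve-small",
    (False, True):  "manual-review",
    (False, False): "decline",
}

def screen(good_credit, has_collateral):
    return DECISIONS[(good_credit, has_collateral)]

print(screen(True, False))   # approve-small
print(screen(False, False))  # decline
```

Because the table is exhaustive by construction, the “n+1 conditions” problem from the TDD discussion cannot sneak in: a missing row is visible at a glance.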
Developers are the ones who inject defects into the code, and therefore they are the best line of defense for removing them. The developer has the best information about what needs to be tested in his code at the time that he writes it. The longer it takes for testing or a customer to discover a defect, the longer the developer will spend in a debugger chasing down the problem.
Developers need to be trained in preventing and eliminating defects. Developers who learn to get the code correct the first time will reduce and eliminate effort in testing.
The goal of development should be to catch defects early; this is the only way to assure quality. Hence quality assurance starts and finishes in the development department, not the testing department.
I agree with your reasoning but there are “clients” that would not trust “Development” to test their own product quality!
Acceptance has to be done within the “Client/User” community. I agree that a good developer is one who doesn’t test with reluctance and who acknowledges that he will (unwillingly) inject errors into the code….
@stephane you are right. We commonly accept that: 1) testing does QA, 2) senior management sets deadlines, and 3) project end-dates are a single date and not a range. These are not good practices, but they are common.
It depends what you use your Test department for. If you employ them as button-clickers, then yes I would agree with your article – TDD could replace them.
But Tester roles can be different! I have worked in places where they border on Business Analysts / QA and UX. They collate business knowledge across projects, across applications and they know everything about how the “company” works (NOT the code or individual applications).
Developers cannot do this; they have to think in terms of application rules and code paths (like you mentioned). They don’t think about things in terms of the company. A good Tester does. Like you said, Testers cannot cover all “code” paths, but they do know how the BUSINESS works.
They won’t just say: “that button is 5px to the left”, or “this error appeared when I followed this script”. Instead, they will say: “the Danish currency only has one decimal place and you’ve shown two”, or “we only sell that product when using a credit card”, or “‘Error Code 7’ is not a good error message for users”. They will instinctively know to look for these sort of things in your new app. Not based on requirements (which BA, Developers and TDD would rely on), but based on a wealth of previous knowledge about that company.
So I actually get worried when I DON’T see a Test department. Business Analysts tend to be single project-based – just gathering requirements for a single app and then moving on to the next one. Developers are code/application-based. The only people who are company-based are your Testers.
So give me a good Test Team, soaked in business knowledge, any day!
I think you trivialize the role of a tester; as another commenter has noted, the type of testing that a developer does (with intimate knowledge of and focus on the “how”) is very different from the testing and approach of a dedicated test professional (who generally focuses on the “what” and doesn’t much care about the “how”).
You also talk about bugs/defects as if they’re distinct code entities which are actively “injected” by incompetence or negligence. Most defects in my experience arise from either simple oversights (which are inevitable in any human endeavour), or from simply not solving a complex problem 100% effectively, the first time around, in a very complex domain. They are rarely distinct entities themselves but rather part of the fabric of the whole solution which is not quite right, or (often) perfectly correct in 95% of scenarios but not in the other 5%.
The point that is inescapable in my experience, though, is that defects get picked up and fixed much more quickly with dedicated testers than without. Only in software would someone suggest that not having a dedicated testing process is somehow preferable. Imagine if car manufacturers relied only on assemblers to find defects (test drives are too expensive!).
As with software engineering itself, separation of concerns between creation and validation reaps real benefits here. Yes, it adds expense, but for any non-trivial project with defined business requirements and expectant clients (internal or external), it pays to have someone whose mindset isn’t to defend the code they wrote themselves.
@Mark, I’m not trying to trivialize the role of the tester. As stated, I don’t think that the test department is going anywhere anytime soon.
My point is that there are so many techniques available for quality assurance and for preventing and eliminating defects BEFORE code gets to formal testing. Formal testing can only provide quality control, i.e., check whether any defects are present. I believe this has led to the attitude that we should have one tester for every two developers: a focus on quality control, not quality assurance.
If we apply all the known quality assurance techniques which apply during the requirements, design, and coding phases then we greatly reduce the amount of testing that is necessary. Without quality assurance techniques it is hard to get rid of more than about 80% of the defects; with them you can get to 97% defect removal.
All the techniques you mention are great, and I use most of them myself (as a developer). But no technique can make up for the difference in mindset and approach between a developer and a dedicated tester.
A developer should notice if their calculator adds 2 to 3 correctly to come up with 5. Unit tests can verify that remains the case over time. Code analysis can ensure that rounding and conversion errors are found and fixed easily.
Given that they’ve already built a calculator, not many developers would realize that the requirements actually asked for a word processor. A tester, working from the requirements rather than from the code, would pick that up in the first pass.
You mention design by contract; software development as a whole is such a process, and the contract comes in the form of a set of requirements (whether that be a specification, a list of user stories, acceptance criteria, whatever). However deeply you analyze the code, the requirements are more important. A good developer will do all he/she can to test the “how”, but to objectively test the “what” requires someone whose focus is on the requirements rather than the code. A separation of concerns in the delivery team, effectively: developers develop, testers test. Sure, you can rely on developers to do that job, but they’ll never be able to do it in the same way.
I’m not saying that it’s always worth the extra cost to have testers on the team, but IMO the vast majority of non-trivial projects always benefit from more testing resource.
@Mark, I agree with you. I just want us as an industry to start focusing on preventing and eliminating defects. At this time I feel that there are many organizations that are putting too much time and effort into testing.
Ideally an organization has just a few experienced testers that can help train the developers into using better techniques for testing their code 🙂
Coming from a testing background, I have mixed opinions about your post. I agree that developers should be doing a lot more to test their code before passing it to a dedicated tester. I’ve seen developers build and promote a release, and then the first thing the tester sees when opening the application to begin testing is a big red X application error on the screen… not good…
However, I also think that you are very misinformed as to what tasks a tester actually performs (and I mean a good tester, not a button clicker who follows scripts). I’ve worked with tons and tons of developers and I’ve had many conversations about the misconceptions of the role of a tester with them. I’ve taught them how testers think differently from a developer.
Did you know that there are over 40 ways to test a plain text input field? The most I think I’ve ever got out of a developer before I told them about the various tests was maybe around 3 or 4 different ways to test a field. Most just say “put in text and see if it works…”.
Do developers perform requirement analysis? or do they accept that the requirements that are supplied are 100% correct?
I’ve seen so many ambiguous requirements from customers that developers automatically pick up and start developing from. This isn’t good, as the requirements might be missing vitally important information or might not have covered certain scenarios.
My point is that testers and developers do think differently. Developers have a constructive mindset and are very much “critical” thinkers, where they are given a task and they do that task in the quickest, most efficient way.
But testers have a destructive mindset and are “lateral” thinkers: they look at the function (or requirement, or the system as a whole) and start to question it. “What if I try this, or do this?…”, “What about this scenario?”, “Can I get this to break if I push this data in at this point, whilst doing this?”…
Developers don’t think like this, which is good, as their role is completely different!
@Daniel, I agree with you. I’m taking an extreme position to get people thinking about the issue.
I believe that we need to reduce the size of the testing department and get people who know what they are doing (like yourself :-)) to train the developers on what testing that they need to do.
Not only do they need to be trained but then they need to create tests via TDD that actually do the testing on an automated basis. Code inspection teams should make sure that they are adhering to the testing that needs to be done.
People with formal testing backgrounds need to become trainers and be the last line of defense in the quality control process. The testing department should be staffed with high quality testing resources that know how to check for code coverage, compliance to requirements, performance testing, but they should not be doing unit testing for the developers.
If the article’s techniques are used, the testing process will be shorter and the testing department staffed with fewer people. Software releases will get faster, better, and less expensive: a winning combination!
One thing that I’ve heard about that sounds really interesting is paired dev/test (similar to paired developing and paired testing), where a tester and a developer sit at the one machine while the code is being developed.
The developer talks through the code that he’s writing and describes the functionality that the code represents (teaching the tester some coding skills and back-end knowledge of the system), while the tester details possible scenarios and discusses test ideas (teaching the developer how to think laterally like a tester and also highlighting any scenarios or things that he/she may have missed when writing the code).
This would be a win-win situation for the project! 🙂
@Daniel, there is no doubt that developers need to be educated on testing. Developers need to become aware of all the different code pathways and decisions that are made to get through the pathways. I’ll check with Capers Jones if he has data on paired programming with a developer/tester pair.