Sunday, December 14, 2014

Writing Test Cases

Basics of Writing Test Cases:

#1. If test scenarios are all about “what we are going to test” on the AUT, then test cases are all about “how we are going to test a requirement”.

For example, if the test scenario is “Validate the Admin login functionality”, this would yield 3 test cases (or conditions): login successful, login unsuccessful when an incorrect username is entered, and login unsuccessful when an incorrect password is entered. Each test case would in turn have steps that address how we can check whether a particular test condition is satisfied or not.
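The three test conditions above can be sketched as checks against a stubbed login function (a minimal sketch; the credentials and the `login` stub are hypothetical, standing in for the real AUT):

```python
# Hypothetical stand-in for the AUT's Admin login.
VALID_USER, VALID_PASS = "admin", "s3cret"  # made-up credentials

def login(username, password):
    """Stub for the application's login; returns True on success."""
    return username == VALID_USER and password == VALID_PASS

# TC1: login successful with correct credentials
assert login("admin", "s3cret") is True
# TC2: login unsuccessful when an incorrect username is entered
assert login("wrong", "s3cret") is False
# TC3: login unsuccessful when an incorrect password is entered
assert login("admin", "wrong") is False
```

Each assertion corresponds to one test case derived from the single scenario.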

#2. The inputs for creating the test case document are the FRD, the test scenarios created in the earlier step, and any other reference documents, if present.

#3. The test case documentation is an important deliverable by the QA team and, when done, is shared with the BA, PM, and other teams for their feedback.

#4. Work is divided among the team members, and each member is responsible for creating test cases for a certain module or a part of one.

#5. Just like with the test scenarios, before we begin test case documentation, a common template has to be agreed upon. Practically anything can be used to create test cases. The two most commonly used choices are MS Excel and MS Word.

#6. The MS Word template looks something like this:

#7. The Excel template could look like the following:

#8. From the above two templates it can be observed that the fields (or components) that make up a test case are the same; the only difference is the way in which they are organized.
So, as long as there is a field for each type of information to be included in a test, the format of the template does not matter. However, my personal favorite happens to be the Excel sheet, because it is easy to expand, collapse, sort, etc. But again, choose any format that works best for you.

Below is a list; please check out the links provided for more information on these methods.
  • Boundary value analysis
  • Equivalence partitioning
  • Error guessing - This is a very simple method and relies on a tester’s intuition. An example is: Say there is a date field on a page. The requirements are going to specify that a valid date is to be accepted by this field. Now, a tester can try “Feb 30” as a date- because as far as the numbers are concerned, it is a valid input, but February is a month that never has 30 days in it- so an invalid input.
  • State transition diagrams
  • Decision tables
Using the above techniques and following the general test case creation process, we create a set of test cases that will effectively test the application at hand.
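The first two techniques in the list can be sketched in a few lines (a minimal sketch; the age field and its 18–60 valid range are hypothetical examples, not from any requirement above):

```python
# Hypothetical field under test: an age input that accepts 18..60 inclusive.
def is_valid_age(age):
    """Return True if age is within the accepted range (18-60)."""
    return 18 <= age <= 60

# Boundary value analysis: test at and just around each boundary.
boundary_cases = {17: False, 18: True, 19: True, 59: True, 60: True, 61: False}

# Equivalence partitioning: one representative value per partition
# (below range, inside range, above range).
partition_cases = {5: False, 35: True, 99: False}

for value, expected in {**boundary_cases, **partition_cases}.items():
    assert is_valid_age(value) == expected, value
```

The point is that six boundary values plus one representative per partition give strong coverage without enumerating every possible age.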

Thursday, December 11, 2014

Telling programmers to test their own code?

Would you trust a programmer to test your application?  It’s like asking a fox to guard the henhouse, right?

Well, sometimes you don’t have a choice, or you do but that choice is to release the application untested.

As part of an Agile testing consulting engagement I am doing at a friend’s company, yesterday I started providing short training sessions for programmers who need to learn how to test better.  It’s not that this company doesn’t have testers, they have good testers!  But as many other agile teams they have much more testing tasks than available testers, and they want programmers to take part in at least some of their testing tasks.

Programmers are not good at testing; or they might be, but they don’t want to do it properly because the code is their baby.

A couple of months ago I wrote a post explaining why I think programmers are poor testers, and I still think that in a sense it is like asking a dog to fly. On the other hand, I also believe that with some good intentions, hard work, and perseverance you can teach a dog to jump far enough to clear large obstacles.

In the same way, you will not be able to magically transform programmers into good testers overnight, but you can start by working with them on their weaknesses and then teaching them some simple and effective testing techniques.  With good intentions, a little practice, some hand-holding, and constant feedback you will be able to get some good testing help in your project.

Step 1 – understand the limitations and weaknesses of a programmer

I started my session yesterday by going over the points I listed in my earlier blog that make programmers “less-than-perfect-testers”:

- Parental feelings towards their own code.
- Placing the focus mainly on positive scenarios, instead of actively looking for the bugs.
- Tendency to look at a “complex problem” as a collection of “smaller, simpler, and isolated cases”.
- Less end-to-end or user-perspective oriented.
- Less experience and knowledge of the common bugs and application pitfalls.

It made a big difference during the session to not only list the weaknesses but talk about why each of them is naturally present in programmers due to the nature of their work, their knowledge and training.

Step 2 – how to plan tests
I’ve seen that many (most?) programmers have the perception that testing requires little or no planning.

Maybe we testers are guilty of this misconception, since we don’t get them involved in our test planning process as much as we should, but the truth of the matter is that when we ask them to test they automatically grab a mouse and start pressing buttons without much consideration or thought.

Test planning is a cardinal aspect of good testing, so I gave them a couple of ideas and principles for planning:

1.  DON’T TEST YOUR OWN CODE!
When as a team they are asked to test, they should make sure to divide the tasks so that each of them is testing the code developed by other programmers as much as possible.

2.  Work with your testing team to create test sets.
This specific team is using PractiTest (the hosted QA and Test Management platform my company is developing), but it can be any other tool or format (even word and excel!).  They should sit with their testers and define what test cases need to be run, reusing the testing scripts already available in their repositories.

3.  Expand your scenarios by making “Testing Lists”
- Features that were created or modified (directly or indirectly)
- User profiles and scenarios to be verified
- Different environments / configurations / datasets to be tested

The use of these lists is two-fold.
They help you get a better idea of what you want to test while you are running your manual test scripts, and they also serve as a verification list to consult towards the end of the tests, when you are looking for additional ideas or when you want to make sure you are not missing anything of importance.

4.  Testing Heuristics – SFDPOT
The use of (good) heuristics greatly improves the quality of your testing.
I provided the programmers with the heuristic I learned first and that still helps me to this day.  I read about SFDPOT from James Bach some years ago – you can check one of the sources for it on James’s site.

SFDPOT stands for:
- Structure (what the product is)
- Function (what the product does)
- Data (what it processes)
- Platform (what it depends upon)
- Operations (how it will be used)
- Time (when it will be used)
There are other heuristics and mnemonics you can find on the Internet…

Step 3 – what to do when running tests
We talked a lot about tips to help perform good testing sessions.
1. Have a notebook handy.
In it you can take notes such as testing ideas you will want to try later, bugs you ran into and want to report later (so as not to cut your testing train of thought), etc.

2.  Work with extreme data.
- Big files vs. small files
- Equivalence input classes [ -10 ; 0 ; 1 ; 10,000,000 ; 0.5 ; not-a-number ]
- Dates: yesterday, now, 10 years from now
- etc.
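The extreme-data idea can be captured as a quick parameterized check (a sketch only; the `accepts_amount` function and its 0 to 1,000,000 valid range are hypothetical):

```python
# Hypothetical numeric field that should accept values in [0, 1,000,000].
def accepts_amount(text):
    """Return True if the input parses to a number inside the valid range."""
    try:
        value = float(text)
    except ValueError:
        return False  # not-a-number inputs are rejected
    return 0 <= value <= 1_000_000

# Extreme and equivalence-class inputs, echoing the list above.
cases = {
    "-10": False,           # below range
    "0": True,              # lower boundary
    "1": True,              # just inside
    "10000000": False,      # above range
    "0.5": True,            # fractional value
    "not-a-number": False,  # invalid type
}

for text, expected in cases.items():
    assert accepts_amount(text) == expected, text
```

One small table of extreme values exercises every class of input the field can receive.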

3.  Think about negative scenarios.
- How would Mr. Bean use your software?
- How would a hacker try to exploit the system? Or, if you prefer, how would Jerry the mouse sneak into the house while you play Tom?
- What would happen if…? (blackout, running out of space, exceptions, etc.)
- What if your 2-year-old hijacked the keyboard in the middle of an operation?
- etc.

4.  Focus & Defocus:
I used this technique to explain to them that during their testing process they need to make a conscious effort to always look at the bigger picture (application, system, process) and not only focus on the specific function they were testing.

5.  Fight Inattentional Blindness:
I used the famous video of the kids passing the basketballs to explain the concept of Inattentional Blindness, and it worked great!
We were 8 people in the room, and only I had seen the video before.  Out of the 7 participants one is currently a tester, another one is a former tester turned programmer, the rest are “regular programmers”.
The cool thing is that only the 2 testers saw the Gorilla the first time…  talk about making a point!

Step 4 – what to do when (you think) you are done testing
We talked about how even when you think you are done testing you should always make sure there is nothing else that should be tested, and what techniques they can use in order to find these additional places:

1.  Walking the dog
Take a break from your tests, do another task for 30–60 minutes, and then return to review the tests you did and what you found.  This break usually helps refresh the mind and bring up more ideas.

2.  Doing a walk-through session to review the tests you did.
Most of the time you will be able to get more ideas as you explain to your peers about tests you just did.
The funny part is that many of these ideas will come from yourself and the things you think about when you are explaining your tests to others out loud.

3.  Ask for ideas from other developers or testers.
Simple as it sounds: go to others in your team and ask them to pitch you ideas of stuff they think you could test.  90% of the stuff you will already have tested, but the other 10% might prove useful!

In the end it is a question of mindset and motivation
One of the things I like most about agile teams (and many non-agile but smart development teams) is that they define quality to be the responsibility of the whole team, and not only of the testers running the test tasks at the end of the process.



My final piece of advice to the group was that everything starts with them, and with their understanding that testing is not a trivial task and definitely not something to rush through in order to finish their tasks ahead of time.  What’s more, I am sure that once these guys start testing better and gaining a testing perspective on their application, they will also start developing better software.

Why can’t developers be good testers?

I’ve been trying to explain to a couple of Agile Teams why developers are usually not good testers; so after working hard to remember all the reasons I could think of (based on my experience so far) I decided to put together a short list and post it.

Don’t get me wrong, I think developers should take part in the testing tasks, especially on Agile teams, but I am also aware of their limitations and the cognitive blind spots that tend to harm their testing; and as has been said before, the first step to improving your weaknesses is to understand you have them.

Why do developers (usually) suck at testing?

1. “Parental feelings” towards their code
Developers are emotionally linked to the stuff they write.  It may sound silly but it is hard to be objective towards the stuff you create.
For example, I know my kids are not perfect, and still I am sure I would have a hard time if someone came to me and started criticizing them in any way (after all, they are perfect, right? :) ).

2. Focus on the “Positive Paths”
Development work is based on taking positive scenarios and enabling them on the product.  Most of their efforts are concentrated on how to make things work right, effectively, efficiently, etc.  The mental switch required to move them from a positive/building mind-set to a negative/what-can-go-wrong mind-set is not trivial and very hard to achieve in a short time.

3. Work based on the principle of simplifying complex scenarios
One of the basic things a tester does as part of his work is to look for complex scenarios (e.g. do multiple actions simultaneously, or make an operation over and over again, etc) in order to break the system and find the bugs.  So we basically take a simple thing and look for ways in which we can complicate it.

On the other hand our developer counterparts are trained into taking a complex process or project and breaking it down into the smallest possible components that will allow them to create a solution (I still remember my shock in college the first time I understood that all a computer could do was work with AND, OR, NOT, NAND, NOR, XOR and XNOR operations on Zeros & Ones).

4. Inability to catch small things in big pictures
I can’t really explain the reason behind this one, but I have seen it many times in my testing lifetime.
One of the side-effects of becoming a good tester is developing a sense to (almost unconsciously) detect what “doesn’t fit” in the picture.  The best way to describe it is the feeling one gets when something seems off but we just can’t put our finger on it; then, by applying some systematic processes, we are able to find the specific problem.

I had a developer once tell me that good testers can “smell bugs”, and maybe he was not very far from the truth.

5. Lack of end-to-end & real-user perspective
Due to the nature of their tasks, most developers concentrate on a single component or feature in their product, while maintaining only a vague idea of how users work with the end-to-end system.
Testers need to have a much broader perspective of the product; we are required to understand and test it as a whole, using techniques that allow us to simulate the way users will eventually work in the real world.

6. Less experience with common bugs & application pitfalls

Again, something that comes with time and experience is knowledge of the common bugs and application pitfalls.  Obviously, as a developer accumulates KLOCs on his keyboard he will also get to meet many bugs and pitfalls, but as testers we gain this experience faster and in a deeper sense.

An experienced tester sees a form and automatically starts thinking about the common bugs and failures he may find in it and starts testing for them.

My bottom line
It’s not that they don’t want to do it; developers simply are not able to test in the same way we testers do.  This doesn’t mean they cannot help with testing, and in some specific areas they will be able to do it even better than we do, but before they start it may help if they map their testing blind spots in a way that allows them to compensate for them.

Developer testing adds a lot of value to the general testing process…  I am even thinking as I write this about a future post on the subject of the added value gained from pairing developers and testers.

Don’t write a single test until you know how to do it!

We've all heard of the “Infinite Monkey Theorem”, whereby a monkey hitting keys on a keyboard (typewriter) at random will eventually produce even the complete works of Shakespeare.

But the problem is that it would take an infinite number of years, and, maybe more importantly, the work would be buried under so much junk and gibberish that it would be impossible to find.

What’s the status of your test cases?

Now take a look at your test repository.  What do you see?
Is there anything in common with the work of the monkey above?

I am not implying your team is made up of chimps typing at random – even though if it is, please take a picture of them and send it back to me, I promise to publish it!!!

But something I see a lot in the context of my work with PractiTest’s customers is that people tend to concentrate on the quantity of their test cases, and fail to put enough efforts on the quality of their resources.

The result is a repository with a lot more test cases than it should actually have: many of them covering the same feature too many times, others describing functionality that was modified a number of releases back, and some that have not been run in years because they are not really important anymore.

I don’t think this comes from incompetence, but I do believe a big factor is the fact that it is easier to create a new test than to find an existing case (or cases) and modify it accordingly.

Another cause is the fact that it is a lot easier to measure the number of tests than to measure the quality of your testing coverage (and the quality of the individual test cases themselves).

Process and rules of thumb for writing test cases

A good way of stopping problems of this type is to have some process and rules of thumb in place to help testers write better cases.

Some examples of rules of thumb you can define with the team:
- Set upper and lower limits for the number of steps per test case.
- Set a maximum number of test cases per feature or functional area.
- Work with modular test cases that you can use as building blocks for your complex test scenarios.
- Have separate test cases for positive and negative scenarios.
- Use attachments and spreadsheets to decouple testing data from testing processes.
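The last rule of thumb, decoupling test data from the test procedure, can be sketched like this (a minimal sketch; the shipping-cost function and the CSV columns are hypothetical, standing in for a spreadsheet attached to the test case):

```python
import csv
import io

# Hypothetical system under test: shipping costs 5 up to 1 kg,
# then 2 extra per additional kg.
def shipping_cost(weight_kg):
    return 5 if weight_kg <= 1 else 5 + 2 * (weight_kg - 1)

# Test data lives in a spreadsheet/CSV, separate from the test steps.
test_data = io.StringIO("""weight_kg,expected_cost
0.5,5
1,5
2,7
5,13
""")

# One generic test procedure consumes every data row,
# so adding a scenario means adding a row, not a new test case.
for row in csv.DictReader(test_data):
    actual = shipping_cost(float(row["weight_kg"]))
    assert actual == float(row["expected_cost"]), row
```

With this split, testers maintain the data file while the stepwise procedure stays untouched.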
Regarding process, this is a little harder, but it is also a lot more effective in the long run.  Some examples might be:

- Before starting to write test cases, have a small team create the test list and its scope; only then fill out the steps.
- Break your test repository into areas, assign each area to a tester in your team, and make him/her responsible for all aspects of its maintenance.
- Hold peer test-creation or review sessions.
- Visit and validate “old” test cases and tests that have not run in the last 6 months.
- Review tests that have not found any issues in the last year.
Create visibility and accountability into your test management system

A big factor that will help or hamper the way you maintain your test cases is how you manage and organize them.  You can obviously do this in your file system repository or using something like Dropbox to share these resources with your team.

But after your tests grow in number or your team expands above a certain level, it makes more sense to find a professional test management system that will help you organize your test cases and gain good visibility and control over them.

I don’t want to make this a marketing post, but I do recommend you take a look at PractiTest and the way we provide excellent visibility into your test cases with the use of our exclusive Hierarchical Filtering system.

Other than creating visibility, make sure there is accountability for each area of your testing repository.  As I wrote above, it is important to assign testing areas and their cases to specific players in your team.

Give them tasks (and the time) to update and maintain their tests, both during the preparation stages but also during the regular testing process.  You should not expect people to maintain their test cases during the frenzy and craziness of your regular testing cycle.

Monday, December 8, 2014

Rebuilding Your Agile QA Strategy and Plan

Here are three problem patterns I’ll speak about:

- We want to do Agile development but it keeps not going well.
- We want to do good QA with Agile, but how do we do that well?
- The plans we have, which map our sprints to a timeline, keep not working.
I’ll talk about some possible root causes to the above.

I’ll often ask what your goals are as an organization and how you are applying a QA strategy to ensure those goals.  Then I’ll ask: how do you know it’s a good strategy?  A good conversation talks about measurement, evaluation, and action plans.

I’ll pose these scenarios when I am interviewing technical people.  The general landscape to a good answer is that a good QA strategy can answer the conceptual questions about who, what, and how you are testing your systems.  The way that you build a QA strategy, therefore, is to build a conceptual framework for how you think about QA strategy.  Any conceptual framework is better than none, and I will offer a framework here.

So, strategy and tactics are different at both the conceptual level and the level of applying them.

Conceptually, a strategy is an emergent accumulation of the answers to the following 5 questions:

1. Who are we?
2. What are our ultimate goals here?  (Note: goals answer the question why, and are tightly coupled with who you are.  From this conversation you get to know your Values.)
3. What do we need to do to achieve those goals?
4. How will we do it?  (This answer, together with your Values, forms your Mission.)
5. What [people, procedures, technology, human] resources will we employ in doing it?
A QA strategy traces the big picture of the system under test – its functional modules, test data, operational correctness, and interdependencies – into priorities.  Remember: strategy shows priorities.  A QA strategy:

- combines values and mission
- to produce a workable plan that
- can be used as a measuring stick to
- see how you are doing at any point along the way in order to
- feed expected-value decisions that balance [cost, risk, and reward]
Your QA strategy should identify the anatomy of your system under test, the users, user personas perhaps, user roles probably, how users interact with the system, how the system processes input to produce output, timing diagrams, physical architecture decisions, and recommendations on end to end testing as well as module specific testing.  A strategy must be a living document, so keep it short, use pictures whenever possible, and avoid acronyms and technical identifiers.

A QA Plan sequences the resources and actions needed to accomplish the Mission.  Remember: Plans show sequencing.  A QA Plan:

- Is produced by strategic expected-value decisions which
- Contain measurable, workable, deliverable, verifiable goals which are
- Broken down by the roles needed to accomplish them and
- Assigned to responsible people in those roles who can own the goal
Your QA plan should show who is doing what work and when they’ll probably be done.  Your plan shows dependencies and provides solutions for them.  Your plan is tied to a staffing plan, explains why you need the people you need, and includes possible plays around uncertainties.

The art of good QA management is practicing the above, noting your failures and determining what was missing, fixing it, and doing it again.  Good luck!

What’s your testing approach?

Approach you should follow :

Too often we complain about our daily routine. We wake up most every day of the week and go through the same motions at home and at work. Our mood might change, so perhaps we approach our daily tasks differently, but the routine remains the same. However, this is not a bad thing!

You don’t realize how great routine can be until it is gone or taken away, and I don’t mean because you’ve gone on vacation. We are creatures of habit, and rightly so. Routine gives us structure and makes us feel safe, confident, and comfortable. The same applies to our work practices and the routine approaches we use on the job.

This got me thinking about our different testing approaches as testers, and how they dictate the routine of our work. While we probably vary in our approaches, we all have the same professional goal – to do the best job possible, “leave no bug unturned”. So what is your testing approach?

For instance, when you get a new feature you:
1. Go over the documentation for it
2. Run a short exploratory testing session to get to know it first hand
3. Create some high level or low level testing scenarios for it
4. Run your scenarios while taking notes on how to improve them
5. Run a short session with your team to give your feedback on the feature, its stability and some functional improvement ideas