Arpit Shah

Test Status Update



Test leads are expected to publish a status update/report iteratively so that management gets insight into how testing is progressing. As soon as management asks for a status update, the test engineer/lead starts looking for information to publish. In most cases, the test engineer/lead will publish the following data and consider the job well done:

  • Total versus executed test count

  • Pass versus failed test count

Let us think for a moment.

People say numbers do not lie, but numbers can convey the wrong information if presented out of context. The information that appears most appropriate and relevant at first glance can sometimes be the information that does the most damage if it is received or understood differently.

In this article, we will discuss why the test status update/report should not include the two most relevant-looking counts listed above, and what can be published instead.


Understand the requirement


You are the test subject matter expert. Your customer (in this case, the manager/management) tells you "WHAT, WHY, and WHEN" they want something, but "HOW" to achieve the goal should be left to the test engineer/lead to define.

Imagine your car is broken. You take your car to a technician.

  • You (the customer) define "what is wrong with the car and when you want it fixed".

  • The technician asks relevant questions and decides "how to fix it" and "how much it will cost".

  • Based on the technician's answers and information, you (the customer) decide the next step.

In the same way, a seasoned test engineer/lead will ask relevant questions to understand why a status update is requested and how it will be used, and then publish the data accordingly. You do not let the customer (in this case, management) define "HOW" because you are the subject matter expert.

Each case can be unique, but for the purposes of this article, we will discuss the most common requirements that test engineers must satisfy. The first step is to understand the WHAT, WHY, and WHEN from management.

WHAT: Need a test status update from the test team

WHY: Test status update/report helps management achieve the following:

  • Conclude if release quality is reasonable.

  • Make informed decisions about the release date and/or customer promises.

  • Timely inform other stakeholders of any delays or serious findings.

  • Identify blockers and have an action plan around them.

  • Identify critical issues/bugs so they can be triaged appropriately.

WHEN: Daily

Now that we have clear requirements, let us validate whether the data discussed above is appropriate for meeting them.


Total versus executed test count


Five well-written test cases find more bugs than a hundred poorly written ones, so test engineers should stay away from discussing/publishing/reporting test case counts. One should be more curious about test coverage than test count. Management may ask for the number of test cases developed/run, but as the test SME, it is your responsibility to educate them.

Test suites are not static, nor should they be. As the product changes/evolves, test cases are improved, modified, added, and/or removed along with it. Thus, reporting the total number of test cases does not give a reliable reference.

If the reported test count is lower than the historical count, it attracts unnecessary questions, which results in the test lead spending precious time explaining why some test cases were removed.

The executed test count becomes meaningless without the total count as a reference, so avoid publishing it as well.

If we conclude that neither the total nor the executed test count should be published, how does the test team show progress? The answer is to publish the percentage of test completion. It is derived from the test counts, but it steers everyone away from focusing on irrelevant information.
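As an illustration, here is a minimal Python sketch of that derivation. The function name and the counts are hypothetical; in practice the counts would come from your test management tool or CI pipeline.

```python
def completion_percentage(executed: int, total: int) -> float:
    """Return test completion as a percentage, guarding against an empty suite."""
    if total == 0:
        return 0.0
    return round(executed / total * 100, 1)

# Hypothetical counts for illustration only.
print(f"Test completion: {completion_percentage(412, 500)}%")
# -> Test completion: 82.4%
```

The report then leads with "82.4% complete" rather than "412 of 500 executed", so nobody is tempted to read meaning into the raw totals.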


Pass versus failed test count


Disclosing pass and fail counts reveals the same information we just decided not to publish in the section above. Then what shall we publish? Just a failure count?

Let us understand this by looking at two test results published by the team:

  1. 2 x test cases failed (out of 1000)

  2. 10 x test cases failed (out of 50)

Management may make release decisions based on the provided results, but those decisions could be wrong because a failure count on its own is not that useful, and providing the total test count further distorts the decision-making process. If management is in a rush to get a release out, they will most likely release based on the first result, because only 2 tests appear to have failed out of 1000.

What ingredient is missing here that encourages the right decision-making process?

  • The missing ingredient/information is called the "Test Value" or "Test Importance Indicator".

Let us add "Test Importance Indicator" into the above example:

  1. 2 x test cases failed (1 x Critical, 1 x High)

  2. 10 x test cases failed (5 x Low, 5 x Medium)

We can all agree that the release decision will be different now.
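For illustration, here is a short Python sketch of how a team might fold the importance indicator into the failure count it reports. The record structure, test IDs, and importance levels are assumptions, not a prescribed format.

```python
from collections import Counter

# Hypothetical failed-test records; real ones would come from your test
# management tool. Each failure carries an importance indicator.
failures = [
    {"id": "TC-101", "importance": "Critical"},
    {"id": "TC-214", "importance": "High"},
]

def failure_summary(failures: list) -> str:
    """Group failures by importance so the raw count carries context."""
    by_importance = Counter(f["importance"] for f in failures)
    breakdown = ", ".join(f"{n} x {level}" for level, n in by_importance.items())
    return f"{len(failures)} x test cases failed ({breakdown})"

print(failure_summary(failures))
# -> 2 x test cases failed (1 x Critical, 1 x High)
```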


Summary


Do not publish

  • Total or executed test count

  • Pass count

Publish

  • Percentage of progress

  • Failure count with Importance Indicator

  • Estimated time the remaining test cases will take

  • Test coverage and/or confidence level (if available)
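To tie these recommendations together, here is a hedged Python sketch of a daily status update assembled from the fields in the "Publish" list above. All field names and values are illustrative; adapt them to whatever your tooling actually tracks.

```python
from dataclasses import dataclass

@dataclass
class TestStatusUpdate:
    completion_pct: float        # percentage of progress
    failures_by_importance: dict # failure count keyed by importance indicator
    hours_remaining: float       # estimated time the remaining tests will take
    coverage_note: str = ""      # coverage / confidence level, if available

    def render(self) -> str:
        fails = ", ".join(
            f"{n} x {level}" for level, n in self.failures_by_importance.items()
        )
        lines = [
            f"Progress: {self.completion_pct}% complete",
            f"Failures: {fails or 'none'}",
            f"Estimated time remaining: {self.hours_remaining} hours",
        ]
        if self.coverage_note:
            lines.append(f"Coverage/confidence: {self.coverage_note}")
        return "\n".join(lines)

# Illustrative values only.
print(TestStatusUpdate(82.4, {"Critical": 1, "High": 1}, 6.5,
                       "core flows covered; payment paths pending").render())
```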

