Testing is one of the important facets of the software development life cycle. The product is repeatedly tested to ensure its quality meets expectations. In each test cycle, customers and stakeholders (internal or external) want to know whether software quality is improving or degrading, and by how much, so they can make an educated decision. The mechanism of presenting test result data in the correct format for a variety of customers is called "test result reporting".
Stats are historical result sets that can be used to graph a trend. Looking at the trend, the team can easily see whether their efforts are making an impact in the right direction. Because stats are generally derived from test results, it is important that correct information is captured during test result reporting. In this article, we will discuss the major components of a report and look at the solutions provided by the Artos test framework.
Structured vs Unstructured text-based reports
The latest trend suggests that structured text-based reports are more popular; these include HTML, XML, and JSON based reports.
Advantages: Structured text is easy to parse and integrates well with various web components. HTML can produce a beautiful-looking report when combined with CSS.
Disadvantages: Structured text can be heavy. Some HTML reports cannot be emailed in one part. In the case of a system crash, the entire report may be lost, because an incomplete structured report cannot be rendered or opened easily.
Unstructured text-based reports are exactly the opposite: harder to parse, but lightweight, easy to email, and cheap to write frequently. Because they need no closing element, they can be opened even when only partially written.
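The trade-off becomes clear when the same result is rendered both ways. Here is a minimal, language-agnostic sketch (the field names are illustrative, not the Artos report schema):

```python
import json

result = {"name": "LoginTest", "status": "FAIL", "duration_ms": 412}

# Structured: machine-parseable, but the document must be complete to parse.
structured = json.dumps(result)

# Unstructured: one self-contained line per result, still readable even if
# the run crashes halfway through the suite.
unstructured = f"{result['name']} | {result['status']} | {result['duration_ms']} ms"

print(structured)
print(unstructured)
```

A parser needs the whole JSON document to be valid, whereas each plain-text line stands on its own.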
Understanding both requirements, Artos produces both unstructured and structured text-based reports, and users can enable or disable either based on their needs. In addition to the standard reports, Artos produces a professional-looking Extent report that can be given out to the customer.
Pass/Fail/Skip makes sense, but KTF?
All software test reports include pass/fail/skip counts. Most would agree that bug-free code is not a reality with today's complex software stacks. Bugs can be known yet remain unfixed for reasons such as project priority, time constraints, not being worth fixing, requiring re-architecture, or the customer not wanting code changes in order to reduce risk.
A question you may ask is: which category should those test cases fall under?
The PASS and SKIP categories cannot be used for known-to-fail test cases, because those are failures. Assigning them to the FAIL category blurs the line between new failures and known failures; doing so can lead to announcing wrong stats and may result in re-investigating failures that are already understood. The only efficient solution is to assign them their own category, "KTF" (known to fail).
The Artos framework tracks known-to-fail test cases under a separate "KTF" category and alerts the user when those test cases start to pass.
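One way such tracking can work is sketched below. This is a language-agnostic illustration of the idea, not the Artos implementation; the function and status names are ours:

```python
from enum import Enum

class Status(Enum):
    PASS = "PASS"
    FAIL = "FAIL"
    SKIP = "SKIP"
    KTF = "KTF"   # known to fail

def classify(raw_outcome: str, known_to_fail: bool) -> Status:
    """Map a raw outcome to a report category.

    A known-to-fail test that fails is reported as KTF rather than FAIL,
    so genuinely new failures stand out. A known-to-fail test that passes
    is worth flagging: the underlying bug may have been fixed.
    """
    if raw_outcome == "FAIL" and known_to_fail:
        return Status.KTF
    if raw_outcome == "PASS" and known_to_fail:
        print("ALERT: known-to-fail test is now passing; review its status")
        return Status.PASS
    return Status(raw_outcome)
```

With this mapping, the FAIL column of a report contains only failures nobody has seen before.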
Simple segregation is not enough
Presenting a failure count alone does not help stakeholders make a decision. A statement like "100 test-case failures out of 1000 executed" means little until it is supported by the importance of those failures. A result report should enable stakeholders to make an educated decision without diving into technical details. One way to achieve this is to show an importance indicator next to each test case. Going one step further, if the report indicates how many critical, high, or non-critical test cases have failed, it is quick to see whether a release can be made or more work is required.
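A severity-aware summary along those lines could be sketched as follows (the importance labels and the release rule are illustrative assumptions, not Artos behaviour):

```python
from collections import Counter

# Importance level attached to each failed test case (labels are illustrative)
failed = ["CRITICAL", "HIGH", "LOW", "CRITICAL", "LOW"]

summary = Counter(failed)

# Example policy: any critical or high failure blocks the release
release_ready = summary["CRITICAL"] == 0 and summary["HIGH"] == 0

print(f"critical={summary['CRITICAL']} high={summary['HIGH']} low={summary['LOW']}")
print("Release candidate" if release_ready else "Needs work before release")
```

A stakeholder reading "2 critical, 1 high, 2 low" can decide in seconds, without opening a single log.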
Here is how the Artos runner reports the importance indicator.
Graphical indications are better than text
Data presented in graphical format is much easier to digest compared to text or detailed sheets. This is not to say that text is irrelevant; the ideal report contains both to a reasonable level, customized to cater to various customers' needs.
Artos presents results in a graphical format using the professional-looking Extent report.
Besides its standard reports, Artos produces a JUnit XML report, which can be used with Jenkins to produce a build trend.
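A JUnit-style result file can be built with nothing more than the standard library. The sketch below shows the general shape of such a file (class and test names are invented for illustration); Jenkins reads the suite-level counters to plot its trend graph:

```python
import xml.etree.ElementTree as ET

# Build a minimal JUnit-style result document. The testsuite attributes
# (tests/failures) are what trend-plotting tools aggregate per build.
suite = ET.Element("testsuite", name="RegressionSuite",
                   tests="2", failures="1", time="0.7")
ET.SubElement(suite, "testcase", classname="com.example.LoginTest",
              name="validLogin", time="0.4")
failing = ET.SubElement(suite, "testcase", classname="com.example.LoginTest",
                        name="invalidLogin", time="0.3")
ET.SubElement(failing, "failure", message="expected 401, got 200")

xml_report = ET.tostring(suite, encoding="unicode")
print(xml_report)
```

Because the format is a de facto standard, the same file works with Jenkins, GitLab CI, and most other CI dashboards.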
Generics cannot be ignored
Reports should contain the test case/unit name, start time, finish time, duration, test group, snapshots for visual regression, etc. Artos reports cover all of these generic requirements and provide extra features such as bug references, separate tracking for test cases and test units, separate reports per test suite during parallel testing, the test plan, the test-case writer's name, an organized report directory, system information, and more.
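The generic fields above boil down to a simple record per test case. A minimal sketch (field names are ours, not the Artos data model):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TestCaseRecord:
    """Generic fields a test report entry should carry (illustrative)."""
    name: str
    group: str
    start: datetime
    finish: datetime
    bug_reference: str = ""                        # e.g. tracker ID for a KTF test
    snapshots: list = field(default_factory=list)  # image paths for visual regression

    @property
    def duration_s(self) -> float:
        # Duration is derived, so start/finish never disagree with it
        return (self.finish - self.start).total_seconds()
```

Deriving the duration from the timestamps, rather than storing it separately, keeps the three fields consistent by construction.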