[liblouis-liblouisxml] Re: Why is there an error in this test run?

  • From: "Michael Whapples" <dmarc-noreply@xxxxxxxxxxxxx> (Redacted sender "mwhapples@xxxxxxx" for DMARC)
  • To: liblouis-liblouisxml@xxxxxxxxxxxxx
  • Date: Fri, 06 Jun 2014 18:17:28 +0100

One point I would raise about expected failing tests: even with a bug tracker it can be useful, because if an expected failure does not happen, then some other code change (maybe totally unintentional) has fixed an existing bug, and one could then go into the tracker and mark the bug as fixed with an unknown solution.


Would this test system report expected failures actually passing?

Michael Whapples
On 06/06/2014 17:29, Mesar Hameed wrote:
On Fri 06/06/14,15:58, Keith Creasy wrote:
If you don't mind, can we talk about the test suite itself? The summary really 
doesn't make sense.

Tests run: 13, Failures: 1, Expected failures: 7, Errors: 0

What does this really mean? The only part that really adds up is that 13 tests were run. 
It says 1 failed, and I guess that much makes sense. Then it says "Expected 
failures: 7" and I don't understand what that means. Does it mean that 7 failures 
were expected, or that there were actually 7 failures that were expected?
I understand this to mean:
13 tests in total
1 new failure
7 other failures that have been marked as expected because no one has
had the time to fix them in the code.
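For context, automake's test harness gets these categories from how the tests are declared: tests listed in XFAIL_TESTS are counted under "Expected failures" instead of "Failures". A minimal Makefile.am sketch (the test names here are hypothetical, not the actual liblouisutdml ones):

```makefile
# Makefile.am fragment (hypothetical test names).
# "make check" runs everything in TESTS; anything also listed in
# XFAIL_TESTS is reported as an expected failure rather than a failure.
TESTS = table1.test table2.test braille_math.test
XFAIL_TESTS = braille_math.test
```

On Michael's question: in automake's harness, an expected failure that starts passing is reported as an unexpected pass (XPASS) and makes "make check" complain, so a bug fixed by accident should indeed be noticed.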

FAIL: run_test_suite.sh
=============================================
1 of 3 tests failed
Then, finally, it reports that 1 of 3 tests failed, which doesn't really add up.
This is not ideal, but when automake is set up in a recursive manner,
tests in different directories are not tallied up; the counts are only
per directory.
As make comes back up the tree, only one of the subdirectories had failing
tests, hence it reports 1 of 3 for that level.
This is something that is also visible when running make check in the
liblouis source tree.
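To sketch why the counts are per directory (directory names here are hypothetical): with a recursive setup, each subdirectory runs its own "make check" and prints its own summary, and the parent never sums them.

```makefile
# Top-level Makefile.am fragment (hypothetical layout).
# "make check" recurses into each SUBDIR, which prints its own
# "Tests run: ..." summary; the totals are never combined, so the
# top level can only report per-directory results like "1 of 3".
SUBDIRS = liblouisutdml tools tests
```

A single non-recursive Makefile.am that lists all tests in one TESTS variable would give one combined summary instead.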

This and more is described in the "Recursive Make Considered Harmful"
paper:
http://miller.emu.id.au/pmiller/books/rmch/


So this is yet another task that someone needs to set aside a chunk of
time for to get cleaned up.

Most importantly, no test failures should ever be expected. If we
perform an operation with known input we should always get known
output. If we don't get the output we expect then either we have to
change what we expect the output to be or fix the code to give us the
output we expect. The goal should always be that all tests pass and no
code is committed to the main repository until they do.
It all depends on how a particular project decides to track bugs.
If it's just done through a mailing list, then adding these sorts of
expected failures is acceptable, because it means that a particular bug
has been identified and investigated but not found important enough to fix.

If the project uses a bug tracker, then one could enforce that master
always has no errors, while at the same time not forgetting old bugs.

Is anyone in love with the existing tests?
I am not involved with utdml so have not looked at them.
But for liblouis I have plans to make the harness tests
more readable in the near term.

Mesar

For a description of the software, to download it and links to
project pages go to http://www.abilitiessoft.com
