[haiku-qa] Re: More discussion about exploratory testing...?

  • From: Dennis Catt <dcatt.haiku@xxxxxxxxx>
  • To: haiku-qa@xxxxxxxxxxxxx
  • Date: Tue, 28 Aug 2012 21:25:57 -0400

Ryan,

> So I don't know if we need to get all that formal, but it would be
> good to document what works, what was tested, and then log bugs for
> what doesn't work (or to note existing tickets for those bugs.)

Well, one approach to that is just going through all the current
functionality and writing formal test cases around it, using the Haiku
User Guide as a reference when/where necessary to keep the test cases
themselves lean and mean.  I work in an agile environment, and the
requirements and expected behavior are captured either in code, story
docs, and/or test cases, usually based on knowledge gained through IRC,
word of mouth, some wiki, an email, and/or maybe (if you're lucky) the
description of a Trac (or name your ticketing system here) ticket that
gets lost in the haystack, fast becoming a little needle.

I have nothing against writing formal test cases (I do it nearly every
day), but it might take time to piece together a regression test suite,
as we'll need to break up the work amongst the folks involved to get it
done.  Obviously we won't have anything ready to execute during the
testing cycle for Alpha 4; maybe we'll have something in time for Beta
1.

What I am afraid of is putting a lot of effort into this and then not
actually taking advantage of it, or the testing process ending up as an
afterthought to the development and release processes.  I don't want to
be some bottleneck, but I would like to help establish a more formal
testing process that could help sign off on an RC worthy of a planned
release.  I'd also like to help clean up the archive of defects once
they've either been fixed or are no longer relevant.

Of course, we can do a mishmash of formally scripted (albeit manual)
testing and exploratory testing (which usually catches the juicy bugs),
and automate more of the testing over time.

I think we should (based on James' recommendations)...

1. develop a formal test case format (to standardize on)
2. figure out how to generate test reports from the test results
submitted and gathered
3. develop a defect ticket template for use in reporting defects
consistently (Trac makes this easy)
4. set up a central method for querying defects (e.g., the Trac wiki
makes this easy using custom queries; see the rough sketch below)
5. anything else...?
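
To make items 2 and 4 a bit more concrete, here's a rough sketch in
Python of the kind of thing I mean: pull the open defects out of a Trac
query's CSV export and roll them up into a trivial per-component
summary.  The query URL, parameters, and column names below are just my
guesses and would need to be checked against our Trac's query page
before anyone trusts the output.

  # Rough sketch only -- the query parameters and CSV column names are
  # assumptions; check them against the Trac query page first.
  import csv
  import io
  import urllib.request
  from collections import Counter

  # Hypothetical query URL: Trac's query page can export results as CSV
  # via format=csv; adjust the status/col/max values as needed.
  QUERY_URL = (
      "https://dev.haiku-os.org/query"
      "?status=new&status=assigned&status=reopened"
      "&col=id&col=summary&col=component&col=status"
      "&format=csv&max=500"
  )

  def fetch_open_defects(url=QUERY_URL):
      """Download the CSV export of a Trac query, return rows as dicts."""
      with urllib.request.urlopen(url) as response:
          text = response.read().decode("utf-8", errors="replace")
      return list(csv.DictReader(io.StringIO(text)))

  def summarize_by_component(rows):
      """Count open tickets per component -- a bare-bones report to start from."""
      return Counter(row.get("component", "unknown") for row in rows)

  if __name__ == "__main__":
      rows = fetch_open_defects()
      for component, count in summarize_by_component(rows).most_common():
          print("%5d  %s" % (count, component))

Something like that could run on a schedule and dump its output to a
wiki page, but again, treat it as a starting point, not a finished tool.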

More on this later!

Regards...

Dennis
