## page was renamed from QATeam/Specs/ExpandCheckboxCoverage
 * '''Launchpad Entry''': [[https://blueprints.launchpad.net/ubuntu/+spec/lucid-qa-checkbox-integrate-regression-testing/|qa-expand-checkbox-coverage]]
 * '''Created''': 2008-12-05
 * '''Contributors''': sbeattie, apulido, schwuk, cr3
 * '''Packages affected''': checkbox

== Summary ==
Checkbox would benefit from integrating the QA regression testing suite.

== Rationale ==
This integration should make it more appealing to run the QA regression testing suite regularly.

== Use Cases ==
 * Provide a larger pool of tests for the security and SRU validation teams to check for general regressions when preparing updates to -security and -updates.
 * Create a wider base of tests for platform team milestone testing and for certification testing.

== Assumptions ==

== Design ==
Integrate test cases from other collections.

Questions:
 * Should other test collections be pulled in wholesale, or should we cherry-pick?
 * How should the tests be organised?
 * How will they be maintained?

== Implementation ==
Candidate suites:
 * Security team collection
 * Permission checker (to avoid [[https://bugs.edge.launchpad.net/ubuntu/+source/ubiquity/+bug/288479|#288479]] and [[https://bugs.edge.launchpad.net/ubuntu/+source/ubiquity/+bug/290798|#290798]])

== BoF agenda and discussion ==
qa-regression-testing:
 * Lives in a bzr branch; it is not a package at this point in time.
 * It mostly covers server packages and command-line applications.
 * It should be incorporated for testing proposed packages and development releases.

Organization:
 * Granularity: be able to run specific tests individually (for example, the fix for a single bug), then run the full test suite.

Have a command-line interface to Checkbox so that specific tests or suites of tests can be run.

Cherry-picking tests is expensive because it amounts to forking another project, so just start by running all of the tests.
 * If a test fails, that failure should be recorded and future failures of that test should be expected.
 * Create a baseline on the first run of the test suite: if 45 out of 50 tests passed, then on the next run you expect at least 45 passes, and anything less than 45 is a problem (a minimal sketch of this baseline check appears at the end of the page).

We should not write tests that have already been written; we should leverage upstream tests and trust that those tests are good.
 * Extending them to cover more situations is OK, though, right?

----
CategorySpec
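
The baseline idea discussed above can be illustrated with a short sketch. This is only an illustration, not part of Checkbox or qa-regression-testing; the baseline file name and the function are hypothetical and exist only for this example.

{{{#!python
# Hypothetical sketch of the baseline idea: record the pass count from the
# first full run, then flag any later run that passes fewer tests.
import json
import os

BASELINE_FILE = "baseline.json"  # assumed location, not part of checkbox


def compare_with_baseline(results):
    """results: mapping of test name -> True (pass) / False (fail)."""
    passed = sum(1 for ok in results.values() if ok)
    if not os.path.exists(BASELINE_FILE):
        # First run establishes the baseline (e.g. 45 of 50 passing).
        with open(BASELINE_FILE, "w") as f:
            json.dump({"passed": passed, "total": len(results)}, f)
        return True
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    # Anything below the recorded pass count is treated as a regression.
    return passed >= baseline["passed"]
}}}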