DistroQualityMetrics

bdmurray's notes

Are we responding effectively and appropriately to bug reports? Are we doing the right thing?

Are we replying within a good time frame?

  • mdz has mentioned the time to first response as a metric before
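Time to first response could be computed from bug timestamps. A minimal sketch, using made-up timestamps rather than real Launchpad data:

```python
from datetime import datetime
from statistics import median

def time_to_first_response(bugs):
    """Return response times in hours for bugs that got a reply.

    `bugs` is a list of (reported_at, first_response_at) pairs;
    first_response_at is None when nobody has replied yet.
    """
    return [
        (responded - reported).total_seconds() / 3600
        for reported, responded in bugs
        if responded is not None
    ]

# Hypothetical sample data, for illustration only.
sample = [
    (datetime(2009, 6, 1, 9, 0), datetime(2009, 6, 1, 15, 0)),  # 6 hours
    (datetime(2009, 6, 2, 9, 0), datetime(2009, 6, 3, 9, 0)),   # 24 hours
    (datetime(2009, 6, 3, 9, 0), None),                         # no reply yet
]
times = time_to_first_response(sample)
print(median(times))  # 15.0
```

Reporting the median rather than the mean keeps a few very old unanswered bugs from dominating the metric.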

"QA Team should be a mirror for the platform team" - mdz

How many known regressions did we ship in 9.04? How are we doing with 9.10 in comparison?

It might also be interesting to look at the number of SRUs done on a per-release basis

  • however this should only compare LTS to LTS and non-LTS to non-LTS

How long does it take to fix the hottest bugs?

  • perhaps use a database query to find the 'hottest' / highest-gravity bugs and look at their time to fix
  • or look at the karmic fixes report and add in gravity there

mdz is also concerned about backwater packages that don't get watched but are important to the distribution

  • consider querying for packages without subscribers, for example net-tools - https://edge.launchpad.net/ubuntu/+source/net-tools

    • dhcp is also one, but it is only in dapper
  • perhaps also look at whether the packages are seeded as a way of determining their importance to the distro
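Cross-referencing subscriber counts with the seeds would flag important-but-unwatched packages. A minimal sketch; the package names, counts, and seed membership here are hypothetical, and real data would come from Launchpad and the seed lists:

```python
# Hypothetical inputs: bug-subscriber counts per source package,
# and the set of packages that appear on a seed.
subscribers = {"net-tools": 0, "dhcp3": 0, "firefox": 42}
seeded = {"net-tools", "firefox"}

# Seeded packages with no bug subscribers are candidates for attention.
neglected = sorted(pkg for pkg, count in subscribers.items()
                   if count == 0 and pkg in seeded)
print(neglected)  # ['net-tools']
```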

Bug gravity should be modified to add points for bug importance
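A sketch of how importance points might be folded into gravity. The base formula and every point value below are assumptions for illustration, not the real gravity heuristic:

```python
# Hypothetical point values per Launchpad importance level.
IMPORTANCE_POINTS = {"Critical": 8, "High": 4, "Medium": 2, "Low": 1,
                     "Wishlist": 0, "Undecided": 0}

def gravity(duplicates, subscribers, affected, importance):
    """Assumed base score plus a bonus for the bug's importance."""
    base = 2 * duplicates + subscribers + affected
    return base + IMPORTANCE_POINTS.get(importance, 0)

print(gravity(duplicates=3, subscribers=5, affected=10, importance="High"))  # 25
```

Keeping the importance bonus as a separate additive term makes it easy to retune the weights without disturbing the existing scores.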

Add importance to the -fixes reports too, along with counts of bugs fixed by importance

Bug gravity should be added to team assigned lists for engineering managers

Take a look at a representative sample of bug reports to see how things went with them

  • Did the right thing happen with them?
  • Should they really be in the state they are in?
  • How many are apport-bug / apport-crash reports?
    • Does this have an effect on their triaging? Are they less likely to be incomplete?
  • How quickly did the first person respond?
  • If the bug is New, speculate as to why it hasn't been looked at / triaged yet.

Try to estimate the time it takes to fix a bug report

  • break this down into bugs w/ patches, bugs w/ debdiffs, and bugs w/ branches
    • are different kinds fixed faster?
  • how many get fixed on a per release basis?
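The breakdown above could be estimated by grouping fix times by the kind of fix artifact. A minimal sketch with invented figures, not real measurements:

```python
from collections import defaultdict

def mean_fix_days_by_kind(fixed_bugs):
    """Average days-to-fix per fix kind.

    `fixed_bugs` is a list of (kind, days_to_fix) pairs, where kind
    is 'patch', 'debdiff', or 'branch'.
    """
    by_kind = defaultdict(list)
    for kind, days in fixed_bugs:
        by_kind[kind].append(days)
    return {kind: sum(days) / len(days) for kind, days in by_kind.items()}

# Hypothetical figures, for illustration only.
sample = [("patch", 12), ("patch", 20), ("debdiff", 5), ("branch", 3)]
print(mean_fix_days_by_kind(sample))  # {'patch': 16.0, 'debdiff': 5.0, 'branch': 3.0}
```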

mdz's notes

These are the key areas for measuring and reporting that I mentioned on the call today:

  • regressions
  • performance over time
  • neglected packages
  • importance
  • fixing and triage (separately)

Suggestion: produce a report at certain points in the release cycle, comparing our performance at that point to the equivalent point in the previous release cycle

QATeam/DistroQualityMetrics (last edited 2009-07-16 18:45:47 by c-24-21-43-9)