= bdmurray's notes =

Are we responding effectively and appropriately to bug reports? Are we doing the right thing? Are we replying within a good time frame?
 * mdz has mentioned the time to first response as a metric before

"QA Team should be a mirror for the platform team" - mdz

How many known regressions did we ship in 9.04? How are we doing with 9.10 in comparison?
 * look at the quantity of tags that went from regression-potential to regression-release
 * How can I find out the number of bugs that were tagged with a regression- tag at any point in time? (a rough launchpadlib sketch is at the end of this page)
  * need this to find out how many regression-potential bugs there were at alpha 3, for example
  * http://people.canonical.com/~brian/complete-graphs/regression-potential/regression-potential.data

Might also be interesting to look at the number of SRUs done on a per-release basis
 * however, this should only compare LTS to LTS and non-LTS to non-LTS

How long does it take to fix the hottest bugs?
 * perhaps use a database query to find the 'hottest' / highest-gravity bugs and look at their time to fix
 * or look at the karmic fixes report and add gravity in there

mdz is also concerned about backwater packages that don't get watched but are important to the distribution
 * consider querying for packages without subscribers, for example net-tools - https://edge.launchpad.net/ubuntu/+source/net-tools
  * dhcp is another, but it is only in dapper
 * perhaps also look at whether the packages are seeded, as a way of determining their importance to the distro

Bug gravity should be modified to add points for bug importance

Add importance to the -fixes reports too, and counts of the importance of bugs fixed

Bug gravity should be added to the team-assigned lists for engineering managers

Take a look at a representative sample of bug reports to see how things went with them
 * Did the right thing happen with them?
 * Should they really be in the state they are in?
 * How many are apport-bug / apport-crash reports?
  * Does this have an effect on their triaging? Are they less likely to be incomplete?
 * How quickly did the first person respond?
 * If the bug is still New, speculate as to why it hasn't been looked at / triaged yet.

Try to estimate the time it takes to fix a bug report (see the second sketch at the end of this page)
 * break this down into bugs w/ patches, bugs w/ debdiffs, bugs w/ branches
  * are different kinds fixed faster?
 * how many get fixed on a per-release basis?
  * this will be useful for estimating the amount of work we can reasonably do
 * How are the bug-fixing reports inadequate for this? - http://qa.ubuntu.com/reports/bug-fixing/

= mdz's notes =

These are the key areas for measuring and reporting that I mentioned on the call today:
 * regressions
 * performance over time
 * neglected packages
 * importance
 * fixing and triage (separately)

Suggestion: produce a report at certain points in the release cycle, comparing our performance at that point to a similar point in the previous release cycle
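= example launchpad queries (sketches) =

A minimal sketch of how the regression counts above might be pulled from Launchpad with launchpadlib. This only counts bugs that currently carry a regression- tag, because the API does not expose tag history; point-in-time numbers (e.g. "at alpha 3") would need periodic snapshots, like the regression-potential.data file linked above. The consumer name and the choice of tags and statuses are placeholders, not anything agreed on the call.

{{{#!python
# Sketch: count Ubuntu bugs currently tagged regression-potential or
# regression-release, broken down by tag and bug task status.
from collections import defaultdict
from launchpadlib.launchpad import Launchpad

# 'qa-metrics-sketch' is just a placeholder consumer name.
lp = Launchpad.login_anonymously('qa-metrics-sketch', 'production')
ubuntu = lp.distributions['ubuntu']

# Include closed statuses so bugs that were already fixed still get counted.
statuses = ['New', 'Incomplete', 'Confirmed', 'Triaged', 'In Progress',
            'Fix Committed', 'Fix Released']

counts = defaultdict(int)
for tag in ('regression-potential', 'regression-release'):
    for task in ubuntu.searchTasks(tags=[tag], status=statuses):
        counts[(tag, task.status)] += 1

for (tag, status), count in sorted(counts.items()):
    print('%-22s %-13s %d' % (tag, status, count))
}}}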
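For the time-to-fix estimate, a starting point might look like the sketch below: it takes the fixed bugs of one source package and buckets their time to fix by whether the bug has a linked branch, a patch attachment, or neither. The package name is just an example taken from the notes above, the debdiff check is only a filename heuristic (Launchpad does not record debdiffs as a separate attachment type), and attribute names such as linked_branches should be double-checked against the current API.

{{{#!python
# Sketch: average time to fix for the fixed bugs of one source package,
# split by how the fix seems to have arrived (branch / debdiff / patch / other).
from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_anonymously('qa-metrics-sketch', 'production')
# net-tools is just an example package from the notes above.
package = lp.distributions['ubuntu'].getSourcePackage(name='net-tools')

buckets = {'branch': [], 'debdiff': [], 'patch': [], 'other': []}
for task in package.searchTasks(status=['Fix Released']):
    if task.date_created is None or task.date_fix_released is None:
        continue
    days = (task.date_fix_released - task.date_created).days
    bug = task.bug
    patches = [a for a in bug.attachments if a.type == 'Patch']
    if len(bug.linked_branches) > 0:
        buckets['branch'].append(days)
    elif any('debdiff' in (a.title or '').lower() for a in patches):
        buckets['debdiff'].append(days)  # heuristic: debdiffs are usually named *.debdiff
    elif patches:
        buckets['patch'].append(days)
    else:
        buckets['other'].append(days)

for kind in ('branch', 'debdiff', 'patch', 'other'):
    days_list = buckets[kind]
    if days_list:
        avg = sum(days_list) / float(len(days_list))
        print('%-8s %3d bugs, %.1f days to fix on average' % (kind, len(days_list), avg))
    else:
        print('%-8s no fixed bugs found' % kind)
}}}

Doing this per package is slow (one API round trip per bug for attachments and branches), so a distro-wide version would probably want to work from the existing bug-fixing reports instead.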