ExtendedAudioTesting

Summary

Extend both manual and automated audio test cases to test a range of use cases and hardware configurations on a regular basis.

Release Note

Extended audio testing is available. Certain tests are conducted automatically by the System Testing application, but extended tests are available for users who want to help ensure that a wide variety of audio hardware works correctly with Ubuntu.

Rationale

Functioning audio is crucial to the Ubuntu desktop experience. Users should be able to expect sound to work properly from install onward. Testing audio in a variety of configurations across a range of hardware will help us to ensure that our sound support meets user expectations.

User stories

  • George wants to assist with the Ubuntu development effort but has no development or testing experience. He runs Checkbox and performs some simple audio tests. He is pleased that his efforts will help indicate which hardware needs further development efforts in Ubuntu.
  • Harriet has a wide array of audio hardware. Some of this hardware is not properly supported by Ubuntu. She is able to run Checkbox in an extended mode and report bugs on nonfunctioning hardware.
  • Ronald runs multiple automated tests on a wide variety of systems on a daily basis. He wants to be able to tell when a system is having audio problems and file appropriate bugs. He can run Checkbox in an automated fashion across these machines and see which ones have simple audio issues.

Assumptions

  • Automated testing:
    • Hardware can be appropriately configured to perform loopback tests.
    • External sound sources are available for testing internal microphones.
  • Manual testing:
    • A wide array of hardware exists and can be tested by the user community.

Design

Automated Testing

Tested systems will have hardware configuration changes made to enable automated testing. This will consist of simply connecting the default audio output to the default audio input with a patch cable. This will allow automated tests to produce a sound on the audio output and listen for it on the input. Automated Checkbox tests will be created relying on this configuration to test audio.
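As a rough illustration of the loopback approach, the sketch below generates the test tone that an automated test might play through the patched-through output. The function name and parameters are illustrative assumptions, not part of Checkbox; only the Python standard library is used.

```python
import math
import struct
import wave

def write_sine_wav(path, freq_hz=440.0, seconds=1.0, rate=44100):
    """Write a mono 16-bit sine-wave WAV file to serve as the loopback test tone."""
    n_samples = int(rate * seconds)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)   # mono
        wav.setsampwidth(2)   # 16-bit samples
        wav.setframerate(rate)
        frames = b"".join(
            struct.pack("<h", int(32767 * math.sin(2 * math.pi * freq_hz * i / rate)))
            for i in range(n_samples)
        )
        wav.writeframes(frames)

# An automated test would then play this file on the default output (e.g.
# with aplay) while recording from the default input (e.g. with arecord),
# relying on the physical patch cable to loop the signal back.
write_sine_wav("/tmp/loopback_tone.wav")
```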

Manual Testing

A series of manual Checkbox tests will be created to test audio and added to checkbox-extras. The test dependencies will be designed so as to reduce redundant tests.

Testing will start at the top of the audio stack -- if the most complex audio test cases pass, then it can be assumed that lower-level parts of the audio stack are functioning properly. Users will also be asked what hardware they have available and will be prompted to test only the hardware that they indicate they have.

Implementation

Automated Testing

A Checkbox test or suite will be written to generate a sound on the audio output and listen for it on the input. This MUST generate a pass/fail value as follows:

  • Silence is detected, as indicated by a recorded audio file of sufficiently small size: FAIL
  • Audio is detected, as indicated by a recorded audio file of sufficiently large size: PASS
  • No audio hardware is detected in the list of hardware:
    • This MAY generate a SKIP value in normal system audio testing, as there is currently no way to indicate expected hardware existence on a per-machine basis.
    • This SHOULD generate a FAIL value on hardware known to have audio hardware installed (e.g. OEM hardware). Ideally, Checkbox should be made aware of expected hardware detection; for now this can be covered via Extended Manual Testing as described below.

The size threshold of the file indicating silence or recorded audio will be determined during implementation.

For the purposes of this specification, it will be sufficient to determine if audio or silence is detected by the Checkbox test. In the future, more advanced analysis of received audio may be performed to test for the correctness of the generated/received audio.
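The pass/fail/skip rules above amount to a small decision function. The sketch below captures that logic; the function name and the byte threshold are illustrative assumptions (the specification leaves the threshold to be determined during implementation).

```python
def classify_loopback(recording_size, hardware_detected, expect_hardware,
                      silence_threshold=1024):
    """Map a loopback recording to a Checkbox result per the rules above.

    recording_size    -- size in bytes of the recorded audio file
    hardware_detected -- was any audio hardware found on the system?
    expect_hardware   -- is this machine known to have audio hardware (e.g. OEM)?
    silence_threshold -- illustrative size cutoff separating silence from audio
    """
    if not hardware_detected:
        # FAIL when hardware is known to be installed; otherwise SKIP,
        # since there is currently no per-machine way to declare
        # expected hardware.
        return "FAIL" if expect_hardware else "SKIP"
    return "PASS" if recording_size >= silence_threshold else "FAIL"
```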

Manual Testing

Default Tests

The current Checkbox test, which simply produces a sound and asks whether the user heard it, will remain in place as the default audio test presented in a standard Checkbox run.

Extended Tests

An additional suite of Checkbox tests will be written to test audio and added to the checkbox-extras package.

These tests MUST present the user with a list of detected audio hardware and ask the user to confirm that each item is present. The tests SHOULD ask the user whether any audio hardware exists in the system that was not automatically detected, and SHOULD also prompt the user to select (via check boxes or a similar mechanism) any hardware they possess that cannot be automatically detected, such as microphones, headphones, unconnected USB audio devices, etc.

For each detected piece of hardware, the suite MUST execute a series of tests. These tests SHOULD start at the 'top' of the audio stack; i.e. tests should start with the most complex case. If the initial case succeeds, it can be assumed that all lower level tests would also succeed, and thus they SHOULD not be presented to the user.

As an example, to test the internal microphone and speakers of a system, Checkbox might ask the user to speak into the microphone and then play back the recording. If the user confirms hearing the recorded audio correctly, it can be assumed that both the internal microphone and speakers are working properly, and no further tests for these devices need to be executed. If, however, the user indicates that the recording was incorrect or not heard at all, Checkbox might proceed by testing the speakers first and then various aspects of the microphone.

Finally, for each selected piece of hardware that cannot be automatically detected, Checkbox SHOULD perform a series of tests. These tests MAY vary depending on the hardware and SHOULD exercise as many of the hardware's capabilities as possible. They also SHOULD start at the top of the relevant parts of the audio stack, as described above.
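The top-down skipping strategy described above can be sketched as a walk over a per-device test chain, most complex test first. The test names and helper below are illustrative, not Checkbox APIs.

```python
def run_top_down(test_chain, run_test):
    """Run tests from most to least complex, stopping at the first pass.

    test_chain -- test names ordered from the top of the audio stack downward
    run_test   -- callable returning True if the named test passed
    Returns the list of tests actually presented to the user.
    """
    executed = []
    for name in test_chain:
        executed.append(name)
        if run_test(name):
            # Success at this level implies lower-level tests would also
            # pass, so they are not presented to the user.
            break
    return executed

# Example: if the combined record-and-playback test passes, the
# speaker-only and driver-level tests are skipped.
chain = ["record-and-playback", "speaker-tone", "driver-smoke-test"]
presented = run_top_down(chain, lambda name: name == "record-and-playback")
```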

Test/Demo Plan

Testing will be performed by the Checkbox team executing the Checkbox test scripts during development. After this, the automated tests will be rolled out to the certification testing environment. As this is a controlled environment with tests run and analysed daily, it will provide sufficient validation to include these tests in the next release of Checkbox.

Future work

The proposed automated audio testing is a large leap forward from its current state but still fairly basic. Future specifications should develop more in-depth testing procedures.

In the future, additional hardware configuration changes will be considered. One possibility is producing a detectable sound in the environment containing the tested systems, which could be picked up by their internal microphones.

BoF agenda and discussion

Make sure the key bits from here are integrated in the spec:

Desired Testing

  • verify that all inputs and outputs (both digital and analog) work for each piece of tested hardware
    • test that plugging in headphones mutes speakers, plugging mic mutes internal mic, etc.
  • test each audio layer independently
    • test that driver works
  • tests will likely require manual parts as well as automatic

Baseline Testing

  • test recording
    • easiest support for automation is looping mic <-> headphones
    • can we find a level changer so that the headphone output is detected as a mic?
  • What can users test?
    • We should focus on top-down testing
      • start with gstreamer pipeline first
      • if it works from the highest level, don't go down through the rest of the stack -- assume it works
    • ask if they have hardware to test... headphones, mics, etc.
      • if they have the right hardware, ask them to plug it in and give a test
      • make sure we don't ask them to plug in something their hardware doesn't support
        • e.g. don't ask them to plug in a line-level input if they don't have one
      • all sorts of hardware that can be tested
        • USB sound devices
        • bluetooth devices (hard to setup / configure in Ubuntu)
        • 1/8" jacks, etc.


CategorySpec

QATeam/Specs/ExtendedAudioTesting (last edited 2009-06-12 13:24:13 by cpc4-oxfd8-0-0-cust39)