Deployment

This document will help you deploy your own cloud instances of http://daisy.ubuntu.com and http://errors.ubuntu.com.

== Setting up Juju ==

First you'll need to create an environment for Juju to bootstrap to. Follow the directions here to get a basic environment going. I'd suggest doing something akin to the following to bootstrap the initial node:

{{{
source ~/.canonistack/novarc
juju bootstrap -e canonistack --constraints "instance-type=m1.medium"
}}}

This will ensure that the juju bootstrap node doesn't take ages to perform basic tasks because it's constantly going into swap.

You should end up with something similar to the following in your `~/.juju/environments.yaml`:

{{{
environments:
  canonistack:
    type: ec2
    control-bucket: juju-replace-me-with-your-bucket
    admin-secret: <secret>
    ec2-uri: https://ec2-lcy02.canonistack.canonical.com:443/services/Cloud
    s3-uri: http://s3-lcy02.canonistack.canonical.com:3333
    default-image-id: ami-00000097
    access-key: <access key>
    secret-key: <secret key>
    default-series: precise
    ssl-hostname-verification: false
    juju-origin: ppa
    authorized-keys-path: ~/.ssh/authorized_keys
}}}

== Deploying the error tracker ==

Now you're ready to check out and deploy the individual charms that make up daisy.ubuntu.com and errors.ubuntu.com, all handled by a single script:

{{{
bzr branch lp:error-tracker-deployment
source ~/.canonistack/novarc
error-tracker-deployment/deploy
}}}

Follow along with {{{juju status}}}.

Once all the nodes and relations are out of the pending state, you should be able to start throwing crashes at it.
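If you'd rather poll than eyeball the status output, counting the pending agent states is enough to know when the deployment has settled. A minimal sketch, using a canned status sample in place of a live `juju status` call:

```shell
#!/bin/sh
# Canned sample standing in for real output; in a live environment
# you would use: STATUS="$(juju status)"
STATUS='units:
  daisy/0:
    agent-state: started
  errors/0:
    agent-state: pending'

# Count units whose agents are still pending; zero means settled.
PENDING=$(printf '%s\n' "$STATUS" | grep -c 'agent-state: pending')
echo "$PENDING unit(s) still pending"
```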

== Using the Juju error tracker ==

The following command sets up various SSH tunnels to the Juju instances of daisy and errors, redirects the local whoopsie daemon to report crashes against the Juju daisy instance instead of errors.ubuntu.com, and shows the local whoopsie and remote daisy-retracer logs until you press Control-C:

{{{
error-tracker-deployment/run-juju-daisy
}}}

This script contains a commented-out alternative to the ssh command for daisy which shows the Apache logs instead. Enable that alternative, and disable the default command, if you want to debug problems with uploading the .crash files.
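The tunnels can also be set up by hand. An earlier revision of this page extracted the daisy unit's address from `juju status` with grep and sed, then forwarded local port 8080 to it; a sketch of that approach, with a canned STATUS line standing in for live `juju status daisy/0` output:

```shell
#!/bin/sh
# Canned sample line from `juju status daisy/0`; live usage would be:
# STATUS="$(juju status daisy/0)"
STATUS='    public-address: ec2-1-2-3-4.compute.amazonaws.com'

# Pull out the value after "public-address: ".
DAISY_ADDRESS="$(printf '%s\n' "$STATUS" | grep public-address \
    | sed 's,.*public-address: \(.*\)$,\1,')"
echo "$DAISY_ADDRESS"

# With a real address, forward local port 8080 to daisy's port 80:
#   ssh -N -L 8080:$DAISY_ADDRESS:80 $DAISY_ADDRESS
```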

== Generating and uploading crashes ==

You can generate a simple crash report with, e.g.:

{{{
bash -c 'kill -SEGV $$'
}}}

and elect to report the crash in the Apport window that pops up.
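The same signal trick works against any running process, which is handy for crashing a GUI application on purpose (an earlier revision of this page used `gedit` as the victim). A self-contained sketch using `sleep` as a stand-in, showing the 128+11 exit status that marks death by SIGSEGV:

```shell
#!/bin/sh
# Background a stand-in process and segfault it; substitute a real
# application (e.g. gedit) to get an Apport dialog instead.
sleep 30 &
PID=$!
kill -s SEGV "$PID"
wait "$PID"
STATUS=$?
echo "exit status: $STATUS"   # 139 = 128 + 11 (SIGSEGV)
```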

Now open a browser to http://localhost:8081. You should have one problem in the most common problems table.

For a more systematic and regular integration test you can use an automatically generated set of .crash files for various application classes (GTK, Qt, CLI, D-BUS, Python crash) from the [[https://code.launchpad.net/~daisy-pluckers/+recipe/apport-test-crashes|test crashes recipe]], which currently builds the crashes for i386, amd64, and armhf for precise, quantal, and raring. You can download the current ones with

{{{
error-tracker-deployment/fetch-test-crashes
}}}

which will download them into `./test-crashes/`''release''`/`''architecture''`/*.crash`. Then you can use the `submit-crash` script to feed them individually or as a whole into whoopsie:

{{{
error-tracker-deployment/submit-crash test-crashes  # uploads all of them
error-tracker-deployment/submit-crash test-crashes/raring/amd64
error-tracker-deployment/submit-crash test-crashes/precise/armhf/_usr_bin_apport-*.crash
}}}
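The patterns above are ordinary shell globs over the fetched tree; a quick way to preview what a given pattern would hand to `submit-crash` is to mock up the layout (hypothetical file names, not real crash reports):

```shell
#!/bin/sh
# Build a throwaway copy of the fetch-test-crashes layout in a temp dir.
TMP=$(mktemp -d)
mkdir -p "$TMP/test-crashes/raring/amd64" "$TMP/test-crashes/precise/armhf"
touch "$TMP/test-crashes/raring/amd64/_usr_bin_apport-gtk.crash"
touch "$TMP/test-crashes/precise/armhf/_usr_bin_apport-cli.crash"

# Expand the same kind of glob submit-crash would receive.
cd "$TMP"
MATCHES=$(ls test-crashes/*/*/*.crash | grep -c '\.crash$')
echo "$MATCHES crash file(s) matched"
rm -rf "$TMP"
```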

== Debugging tricks ==

You can purge the whole Cassandra database with

{{{
~/bzr/error-tracker-deployment/purge-db
}}}

Call it with `--force` to do this without confirmation.

You might want to watch out for exceptions thrown by daisy or errors themselves:

{{{
juju ssh daisy/0
watch ls /srv/local-oopses-whoopsie
}}}

If you want to use the Launchpad functionality in errors you'll need to set up Launchpad OAuth tokens and put them in `/var/www/daisy/local_config.py` on your errors server. Information regarding setting up OAuth tokens can be found [[https://wiki.ubuntu.com/ErrorTracker/Contributing/Errors|here]].

ErrorTracker/Deployment (last edited 2014-05-26 11:54:50 by brian-murray)