TestingOpenStack
Introduction
This page aims to help you set up OpenStack in a standalone virtual machine using libvirt; it has been tested on 12.04 LTS and 12.10. OpenStack should be accessible to other virtual machines on the libvirt network (i.e., the one the OpenStack VM is on). Other machines on the libvirt network should be able to use euca2ools or juju to interface with the OpenStack VM and have it start instances, etc. Note that this is not intended to be a guide for a production deployment of OpenStack, and as such it does not enable security features that may be present in OpenStack (such as using swift for S3 storage).
VM host configuration
This document assumes you have created a VM with enough memory and disk space. The VM should have (at least) the following characteristics:
- 2048M RAM
- 20G disk
- 2 network interfaces
Networking on the OpenStack VM
Since the OpenStack VM is a host on the 192.168.122.0/24 libvirt network, we need to make sure that it is set up correctly so it is reachable by other hosts on the VM network. As mentioned, the OpenStack VM will have two interfaces:
- eth0 (the public interface)
- eth1 (the private interface)
OpenStack will create a network on the eth1 interface and use dnsmasq, etc. via libvirt for private addressing of instances via a bridge. We will then associate public addresses to private ones, and expose them via euca-authorize (EC2 security groups). To make this all work seamlessly, let's create a static address for eth0 in 192.168.122.0/25 and then have nova expose public addresses in 192.168.122.128/25. This makes networking within a libvirt virtual network work correctly.
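The /25 split can be sketched as a quick check. This is an illustration only; `in_upper_half` is a hypothetical helper, not part of any OpenStack or libvirt tooling:

```shell
# Sketch of the /25 split described above (assumes the default libvirt
# network 192.168.122.0/24). Addresses .1-.126 fall in 192.168.122.0/25
# (static hosts such as eth0 on the OpenStack VM); .129-.254 fall in
# 192.168.122.128/25 (the public/floating range exposed by nova).
in_upper_half() {
  last_octet="${1##*.}"
  [ "$last_octet" -ge 128 ]
}
in_upper_half 192.168.122.225 && echo "floating range" || echo "static range"
```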
To get this working:
Install libvirt:
$ sudo apt-get install libvirt-bin
Add yourself to the libvirtd group:
$ sudo adduser $USER libvirtd
Log out and back in, or use sg libvirtd, so that you are in the libvirtd group
Redefine the default libvirt network to use 192.168.123.0/24 instead of 192.168.122.0/24 (so it won't get in the way of things-- this network isn't used by nova anyway, but nova does expect certain firewall rules to be in effect):
$ virsh net-dumpxml default | sed 's/192.168.122/192.168.123/g' > /tmp/xml
$ virsh net-destroy default
$ virsh net-define /tmp/xml
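To see what the sed substitution does, here is a minimal sketch against a fabricated one-line XML fragment (the real net-dumpxml output has more elements, but every occurrence of 192.168.122 is rewritten the same way):

```shell
# Fabricated sample of the kind of <ip> element net-dumpxml emits; the
# same substitution as above moves the network to 192.168.123.0/24.
xml='<ip address="192.168.122.1" netmask="255.255.255.0">'
echo "$xml" | sed 's/192.168.122/192.168.123/g'
```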
Set up a static IP address for eth0 in /etc/network/interfaces:
# The primary network interface
auto eth0
iface eth0 inet static
    address 192.168.122.3
    network 192.168.122.0
    netmask 255.255.255.128
    broadcast 192.168.122.127
    gateway 192.168.122.1
#iface eth0 inet dhcp

iface eth1 inet manual
iface eth1 inet6 manual

Adjust /etc/resolvconf/resolv.conf.d/base to have:
search defaultdomain
nameserver 192.168.122.1
- Reboot to make sure it all comes up ok.
Networking on the host
Since we are using a static IP address for the OpenStack VM, it is helpful to put an entry in /etc/hosts on the host machine (i.e., the host that runs the OpenStack VM):
192.168.122.3 openstack-precise-server-amd64 openstack
After adding that, send dnsmasq a HUP:
$ sudo kill -HUP `cat /var/run/libvirt/network/default.pid`
At this point you can login to the OpenStack VM with:
$ ssh openstack
Welcome to Ubuntu precise (development branch) (GNU/Linux 3.2.0-18-generic x86_64)
...
openstack-precise-server-amd64:~$
Setup OpenStack packages
Package installation
Install the necessary packages:
$ sudo apt-get install rabbitmq-server mysql-server nova-compute nova-api nova-scheduler nova-objectstore nova-network glance keystone python-mysqldb euca2ools nova-cert
You will be prompted for a MySQL root password, which you will need in the next step.
mysql setup
OpenStack can be configured to store much of its state and configuration in a database, and MySQL is one of the supported databases.
Set up the mysql databases:
$ mysql -v -u root -p
mysql> create database glance;
mysql> create database keystone;
mysql> create database nova;
mysql> grant all privileges on glance.* to 'glance'@'localhost' identified by 'glancemysqlpasswd';
mysql> grant all privileges on keystone.* to 'keystone'@'localhost' identified by 'keystonemysqlpasswd';
mysql> grant all privileges on nova.* to 'nova'@'localhost' identified by 'novamysqlpasswd';
mysql> quit
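If you prefer to script this instead of typing at the mysql> prompt, the statements can be generated from a name:password list and piped to `mysql -u root -p`. This is a sketch using the example passwords above:

```shell
# Generate the CREATE/GRANT statements for each service database.
# The name:password pairs are the example values used in this guide.
stmts=$(for svc in glance:glancemysqlpasswd keystone:keystonemysqlpasswd nova:novamysqlpasswd ; do
  name="${svc%%:*}" ; pass="${svc#*:}"
  printf "create database %s;\n" "$name"
  printf "grant all privileges on %s.* to '%s'@'localhost' identified by '%s';\n" "$name" "$name" "$pass"
done)
echo "$stmts"
# To apply them: echo "$stmts" | mysql -u root -p
```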
rabbitmq setup
Rabbitmq is a messaging service used to coordinate messaging between various components within OpenStack. Set up rabbitmq by running the following:
$ sudo rabbitmqctl add_vhost nova
$ sudo rabbitmqctl add_user 'nova' 'rabbitmqpasswd'
$ sudo rabbitmqctl set_permissions -p nova nova ".*" ".*" ".*"
nova setup
Nova is the compute service within OpenStack and is responsible for launching and managing VMs. Configure nova:
- Add the following to /etc/nova/nova.conf:
12.04:
--sql_connection=mysql://nova:novamysqlpasswd@localhost/nova
--rabbit_host=localhost
--rabbit_userid=nova
--rabbit_password=rabbitmqpasswd
--rabbit_virtual_host=nova
--rabbit_vhost=nova
--network_manager=nova.network.manager.FlatDHCPManager
--auth_strategy=keystone
--keystone_ec2_url=http://localhost:5000/v2.0/ec2tokens
12.10:
sql_connection=mysql://nova:novamysqlpasswd@localhost/nova
rabbit_host=localhost
rabbit_userid=nova
rabbit_password=rabbitmqpasswd
rabbit_virtual_host=nova
rabbit_vhost=nova
network_manager=nova.network.manager.FlatDHCPManager
ec2_url=http://localhost:8773/services/Cloud
auth_strategy=keystone
keystone_ec2_url=http://localhost:5000/v2.0/ec2tokens
Sync the nova database:
$ sudo nova-manage db sync
Restart the nova services:
$ for i in nova-api nova-scheduler nova-network nova-compute nova-objectstore ; do sudo service $i restart ; done
verify it worked:
$ sleep 10 ; netstat -n | egrep '5672'
tcp        0      0 127.0.0.1:47836         127.0.0.1:5672          ESTABLISHED
tcp        0      0 127.0.0.1:47838         127.0.0.1:5672          ESTABLISHED
tcp        0      0 127.0.0.1:47832         127.0.0.1:5672          ESTABLISHED
tcp        0      0 127.0.0.1:47837         127.0.0.1:5672          ESTABLISHED
tcp6       0      0 127.0.0.1:5672          127.0.0.1:47837         ESTABLISHED
tcp6       0      0 127.0.0.1:5672          127.0.0.1:47836         ESTABLISHED
tcp6       0      0 127.0.0.1:5672          127.0.0.1:47838         ESTABLISHED
tcp6       0      0 127.0.0.1:5672          127.0.0.1:47832         ESTABLISHED
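A quick way to reduce that output to a pass/fail check is to count the established connections to rabbitmq's port. This sketch runs the count against one captured line so the command shape is clear; on the real VM you would pipe `netstat -n` through the same grep and expect a count of 4 or more:

```shell
# Count ESTABLISHED connections to rabbitmq's port (5672), demonstrated
# here against a single captured line from the transcript above.
sample='tcp        0      0 127.0.0.1:47836         127.0.0.1:5672          ESTABLISHED'
echo "$sample" | grep -c '5672.*ESTABLISHED'
```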
glance setup
Glance is the image service and is responsible for providing VM images (for nova) within OpenStack. Configure glance:
edit /etc/glance/glance-registry.conf to have:
sql_connection = mysql://glance:glancemysqlpasswd@localhost/glance
then append to end:
[paste_deploy]
flavor = keystone
- adjust /etc/glance/glance-api.conf:
12.04: append to end of /etc/glance/glance-api.conf:
[paste_deploy]
flavor = keystone
12.10: edit /etc/glance/glance-api.conf to have:
sql_connection = mysql://glance:glancemysqlpasswd@localhost/glance
Then append to end:
[paste_deploy]
flavor = keystone
stop glance:
$ sudo stop glance-api
$ sudo stop glance-registry
set version control on the glance db (see bug 981111):
$ sudo glance-manage version_control 0
sync the glance db:
$ sudo glance-manage db_sync
start glance:
$ sudo start glance-api
$ sudo start glance-registry
verify it worked:
$ netstat -nl | egrep '(9191|9292)'
tcp        0      0 0.0.0.0:9292            0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:9191            0.0.0.0:*               LISTEN
keystone setup
Keystone is the authentication service for OpenStack. It uses tenants for authentication and various OpenStack services use tenants as the basis for users, services and roles for authorizing actions. Configure keystone:
- edit /etc/keystone/keystone.conf:
adjust admin token:
admin_token = keystoneadmintoken
adjust connection:
connection = mysql://keystone:keystonemysqlpasswd@localhost/keystone
Verify /etc/keystone/keystone.conf is using the sql driver and not kvs:
[ec2] driver = keystone.contrib.ec2.backends.sql.Ec2
[OPTIONAL] for debugging, adjust /etc/keystone/logging.conf to have:
...
[logger_root]
level=DEBUG
...
sync the keystone db:
$ sudo keystone-manage db_sync
restart keystone:
$ sudo stop keystone
$ sudo start keystone
verify it worked:
$ netstat -nl | egrep '(35357|5000)'
tcp        0      0 0.0.0.0:35357           0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN
test with python-keystoneclient:
$ export SERVICE_ENDPOINT=http://localhost:35357/v2.0/
$ export SERVICE_TOKEN=keystoneadmintoken
$ keystone user-list # 12.04 LTS
+----+---------+-------+------+
| id | enabled | email | name |
+----+---------+-------+------+
+----+---------+-------+------+
$ keystone user-list # 12.10
$
OpenStack tenants, users, and roles
By this point, the packages should be installed and configured to work together. Specifically:
- nova, glance and keystone can use mysql
- nova should be able to talk to rabbitmq
- nova is configured to use the local keystone for authentication
- glance is configured to use the keystone flavor
- nova, glance and keystone are all up and running on the localhost
Now we need to setup various tenants in keystone. Tenants form the basis for users and services, and users, services and roles combine to form various access controls and permissions within OpenStack.
Create tenants
create an admin tenant:
$ keystone tenant-create --name "admin" --description "Admin tenant"
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | Admin tenant                     |
| enabled     | True                             |
| id          | ec949719d82c442cb32729be66e2e8ae |
| name        | admin                            |
+-------------+----------------------------------+
For ease of use, export the admin tenant id based on the above:
export ADMIN_TENANT_ID=ec949719d82c442cb32729be66e2e8ae
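Rather than copying IDs by hand, you can scrape them from the table output. This sketch parses an id row of the form shown above; the sample line is hard-coded here for illustration, but in practice you would pipe the real `keystone tenant-create` output through the same awk:

```shell
# Extract the tenant id from a keystone table row of the form
# '| id | <hash> |'; 'out' is a sample row matching the output above.
out='| id          | ec949719d82c442cb32729be66e2e8ae |'
ADMIN_TENANT_ID=$(echo "$out" | awk '/\| id /{print $4}')
echo "$ADMIN_TENANT_ID"
```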
create a users tenant:
$ keystone tenant-create --name "users" --description "Users tenant"
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | Users tenant                     |
| enabled     | True                             |
| id          | 9deb2be4a0db4644905a3f752cf5f010 |
| name        | users                            |
+-------------+----------------------------------+
For ease of use, export the users tenant id based on the above:
export USERS_TENANT_ID=9deb2be4a0db4644905a3f752cf5f010
create a service tenant:
$ keystone tenant-create --name "services" --description "Services tenant"
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | Services tenant                  |
| enabled     | True                             |
| id          | 7ecb9778577446929e3e93e10f6f6347 |
| name        | services                         |
+-------------+----------------------------------+
For ease of use, export the services tenant id based on the above:
export SERVICES_TENANT_ID=7ecb9778577446929e3e93e10f6f6347
Roles
Create the Member role:
$ keystone role-create --name Member
+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| id       | 6eb5cff58ad94114ae1131511d15b0d7 |
| name     | Member                           |
+----------+----------------------------------+
$ export MEMBER_ROLE_ID=6eb5cff58ad94114ae1131511d15b0d7
Create the admin role:
$ keystone role-create --name admin
+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| id       | 836c77dd3e5b4720bc93f06ed0e5f4f3 |
| name     | admin                            |
+----------+----------------------------------+
$ export ADMIN_ROLE_ID=836c77dd3e5b4720bc93f06ed0e5f4f3
Users
Create an admin user in keystone:
$ keystone user-create --tenant_id $ADMIN_TENANT_ID --name admin --pass adminpasswd --enabled true
+----------+-------------------------------------------------------------------------------------------------------------------------+
| Property | Value                                                                                                                   |
+----------+-------------------------------------------------------------------------------------------------------------------------+
| email    | None                                                                                                                    |
| enabled  | true                                                                                                                    |
| id       | 00869ca8093f4187a9188473a84d7fd1                                                                                        |
| name     | admin                                                                                                                   |
| password | $6$rounds=40000$mrA5YzZ9EgoC0LV3$3en5FCybROr0T..z2QwgYcQZhS3gGmq6B/4Tcd8VZ5vzYm/ecivlIUbe9zc9j2/Iels960kVz1.O4DL.28EVj/ |
| tenantId | ec949719d82c442cb32729be66e2e8ae                                                                                        |
+----------+-------------------------------------------------------------------------------------------------------------------------+
$ export ADMIN_USER_ID=00869ca8093f4187a9188473a84d7fd1
12.04 LTS:
$ keystone user-role-add --user $ADMIN_USER_ID --tenant_id $ADMIN_TENANT_ID --role $ADMIN_ROLE_ID
12.10:
$ keystone user-role-add --user_id $ADMIN_USER_ID --tenant_id $ADMIN_TENANT_ID --role_id $ADMIN_ROLE_ID
Create a glance user in keystone:
$ keystone user-create --tenant_id $SERVICES_TENANT_ID --name glance --pass glance --enabled true
+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| email    | None                             |
| enabled  | true                             |
| id       | d151c25d87384b7bb2231dd1d546c80f |
| name     | glance                           |
| password | ...                              |
| tenantId | 7ecb9778577446929e3e93e10f6f6347 |
+----------+----------------------------------+
$ export GLANCE_USER_ID=d151c25d87384b7bb2231dd1d546c80f
12.04 LTS:
$ keystone user-role-add --user $GLANCE_USER_ID --tenant_id $SERVICES_TENANT_ID --role $ADMIN_ROLE_ID
12.10:
$ keystone user-role-add --user_id $GLANCE_USER_ID --tenant_id $SERVICES_TENANT_ID --role_id $ADMIN_ROLE_ID
Create a nova user in keystone:
$ keystone user-create --tenant_id $SERVICES_TENANT_ID --name nova --pass nova --enabled true
+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| email    | None                             |
| enabled  | true                             |
| id       | b566a3ccf3ad4d9dbe2b83f5a4b971ca |
| name     | nova                             |
| password | ...                              |
| tenantId | 7ecb9778577446929e3e93e10f6f6347 |
+----------+----------------------------------+
$ export NOVA_USER_ID=b566a3ccf3ad4d9dbe2b83f5a4b971ca
12.04 LTS:
$ keystone user-role-add --user $NOVA_USER_ID --tenant_id $SERVICES_TENANT_ID --role $ADMIN_ROLE_ID
12.10:
$ keystone user-role-add --user_id $NOVA_USER_ID --tenant_id $SERVICES_TENANT_ID --role_id $ADMIN_ROLE_ID
Verify
verify the tenants:
$ keystone tenant-list
+----------------------------------+----------+---------+
| id                               | name     | enabled |
+----------------------------------+----------+---------+
| 7ecb9778577446929e3e93e10f6f6347 | services | True    |
| 9deb2be4a0db4644905a3f752cf5f010 | users    | True    |
| ec949719d82c442cb32729be66e2e8ae | admin    | True    |
+----------------------------------+----------+---------+
Verify roles:
$ keystone role-list
+----------------------------------+--------+
| id                               | name   |
+----------------------------------+--------+
| 6eb5cff58ad94114ae1131511d15b0d7 | Member |
| 836c77dd3e5b4720bc93f06ed0e5f4f3 | admin  |
+----------------------------------+--------+
Verify users (output slightly different on 12.10):
$ keystone user-list
+----------------------------------+---------+-------+--------+
| id                               | enabled | email | name   |
+----------------------------------+---------+-------+--------+
| 00869ca8093f4187a9188473a84d7fd1 | true    | None  | admin  |
| b566a3ccf3ad4d9dbe2b83f5a4b971ca | true    | None  | nova   |
| d151c25d87384b7bb2231dd1d546c80f | true    | None  | glance |
+----------------------------------+---------+-------+--------+
OpenStack services and endpoints
Now we can start creating the services that OpenStack will support and create endpoints based on these. Endpoints are what client tools will communicate with to utilize the services.
Services
Create the image service:
$ keystone service-create --name glance --type image --description "Openstack Image Service"
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | Openstack Image Service          |
| id          | b490a930975b42acaf930fa3703e2c77 |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+
$ export GLANCE_SERVICE_ID=b490a930975b42acaf930fa3703e2c77
Create the compute service:
$ keystone service-create --name nova --type compute --description "Nova compute service"
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | Nova compute service             |
| id          | 0661c898e1134fec950d64b65149489b |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
$ export NOVA_SERVICE_ID=0661c898e1134fec950d64b65149489b
Create the EC2 compatibility layer service:
$ keystone service-create --name ec2 --type ec2 --description "EC2 compatibility layer"
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | EC2 compatibility layer          |
| id          | b0d3b407616a4f1faee28cebeb1eb78c |
| name        | ec2                              |
| type        | ec2                              |
+-------------+----------------------------------+
$ export EC2_SERVICE_ID=b0d3b407616a4f1faee28cebeb1eb78c
Create the identity service:
$ keystone service-create --name keystone --type identity --description "Keystone identity service"
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | Keystone identity service        |
| id          | bd639962a35446238812030b270d05cf |
| name        | keystone                         |
| type        | identity                         |
+-------------+----------------------------------+
$ export KEYSTONE_SERVICE_ID=bd639962a35446238812030b270d05cf
12.10/Folsom and higher: Create the volume service:
$ keystone service-create --name nova-volume --type volume --description "Nova volume service"
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | Nova volume service              |
| id          | 62aa972d9ba0432c9f331708dc2bdfc9 |
| name        | nova-volume                      |
| type        | volume                           |
+-------------+----------------------------------+
$ export NOVA_VOLUME_ID=62aa972d9ba0432c9f331708dc2bdfc9
Create the endpoint for the image service (glance):
$ keystone endpoint-create --region RegionOne --service_id $GLANCE_SERVICE_ID --publicurl http://localhost:9292/v1 --adminurl http://localhost:9292/v1 --internalurl http://localhost:9292/v1
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| adminurl    | http://localhost:9292/v1         |
| id          | af6ac78dac2844c09619b2536d5661e2 |
| internalurl | http://localhost:9292/v1         |
| publicurl   | http://localhost:9292/v1         |
| region      | RegionOne                        |
| service_id  | b490a930975b42acaf930fa3703e2c77 |
+-------------+----------------------------------+
OPTIONAL: On folsom (Ubuntu 12.10) and higher, you can specify another endpoint that uses a different API version (eg, 'v2' -- see curl -v http://localhost:9292/versions for supported API versions) with:
$ keystone endpoint-create --region RegionOne --service_id $GLANCE_SERVICE_ID --publicurl http://localhost:9292/v2 --adminurl http://localhost:9292/v2 --internalurl http://localhost:9292/v2
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| adminurl    | http://localhost:9292/v2         |
| id          | 4934bd662add4c369793ad4568f9ee9e |
| internalurl | http://localhost:9292/v2         |
| publicurl   | http://localhost:9292/v2         |
| region      | RegionOne                        |
| service_id  | b490a930975b42acaf930fa3703e2c77 |
+-------------+----------------------------------+
Create the endpoint for the compute service (nova):
$ keystone endpoint-create --region RegionOne --service_id $NOVA_SERVICE_ID --publicurl "http://localhost:8774/v1.1/\$(tenant_id)s" --adminurl "http://localhost:8774/v1.1/\$(tenant_id)s" --internalurl "http://localhost:8774/v1.1/\$(tenant_id)s"
+-------------+------------------------------------------+
| Property    | Value                                    |
+-------------+------------------------------------------+
| adminurl    | http://localhost:8774/v1.1/$(tenant_id)s |
| id          | a91042f9b0974bedb58d1e86b0c0da19         |
| internalurl | http://localhost:8774/v1.1/$(tenant_id)s |
| publicurl   | http://localhost:8774/v1.1/$(tenant_id)s |
| region      | RegionOne                                |
| service_id  | 0661c898e1134fec950d64b65149489b         |
+-------------+------------------------------------------+
Create the endpoint for the EC2 compatibility service:
$ keystone endpoint-create --region RegionOne --service_id $EC2_SERVICE_ID --publicurl http://localhost:8773/services/Cloud --adminurl http://localhost:8773/services/Cloud --internalurl http://localhost:8773/services/Cloud
+-------------+--------------------------------------+
| Property    | Value                                |
+-------------+--------------------------------------+
| adminurl    | http://localhost:8773/services/Cloud |
| id          | 791db816885349e79481db8e2d92ae16     |
| internalurl | http://localhost:8773/services/Cloud |
| publicurl   | http://localhost:8773/services/Cloud |
| region      | RegionOne                            |
| service_id  | b0d3b407616a4f1faee28cebeb1eb78c     |
+-------------+--------------------------------------+
Create the endpoint for the identity service (keystone):
$ keystone endpoint-create --region RegionOne --service_id $KEYSTONE_SERVICE_ID --publicurl http://localhost:5000/v2.0 --adminurl http://localhost:35357/v2.0 --internalurl http://localhost:5000/v2.0
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| adminurl    | http://localhost:35357/v2.0      |
| id          | 41272efb56194e78b6497e79a2285f4c |
| internalurl | http://localhost:5000/v2.0       |
| publicurl   | http://localhost:5000/v2.0       |
| region      | RegionOne                        |
| service_id  | bd639962a35446238812030b270d05cf |
+-------------+----------------------------------+
12.10/Folsom and higher: Create the endpoint for the nova volume service:
$ keystone endpoint-create --region RegionOne --service_id $NOVA_VOLUME_ID --publicurl "http://localhost:8776/v1/\$(tenant_id)s" --adminurl "http://localhost:8776/v1/\$(tenant_id)s" --internalurl "http://localhost:8776/v1/\$(tenant_id)s"
+-------------+----------------------------------------+
| Property    | Value                                  |
+-------------+----------------------------------------+
| adminurl    | http://localhost:8776/v1/$(tenant_id)s |
| id          | cb2b92ceecdb4319a32156f6dfc2f7ae       |
| internalurl | http://localhost:8776/v1/$(tenant_id)s |
| publicurl   | http://localhost:8776/v1/$(tenant_id)s |
| region      | RegionOne                              |
| service_id  | 62aa972d9ba0432c9f331708dc2bdfc9       |
+-------------+----------------------------------------+
Verify with ('volume' service only in 12.10/Folsom and later):
$ keystone service-list
+----------------------------------+-------------+----------+---------------------------+
| id                               | name        | type     | description               |
+----------------------------------+-------------+----------+---------------------------+
| 0661c898e1134fec950d64b65149489b | nova        | compute  | Nova compute service      |
| 62aa972d9ba0432c9f331708dc2bdfc9 | nova-volume | volume   | Nova volume service       |
| b0d3b407616a4f1faee28cebeb1eb78c | ec2         | ec2      | EC2 compatibility layer   |
| b490a930975b42acaf930fa3703e2c77 | glance      | image    | Openstack Image Service   |
| bd639962a35446238812030b270d05cf | keystone    | identity | Keystone identity service |
+----------------------------------+-------------+----------+---------------------------+
$ keystone endpoint-list
+----------------------------------+-----------+------------------------------------------+------------------------------------------+------------------------------------------+
| id                               | region    | publicurl                                | internalurl                              | adminurl                                 |
+----------------------------------+-----------+------------------------------------------+------------------------------------------+------------------------------------------+
| 41272efb56194e78b6497e79a2285f4c | RegionOne | http://localhost:5000/v2.0               | http://localhost:5000/v2.0               | http://localhost:5000/v2.0               |
| 791db816885349e79481db8e2d92ae16 | RegionOne | http://localhost:8773/services/Cloud     | http://localhost:8773/services/Cloud     | http://localhost:8773/services/Cloud     |
| a91042f9b0974bedb58d1e86b0c0da19 | RegionOne | http://localhost:8774/v1.1/$(tenant_id)s | http://localhost:8774/v1.1/$(tenant_id)s | http://localhost:8774/v1.1/$(tenant_id)s |
| af6ac78dac2844c09619b2536d5661e2 | RegionOne | http://localhost:9292/v1                 | http://localhost:9292/v1                 | http://localhost:9292/v1                 |
| cb2b92ceecdb4319a32156f6dfc2f7ae | RegionOne | http://localhost:8776/v1/$(tenant_id)s   | http://localhost:8776/v1/$(tenant_id)s   | http://localhost:8776/v1/$(tenant_id)s   |
+----------------------------------+-----------+------------------------------------------+------------------------------------------+------------------------------------------+
Verify the catalog:
# These were set earlier, but need to be unset now as they will interfere with later instructions
$ unset SERVICE_ENDPOINT
$ unset SERVICE_TOKEN
# this is from the keystone user-create --name admin command
$ export OS_USERNAME=admin OS_PASSWORD=adminpasswd OS_TENANT_NAME=admin OS_AUTH_URL=http://localhost:5000/v2.0/
$ keystone catalog
Service: image
+-------------+--------------------------+
| Property    | Value                    |
+-------------+--------------------------+
| adminURL    | http://localhost:9292/v1 |
| internalURL | http://localhost:9292/v1 |
| publicURL   | http://localhost:9292/v1 |
| region      | RegionOne                |
+-------------+--------------------------+
Service: compute
+-------------+-------------------------------------------------------------+
| Property    | Value                                                       |
+-------------+-------------------------------------------------------------+
| adminURL    | http://localhost:8774/v1.1/ec949719d82c442cb32729be66e2e8ae |
| internalURL | http://localhost:8774/v1.1/ec949719d82c442cb32729be66e2e8ae |
| publicURL   | http://localhost:8774/v1.1/ec949719d82c442cb32729be66e2e8ae |
| region      | RegionOne                                                   |
+-------------+-------------------------------------------------------------+
Service: ec2
+-------------+--------------------------------------+
| Property    | Value                                |
+-------------+--------------------------------------+
| adminURL    | http://localhost:8773/services/Cloud |
| internalURL | http://localhost:8773/services/Cloud |
| publicURL   | http://localhost:8773/services/Cloud |
| region      | RegionOne                            |
+-------------+--------------------------------------+
Service: identity
+-------------+-----------------------------+
| Property    | Value                       |
+-------------+-----------------------------+
| adminURL    | http://localhost:35357/v2.0 |
| internalURL | http://localhost:5000/v2.0  |
| publicURL   | http://localhost:5000/v2.0  |
| region      | RegionOne                   |
+-------------+-----------------------------+
Service: volume
+-------------+-----------------------------------------------------------+
| Property    | Value                                                     |
+-------------+-----------------------------------------------------------+
| adminURL    | http://localhost:8776/v1/448c5952839d4b52aa87ff61c4c8950a |
| id          | cb2b92ceecdb4319a32156f6dfc2f7ae                          |
| internalURL | http://localhost:8776/v1/448c5952839d4b52aa87ff61c4c8950a |
| publicURL   | http://localhost:8776/v1/448c5952839d4b52aa87ff61c4c8950a |
| region      | RegionOne                                                 |
+-------------+-----------------------------------------------------------+
$ keystone token-get
+-----------+----------------------------------+
| Property  | Value                            |
+-----------+----------------------------------+
| expires   | 2012-03-16T16:03:32Z             |
| id        | 8eeb40cbc6e643a1a1c6a040f9b57086 |
| tenant_id | ec949719d82c442cb32729be66e2e8ae |
| user_id   | 00869ca8093f4187a9188473a84d7fd1 |
+-----------+----------------------------------+
$ keystone catalog --service ec2
Service: ec2
+-------------+--------------------------------------+
| Property    | Value                                |
+-------------+--------------------------------------+
| adminURL    | http://localhost:8773/services/Cloud |
| internalURL | http://localhost:8773/services/Cloud |
| publicURL   | http://localhost:8773/services/Cloud |
| region      | RegionOne                            |
+-------------+--------------------------------------+
If keystone catalog fails here with a 'Client' object has no attribute 'service_catalog' error, ensure that the SERVICE_ENDPOINT and SERVICE_TOKEN environment variables are unset.
Starting your services
Now adjust nova and glance to use the credentials we created before. Note that the username and password are what we gave to 'keystone user-create'.
- Edit /etc/nova/api-paste.ini to add to the [filter:authtoken] section:
12.04 LTS:
admin_user = nova
admin_password = nova
admin_tenant_name = services
admin_token = keystoneadmintoken
12.10:
admin_user = nova
admin_password = nova
admin_tenant_name = services
admin_token = keystoneadmintoken
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
- Edit /etc/glance/glance-api-paste.ini to add the following to the [filter:authtoken] section:
12.04 LTS:
admin_tenant_name = services
admin_user = glance
admin_password = glance
admin_token = keystoneadmintoken
12.10:
admin_tenant_name = services
admin_user = glance
admin_password = glance
admin_token = keystoneadmintoken
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
- Edit /etc/glance/glance-registry-paste.ini to add the following to the [filter:authtoken] section:
12.04 LTS:
admin_tenant_name = services
admin_user = glance
admin_password = glance
admin_token = keystoneadmintoken
12.10:
admin_tenant_name = services
admin_user = glance
admin_password = glance
admin_token = keystoneadmintoken
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
Restart glance-api, glance-registry, nova-api and nova-compute:
$ for i in glance-api glance-registry nova-api nova-compute ; do sudo stop $i ; sudo start $i ; done
Making OpenStack available to the LAN
Assuming that networking is set up properly on the OpenStack host, we can now configure networking within nova:
Bring eth1 into the 'up' state:
$ sudo ifconfig eth1 up
Set up the private network (10.0.0.1-10.0.0.254):
$ sudo nova-manage network create private 10.0.0.0/24 1 256 --bridge=br100 --bridge_interface=eth1 --multi_host=True
Set up the public-facing IP addresses (just 192.168.122.225-192.168.122.254):
$ sudo nova-manage floating create 192.168.122.224/27
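As a sanity check on that range: a /27 holds 32 addresses, so starting at .224 it covers 192.168.122.224-192.168.122.255, and the usable floating addresses are .225-.254 (network and broadcast excluded). A quick arithmetic sketch:

```shell
# A /27 holds 32 addresses; starting at .224, the usable floating range
# is .225 through .254 (.224 is the network and .255 the broadcast).
base=224 ; size=32
first=$((base + 1)) ; last=$((base + size - 2))
echo "192.168.122.$first-192.168.122.$last"
```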
- Add the following to /etc/nova/nova.conf:
12.04:
--auto_assign_floating_ip
12.10:
auto_assign_floating_ip=True
Restart nova-network with:
$ sudo restart nova-network
Verify it worked:
$ sudo nova-manage network list
id   IPv4          IPv6   start address   DNS1      DNS2   VlanID   project   uuid
1    10.0.0.0/24   None   10.0.0.2        8.8.4.4   None   None     None      2cfbc990-d993-463b-94e5-119404e6488f
$ sudo nova-manage floating list
None    192.168.122.225    None    nova    eth0
None    192.168.122.226    None    nova    eth0
None    192.168.122.227    None    nova    eth0
...
Using OpenStack
By this point, OpenStack should be fully set up and ready to be populated with images and used. You will need to generate your credentials and then save them somewhere safe on your client machine (for now, the same machine as the OpenStack VM; see below for how to make this available to the libvirt network OpenStack is on).
Adding an image via glance
On the OpenStack VM, create ~/.openstackrc (can be named anything, but should be chmod 0600) with the following:
export OS_USERNAME=admin
export OS_PASSWORD=adminpasswd
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://localhost:5000/v2.0/
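The step above can be sketched end to end: write the rc file with restrictive permissions, source it, and confirm the variables took effect. The values are the example credentials from this guide, and the file is written to the current directory here rather than $HOME:

```shell
# Write the rc file (umask 077 yields mode 0600), source it, and check.
umask 077
cat > ./openstackrc <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=adminpasswd
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://localhost:5000/v2.0/
EOF
. ./openstackrc
echo "$OS_USERNAME"
```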
On the OpenStack VM, create keystone credentials:
$ keystone ec2-credentials-create
+-----------+----------------------------------+
| Property  | Value                            |
+-----------+----------------------------------+
| access    | 73c9a429d7fe42878d49423b6765f929 |
| secret    | 5add6a981af04d048612aa09e27818d8 |
| tenant_id | ec949719d82c442cb32729be66e2e8ae |
| user_id   | 00869ca8093f4187a9188473a84d7fd1 |
+-----------+----------------------------------+
On the OpenStack VM, append the following to ~/.openstackrc: set EC2_URL to the publicURL from 'keystone catalog --service ec2' (substituting the IP address or hostname of the OpenStack VM for 'localhost'), EC2_ACCESS_KEY to the 'access' value from 'keystone ec2-credentials-create', and EC2_SECRET_KEY to the 'secret' value from 'keystone ec2-credentials-create':
export EC2_URL=http://localhost:8773/services/Cloud
export EC2_ACCESS_KEY=73c9a429d7fe42878d49423b6765f929
export EC2_SECRET_KEY=5add6a981af04d048612aa09e27818d8
On the OpenStack VM machine, install euca2ools:
$ sudo apt-get install euca2ools
Test the ec2 compatibility layer:
$ . ./.openstackrc
$ euca-describe-instances
$ euca-describe-images
Test openstack:
$ nova list   # will prompt for encrypted keyring password. Use your login password
+----+------+--------+----------+
| ID | Name | Status | Networks |
+----+------+--------+----------+
+----+------+--------+----------+
$ glance index
$
On the OpenStack VM, download an image (again, for now, just onto the OpenStack host):
$ export img=ubuntu-12.04-beta1-server-cloudimg-amd64-disk1.img
$ wget http://uec-images.ubuntu.com/releases/precise/beta-1/$img
Add the image:
$ glance add name="my-glance/$img" is_public=true container_format=ami disk_format=ami < ./"$img"
Uploading image 'my-glance/ubuntu-12.04-beta1-server-cloudimg-amd64-disk1.img'
...
Added new image with ID: a6cfb7db-b988-4ba5-9858-c3bd5747c428
See if it showed up:
$ euca-describe-images
IMAGE   ami-00000001    None (my-glance/ubuntu-12.04-beta1-server-cloudimg-amd64-disk1.img)    available   public  machine instance-store
$ glance index
ID                                   Name                           Disk Format          Container Format     Size
------------------------------------ ------------------------------ -------------------- -------------------- --------------
a6cfb7db-b988-4ba5-9858-c3bd5747c428 my-glance/ubuntu-12.04-beta1-s ami                  ami                  231211008
OPTIONAL: can specify the API version to use if you have different endpoints configured. Eg:
$ glance --os-image-url=http://127.0.0.1:9292/ --os-image-api-version=1 index
ID                                   Name                           Disk Format          Container Format     Size
------------------------------------ ------------------------------ -------------------- -------------------- --------------
a6cfb7db-b988-4ba5-9858-c3bd5747c428 my-glance/ubuntu-12.04-beta1-s ami                  ami                  231211008
$ glance --os-image-url=http://127.0.0.1:9292/ --os-image-api-version=2 image-list
+--------------------------------------+--------------------------------------------------------------+
| ID                                   | Name                                                         |
+--------------------------------------+--------------------------------------------------------------+
| cdfed269-9c33-4661-ae76-49fefbbbf49e | my-glance/ubuntu-12.04-beta1-server-cloudimg-amd64-disk1.img |
+--------------------------------------+--------------------------------------------------------------+
To test images directly from the OpenStack host, generate a keypair:
$ ssh-keygen -t rsa -b 2048
$ nova keypair-add mykey --pub_key ~/.ssh/id_rsa.pub
$ nova keypair-list
+----------+-------------------------------------------------+
| Name     | Fingerprint                                     |
+----------+-------------------------------------------------+
| mykey    | 6a:bb:67:46:5a:19:9f:c7:a4:bc:25:97:90:7f:9e:d7 |
+----------+-------------------------------------------------+
NOTE: due to LP: #959426, nova-compute doesn't always start. Verify it is started and if not start it:
$ ps auxww | grep [n]ova-compute
$ sudo start nova-compute
nova-compute start/running, process 1869
$ ps auxww | grep [n]ova-compute
nova      1869  0.0  0.0  37936  1272 ?   Ss   18:04   0:00 su -s /bin/sh -c exec nova-compute --flagfile=/etc/nova/nova.conf --flagfile=/etc/nova/nova-compute.conf nova
nova      1870  8.4  2.7 270208 55932 ?   S    18:04   0:01 /usr/bin/python /usr/bin/nova-compute --flagfile=/etc/nova/nova.conf --flagfile=/etc/nova/nova-compute.conf
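The check-and-start can be wrapped in a small helper so it is easy to rerun; a sketch (the `ensure_running` name is made up for illustration — on the OpenStack VM you would call it as `ensure_running nova-compute 'sudo start nova-compute'`):

```shell
#!/bin/sh
# Sketch of a check-and-start helper for the LP: #959426 workaround.
# 'ensure_running' is a hypothetical name; pass the process name and the
# command that starts it.
ensure_running() {
    name=$1
    start_cmd=$2
    # Bracket the first character (e.g. '[n]ova-compute') so the grep
    # does not match its own process entry.
    pattern=$(printf '%s' "$name" | sed 's/^\(.\)/[\1]/')
    if ps auxww | grep -q "$pattern"; then
        echo "$name already running"
    else
        echo "starting $name"
        eval "$start_cmd"
    fi
}
```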
Using nova
Initial setup
To use OpenStack remotely, you must create an ssh keypair and keystone credentials.
On any client machine, create a set of keys to use with this OpenStack installation:
$ ssh-keygen -t rsa -b 2048 -f $HOME/.ssh/openstack.id_rsa
$ scp ~/.ssh/openstack.id_rsa.pub openstack-precise-server-amd64:/tmp
On the OpenStack VM:
Add the ssh keypair:
$ nova keypair-add mykey2 --pub_key /tmp/openstack.id_rsa.pub
$ nova keypair-list
+--------+-------------------------------------------------+
| Name   | Fingerprint                                     |
+--------+-------------------------------------------------+
| mykey  | 6a:bb:67:46:5a:19:9f:c7:a4:bc:25:97:90:7f:9e:d7 |
| mykey2 | a9:4c:a5:47:55:da:af:77:db:d3:19:84:d0:5e:fa:a3 |
+--------+-------------------------------------------------+
Create keystone credentials:
$ keystone ec2-credentials-create
+-----------+----------------------------------+
| Property  | Value                            |
+-----------+----------------------------------+
| access    | 684d98881d1242f389fcaa6edeab0dfb |
| secret    | 59e4153c6114455797852450966c52fb |
| tenant_id | ec949719d82c442cb32729be66e2e8ae |
| user_id   | 00869ca8093f4187a9188473a84d7fd1 |
+-----------+----------------------------------+
On the client, create ~/.openstackrc with EC2_URL set to the public_url from 'keystone catalog --service ec2' (substituting the IP address or hostname of the OpenStack VM for 'localhost'), EC2_ACCESS_KEY set to the 'access' value from 'keystone ec2-credentials-create' and EC2_SECRET_KEY set to the 'secret' value from 'keystone ec2-credentials-create':
export EC2_URL=http://192.168.122.3:8773/services/Cloud
export EC2_ACCESS_KEY=684d98881d1242f389fcaa6edeab0dfb
export EC2_SECRET_KEY=59e4153c6114455797852450966c52fb
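A common stumbling block is forgetting to source the rc file before running euca2ools; a small sketch that reports which of the variables ~/.openstackrc should export are missing (the `check_rc_env` name is made up for illustration):

```shell
#!/bin/sh
# Report which of the variables ~/.openstackrc should export are unset.
# 'check_rc_env' is a hypothetical helper name, not part of any tool.
check_rc_env() {
    missing=""
    for v in EC2_URL EC2_ACCESS_KEY EC2_SECRET_KEY; do
        eval "val=\${$v:-}"
        [ -n "$val" ] || missing="$missing $v"
    done
    if [ -n "$missing" ]; then
        echo "missing:$missing"
    else
        echo "ok"
    fi
}
check_rc_env
```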
Verify you can connect to nova via the EC2 compatibility layer:
$ euca-describe-images
IMAGE   ami-00000001    None (my-glance/ubuntu-12.04-beta1-server-cloudimg-amd64-disk1.img)    available   public  machine instance-store
- [OPTIONAL] To use cloud-utils to bundle images via the EC2 compatibility layer:
create the certificates required to bundle images via the EC2 compatibility layer:
$ mkdir ~/openstack-certs/
$ chmod 700 ~/openstack-certs/
$ nova x509-create-cert ~/openstack-certs/pk.pem ~/openstack-certs/cert.pem
Wrote private key to /home/jamie/openstack-certs/pk.pem
Wrote x509 certificate to /home/jamie/openstack-certs/cert.pem
$ nova x509-get-root-cert ~/openstack-certs/cacert.pem
Wrote x509 root cert to /home/jamie/openstack-certs/cacert.pem
Append to ~/.openstackrc:
# below this line is for cloud-utils
export EC2_CERT=~/openstack-certs/cert.pem
export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
export EUCALYPTUS_CERT=~/openstack-certs/cacert.pem
export S3_URL=http://localhost:8773/services/Cloud
You can now add images via cloud-utils with something like the following (NOTE: this currently fails because we don't have an S3 service (S3_URL)):
$ cloud-publish-image -vv x86_64 ubuntu-12.04-beta1-server-cloudimg-amd64-disk1.img my-ubuntu-images
Starting and stopping instances
Start an image (get in the habit of using m1.tiny since you have limited resources in the VM):
$ euca-run-instances -k mykey2 -t m1.tiny ami-00000001
RESERVATION     r-7yderdto      ec949719d82c442cb32729be66e2e8ae        default
INSTANCE        i-00000008      ami-00000001    server-8        server-8        pending mykey2 (ec949719d82c442cb32729be66e2e8ae, None) 0       m1.tiny 2012-03-20T23:07:07Z    unknown zone    monitoring-disabled     instance-store
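The instance id can be captured for reuse by the follow-up commands (euca-describe-instances, euca-terminate-instances). A sketch that parses the INSTANCE line; the sample output is hardcoded from the run above — in practice you would use `out=$(euca-run-instances -k mykey2 -t m1.tiny ami-00000001)`:

```shell
#!/bin/sh
# Extract the instance id (second field of the INSTANCE line) from
# euca-run-instances output. Sample output hardcoded for illustration.
out="RESERVATION r-7yderdto ec949719d82c442cb32729be66e2e8ae default
INSTANCE i-00000008 ami-00000001 server-8 server-8 pending mykey2"
iid=$(printf '%s\n' "$out" | awk '/^INSTANCE/ { print $2 }')
echo "instance: $iid"
```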
Verify it started:
$ euca-describe-instances
RESERVATION     r-7yderdto      ec949719d82c442cb32729be66e2e8ae        default
INSTANCE        i-00000008      ami-00000001    server-8        server-8        running mykey2 (ec949719d82c442cb32729be66e2e8ae, openstack-precise-server-amd64)        0       m1.tiny 2012-03-20T23:07:07Z    nova    monitoring-disabled     10.0.0.4        10.0.0.4        instance-store
$ nova list
+--------------------------------------+----------+--------+------------------+
| ID                                   | Name     | Status | Networks         |
+--------------------------------------+----------+--------+------------------+
| e7a08f62-a57f-4b64-85bd-dcf8682e3fc7 | Server 8 | ACTIVE | private=10.0.0.4 |
+--------------------------------------+----------+--------+------------------+
You can also see in 'ps auxww' output if kvm started:
$ ps auxww | grep [/]kvm
108       2996 81.9 11.8 1814320 243608 ?  Sl   18:07   1:14 /usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 512...
Verify the image (Note: guest networking does not show up on 12.10):
$ euca-get-console-output i-00000008
i-00000008
2012-03-20T23:12:39Z
[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Linux version 3.2.0-17-virtual (buildd@allspice) (gcc version 4.6.2 (Ubuntu/Linaro 4.6.2-16ubuntu1) ) #27-Ubuntu SMP Fri Feb 24 15:57:57 UTC 2012 (Ubuntu 3.2.0-17.27-virtual 3.2.6)
[ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-17-virtual root=LABEL=cloudimg-rootfs ro console=ttyS0
...
$ ping -c 1 10.0.0.4
PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
64 bytes from 10.0.0.4: icmp_req=1 ttl=64 time=0.852 ms
...
$ ssh ubuntu@10.0.0.4
Welcome to Ubuntu precise (development branch) (GNU/Linux 3.2.0-17-virtual x86_64)
...
ubuntu@server-8:~$
Note that the console output can be found in /var/lib/nova/instances/<instance> (eg i-00000008)
Terminate the instance:
$ euca-terminate-instances i-00000008
$ euca-describe-instances
$
Networking with instances
To make instances publicly available, you allocate an address, associate it with an instance, then optionally configure security groups (note that if configuring a non-'default' security group, the image must be started in this security group). If you used '--auto_assign_floating_ip' as instructed (see above), the allocation and association of IP addresses should happen automatically (though you still need to set up the security groups). If not, you can do it manually:
Allocate the address:
$ euca-allocate-address
ADDRESS 192.168.122.227
Associate the address:
$ euca-associate-address 192.168.122.227 -i i-00000008
ADDRESS 192.168.122.227 i-00000008
Verify the address:
$ euca-describe-instances
RESERVATION     r-7yderdto      ec949719d82c442cb32729be66e2e8ae        default
INSTANCE        i-00000008      ami-00000001    server-8        server-8        running mykey2 (ec949719d82c442cb32729be66e2e8ae, openstack-precise-server-amd64)        0       m1.tiny 2012-03-20T23:07:07Z    nova    monitoring-disabled     192.168.122.227 10.0.0.4        instance-store
$ nova list
+--------------------------------------+----------+--------+-----------------------------------+
| ID                                   | Name     | Status | Networks                          |
+--------------------------------------+----------+--------+-----------------------------------+
| e7a08f62-a57f-4b64-85bd-dcf8682e3fc7 | Server 8 | ACTIVE | private=10.0.0.4, 192.168.122.227 |
+--------------------------------------+----------+--------+-----------------------------------+
Configure security groups (note that if configuring a non-'default' security group, the image must be started in this security group):
$ euca-authorize -P tcp -p 22 default
EC2APIError: {'to_port': 22, 'cidr': u'0.0.0.0/0', 'from_port': 22, 'protocol': 'tcp', 'parent_group_id': 1L} - This rule already exists in group
$ euca-authorize -P icmp -t -1:-1 default
EC2APIError: {'to_port': -1, 'cidr': u'0.0.0.0/0', 'from_port': -1, 'protocol': 'icmp', 'parent_group_id': 1L} - This rule already exists in group
See the group:
$ euca-describe-groups
GROUP   ec949719d82c442cb32729be66e2e8ae        default default
PERMISSION      ec949719d82c442cb32729be66e2e8ae        default ALLOWS  tcp     22      22      FROM    CIDR    0.0.0.0/0
PERMISSION      ec949719d82c442cb32729be66e2e8ae        default ALLOWS  icmp    -1      -1      FROM    CIDR    0.0.0.0/0
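The presence of the ssh rule can also be checked by parsing euca-describe-groups output; a sketch using the sample output above (hardcoded in a here-document — in practice you would pipe the real command into the awk filter):

```shell
#!/bin/sh
# Check for a tcp/22 rule in the security group listing. PERMISSION lines
# have the protocol in field 5 and the from-port in field 6.
# Sample euca-describe-groups output is hardcoded for illustration.
ssh_rule=$(awk '$1 == "PERMISSION" && $5 == "tcp" && $6 == 22 { print "yes" }' <<'EOF'
GROUP ec949719d82c442cb32729be66e2e8ae default default
PERMISSION ec949719d82c442cb32729be66e2e8ae default ALLOWS tcp 22 22 FROM CIDR 0.0.0.0/0
PERMISSION ec949719d82c442cb32729be66e2e8ae default ALLOWS icmp -1 -1 FROM CIDR 0.0.0.0/0
EOF
)
echo "ssh rule present: ${ssh_rule:-no}"
```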
Verify from the OpenStack host:
$ ping -c 1 192.168.122.227
PING 192.168.122.227 (192.168.122.227) 56(84) bytes of data.
64 bytes from 192.168.122.227: icmp_req=1 ttl=64 time=0.885 ms
...
Verify from another host on the libvirt network:
ubuntu@precise-amd64:~$ ping -c 1 192.168.122.227
PING 192.168.122.227 (192.168.122.227) 56(84) bytes of data.
64 bytes from 192.168.122.227: icmp_req=1 ttl=62 time=1.49 ms
...
juju
Using the values from 'Initial setup' (above), create ~/.juju/environments.yaml (this file should be chmod 0660; also note the other options for the environment):
environments:
openstack:
type: ec2
control-bucket: juju-openstack-bucket
admin-secret: foooooooooooo
ec2-uri: http://192.168.122.3:8773/services/Cloud
s3-uri: http://192.168.122.3:3333
ec2-key-name: mykey2
authorized-keys-path: <your home dir>/.ssh/openstack.id_rsa.pub
access-key: 684d98881d1242f389fcaa6edeab0dfb
secret-key: 59e4153c6114455797852450966c52fb
default-image-id: ami-00000001
default-instance-type: m1.tiny
default-series: precise
Then you can use juju normally (you will want to use ssh-agent) to set up machines, etc. It should handle all the networking and security groups for you as well. Eg:
$ juju bootstrap # this is pretty fast
$ juju deploy --repository=~/charms local:precise/wordpress
$ juju status
...
Keep in mind that 'juju bootstrap' starts a control node then 'apt-get update's it and installs juju, zookeeper, etc. Until this node is up, it will look like there is a problem. As such, it is convenient to use 'euca-describe-instances', then watch the console output. Eg:
$ euca-describe-instances
RESERVATION     r-c738ps8l      ec949719d82c442cb32729be66e2e8ae        juju-openstack, juju-openstack-0
INSTANCE        i-00000007      ami-00000001    192.168.122.225 server-7        running None (ec949719d82c442cb32729be66e2e8ae, openstack-precise-server-amd64)  0       m1.tiny 2012-03-22T02:07:04.000Z        nova    monitoring-disabled     192.168.122.225 10.0.0.2        instance-store
$ watch 'euca-get-console-output i-07|tail -20'
...
You'll know the control node is ready when you see something like:
-----END SSH HOST KEY KEYS-----
cloud-init boot finished at Thu, 22 Mar 2012 02:20:08 +0000. Up 763.66 seconds
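This wait can be scripted instead of watched by hand; a sketch (the `wait_for_cloudinit` name and WAIT_INTERVAL variable are made up — on the real system the argument would be `'euca-get-console-output i-00000007'`):

```shell
#!/bin/sh
# Poll a console-output command until cloud-init reports the boot finished.
# 'wait_for_cloudinit' is a hypothetical helper, not part of any tool.
wait_for_cloudinit() {
    cmd=$1
    tries=0
    while [ "$tries" -lt 90 ]; do    # ~15 minutes at the default interval
        if eval "$cmd" | grep -q 'cloud-init boot finished'; then
            echo "ready"
            return 0
        fi
        tries=$((tries + 1))
        sleep "${WAIT_INTERVAL:-10}"
    done
    echo "timed out"
    return 1
}
```

Usage would look like `wait_for_cloudinit 'euca-get-console-output i-00000007'`.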
NOTE: due to LP: #962389 juju nodes become unavailable during cloud-init which prevents using juju with OpenStack in a VM currently.
Web frontend (OpenStack Dashboard)
Install the required packages:
$ sudo apt-get install python-memcache memcached openstack-dashboard
Adjust /etc/openstack-dashboard/local_settings.py to have:
CACHE_BACKEND='memcached://127.0.0.1:11211/'
Go to http://<openstack>/ (12.04 LTS) or http://<openstack>/horizon (12.10) and login with 'admin' and password 'adminpasswd' (assumes the above defaults)
NOTE: if running the dashboard on another host, then also adjust OPENSTACK_HOST in /etc/openstack-dashboard/local_settings.py
NOTE: the dashboard requires nova-volume on 12.10/Folsom and higher (LP: #946874)
Troubleshooting
Here are some commands and files that are useful with debugging:
- euca-describe-images
- euca-describe-instances
- euca-describe-availability-zones verbose
- euca-describe-addresses
- euca-describe-groups
- euca-associate-address/euca-disassociate-address
- euca-allocate-address/euca-release-address
- euca-run-instances/euca-terminate-instances
- euca-get-console-output (/var/lib/nova/instances/*)
- nova list
- nova show <id from nova list>
- /var/log/nova/nova-api.log
- /var/log/nova/nova-compute.log
- /var/log/nova/nova-network.log
- /var/log/upstart/nova-*.log
Restarting all of OpenStack:
$ for i in nova-api nova-scheduler nova-network nova-compute nova-objectstore glance-api glance-registry keystone; do sudo service $i restart ; done
Also, when specifying an ami or an instance, you don't have to specify all the zeros. Eg:
$ euca-run-instances ami-01
$ euca-terminate-instances i-02
Caveats
Due to how rabbitmq works, it does not seem possible to change the host's IP address and hostname (eg, via cloning the VM), regardless of changes to /etc/hosts.
Be very careful to have enough RAM on the OpenStack host, otherwise starting a VM will simply result in an error and 'nova show <id>' won't tell you it is because of too little RAM (/var/log/nova/nova-scheduler.log may have it though). LP: #1019017
If 'nova' continues to prompt for a keyring password, pass '--no-cache' to nova. Eg: nova --no-cache list. (LP: #1020238)
While the above instructions create a 'volume' service, as described it is unusable. To create a full-fledged volume service, you can initialize an LVM volume group named 'nova-volumes' and install 'nova-volume'. See the upstream documentation for more information.
Credits
While this page was primarily written by Jamie Strandboge (jdstrand), the following people contributed greatly to the information in this page:
- Adam Gandelman (adam_g)
- Scott Moser (smoser)