Customizing Eucalyptus Load Balancer for Eucalyptus 4.0

Background

For the Elastic Load Balancing service, Eucalyptus utilizes an HAProxy instance.  The load balancer image contains the following version of haproxy (as of Eucalyptus 4.0.0):

# /usr/sbin/haproxy -vv
HA-Proxy version 1.5-dev21-6b07bf7 +2013/12/17
Copyright 2000-2013 Willy Tarreau <w@1wt.eu>

Build options :
 TARGET = linux2628
 CPU = generic
 CC = gcc
 CFLAGS = -O2 -g -fno-strict-aliasing
 OPTIONS = USE_LINUX_TPROXY=1 USE_REGPARM=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
 maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built without zlib support (USE_ZLIB not set)
Compression algorithms supported : identity
Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 7.8 2008-09-05
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
 epoll : pref=300, test result OK
 poll : pref=200, test result OK
 select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.

By default, the following HAProxy configuration options are used by the Eucalyptus Load Balancer image (defined by the Eucalyptus load-balancer-servo application, which is the controlling mechanism for Eucalyptus load balancing):

#template
global
 maxconn 100000
 ulimit-n 655360
 pidfile /var/run/haproxy.pid

#drop privileges after port binding
 user servo
 group servo

defaults
 contimeout 1000
 clitimeout 10000
 srvtimeout 10000
 option http-server-close # affects KA on/off

Depending on the backend applications that will be used with the Eucalyptus Load Balancer, these settings may not be sufficient.

The goal of this entry is to demonstrate how to customize the Eucalyptus Load Balancer image configuration to handle various backend applications that will be used with the load balancer.

Prerequisites

In order to customize the Eucalyptus Load Balancer image, the credentials of the cloud administrator (Eucalyptus IAM credentials of a user under the ‘eucalyptus’ account) must be used.  These credentials are needed to do the following:

  • view and modify the load-balancing cloud properties
  • install and list load balancer images with euca-install-load-balancer
  • view the load balancer instances, which are only visible to the cloud administrator

Since the cloud administrator credentials will be used, there will be no need to define any Eucalyptus IAM policies.
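For example, on the Cloud Controller the administrator credentials can typically be downloaded and sourced as follows (a sketch; the zip file name is arbitrary, and the archive contains the eucarc file that is sourced again later in this entry):

# euca_conf --get-credentials admin-creds.zip
# unzip admin-creds.zip -d ~/admin-creds
# source ~/admin-creds/eucarc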

To mount and modify the Eucalyptus Load Balancer image, the following Linux tools are needed (all of which are used later in this entry):

  • rpm2cpio and cpio (to unpack the RPM)
  • tar (to decompress and repackage the image)
  • losetup and kpartx (to map the image to a loopback device)
  • mount and chroot (to access and edit the image’s filesystem)

The examples in this blog were all done on a CentOS 6.5 machine where the eucalyptus-load-balancer-image package has been installed.  This package contains the ‘euca-install-load-balancer’ command.
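If the package is not already present, it can typically be installed from the Eucalyptus 4.0 release repositories (this assumes the Eucalyptus yum repositories are already configured on the machine):

# yum install eucalyptus-load-balancer-image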

Obtaining the Eucalyptus Elastic Load Balancer Image

There are a couple of ways to obtain the Eucalyptus Load Balancer image:

This blog entry will use the eucalyptus-load-balancer-image RPM, and update the image accordingly.  To get started, create a directory (in this example ‘eucalyptus-lb’), and download the latest eucalyptus load balancer image RPM from downloads.eucalyptus.com:

[root@odc-f-13 ~]# mkdir eucalyptus-lb; cd eucalyptus-lb
[root@odc-f-13 eucalyptus-lb]# wget http://downloads.eucalyptus.com/software/eucalyptus/4.0/centos/6/x86_64/eucalyptus-load-balancer-image-1.1.0-0.212.el6.x86_64.rpm

Once the RPM package has been downloaded, unpack the RPM:

[root@odc-f-13 eucalyptus-lb]# rpm2cpio eucalyptus-load-balancer-image-1.1.0-0.212.el6.x86_64.rpm | cpio --extract --make-directories --preserve-modification-time --verbose
./usr/bin/euca-install-load-balancer
./usr/share/doc/eucalyptus-load-balancer-image-1.1.0
./usr/share/doc/eucalyptus-load-balancer-image-1.1.0/IMAGE-LICENSE
./usr/share/doc/eucalyptus-load-balancer-image-1.1.0/eucalyptus-load-balancer-image.ks
./usr/share/eucalyptus-load-balancer-image
./usr/share/eucalyptus-load-balancer-image/eucalyptus-load-balancer-image-1.1.0-212.tgz
559369 blocks

After unpacking the RPM, change directory to usr/share/eucalyptus-load-balancer-image (under the eucalyptus-lb directory), and decompress the eucalyptus-load-balancer-image-1.1.0-212.tgz file to obtain the Eucalyptus Load Balancer image:

[root@odc-f-13 eucalyptus-lb]# cd usr/share/eucalyptus-load-balancer-image
[root@odc-f-13 eucalyptus-load-balancer-image]# tar -xzvf eucalyptus-load-balancer-image-1.1.0-212.tgz
eucalyptus-load-balancer-image.img

Now that the image is available, we can modify it accordingly.

Modifying the Eucalyptus Load Balancer Image

To modify the Eucalyptus Load Balancer image, the image needs to be mounted to a loopback device, as demonstrated below:

[root@odc-f-13 eucalyptus-load-balancer-image]# mkdir /mnt/centos
[root@odc-f-13 eucalyptus-load-balancer-image]# losetup /dev/loop0 eucalyptus-load-balancer-image.img
[root@odc-f-13 eucalyptus-load-balancer-image]# kpartx -av /dev/loop0
add map loop0p1 (253:2): 0 3145728 linear /dev/loop0 2048
[root@odc-f-13 eucalyptus-load-balancer-image]# mount /dev/mapper/loop0p1 /mnt/centos
[root@odc-f-13 eucalyptus-load-balancer-image]# chroot /mnt/centos
[root@odc-f-13 /]#

We will modify and add the following HAProxy options under the ‘defaults’ section in the /etc/load-balancer-servo/haproxy_template.conf file.  For information about these options, please refer to the HAProxy 1.5 documentation:

  • replace ‘srvtimeout’ with ‘timeout server’ since ‘srvtimeout’ is deprecated, and set the value to ‘2m’
  • replace ‘clitimeout’ with ‘timeout client’ since ‘clitimeout’ is deprecated, and set the value to ‘2m’
  • replace ‘contimeout’ with ‘timeout connect’ since ‘contimeout’ is deprecated, and set the value to ‘5s’
  • add ‘timeout http-keep-alive’ with a value of ‘10s’
  • add ‘timeout queue’ with a value of ‘1m’
  • add ‘timeout check’ with a value of ‘5s’
  • add ‘retries’ with a value of ‘3’
  • add the following options to avoid logging null connections and to enable session redistribution in case of failure:
    • option dontlognull
    • option redispatch

The /etc/load-balancer-servo/haproxy_template.conf file should look similar to the following after all the desired attributes are added:

[root@odc-f-13 /]# cat /etc/load-balancer-servo/haproxy_template.conf
#template
global
 maxconn 100000
 ulimit-n 655360
 pidfile /var/run/haproxy.pid

#drop privileges after port binding
 user servo
 group servo

defaults
 timeout connect 5s
 timeout client 2m
 timeout server 2m
 timeout http-keep-alive 10s
 timeout queue 1m
 timeout check 5s
 retries 3
 option dontlognull
 option redispatch
 option http-server-close # affects KA on/off

(Note:  Depending upon what edits are being made to the HAProxy configuration settings, there may also be a need to edit the /etc/sysctl.conf file in the image to get the desired behavior from the Eucalyptus Load Balancer.  For example, the following sysctl properties can be tuned to adjust TCP keepalive behavior:

  • net.ipv4.tcp_keepalive_time
  • net.ipv4.tcp_keepalive_intvl
  • net.ipv4.tcp_keepalive_probes

For more information about editing sysctl values, the documentation from Red Hat can be referenced.)
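A minimal sketch of such an edit, done inside the chroot, might look like the following (the values shown are placeholders rather than recommendations, and they only take effect once the load balancer instance boots):

cat >> /etc/sysctl.conf <<'EOF'
# illustrative keepalive tuning only -- adjust for your workload
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 5
EOF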

Once all edits are completed, confirm that the configuration file is correct, exit out of the chroot environment and unmount the image:

[root@odc-f-13 /]# /usr/sbin/haproxy -c -f /etc/load-balancer-servo/haproxy_template.conf
Configuration file has no error but will not start (no listener) => exit(2).

[root@odc-f-13 eucalyptus-load-balancer-image]# umount /mnt/centos
[root@odc-f-13 eucalyptus-load-balancer-image]# kpartx -dv /dev/loop0
del devmap : loop0p1
[root@odc-f-13 eucalyptus-load-balancer-image]# losetup -d /dev/loop0

Installing the New Eucalyptus Load Balancer Image

After the image has been unmounted, create a new tar-gzipped file that contains the modified Eucalyptus Load Balancer image:

[root@odc-f-13 eucalyptus-load-balancer-image]# tar -zcvf eucalyptus-load-balancer-image-updated.tgz eucalyptus-load-balancer-image.img

Next, make sure the cloud administrator credentials are sourced and check the cloud properties for the Eucalyptus Load Balancer service:

[root@odc-f-13 eucalyptus-load-balancer-image]# cd
[root@odc-f-13 ~]# source eucarc
[root@odc-f-13 ~]# euca-describe-properties | grep load
PROPERTY ViciousLiesAndDangerousRumors.storage.maxconcurrentsnapshotuploads 3
PROPERTY ViciousLiesAndDangerousRumors.storage.snapshotuploadtimeoutinhours 48
PROPERTY authentication.credential_download_host_match {}
PROPERTY loadbalancing.loadbalancer_app_cookie_duration 24
PROPERTY loadbalancing.loadbalancer_dns_subdomain elb
PROPERTY loadbalancing.loadbalancer_emi emi-F0D5828C
PROPERTY loadbalancing.loadbalancer_instance_type m1.medium
PROPERTY loadbalancing.loadbalancer_num_vm 1
PROPERTY loadbalancing.loadbalancer_restricted_ports 22
PROPERTY loadbalancing.loadbalancer_vm_keyname euca-elb
PROPERTY loadbalancing.loadbalancer_vm_ntp_server pool.ntp.org

Check to see what load balancer images are enabled:

[root@odc-f-13 ~]# euca-install-load-balancer --list
Currently Installed Load Balancer Bundles:

Version 1
emi-FA373789 (loadbalancer_v1/eucalyptus-load-balancer-image-1.0.4-164.img.manifest.xml)
 Installed on 2014-05-20 at 07:12:18 PDT

Version 2 (enabled)
emi-F0D5828C (loadbalancer-v2/eucalyptus-load-balancer-image.img.manifest.xml)
 Installed on 2014-05-28 at 11:10:03 PDT

To install the modified load balancer image, use the ‘euca-install-load-balancer‘ command:

[root@odc-f-13 ~]# euca-install-load-balancer -t ~/eucalyptus-lb/usr/share/eucalyptus-load-balancer-image/eucalyptus-load-balancer-image-updated.tgz
Decompressing tarball: eucalyptus-lb/usr/share/eucalyptus-load-balancer-image/eucalyptus-load-balancer-image-updated.tgz
Bundling and uploading image to bucket: loadbalancer-v3
Registering image manifest: loadbalancer-v3/eucalyptus-load-balancer-image.img.manifest.xml
Registered image: emi-BCAD86BE
PROPERTY loadbalancing.loadbalancer_emi emi-BCAD86BE was emi-F0D5828C

Confirm that the new EMI is enabled:

[root@odc-f-13 ~]# euca-install-load-balancer --list
Currently Installed Load Balancer Bundles:

Version 1
emi-FA373789 (loadbalancer_v1/eucalyptus-load-balancer-image-1.0.4-164.img.manifest.xml)
 Installed on 2014-05-20 at 07:12:18 PDT

Version 2
emi-F0D5828C (loadbalancer-v2/eucalyptus-load-balancer-image.img.manifest.xml)
 Installed on 2014-05-28 at 11:10:03 PDT

Version 3 (enabled)
emi-BCAD86BE (loadbalancer-v3/eucalyptus-load-balancer-image-updated.img.manifest.xml)
 Installed on 2014-07-06 at 18:02:20 PDT

Confirming the Updated Load Balancer Configuration

To confirm the changes, first make sure the cloud property ‘loadbalancing.loadbalancer_vm_keyname’ has a defined value so that the load balancer instance can be accessed for debugging purposes.
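If the key name is not already set, it can be defined with euca-modify-property (a sketch, assuming a keypair named ‘euca-elb’ has already been created under the ‘eucalyptus’ account):

# euca-modify-property -p loadbalancing.loadbalancer_vm_keyname=euca-elb

Then create a Eucalyptus Elastic Load Balancer: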

[root@odc-f-13 ~]# eulb-create-lb TestLoadBalancer -z ViciousLiesAndDangerousRumors -l "lb-port=80, protocol=HTTP, instance-port=80, instance-protocol=HTTP"
DNS_NAME TestLoadBalancer-408396244283.elb.acme.eucalyptus-systems.com

Confirm that the load balancer instance is running (only the cloud administrator can see the load balancing instance IDs), and authorize port 22 (SSH) access to the instance’s security group:

[root@odc-f-13 ~]# euca-describe-instances
RESERVATION r-C27D6F37 944786667073 euca-internal-408396244283-TestLoadBalancer
INSTANCE i-A0DBA47D emi-BCAD86BE euca-10-104-6-233.bigboi.acme.eucalyptus-systems.com euca-172-18-246-58.bigboi.internal running euca-elb 0 m1.medium 2014-07-07T01:08:32.400Z ViciousLiesAndDangerousRumors monitoring-enabled 10.104.6.233 172.18.246.58 instance-store hvm bce54dd2-a9af-4587-9421-457e22dda5ff_ViciousLiesAndDangerousR_1 sg-4F469747 arn:aws:iam::944786667073:instance-profile/internal/loadbalancer/loadbalancer-vm-408396244283-TestLoadBalancer
TAG instance i-A0DBA47D Name loadbalancer-resources
TAG instance i-A0DBA47D aws:autoscaling:groupName asg-euca-internal-elb-408396244283-TestLoadBalancer
TAG instance i-A0DBA47D euca:node 10.105.10.7
[root@odc-f-13 ~]# euca-authorize -P tcp -p ssh euca-internal-408396244283-TestLoadBalancer
GROUP euca-internal-408396244283-TestLoadBalancer
PERMISSION euca-internal-408396244283-TestLoadBalancer ALLOWS tcp 22 22 FROM CIDR 0.0.0.0/0

After SSHing into the load balancer instance, confirm that the /var/lib/load-balancer-servo/euca_haproxy.conf file has the updated changes:

[root@odc-f-13 ~]# ssh -i euca-elb.priv root@euca-10-104-6-233.bigboi.acme.eucalyptus-systems.com
The authenticity of host 'euca-10-104-6-233.bigboi.acme.eucalyptus-systems.com (10.104.6.233)' can't be established.
RSA key fingerprint is e3:9b:80:e2:f3:12:a3:0b:f0:5c:7c:6b:bc:d8:9d:77.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'euca-10-104-6-233.bigboi.acme.eucalyptus-systems.com' (RSA) to the list of known hosts.
Warning: the RSA host key for 'euca-10-104-6-233.bigboi.acme.eucalyptus-systems.com' differs from the key for the IP address '10.104.6.233'
Offending key for IP in /root/.ssh/known_hosts:9
Are you sure you want to continue connecting (yes/no)? yes
[root@euca-172-18-246-58 ~]# ps aux | grep haprox
servo 1027 0.0 0.2 58648 3020 ? Ss 01:09 0:00 /usr/sbin/haproxy -f /var/lib/load-balancer-servo/euca_haproxy.conf -p /var/run/load-balancer-servo/haproxy.pid -V -C /var/lib/load-balancer-servo -D
[root@euca-172-18-246-58 ~]# cat /var/lib/load-balancer-servo/euca_haproxy.conf
global
 maxconn 100000
 ulimit-n 655360
 pidfile /var/run/haproxy.pid
 #drop privileges after port binding
 user servo
 group servo

defaults
 timeout connect 5s
 timeout client 2m
 timeout server 2m
 timeout http-keep-alive 10s
 timeout queue 1m
 timeout check 5s
 retries 3
 option dontlognull
 option redispatch
 option http-server-close # affects KA on/off

frontend http-80
 # lb-TestLoadBalancer
 mode http
 option forwardfor except 127.0.0.1
 bind 0.0.0.0:80
 log /var/lib/load-balancer-servo/haproxy.sock local2 info
 log-format httplog\ %f\ %b\ %s\ %ST\ %ts\ %Tq\ %Tw\ %Tc\ %Tr\ %Tt
 default_backend backend-http-80

backend backend-http-80
 mode http
 balance roundrobin

The ‘defaults’ section should contain all the modifications made to the /etc/load-balancer-servo/haproxy_template.conf file.  The Eucalyptus Load Balancer will now use the updated settings needed to achieve the desired behavior with the various backend applications placed behind the load balancer.


Growing and Securing Instance Root Filesystems With Overlayroot on Eucalyptus

Background

I frequently check out Dustin Kirkland’s blog to get ideas about how to secure instances running on various cloud infrastructures.  Recently, I stumbled across one of his blog entries, where he discussed a package called ‘overlayroot’, which is part of the cloud-initramfs-tools package for Ubuntu.  The really interesting feature I liked about this package is the ability to encrypt the root filesystem of the instance using dm-crypt.  What this means is that if the Node Controller (NC) that hosts the instance happens to become compromised, the information on the device associated with the instance can’t be accessed.  In addition, overlayroot allows the root filesystem of the instance to be ‘laid’ on top of another device that is bigger in size.  The advantage here is that there isn’t a need to re-create an image with a larger root filesystem.

Prerequisites

The key prerequisites for this blog were mentioned in my previous blog, which discusses how to bundle, upload and register a CoreOS EMI.  In addition to these prerequisites, the following EC2 actions are needed for the Eucalyptus IAM policy:

Overlayroot is available with any Ubuntu Cloud image from 12.10 (Quantal Quetzal) to the most recent at the time of this blog, 14.10 (Utopic Unicorn).  For this blog, Ubuntu 14.04 LTS (Trusty Tahr) will be used.

Setting Up the EMI

As mentioned in the overlayroot blog entry, the configuration of overlayroot is located in /etc/overlayroot.conf.  This can be configured before this Ubuntu Cloud image is bundled, uploaded and registered as an EMI, or after.  For brevity, overlayroot will be configured after the Ubuntu Cloud image has been bundled, uploaded and registered.

To get started, download the Ubuntu Cloud image:

# wget http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img

Next, convert the image from qcow2 to raw:

# qemu-img convert -O raw trusty-server-cloudimg-amd64-disk1.img trusty-server-cloudimg-amd64-disk1.raw

Bundle, upload and register the image as an instance store-backed HVM EMI:

# euca-bundle-and-upload-image -i trusty-server-cloudimg-amd64-disk1.raw -b trusty-server-hvm -r x86_64
# euca-register -n trusty-server-hvm trusty-server-hvm/trusty-server-cloudimg-amd64-disk1.raw.manifest.xml --virtualization-type hvm 
IMAGE emi-348861E4

First, create a keypair using euca-create-keypair and authorize SSH access using euca-authorize.
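For example (a sketch, assuming the instance will run in the ‘default’ security group and that the keypair name matches the one passed to euca-run-instances below):

# euca-create-keypair account1-user01 > account1-user01.priv
# chmod 0600 account1-user01.priv
# euca-authorize -P tcp -p ssh default

Next, launch an instance from the EMI: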

# euca-run-instances -k account1-user01 -t m1.medium emi-348861E4
RESERVATION r-BE317660 408396244283 default
INSTANCE i-41DA6A90 emi-348861E4 pending account1-user01 0 m1.medium 2014-06-15T22:47:03.552Z ViciousLiesAndDangerousRumors monitoring-disabled 0.0.0.0 0.0.0.0 instance-store hvm sg-A5133B59

Once the instance has reached a ‘running’ state, SSH into the instance:

# euca-describe-instances i-41DA6A90 
RESERVATION r-BE317660 408396244283 default
INSTANCE i-41DA6A90 emi-348861E4 euca-10-104-6-235.bigboi.acme.eucalyptus-systems.com euca-172-18-238-170.bigboi.internal running account1-user01 0 m1.medium 2014-06-15T22:47:03.552Z ViciousLiesAndDangerousRumors monitoring-disabled 10.104.6.235 172.18.238.170 instance-store hvm sg-A5133B59
# ssh -i account1-user01/account1-user01.priv ubuntu@euca-10-104-6-235.bigboi.acme.eucalyptus-systems.com

Encrypted Root Filesystem

Once inside the instance, since the EMI is an instance store-backed HVM EMI, the ephemeral disk is /dev/vdb:

ubuntu@euca-172-18-238-170:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 2.2G 0 disk
└─vda1 253:1 0 2.2G 0 part /
vdb 253:16 0 7.8G 0 disk

Edit the ‘overlayroot’ option in /etc/overlayroot.conf so that /dev/vdb is used to encrypt the root filesystem:

ubuntu@euca-172-18-238-170:~$ sudo vi /etc/overlayroot.conf
......
overlayroot="crypt:dev=/dev/vdb"

After editing /etc/overlayroot.conf, reboot the instance:

ubuntu@euca-172-18-238-170:~$ sudo reboot

After the instance has been rebooted, SSH back into the instance and observe the new layout of the filesystem:

ubuntu@euca-172-18-238-170:~$ mount
overlayroot on / type overlayfs (rw,lowerdir=/media/root-ro/,upperdir=/media/root-rw/overlay)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/cgroup type tmpfs (rw)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
/dev/vda1 on /media/root-ro type ext4 (ro)
/dev/mapper/secure on /media/root-rw type ext4 (rw,relatime,data=ordered)
none on /sys/fs/pstore type pstore (rw)
systemd on /sys/fs/cgroup/systemd type cgroup (rw,noexec,nosuid,nodev,none,name=systemd)

To verify the encrypted filesystem, use cryptsetup:

ubuntu@euca-172-18-238-170:~$ sudo cryptsetup luksDump /dev/vdb
LUKS header information for /dev/vdb
Version: 1
Cipher name: aes
Cipher mode: xts-plain64
Hash spec: sha1
Payload offset: 4096
MK bits: 256
MK digest: 77 70 97 81 9e d6 56 4b c5 79 60 92 23 02 18 80 9c eb 40 c8
MK salt: ed d1 69 1e d7 49 a6 a6 91 fb 0f 44 3a a2 0b b2
 2a 56 fb 82 77 3f 70 9c 70 ff c2 15 37 09 10 2c
MK iterations: 33250
UUID: 164026af-f6fa-4dd4-8042-5cd24b8c00c9
Key Slot 0: ENABLED
 Iterations: 132641
 Salt: 5e 7f 03 33 d9 af e8 a9 37 fb 5c 4e 10 db c5 38
 f7 9f 49 12 d8 c4 43 8c cb 79 4a 25 da 04 c9 73
 Key material offset: 8
 AF stripes: 4000
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED

Expand and Encrypt the Root Filesystem

If the cloud user wants to increase the size of the root filesystem and encrypt it using a device larger than the available ephemeral storage, the user can create a volume, attach it to the instance, then configure overlayroot to use that device.  For example:

# euca-create-volume -s 20 -z ViciousLiesAndDangerousRumors
# euca-describe-volumes vol-450936D7
VOLUME vol-450936D7 20 ViciousLiesAndDangerousRumors available 2014-06-16T19:25:36.936Z standard
# euca-attach-volume -i i-41DA6A90 -d /dev/sdf vol-450936D7
ATTACHMENT vol-450936D7 i-41DA6A90 /dev/sdf attaching 2014-06-16T19:30:29.605Z
# euca-describe-instances i-41DA6A90
RESERVATION r-BE317660 408396244283 default
INSTANCE i-41DA6A90 emi-348861E4 euca-10-104-6-235.bigboi.acme.eucalyptus-systems.com euca-172-18-238-170.bigboi.internal running account1-user01 0 m1.medium 2014-06-15T22:47:03.552Z ViciousLiesAndDangerousRumors monitoring-disabled 10.104.6.235 172.18.238.170 instance-store hvm sg-A5133B59
BLOCKDEVICE /dev/sdf vol-450936D7 2014-06-16T19:30:29.619Z false

Follow the steps above regarding editing the /etc/overlayroot.conf, except use /dev/vdc instead of /dev/vdb:

# ssh -i account1-user01/account1-user01.priv ubuntu@euca-10-104-6-235.bigboi.acme.eucalyptus-systems.com
ubuntu@euca-172-18-238-170:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 2.2G 0 disk
└─vda1 253:1 0 2.2G 0 part /
vdb 253:16 0 7.8G 0 disk
vdc 253:32 0 20G 0 disk
ubuntu@euca-172-18-238-170:~$ sudo vi /etc/overlayroot.conf
.....
overlayroot="crypt:dev=/dev/vdc"

Reboot the instance.  After the reboot completes, SSH back into the instance and notice that the root filesystem has increased in size and is encrypted:

ubuntu@euca-172-18-238-170:~$ sudo reboot
# ssh -i account1-user01/account1-user01.priv ubuntu@euca-10-104-6-235.bigboi.acme.eucalyptus-systems.com
ubuntu@euca-172-18-238-170:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 2.2G 0 disk
└─vda1 253:1 0 2.2G 0 part /media/root-ro
vdb 253:16 0 7.8G 0 disk
vdc 253:32 0 20G 0 disk
└─secure (dm-0) 252:0 0 20G 0 crypt /media/root-rw
ubuntu@euca-172-18-238-170:~$ df -ah
Filesystem Size Used Avail Use% Mounted on
overlayroot 20G 46M 19G 1% /
proc 0 0 0 - /proc
sysfs 0 0 0 - /sys
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 0 0 0 - /sys/fs/fuse/connections
none 0 0 0 - /sys/kernel/debug
none 0 0 0 - /sys/kernel/security
udev 493M 8.0K 493M 1% /dev
devpts 0 0 0 - /dev/pts
tmpfs 100M 328K 100M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 498M 0 498M 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/vda1 2.2G 754M 1.3G 37% /media/root-ro
/dev/mapper/secure 20G 46M 19G 1% /media/root-rw
none 0 0 0 - /sys/fs/pstore
systemd 0 0 0 - /sys/fs/cgroup/systemd

Bonus – Configuring Overlayroot Before Bundling, Uploading and Registering the EMI

As mentioned before, overlayroot can be configured before the image is bundled, uploaded and registered as an instance store-backed HVM EMI.  After downloading the Ubuntu Trusty cloud image and converting it to raw, use losetup and kpartx to mount the image in order to configure overlayroot:

# qemu-img convert -O raw trusty-server-cloudimg-amd64-disk1.img trusty-server-cloudimg-amd64-disk1.raw
# losetup /dev/loop0 trusty-server-cloudimg-amd64-disk1.raw
# kpartx -av /dev/loop0
add map loop0p1 (253:2): 0 4192256 linear /dev/loop0 2048
# mkdir /mnt/ubuntu
# mount /dev/mapper/loop0p1 /mnt/ubuntu
# chroot /mnt/ubuntu
root@odc-f-13:/# vi /etc/overlayroot.conf
(added the following)
overlayroot="crypt:dev=/dev/vdb"
root@odc-f-13:/# exit
exit
# umount /mnt/ubuntu
# kpartx -dv /dev/loop0
# losetup -d /dev/loop0

After unmounting the image, just bundle, upload and register the image as an instance store-backed HVM EMI.  When an instance is launched from the EMI, the root filesystem will automatically be encrypted.
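The bundle and register steps are the same as before; for example (a sketch; the bucket and image name ‘trusty-server-overlayroot’ are arbitrary):

# euca-bundle-and-upload-image -i trusty-server-cloudimg-amd64-disk1.raw -b trusty-server-overlayroot -r x86_64
# euca-register -n trusty-server-overlayroot trusty-server-overlayroot/trusty-server-cloudimg-amd64-disk1.raw.manifest.xml --virtualization-type hvm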

Conclusion

Using overlayroot gives additional instance security for the cloud user.  For more information about the additional configuration options of overlayroot, please refer to the overlayroot.conf file in the cloud-initramfs-tools repository on Launchpad.
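As one example of those options, overlayroot also supports a tmpfs-backed mode, where all writes go to memory and are discarded on reboot instead of being written to an encrypted device (a sketch; see overlayroot.conf for the full list of modes and options):

# keep the root filesystem read-only and throw away all writes on reboot
overlayroot="tmpfs"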

Enjoy!


CoreOS CloudInit Config for Docker Storage Management

CoreOS is a Linux distribution that allows easy deployment of Docker environments.  With CoreOS, users have the ability to deploy clustered Docker environments, or deploy zero-downtime applications.  Recently, I have blogged about how to deploy and use Docker on Eucalyptus cloud environments.  This blog will focus on how to leverage cloud-init configuration with a CoreOS EMI to manage instance storage that will be used by Docker containers on Eucalyptus 4.0.  The same cloud-init configuration file can be used on AWS with CoreOS AMIs, which is yet another example of how Eucalyptus has continued to maintain its focus on being the best on-premise AWS-compatible cloud environment.

Prerequisites

Since Eucalyptus Identity and Access Management (IAM) is very similar to AWS IAM, at a minimum the following Elastic Compute Cloud (EC2) actions need to be allowed:

In order to bundle, upload and register the CoreOS image, use the following AWS S3 policy (which can be generated using AWS Policy Generator):

{
  "Statement": [
    {
      "Sid": "Stmt1402675433766",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

For more information about how to use Eucalyptus IAM, please refer to the Eucalyptus 4.0 Administrator documentation regarding access concepts and policy overview.

In addition to the correct IAM policy being applied to the user, here are the other prerequisites that need to be met:

Once these prerequisites are met, the Eucalyptus user will be able to implement the topic for this blog.

CoreOS CloudInit Config for Docker Storage Management

As mentioned in the CoreOS documentation regarding how to use CoreOS with Eucalyptus, the user needs to do the following:

  • Download the CoreOS image
  • Decompress the CoreOS image
  • Bundle, upload and register the image

For example:

# wget -q http://beta.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2

# bunzip2 coreos_production_openstack_image.img.bz2

# qemu-img convert -O raw coreos_production_openstack_image.img coreos_production_openstack_image.raw
# euca-bundle-and-upload-image -i coreos_production_openstack_image.raw -b coreos-production-beta -r x86_64
# euca-register -n coreos-production coreos-production-beta/coreos_production_openstack_image.raw.manifest.xml --virtualization-type hvm
IMAGE emi-98868F66

After the image is registered, create a security group and authorize port 22 for SSH access to the CoreOS instance:

# euca-create-group coreos-testing -d "Security Group for CoreOS Cluster"
GROUP sg-C8E3B168 coreos-testing Security Group for CoreOS Cluster
# euca-authorize -P tcp -p ssh coreos-testing
GROUP coreos-testing
PERMISSION coreos-testing ALLOWS tcp 22 22 FROM CIDR 0.0.0.0/0

Next, create a keypair that will be used to access the CoreOS instance:

# euca-create-keypair coreos > coreos.priv
# chmod 0600 coreos.priv

Now we need to create the cloud-init configuration file.  CoreOS implements a subset of the cloud-init config spec with coreos-cloudinit.  The cloud-init config below will do the following:

  1. wipe the ephemeral device, /dev/vdb (since the CoreOS EMI is an instance store-backed HVM image, the ephemeral device will be /dev/vdb)
  2. format the ephemeral device with a btrfs filesystem
  3. mount /dev/vdb to /var/lib/docker (which is the location for images used by the Docker containers)

Create a cloud-init config file (in this example, cloud-init-docker-storage.config) with the following information:

#cloud-config
coreos:
  units:
    - name: format-ephemeral.service
      command: start
      content: |
        [Unit]
        Description=Formats the ephemeral drive
        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/sbin/wipefs -f /dev/vdb
        ExecStart=/usr/sbin/mkfs.btrfs -f /dev/vdb
    - name: var-lib-docker.mount
      command: start
      content: |
        [Unit]
        Description=Mount ephemeral to /var/lib/docker
        Requires=format-ephemeral.service
        Before=docker.service
        [Mount]
        What=/dev/vdb
        Where=/var/lib/docker
        Type=btrfs

Use euca-describe-instance-types to select the desired instance type for the CoreOS instance (in this example, c1.medium will be used).

# euca-describe-instance-types 
INSTANCETYPE Name CPUs Memory (MiB) Disk (GiB)
INSTANCETYPE t1.micro 1 256 5
INSTANCETYPE m1.small 1 512 10
INSTANCETYPE m1.medium 1 1024 10
INSTANCETYPE c1.xlarge 2 2048 10
INSTANCETYPE m1.large 2 1024 15
INSTANCETYPE c1.medium 1 1024 20
INSTANCETYPE m1.xlarge 2 1024 30
INSTANCETYPE m2.2xlarge 2 4096 30
INSTANCETYPE m3.2xlarge 4 4096 30
INSTANCETYPE m2.xlarge 2 2048 40
INSTANCETYPE m3.xlarge 2 2048 50
INSTANCETYPE cc1.4xlarge 8 3072 60
INSTANCETYPE m2.4xlarge 8 4096 60
INSTANCETYPE hi1.4xlarge 8 6144 120
INSTANCETYPE cc2.8xlarge 16 6144 120
INSTANCETYPE cg1.4xlarge 16 12288 200
INSTANCETYPE cr1.8xlarge 16 16384 240
INSTANCETYPE hs1.8xlarge 48 119808 24000

Use euca-run-instances to launch the CoreOS image as an instance, passing the cloud-init config file using the --user-data-file option:

# euca-run-instances -k coreos -t c1.medium emi-98868F66 --user-data-file cloud-init-docker-storage.config
RESERVATION r-FC799274 408396244283 default
INSTANCE i-AF303D5D emi-98868F66 pending coreos 0 c1.medium 2014-06-12T13:38:31.008Z ViciousLiesAndDangerousRumors monitoring-disabled 0.0.0.0 0.0.0.0 instance-store hvm sg-A5133B59

Once the instance reaches the ‘running’ state, SSH into the instance to see the ephemeral storage mounted and formatted correctly:

# euca-describe-instances i-AF303D5D --region account1-user01@
RESERVATION r-FC799274 408396244283 default
INSTANCE i-AF303D5D emi-98868F66 euca-10-104-6-236.bigboi.acme.eucalyptus-systems.com euca-172-18-238-171.bigboi.internal running coreos 0 c1.medium 2014-06-12T13:38:31.008Z ViciousLiesAndDangerousRumors monitoring-disabled 10.104.6.236 172.18.238.17 instance-store hvm sg-A5133B59
# ssh -i coreos.priv core@euca-10-104-6-236.bigboi.acme.eucalyptus-systems.com
CoreOS (beta)
core@localhost ~ $ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 254:0 0 8.3G 0 disk
|-vda1 254:1 0 128M 0 part
|-vda2 254:2 0 64M 0 part
|-vda3 254:3 0 1G 0 part
|-vda4 254:4 0 1G 0 part /usr
|-vda6 254:6 0 128M 0 part /usr/share/oem
`-vda9 254:9 0 6G 0 part /
vdb 254:16 0 11.7G 0 disk /var/lib/docker
core@localhost ~ $ mount
.......
/dev/vda6 on /usr/share/oem type ext4 (rw,nodev,relatime,commit=600,data=ordered
/dev/vdb on /var/lib/docker type btrfs (rw,relatime,space_cache)

The instance is now ready for docker containers to be created.  For some docker container examples, check out the CoreOS documentation and the Docker documentation.
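As a quick smoke test, a throwaway container can be run so that its image layers land on the newly mounted btrfs volume (a sketch; the busybox image is pulled from the public Docker index on first run):

core@localhost ~ $ docker run busybox /bin/echo "hello from btrfs-backed storage"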

Enjoy!


Yet Another AWS Compatibility Example Using Vagrant-AWS Plugin and Eucalyptus 3.4 to Deploy Docker

Something that Eucalyptus has been consistent about from the beginning is its stance on being the best open source, on-premise AWS-compatible cloud on the market.  This blog entry is just another example demonstrating this compatibility.

My most recent blog entries have been centered around Docker and how to deploy it on Eucalyptus.  This entry will show how a user can follow Docker’s own documentation on deploying Docker using Vagrant with AWS, but against Eucalyptus instead.  Before getting started, there are some prerequisites that need to be in place.

Prerequisites for Eucalyptus Cloud

In order to get started, the Eucalyptus cloud needs to have an Ubuntu Raring Cloud Image bundled, uploaded and registered before the steps below can be followed.  The previous blog entries below will help here:

In addition to the Ubuntu Raring Cloud EMI being available, the user also needs to have the following:

After these prerequisites have been met, the user is ready to set up Vagrant to interact with Eucalyptus.

Setting up Vagrant Environment

To start out, we need to set up the Vagrant environment.  The steps below will get you going:

  1. Install Vagrant from http://www.vagrantup.com/. (optional – package manager can be used here instead)
  2. Install the vagrant aws plugin:  

    vagrant plugin install vagrant-aws

After Vagrant and the vagrant aws plugin have been successfully installed, all that is left is to create a Vagrantfile to provide information to Vagrant as to how to interact with Eucalyptus.  Since the vagrant aws plugin is being used, and Eucalyptus is compatible with AWS, the configuration will be very similar to AWS.

I have provided a Vagrantfile on GitHub to help get users up to speed more quickly.  To check out the Vagrantfile, just clone the repository from GitHub:

$ git clone https://github.com/hspencer77/eucalyptus-docker-raring.git

After checking out the file, change directory to eucalyptus-docker-raring, and edit the following variables to match your user information for the Eucalyptus cloud that will run the Docker instance:

AWS_ENDPOINT = "<EC2_URL for Eucalyptus Cloud>"
AWS_AMI = "<ID for Ubuntu Raring EMI>"
AWS_ACCESS_KEY = "<Access Key ID>"
AWS_SECRET_KEY = "<Secret Key>"
AWS_KEYPAIR_NAME = "<Key Pair name>"
AWS_INSTANCE_TYPE = '<VM type>'
SSH_PRIVKEY_PATH = "<The path to the private key for the named keypair, for example ~/.ssh/docker.pem>"

Once the Vagrantfile is populated with the correct information, we are now ready to launch the Docker instance.

Launch the Docker Instance Using Vagrant

From here, Vagrant makes this very straightforward.  There are only two steps to launch the Docker instance.

  1. Run the following command to launch the instance:

     vagrant up --provider=aws
  2. After Vagrant finishes deploying the instance, SSH into the instance:

    vagrant ssh

That’s it!  Once you are SSHed into the instance, run the following command to verify that Docker is available:

ubuntu@euca-172-17-120-212:~$ sudo docker
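From there, containers can be launched as usual; for example (a sketch, pulling the stock ubuntu image from the public registry):

ubuntu@euca-172-17-120-212:~$ sudo docker run -i -t ubuntu /bin/bash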

You have successfully launched a Docker instance on Eucalyptus using Vagrant.  Since Eucalyptus works with the vagrant aws plugin, the same Vagrantfile can be used against AWS (of course, the values for the variables above will change).  This is a perfect dev/test to production setup, whether Eucalyptus is being used for dev/test and AWS for production, or vice versa.


Deploying CentOS 6.5 Image in Docker on Eucalyptus 3.4

This blog entry is a follow-up to my blog entry entitled “Step-by-Step Deployment of Docker on Eucalyptus 3.4 for the Cloud Administrator”.  In that blog entry, I covered how to deploy an Ubuntu Raring Cloud Image on Eucalyptus, and use that image to deploy Docker.  The really cool thing about Docker is that it provides the ability to deploy different operating systems within one machine – in this case, an instance.  The focus of this entry is to show how to deploy a CentOS 6.5 image in an instance running Docker on Eucalyptus 3.4.

Prerequisites

The prerequisite for this blog is to complete the steps outlined in my previous blog about Docker on Eucalyptus 3.4.  Once those steps are completed by the cloud administrator (i.e. a user associated with the ‘eucalyptus’ account), you can get started on the steps below.  One thing to note here: to follow these steps, there is no need to be the cloud administrator.  This entry is entirely directed at cloud users.  Since Eucalyptus IAM is similar to AWS IAM, the following EC2 Actions need to be allowed for the cloud user:

Instance Deployment

To get started, we need to launch the Ubuntu Raring EMI that has been provided by the cloud administrator.  In this example, the EMI will be emi-26403979:

$ euca-describe-images emi-26403979 --region account1-user01@
IMAGE emi-26403979 ubuntu-raring-docker-rootfs-v3/ubuntu-raring-docker-v3.manifest.xml
 441445882805 available private x86_64 machine eki-17093995 
eri-6BF033EE instance-store paravirtualized

If the --region option seems confusing, it is because I am using the nice configuration file feature in Euca2ools.  It’s really helpful when you are using different credentials for different users.

Now that we know the EMI we can use, let’s launch the instance.  We will be using the cloud-init config file from my previous Docker blog to configure the instance.  The VM type used here is c1.xlarge.  This is because I wanted to make sure I had 2 CPUs and 2 GB of RAM for my instance:

$ euca-run-instances -k account1-user01 -t c1.xlarge \
--user-data-file cloud-init-docker.config emi-26403979 \
--region account1-user01@
RESERVATION r-2EE941D4 961915002812 default
INSTANCE i-503642D4 emi-26403979 euca-0-0-0-0.eucalyptus.euca-hasp.eucalyptus-systems.com
 euca-0-0-0-0.eucalyptus.internal pending account1-user01 0 
c1.xlarge 2014-02-14T00:47:15.632Z LayinDaSmackDown eki-17093995 
eri-6BF033EE monitoring-disabled 0.0.0.0 0.0.0.0 instance-store paravirtualized

Make sure the instance gets into the running state:

$ euca-describe-instances --region account1-user01@
RESERVATION r-2EE941D4 961915002812 default
INSTANCE i-503642D4 emi-26403979 euca-10-104-7-10.eucalyptus.euca-hasp.eucalyptus-systems.com
 euca-172-17-112-207.eucalyptus.internal running 
account1-user01 0 c1.xlarge 2014-02-14T00:47:15.632Z 
LayinDaSmackDown eki-17093995 eri-6BF033EE monitoring-disabled 
10.104.7.10 172.17.112.207 instance-store paravirtualized

Now that it’s in the running state, let’s SSH into the instance to make sure it’s up and running:

$ ssh -i account1-user01.priv ubuntu@euca-10-104-7-10.eucalyptus.euca-hasp.eucalyptus-systems.com
Welcome to Ubuntu 13.04 (GNU/Linux 3.8.0-33-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Fri Feb 14 00:49:52 UTC 2014
System load: 0.21 Users logged in: 0
 Usage of /: 21.8% of 4.80GB IP address for eth0: 172.17.112.207
 Memory usage: 5% IP address for lxcbr0: 10.0.3.1
 Swap usage: 0% IP address for docker0: 10.42.42.1
 Processes: 85
Graph this data and manage this system at https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
 http://www.ubuntu.com/business/services/cloud
Use Juju to deploy your cloud instances and workloads:
 https://juju.ubuntu.com/#cloud-raring
Your Ubuntu release is not supported anymore.
For upgrade information, please visit:
http://www.ubuntu.com/releaseendoflife
New release '13.10' available.
Run 'do-release-upgrade' to upgrade to it.
Last login: Tue Nov 19 00:48:23 2013 from odc-d-06-07.prc.eucalyptus-systems.com
ubuntu@euca-172-17-112-207:~$

Now it’s time to prepare the instance to create your own base Docker CentOS 6.5 image.

Preparing the Instance

To prepare the instance to create the base image, install the following packages using apt-get:

ubuntu@euca-172-17-112-207:~$ sudo -s
root@euca-172-17-112-207:~# apt-get install rinse perl rpm \
rpm2cpio libwww-perl liblwp-protocol-https-perl

After the packages finish installing, let’s go ahead and mount the ephemeral storage on the instance as /tmp to give Docker extra space for building the base image (note that the metadata service reports the ephemeral device as sda2, but with virtio it shows up inside the instance as /dev/vda2):

root@euca-172-17-112-207:~# curl http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0
sda2
root@euca-172-17-112-207:~# mount /dev/vda2 /tmp
root@euca-172-17-112-207:~# df -ah
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 4.8G 1.1G 3.5G 24% /
proc 0 0 0 - /proc
sysfs 0 0 0 - /sys
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 0 0 0 - /sys/fs/fuse/connections
none 0 0 0 - /sys/kernel/debug
none 0 0 0 - /sys/kernel/security
udev 993M 8.0K 993M 1% /dev
devpts 0 0 0 - /dev/pts
tmpfs 201M 240K 201M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 1002M 0 1002M 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/vda2 9.4G 150M 8.8G 2% /tmp
root@euca-172-17-112-207:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 15G 0 disk
├─vda1 253:1 0 5G 0 part /
├─vda2 253:2 0 9.5G 0 part /tmp
└─vda3 253:3 0 512M 0 part

Next, download the mkimage-rinse.sh script from the Docker repository on Github:

root@euca-172-17-112-207:~#  wget --no-check-certificate https://raw.github.com/dotcloud/docker/master/contrib/mkimage-rinse.sh
root@euca-172-17-112-207:~# chmod 755 mkimage-rinse.sh

Now we are ready to build the CentOS 6.5 Base Image.

Building the CentOS Image

The only thing left to do is build the base image.  Use mkimage-rinse.sh to build the CentOS 6.5 base image:

root@euca-172-17-112-207:~# ./mkimage-rinse.sh ubuntu/centos centos-6

After the installation is complete, test out the base image:

root@euca-172-17-112-207:~# docker run ubuntu/centos:6.5 cat /etc/centos-release
CentOS release 6.5 (Final)

Now we have a CentOS 6.5 base image.  The mkimage-rinse.sh script can also be used to create CentOS 5 base images.  Instead of passing centos-6, just pass centos-5.  For example, I have created a CentOS 5 base image in this instance as well.  The output below shows the images added to Docker:

root@euca-172-17-112-207:~# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
ubuntu/centos 5.10 06019bdea24b 2 minutes ago 123.8 MB
ubuntu/centos 6.5 cd23a8442c8a 2 hours ago 127.3 MB
root@euca-172-17-112-207:~# docker run ubuntu/centos:5.10 cat /etc/redhat-release
CentOS release 5.10 (Final)

As you can see, Docker helps developers test their software across multiple Linux distributions.  With Eucalyptus (just as in AWS), users can use the ephemeral nature of instances to quickly stand up this test environment in one instance.


12 Steps To EBS-Backed EMI Bliss on Eucalyptus

In previous posts, I shared how to use Ubuntu Cloud Images and eustore with Eucalyptus and AWS.  This blog entry will focus on how to use these assets to create EBS-backed EMIs in 12 steps.  These steps can be used on AWS as well; there is no need to create an instance store-backed AMI first, since Ubuntu already provides AMIs that can be used as the building-block instance on AWS.  Let’s get started.

Prerequisites

On Eucalyptus and AWS, it is required that the user have the appropriate IAM policy in order to perform these steps.  The policy should contain the following EC2 Actions at a minimum:

  • RunInstances
  • AttachVolume
  • AuthorizeSecurityGroupEgress
  • AuthorizeSecurityGroupIngress
  • CreateKeyPair
  • CreateSnapshot
  • CreateVolume
  • DescribeImages
  • DescribeInstances
  • DescribeInstanceStatus
  • DescribeSnapshots
  • DetachVolume
  • RegisterImage

In addition, the user needs an access key ID and secret key.  For more information, check out the following resources:

This entry also assumes Eucalyptus euca2ools are installed on the client machine.
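For example, the credentials and the EC2 endpoint can be exported before running the commands below (a sketch; the values are placeholders for the user’s actual keys and the cloud’s front-end host, and the URL path may differ depending on the Eucalyptus version):

$ export EC2_ACCESS_KEY="<access key id>"
$ export EC2_SECRET_KEY="<secret key>"
$ export EC2_URL="http://<cloud-front-end>:8773/services/Eucalyptus"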

The 12 Steps

Although the Ubuntu Cloud Image used in this entry is Ubuntu Precise (12.04) LTS, any of the maintained Ubuntu Cloud images can be used.

  1. Use wget to download tar-gzipped precise-server-cloudimg:
    $ wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64.tar.gz
  2. After setting the EC2_ACCESS_KEY, EC2_SECRET_KEY, and EC2_URL, use eustore-install-image to create an instance store-backed EMI:
    $ eustore-install-image -t precise-server-cloudimg-amd64.tar.gz \
    -b ubuntu-latest-precise-x86_64 --hypervisor universal \
    -s "Ubuntu Cloud Image - Precise Pangolin - 12.04 LTS"
  3. Create a keypair using euca-create-keypair, then use euca-run-instances to launch an instance from the EMI returned from eustore-install-image. For example:
    $ euca-run-instances -t m1.medium \
    -k account1-user01 emi-5C8C3909
  4. Use euca-create-volume to create a volume sized to how big you want the root filesystem to be.  The availability zone (-z option) will depend on whether you are using Eucalyptus or AWS:
    $ euca-create-volume -s 6 \
    -z LayinDaSmackDown
  5. Using euca-attach-volume, attach the resulting volume to the running instance. For example:
    $ euca-attach-volume -d /dev/vdd \
    -i i-839E3FB0 vol-B5863B3B
  6. Use euca-authorize to open SSH access to the instance, SSH into the instance, then use wget to download the Ubuntu Precise Cloud Image (qcow2 format):
    $ ssh -i account1-user01.priv ubuntu@euca-10-104-7-10.eucalyptus.euca-hasp.eucalyptus-systems.com
    # sudo -s
    # wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
  7. Install qemu-utils:
    # apt-get install -y qemu-utils
  8. Use qemu-img to convert image from qcow2 to raw:
    # qemu-img convert \
    -O raw precise-server-cloudimg-amd64-disk1.img precise-server-cloudimg-amd64-disk1-raw.img
  9. dd the raw image to the block device where the volume is attached (use dmesg to easily figure out which device that is):
    # dmesg | tail
    [ 7026.943212] virtio-pci 0000:00:05.0: using default PCI settings
    [ 7026.943249] pci 0000:00:07.0: no hotplug settings from platform
    [ 7026.943251] pci 0000:00:07.0: using default PCI settings
    [ 7026.945964] virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
    [ 7026.955143] virtio-pci 0000:00:07.0: PCI INT A -> Link[LNKC] -> GSI 10 (level, high) -> IRQ 10
    [ 7026.955180] virtio-pci 0000:00:07.0: setting latency timer to 64
    [ 7026.955429] virtio-pci 0000:00:07.0: irq 45 for MSI/MSI-X
    [ 7026.955456] virtio-pci 0000:00:07.0: irq 46 for MSI/MSI-X
    [ 7026.986990] vdb: unknown partition table
    [10447.093426] virtio-pci 0000:00:07.0: PCI INT A disabled
    # dd if=/mnt/precise-server-cloudimg-amd64-disk1-raw.img of=/dev/vdb bs=1M
  10. Log out of the instance, and use euca-detach-volume to detach the volume:
    $ euca-detach-volume vol-B5863B3B
  11. Use euca-create-snapshot to create a snapshot of the volume:
    $ euca-create-snapshot vol-B5863B3B
  12. Use euca-register to register the resulting snapshot to create the EBS-backed EMI:
    $ euca-register --name ebs-precise-x86_64-sda \
    --snapshot snap-EFDB40A1 --root-device-name /dev/sda
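With the EMI registered, a quick test launch looks like the following (a sketch; emi-XXXXXXXX stands in for the id returned by euca-register in step 12, and the keypair is the one created in step 3):

$ euca-run-instances -t m1.medium -k account1-user01 emi-XXXXXXXX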

That’s it!  You have successfully created an EBS-backed EMI/AMI.  As mentioned earlier, these steps can be used on AWS just as well (just skip steps 1 & 2, and use one of the Ubuntu Cloud Images in the AWS region of your choice).  Enjoy!
