Setting Up 3-Factor Authentication (Keypair, Password, Google Authenticator) for Eucalyptus Cloud Instances

Recently, I was logging into my AWS account, where I have multi-factor authentication (MFA) enabled, using the Google Authenticator application on my smart phone.  This inspired me to research how to enable MFA for any Linux distribution.  I ran across the following blog entries:

From there, I figured I would try to create a Eucalyptus EMI that would support three-factor authentication on a Eucalyptus 4.0 cloud.  The trick here was to figure out how to display the Google Authenticator information so users could configure Google Authenticator.  The euca2ools command ‘euca-get-console-output‘ proved to be the perfect mechanism to provide this information to the cloud user.  This blog will show how to configure an Ubuntu Trusty (14.04) Cloud image to support three-factor authentication.

Prerequisites

In order to leverage the steps mentioned in this blog, the following is needed:

Now that the prerequisites have been mentioned, let's get started.

Updating the Ubuntu Image

Before we can update the Ubuntu image, let’s download the image:

[root@odc-f-13 ~]# wget http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img

After the image has been downloaded successfully, the image needs to be converted to a raw format.  Use qemu-img for this conversion:

[root@odc-f-13 ~]# qemu-img convert -O raw trusty-server-cloudimg-amd64-disk1.img trusty-server-cloudimg-amd64-disk1.raw

After converting the image to a raw format, we need to mount it in order to update the image accordingly:

[root@odc-f-13 ~]# losetup /dev/loop0 trusty-server-cloudimg-amd64-disk1.raw
[root@odc-f-13 ~]# kpartx -av /dev/loop0
add map loop0p1 (253:2): 0 4192256 linear /dev/loop0 2048
[root@odc-f-13 ~]# mkdir /mnt/ubuntu
[root@odc-f-13 ~]# mount /dev/mapper/loop0p1 /mnt/ubuntu
[root@odc-f-13 ~]# chroot /mnt/ubuntu

The ‘chroot’ command above allows us to edit the image as if it were the currently running Linux operating system.  We have to install a couple of packages in the image.  Before we do, use resolvconf to create the necessary information in /etc/resolv.conf.

root@odc-f-13:/# resolvconf -I

Confirm the settings are correct by running ‘apt-get update’:

root@odc-f-13:/#  apt-get update

Once that command runs successfully, install the PAM module for Google Authenticator and the whois package:

root@odc-f-13:/# apt-get install libpam-google-authenticator whois

After these packages have been installed, run the ‘google-authenticator’ command to see all the available options:

root@odc-f-13:/# google-authenticator --help
google-authenticator [<options>]
 -h, --help Print this message
 -c, --counter-based Set up counter-based (HOTP) verification
 -t, --time-based Set up time-based (TOTP) verification
 -d, --disallow-reuse Disallow reuse of previously used TOTP tokens
 -D, --allow-reuse Allow reuse of previously used TOTP tokens
 -f, --force Write file without first confirming with user
 -l, --label=<label> Override the default label in "otpauth://" URL
 -q, --quiet Quiet mode
 -Q, --qr-mode={NONE,ANSI,UTF8}
 -r, --rate-limit=N Limit logins to N per every M seconds
 -R, --rate-time=M Limit logins to N per every M seconds
 -u, --no-rate-limit Disable rate-limiting
 -s, --secret=<file> Specify a non-standard file location
 -w, --window-size=W Set window of concurrently valid codes
 -W, --minimal-window Disable window of concurrently valid codes

Updating PAM configuration

Next the PAM configuration file /etc/pam.d/common-auth needs to be updated.  Find the following line in that file:

auth [success=1 default=ignore] pam_unix.so nullok_secure

Replace it with the following lines:

auth requisite pam_unix.so nullok_secure
auth requisite pam_google_authenticator.so
auth [success=1 default=ignore] pam_permit.so

Next, we need to update SSHD configuration.

Update SSHD configuration

We need to modify the /etc/ssh/sshd_config file to help make sure the Google Authenticator PAM module works successfully.  Modify/add the following lines to the /etc/ssh/sshd_config file:

ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive

Updating Cloud-Init Configuration

The next modification involves enabling the ‘ubuntu‘ user to have a password.  By default, the account is locked (i.e. doesn’t have a password assigned) in the cloud-init configuration file.  For this exercise, we will enable it, and assign a password.  Just like the old Ubuntu Cloud images, we will assign the ‘ubuntu‘ user the password ‘ubuntu‘.

Use ‘mkpasswd‘ as mentioned in the cloud-init documentation to create the password for the user:

root@odc-f-13:/# mkpasswd --method=SHA-512
Password:
$6$8/.y8gwYT$dVmtT7jXdBrz0w1ku5mh6HOC.vngjsXpehyeEicJT4kIyhvUMV3p9VGUIDC42Z1mjXdfAaQkINcCfcFe5jEKX/

In the file /etc/cloud/cloud.cfg, find the section ‘default_user‘.  Change the following line from:

lock_passwd: True

to

lock_passwd: False
passwd: $6$8/.y8gwYT$dVmtT7jXdBrz0w1ku5mh6HOC.vngjsXpehyeEicJT4kIyhvUMV3p9VGUIDC42Z1mjXdfAaQkINcCfcFe5jEKX/

The value for the ‘passwd‘ option is the output from the mkpasswd command executed earlier.
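
For reference, the surrounding ‘default_user‘ block might end up looking something like the sketch below.  This is only a rough illustration: apart from ‘lock_passwd‘ and ‘passwd‘, the keys shown (name, gecos, sudo, shell) are whatever your image already ships and should be left as-is.

system_info:
  default_user:
    name: ubuntu
    lock_passwd: False
    passwd: $6$8/.y8gwYT$dVmtT7jXdBrz0w1ku5mh6HOC.vngjsXpehyeEicJT4kIyhvUMV3p9VGUIDC42Z1mjXdfAaQkINcCfcFe5jEKX/
    gecos: Ubuntu
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]
    shell: /bin/bash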

Updating /etc/rc.local

The final update to the image is to add some bash code to the /etc/rc.local file, so that the information needed to configure Google Authenticator for the instance can be presented to the user through the output of ‘euca-get-console-output‘.  Add the following code to the /etc/rc.local file above the ‘exit 0‘ line:

if [ ! -f /home/ubuntu/.google_authenticator ]; then
 /bin/su ubuntu -c "google-authenticator -t -d -f -r 3 -R 30 -w 4" > /root/google-auth.txt
 echo "############################################################"
 echo "Google Authenticator Information:"
 echo "############################################################"
 cat /root/google-auth.txt
 echo "############################################################"
fi

That's it!  Now we need to bundle, upload and register the image.

Bundle, Upload and Register the Image

Since we are using an HVM image, we don’t have to worry about the kernel and ramdisk.  We can just bundle, upload and register the image.  To do so, use the euca-install-image command.  Before we do that, we need to exit out of the chroot environment and unmount the image:

root@odc-f-13:/# exit
[root@odc-f-13 ~]# umount /mnt/ubuntu
[root@odc-f-13 ~]# kpartx -dv /dev/loop0
del devmap : loop0p1
[root@odc-f-13 ~]# losetup -d /dev/loop0

After unmounting the image, bundle, upload and register the image with the euca-install-image command:

[root@odc-f-13 ~]# euca-install-image -b ubuntu-trusty-server-google-auth-x86_64-hvm -i trusty-server-cloudimg-amd64-disk1.raw --virtualization-type hvm -n trusty-server-google-auth -r x86_64
/var/tmp/bundle-Q8yit1/trusty-server-cloudimg-amd64-disk1.raw.manifest.xml 100% |===============| 7.38 kB 3.13 kB/s Time: 0:00:02
IMAGE emi-FF439CBA

After the image is registered, launch the instance with a keypair that has been created using the ‘euca-create-keypair‘ command:

[root@odc-f-13 ~]# euca-run-instances -k account1-user01 -t m1.medium emi-FF439CBA
RESERVATION r-B79E6A59 408396244283 default
INSTANCE i-48D98090 emi-FF439CBA pending account1-user01 0 m1.medium 2014-07-21T20:23:10.285Z ViciousLiesAndDangerousRumors monitoring-disabled 0.0.0.0 0.0.0.0 instance-store hvm sg-A5133B59

Once the instance has reached the ‘running’ state, use ‘euca-get-console-output’ to grab the Google Authenticator information:

[root@odc-f-13 ~]# euca-describe-instances i-48D98090
RESERVATION r-B79E6A59 408396244283 default
INSTANCE i-48D98090 emi-FF439CBA euca-10-104-6-237.bigboi.acme.eucalyptus-systems.com euca-172-18-238-157.bigboi.internal running account1-user01 0 m1.medium 2014-07-21T20:23:10.285Z ViciousLiesAndDangerousRumors monitoring-disabled 10.104.6.237 172.18.238.157 instance-store hvm sg-A5133B59
[root@odc-f-13 ~]# euca-get-console-output i-48D98090
.......
############################################################
Google Authenticator Information:
############################################################
https://www.google.com/chart?chs=200x200&chld=M|0&cht=qr&chl=otpauth://totp/ubuntu@euca-172-18-238-157%3Fsecret%3D2MGKGDZTFLVE5LCX
Your new secret key is: 2MGKGDZTFLVE5LCX
Your verification code is 275414
Your emergency scratch codes are:
 59078604
 17425999
 89676696
 65201554
 14740079
############################################################
.....

Now we are ready to test access to the instance.

Testing Access to the Instance

To test access to the instance, make sure the Google Authenticator application is installed on your smartphone/hand-held device.  Next, copy the URL seen in the output (e.g. https://www.google.com/chart?chs=200x200&chld=M|0&cht=qr&chl=otpauth://totp/ubuntu@euca-172-18-238-157%3Fsecret%3D2MGKGDZTFLVE5LCX) from ‘euca-get-console-output’, and paste it into a browser:

OTPAUTH URL for Google Authenticator
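
If you would rather not send the otpauth:// URI to an external chart service, the same QR code can be generated locally with qrencode (a sketch only; qrencode is not part of this image and would need to be installed separately, and the secret below is the one from the console output above):

qrencode -t UTF8 'otpauth://totp/ubuntu@euca-172-18-238-157?secret=2MGKGDZTFLVE5LCX'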

Use the ‘Google Authenticator’ application on your smart phone/hand-held device, and scan the QR Code:

Google Authenticator Application

Google Authenticator Application – Set Up Account

After selecting the ‘Set up account‘ option, select ‘Scan a barcode‘, hold your smartphone/hand-held device to the screen where your browser is showing the QR code, and scan:

Google Authenticator Application – Scan Barcode

After scanning the QR code, you should see the account get added, and the verification codes begin to populate for the account:

Verification Code For Instance

Finally, SSH into the instance using the following:

  • the private key of the keypair used when launching the instance with euca-run-instances
  • the password ‘ubuntu’
  • the verification code displayed in Google Authenticator for the new account added

With the information above, the SSH authentication should look similar to the following:

[root@odc-f-13 ~]# ssh -i account1-user01/account1-user01.priv ubuntu@euca-10-104-6-237.bigboi.acme.eucalyptus-systems.com
The authenticity of host 'euca-10-104-6-237.bigboi.acme.eucalyptus-systems.com (10.104.6.237)' can't be established.
RSA key fingerprint is c9:37:18:66:e3:ee:66:d2:8a:ac:a4:21:a6:84:92:08.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'euca-10-104-6-237.bigboi.acme.eucalyptus-systems.com,10.104.6.237' (RSA) to the list of known hosts.
Authenticated with partial success.
Password:
Verification code:
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-32-generic x86_64)

* Documentation: https://help.ubuntu.com/

System information as of Mon Jul 21 13:23:48 UTC 2014

System load: 0.0 Memory usage: 5% Processes: 68
 Usage of /: 56.1% of 1.32GB Swap usage: 0% Users logged in: 0

Graph this data and manage this system at:

https://landscape.canonical.com/

Get cloud support with Ubuntu Advantage Cloud Guest:

http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

ubuntu@euca-172-18-238-157:~$

Three-factor authentication has been successfully configured for the Ubuntu cloud image.  If cloud administrators would like to use different authentication for the instance user, I suggest investigating how to set up PAM LDAP authentication, where SSH public keys are stored in OpenLDAP.  The Ubuntu image would have to be updated accordingly for this to work.  I would check out the ‘sss_ssh_authorizedkeys‘ command and the pam-script module to help get this working.
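
As a rough sketch of that direction (not something built into the image from this blog), OpenSSH's AuthorizedKeysCommand can be pointed at sss_ssh_authorizedkeys so that public keys are fetched through SSSD/OpenLDAP at login time.  This assumes SSSD is already configured against the LDAP server, and the exact directive names depend on the OpenSSH version in the image:

# /etc/ssh/sshd_config (sketch)
AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys
AuthorizedKeysCommandUser nobody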

Enjoy!

Eucalyptus 4.0 Load Balancer Statistics Web UI for the Cloud Administrator

Background

From the cloud user’s perspective, the Eucalyptus Load Balancer is a “black box“.  The only interaction cloud users have with the Eucalyptus Load Balancer is through the eulb-* commands in euca2ools or the AWS Elastic Load Balancing API tools.  In Eucalyptus 3.4 and greater, the cloud administrator (any user under the ‘eucalyptus’ account) has the ability to access the instance that implements the load balancing solution used by the Eucalyptus Load Balancing service.  This access can be used to help troubleshoot the Eucalyptus Load Balancer if there are any issues reported by the cloud user.

The Eucalyptus Load Balancer utilizes HAProxy to implement the load balancing solution.  HAProxy has a cool feature that displays a statistics page for the HAProxy application.  Enabling this feature on the Eucalyptus Load Balancer can help cloud administrators obtain valuable information from the load balancer in the following areas:

  • Network traffic to the backend instances registered with the load balancer
  • Network traffic to the load balancer
  • Triaging any Eucalyptus Load Balancer behavior associated with Eucalyptus CloudWatch alarms

Before getting into the details, I would like to thank Nathan Evans for his entry entitled “Cultural learnings of HA-Proxy, for make benefit…“, which helped influence this blog entry.   Now on to the fun stuff….

Prerequisites

The prerequisites for this blog entry are pretty straightforward – just read my previous entry entitled “Customizing Eucalyptus Load Balancer for Eucalyptus 4.0“.  To enable the web UI stats page, we will just add information to the /etc/load-balancer-servo/haproxy_template.conf file in the load balancer image.

In addition, the cloud administrator credentials will be needed, along with euca2ools 3.1 installed.

Enabling the HAProxy Web Statistics Page

After downloading and mounting the Eucalyptus Load Balancer image (as described in my previous blog entry), enable the HAProxy web statistics page by updating /etc/load-balancer-servo/haproxy_template.conf to look like the following:

[root@odc-f-13 /]# cat etc/load-balancer-servo/haproxy_template.conf
#template
global
 maxconn 100000
 ulimit-n 655360
 pidfile /var/run/haproxy.pid

#drop privileges after port binding
 user servo
 group servo

defaults
 timeout connect 5s
 timeout client 2m
 timeout server 2m
 timeout http-keep-alive 10s
 timeout queue 1m
 timeout check 5s
 retries 3
 option dontlognull
 option redispatch
 option http-server-close # affects KA on/off

 userlist UsersFor_HAProxyStatistics
  group admin users admin
  user admin insecure-password pwd*4admin
  user stats insecure-password pwd*4stats

listen HAProxy-Statistics *:81
 mode http
 stats enable
 stats uri /haproxy?stats
 stats refresh 60s
 stats show-node
 stats show-legends
 acl AuthOkay_ReadOnly http_auth(UsersFor_HAProxyStatistics)
 acl AuthOkay_Admin http_auth_group(UsersFor_HAProxyStatistics) admin
 stats http-request auth realm HAProxy-Statistics unless AuthOkay_ReadOnly
 stats admin if AuthOkay_Admin

For more information regarding these options, please refer to the HAProxy 1.5 documentation.  The key options here are as follows:

  • The port defined in the ‘listen’ section - listen HAProxy-Statistics *:81
  • The usernames and passwords defined in the ‘userlist‘ section (shown above, indented under the ‘defaults’ section).
  • The URI defined in the ‘listen’ section - stats uri /haproxy?stats

After making these changes, confirm that there aren’t any configuration file errors:

[root@odc-f-13 /]# /usr/sbin/haproxy -c -f etc/load-balancer-servo/haproxy_template.conf
 Configuration file is valid

Next, exit the chroot environment, unmount the image, and tar-gzip the image:

[root@odc-f-13 eucalyptus-load-balancer-image]# umount /mnt/centos
[root@odc-f-13 eucalyptus-load-balancer-image]# kpartx -dv /dev/loop0
del devmap : loop0p1
[root@odc-f-13 eucalyptus-load-balancer-image]# losetup -d /dev/loop0
[root@odc-f-13 eucalyptus-load-balancer-image]# tar -zcvf eucalyptus-load-balancer-image-monitored.tgz eucalyptus-load-balancer-image.img
eucalyptus-load-balancer-image.img

Use euca-install-load-balancer to upload the new image:

[root@odc-f-13 eucalyptus-load-balancer-image]# cd
[root@odc-f-13 ~]# euca-install-load-balancer --list
Currently Installed Load Balancer Bundles:

Version 2 (enabled)
emi-F0D5828C (loadbalancer-v2/eucalyptus-load-balancer-image.img.manifest.xml)
 Installed on 2014-05-28 at 11:10:03 PDT

[root@odc-f-13 ~]# euca-install-load-balancer -t eucalyptus-lb/usr/share/eucalyptus-load-balancer-image/eucalyptus-load-balancer-image-monitored.tgz
Decompressing tarball: eucalyptus-lb/usr/share/eucalyptus-load-balancer-image/eucalyptus-load-balancer-image-monitored.tgz
Bundling and uploading image to bucket: loadbalancer-v3
Registering image manifest: loadbalancer-v3/eucalyptus-load-balancer-image.img.manifest.xml
Registered image: emi-DB150EC0
PROPERTY loadbalancing.loadbalancer_emi emi-DB150EC0 was emi-F0D5828C

Load Balancing Support is Enabled
[root@odc-f-13 ~]# euca-install-load-balancer --list
Currently Installed Load Balancer Bundles:

Version 2
emi-F0D5828C (loadbalancer-v2/eucalyptus-load-balancer-image.img.manifest.xml)
 Installed on 2014-05-28 at 11:10:03 PDT

Version 3 (enabled)
emi-DB150EC0 (loadbalancer-v3/eucalyptus-load-balancer-image.img.manifest.xml)
 Installed on 2014-07-08 at 18:38:29 PDT

Testing the Eucalyptus Load Balancer Statistics Page

To view the HAProxy statistics page, create a Eucalyptus Load Balancer instance by using eulb-create-lb:

[root@odc-f-13 ~]# eulb-create-lb TestLoadBalancer -z ViciousLiesAndDangerousRumors -l "lb-port=80, protocol=HTTP, instance-port=80, instance-protocol=HTTP"
DNS_NAME TestLoadBalancer-408396244283.elb.acme.eucalyptus-systems.com

[root@odc-f-13 ~]# euca-describe-instances
RESERVATION r-06DF089F 944786667073 euca-internal-408396244283-TestLoadBalancer
INSTANCE i-3DA342C2 emi-DB150EC0 euca-10-104-6-233.bigboi.acme.eucalyptus-systems.com euca-172-18-229-187.bigboi.internal running euca-elb 0 m1.medium 2014-07-09T01:45:11.753Z ViciousLiesAndDangerousRumors monitoring-enabled 10.104.6.233 172.18.229.187 instance-store hvm 8ba248ae-dbeb-41ce-97df-fb13b91a337b_ViciousLiesAndDangerousR_1 sg-3EA4ADEC arn:aws:iam::944786667073:instance-profile/internal/loadbalancer/loadbalancer-vm-408396244283-TestLoadBalancer
TAG instance i-3DA342C2 Name loadbalancer-resources
TAG instance i-3DA342C2 aws:autoscaling:groupName asg-euca-internal-elb-408396244283-TestLoadBalancer
TAG instance i-3DA342C2 euca:node 10.105.1.188

Since the web statistics page is configured to display on port 81, use euca-authorize to allow access to that port in the load balancer’s security group.  I recommend limiting access to the port for security reasons.  In the example below, access is limited to only the client 192.168.30.25:

[root@odc-f-13 ~]# euca-authorize -P tcp -p 81 -s 192.168.30.25/32 euca-internal-408396244283-TestLoadBalancer
 GROUP euca-internal-408396244283-TestLoadBalancer
 PERMISSION euca-internal-408396244283-TestLoadBalancer ALLOWS tcp 81 81 FROM CIDR 192.168.30.25/32

Finally, use a browser on the authorized client to view the statistics page on the load balancer.  In this example, the URL http://testloadbalancer-408396244283.elb.acme.eucalyptus-systems.com:81/haproxy?stats will be used.  Use the username and password credentials that were added to the HAProxy configuration file to view the page.  It should look similar to the screenshot below:

HAProxy Statistics Web Page of the Eucalyptus Load Balancer

That's it!  For any load balancer that's launched on the Eucalyptus 4.0 cloud, the cloud administrator will be able to display statistics of the load balancer.  This is also something that the cloud administrator can provide to cloud users as a service.  By leveraging restrictions placed in the security groups of the load balancer, cloud administrators can limit access to the statistics page based upon the source IP addresses of the cloud users’ client machine(s).
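
A hedged example of pulling the same statistics programmatically: HAProxy's stats page can also return CSV when ';csv' is appended to the stats URI, which is handy for scripting or feeding a monitoring system.  Assuming the security group rule and the ‘stats’ user defined in the template above:

curl -u 'stats:pwd*4stats' "http://testloadbalancer-408396244283.elb.acme.eucalyptus-systems.com:81/haproxy?stats;csv"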

Enjoy!

Customizing Eucalyptus Load Balancer for Eucalyptus 4.0

Background

For the Elastic Load Balancing service, Eucalyptus utilizes an HAProxy instance.  The load balancer image contains the following version of haproxy (as of Eucalyptus 4.0.0):

# /usr/sbin/haproxy -vv
HA-Proxy version 1.5-dev21-6b07bf7 +2013/12/17
Copyright 2000-2013 Willy Tarreau <w@1wt.eu>

Build options :
 TARGET = linux2628
 CPU = generic
 CC = gcc
 CFLAGS = -O2 -g -fno-strict-aliasing
 OPTIONS = USE_LINUX_TPROXY=1 USE_REGPARM=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
 maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built without zlib support (USE_ZLIB not set)
Compression algorithms supported : identity
Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 7.8 2008-09-05
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
 epoll : pref=300, test result OK
 poll : pref=200, test result OK
 select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.

By default, the following HAProxy configuration options are used by the Eucalyptus Load Balancer image (defined by the Eucalyptus load-balancer-servo application, which is the controlling mechanism for Eucalyptus load balancing):

#template
global
 maxconn 100000
 ulimit-n 655360
 pidfile /var/run/haproxy.pid

#drop privileges after port binding
 user servo
 group servo

defaults
 contimeout 1000
 clitimeout 10000
 srvtimeout 10000
 option http-server-close # affects KA on/off

Depending on the backend applications that will be used with the Eucalyptus Load Balancer, these settings may not be sufficient.

The goal of this entry is to demonstrate how to customize the Eucalyptus Load Balancer image configuration to handle various backend applications that will be used with the load balancer.

Prerequisites

In order to customize the Eucalyptus Load Balancer image, the credentials of the cloud administrator (Eucalyptus IAM credentials of a user under the ‘eucalyptus’ account) must be used.   These credentials are needed to do the following:

Since the cloud administrator credentials will be used, there will be no need to define any Eucalyptus IAM policies.

To mount and modify the Eucalyptus Load Balancer image, the following Linux tools are needed:

The examples in this blog were all done on a CentOS 6.5 machine where the eucalyptus-load-balancer-image package has been installed.  This package contains the ‘euca-install-load-balancer’ command.

Obtaining the Eucalyptus Elastic Load Balancer Image

There are a couple of ways to obtain the Eucalyptus Load Balancer image:

This blog entry will use the eucalyptus-load-balancer-image RPM, and update the image accordingly.  To get started, create a directory (in this example ‘eucalyptus-lb’), and download the latest Eucalyptus Load Balancer image RPM from downloads.eucalyptus.com:

[root@odc-f-13 ~]# mkdir eucalyptus-lb; cd eucalyptus-lb
[root@odc-f-13 eucalyptus-lb]# wget http://downloads.eucalyptus.com/software/eucalyptus/4.0/centos/6/x86_64/eucalyptus-load-balancer-image-1.1.0-0.212.el6.x86_64.rpm

Once the RPM package has been downloaded, unpack the RPM:

[root@odc-f-13 eucalyptus-lb]# rpm2cpio eucalyptus-load-balancer-image-1.1.0-0.212.el6.x86_64.rpm | cpio --extract --make-directories --preserve-modification-time --verbose
./usr/bin/euca-install-load-balancer
./usr/share/doc/eucalyptus-load-balancer-image-1.1.0
./usr/share/doc/eucalyptus-load-balancer-image-1.1.0/IMAGE-LICENSE
./usr/share/doc/eucalyptus-load-balancer-image-1.1.0/eucalyptus-load-balancer-image.ks
./usr/share/eucalyptus-load-balancer-image
./usr/share/eucalyptus-load-balancer-image/eucalyptus-load-balancer-image-1.1.0-212.tgz
559369 blocks

After unpacking the RPM, change directory to usr/share/eucalyptus-load-balancer-image (under the eucalyptus-lb directory), and decompress the eucalyptus-load-balancer-image-1.1.0-212.tgz file to obtain the Eucalyptus Load Balancer image:

[root@odc-f-13 eucalyptus-lb]# cd usr/share/eucalyptus-load-balancer-image
[root@odc-f-13 eucalyptus-load-balancer-image]# tar -xzvf eucalyptus-load-balancer-image-1.1.0-212.tgz
eucalyptus-load-balancer-image.img

Now that the image is available, we can modify it accordingly.

Modifying the Eucalyptus Load Balancer Image

To modify the Eucalyptus Load Balancer image, the image needs to be mounted to a loopback device, as demonstrated below:

[root@odc-f-13 eucalyptus-load-balancer-image]# mkdir /mnt/centos
[root@odc-f-13 eucalyptus-load-balancer-image]# losetup /dev/loop0 eucalyptus-load-balancer-image.img
[root@odc-f-13 eucalyptus-load-balancer-image]# kpartx -av /dev/loop0
add map loop0p1 (253:2): 0 3145728 linear /dev/loop0 2048
[root@odc-f-13 eucalyptus-load-balancer-image]# mount /dev/mapper/loop0p1 /mnt/centos
[root@odc-f-13 eucalyptus-load-balancer-image]# chroot /mnt/centos
[root@odc-f-13 /]#

We will modify and add the following HAProxy options under the ‘defaults’ section in the /etc/load-balancer-servo/haproxy_template.conf file.  For information about these options, please refer to the HAProxy 1.5 documentation:

  • replace ‘srvtimeout’ with ‘timeout server’ since ‘srvtimeout’ is deprecated, and set the value to ‘2m’
  • replace ‘clitimeout’ with ‘timeout client’ since ‘clitimeout’ is deprecated, and set the value to ‘2m’
  • replace ‘contimeout’ with ‘timeout connect’ since ‘contimeout’ is deprecated, and set the value to ‘5s’
  • add ‘timeout http-keep-alive’ with a value of ‘10s’
  • add ‘timeout queue’ with a value of ‘1m’
  • add ‘timeout check’ with a value of ‘5s’
  • add ‘retries’ with a value of ‘3’
  • add the following options to not log null connections, and to enable session redistribution in case of failure:
    • option dontlognull
    • option redispatch

The /etc/load-balancer-servo/haproxy_template.conf file should look similar to the following after all the desired attributes are added:

[root@odc-f-13 /]# cat /etc/load-balancer-servo/haproxy_template.conf
#template
global
 maxconn 100000
 ulimit-n 655360
 pidfile /var/run/haproxy.pid

#drop privileges after port binding
 user servo
 group servo

defaults
 timeout connect 5s
 timeout client 2m
 timeout server 2m
 timeout http-keep-alive 10s
 timeout queue 1m
 timeout check 5s
 retries 3
 option dontlognull
 option redispatch
 option http-server-close # affects KA on/off

(Note:  Depending upon what edits are being done to the HAProxy configuration settings, there may also be a need to edit the /etc/sysctl.conf file to help get the desired behavior from the Eucalyptus Load Balancer.  For example, the following sysctl properties can be edited to increase/decrease TCP timeouts:

  • net.ipv4.tcp_keepalive_time
  • net.ipv4.tcp_keepalive_intvl
  • net.ipv4.tcp_keepalive_probes

For more information about editing sysctl values, the documentation from RedHat can be referenced.)
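
As an illustrative sketch only (the values below are placeholders, not tuning recommendations), such an edit to /etc/sysctl.conf inside the image might look like the following:

# /etc/sysctl.conf (example values - tune for your workload)
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 5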

Once all edits are completed, confirm that the configuration file is correct, exit out of the chroot environment and unmount the image:

[root@odc-f-13 /]# /usr/sbin/haproxy -c -f /etc/load-balancer-servo/haproxy_template.conf
Configuration file has no error but will not start (no listener) => exit(2).

[root@odc-f-13 eucalyptus-load-balancer-image]# umount /mnt/centos
[root@odc-f-13 eucalyptus-load-balancer-image]# kpartx -dv /dev/loop0
del devmap : loop0p1
[root@odc-f-13 eucalyptus-load-balancer-image]# losetup -d /dev/loop0

Installing the New Eucalyptus Load Balancer Image

After the image has been unmounted, create a new tar-gzipped file that contains the modified Eucalyptus Load Balancer image:

[root@odc-f-13 eucalyptus-load-balancer-image]# tar -zcvf eucalyptus-load-balancer-image-updated.tgz eucalyptus-load-balancer-image.img

Next, make sure the cloud administrator credentials are sourced and check the cloud properties for the Eucalyptus Load Balancer service:

[root@odc-f-13 eucalyptus-load-balancer-image]# cd
[root@odc-f-13 ~]# source eucarc
[root@odc-f-13 ~]# euca-describe-properties | grep load
PROPERTY ViciousLiesAndDangerousRumors.storage.maxconcurrentsnapshotuploads 3
PROPERTY ViciousLiesAndDangerousRumors.storage.snapshotuploadtimeoutinhours 48
PROPERTY authentication.credential_download_host_match {}
PROPERTY loadbalancing.loadbalancer_app_cookie_duration 24
PROPERTY loadbalancing.loadbalancer_dns_subdomain elb
PROPERTY loadbalancing.loadbalancer_emi emi-F0D5828C
PROPERTY loadbalancing.loadbalancer_instance_type m1.medium
PROPERTY loadbalancing.loadbalancer_num_vm 1
PROPERTY loadbalancing.loadbalancer_restricted_ports 22
PROPERTY loadbalancing.loadbalancer_vm_keyname euca-elb
PROPERTY loadbalancing.loadbalancer_vm_ntp_server pool.ntp.org

Check to see what load balancer images are enabled:

[root@odc-f-13 ~]# euca-install-load-balancer --list
Currently Installed Load Balancer Bundles:

Version 1
emi-FA373789 (loadbalancer_v1/eucalyptus-load-balancer-image-1.0.4-164.img.manifest.xml)
 Installed on 2014-05-20 at 07:12:18 PDT

Version 2 (enabled)
emi-F0D5828C (loadbalancer-v2/eucalyptus-load-balancer-image.img.manifest.xml)
 Installed on 2014-05-28 at 11:10:03 PDT

To install the modified load balancer image, use the ‘euca-install-load-balancer‘ command:

[root@odc-f-13 ~]# euca-install-load-balancer -t ~/eucalyptus-lb/usr/share/eucalyptus-load-balancer-image/eucalyptus-load-balancer-image-updated.tgz
Decompressing tarball: eucalyptus-lb/usr/share/eucalyptus-load-balancer-image/eucalyptus-load-balancer-image-updated.tgz
Bundling and uploading image to bucket: loadbalancer-v3
Registering image manifest: loadbalancer-v3/eucalyptus-load-balancer-image.img.manifest.xml
Registered image: emi-BCAD86BE
PROPERTY loadbalancing.loadbalancer_emi emi-BCAD86BE was emi-F0D5828C

Confirm that the new EMI is enabled:

[root@odc-f-13 ~]# euca-install-load-balancer --list
Currently Installed Load Balancer Bundles:

Version 1
emi-FA373789 (loadbalancer_v1/eucalyptus-load-balancer-image-1.0.4-164.img.manifest.xml)
 Installed on 2014-05-20 at 07:12:18 PDT

Version 2
emi-F0D5828C (loadbalancer-v2/eucalyptus-load-balancer-image.img.manifest.xml)
 Installed on 2014-05-28 at 11:10:03 PDT

Version 3 (enabled)
emi-BCAD86BE (loadbalancer-v3/eucalyptus-load-balancer-image-updated.img.manifest.xml)
 Installed on 2014-07-06 at 18:02:20 PDT

Confirming the Updated Load Balancer Configuration

To confirm the changes, make sure the cloud property ‘loadbalancing.loadbalancer_vm_keyname‘ has a defined value for debugging purposes, then create a Eucalyptus Elastic Load Balancer:

[root@odc-f-13 ~]# eulb-create-lb TestLoadBalancer -z ViciousLiesAndDangerousRumors -l "lb-port=80, protocol=HTTP, instance-port=80, instance-protocol=HTTP"
DNS_NAME TestLoadBalancer-408396244283.elb.acme.eucalyptus-systems.com

Confirm that the load balancer instance is running (only the cloud administrator can see the load balancing instance IDs), and authorize port 22 (SSH) to the instance:

[root@odc-f-13 ~]# euca-describe-instances
RESERVATION r-C27D6F37 944786667073 euca-internal-408396244283-TestLoadBalancer
INSTANCE i-A0DBA47D emi-BCAD86BE euca-10-104-6-233.bigboi.acme.eucalyptus-systems.com euca-172-18-246-58.bigboi.internal running euca-elb 0 m1.medium 2014-07-07T01:08:32.400Z ViciousLiesAndDangerousRumors monitoring-enabled 10.104.6.233 172.18.246.58 instance-store hvm bce54dd2-a9af-4587-9421-457e22dda5ff_ViciousLiesAndDangerousR_1 sg-4F469747 arn:aws:iam::944786667073:instance-profile/internal/loadbalancer/loadbalancer-vm-408396244283-TestLoadBalancer
TAG instance i-A0DBA47D Name loadbalancer-resources
TAG instance i-A0DBA47D aws:autoscaling:groupName asg-euca-internal-elb-408396244283-TestLoadBalancer
TAG instance i-A0DBA47D euca:node 10.105.10.7
[root@odc-f-13 ~]# euca-authorize -P tcp -p ssh euca-internal-408396244283-TestLoadBalancer
GROUP euca-internal-408396244283-TestLoadBalancer
PERMISSION euca-internal-408396244283-TestLoadBalancer ALLOWS tcp 22 22 FROM CIDR 0.0.0.0/0

After SSHing into the load balancer instance, confirm that the /var/lib/load-balancer-servo/euca_haproxy.conf file has the updated changes:

[root@odc-f-13 ~]# ssh -i euca-elb.priv root@euca-10-104-6-233.bigboi.acme.eucalyptus-systems.com
The authenticity of host 'euca-10-104-6-233.bigboi.acme.eucalyptus-systems.com (10.104.6.233)' can't be established.
RSA key fingerprint is e3:9b:80:e2:f3:12:a3:0b:f0:5c:7c:6b:bc:d8:9d:77.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'euca-10-104-6-233.bigboi.acme.eucalyptus-systems.com' (RSA) to the list of known hosts.
Warning: the RSA host key for 'euca-10-104-6-233.bigboi.acme.eucalyptus-systems.com' differs from the key for the IP address '10.104.6.233'
Offending key for IP in /root/.ssh/known_hosts:9
Are you sure you want to continue connecting (yes/no)? yes
[root@euca-172-18-246-58 ~]# ps aux | grep haprox
servo 1027 0.0 0.2 58648 3020 ? Ss 01:09 0:00 /usr/sbin/haproxy -f /var/lib/load-balancer-servo/euca_haproxy.conf -p /var/run/load-balancer-servo/haproxy.pid -V -C /var/lib/load-balancer-servo -D
[root@euca-172-18-246-58 ~]# cat /var/lib/load-balancer-servo/euca_haproxy.conf
global
 maxconn 100000
 ulimit-n 655360
 pidfile /var/run/haproxy.pid
 #drop privileges after port binding
 user servo
 group servo

defaults
 timeout connect 5s
 timeout client 2m
 timeout server 2m
 timeout http-keep-alive 10s
 timeout queue 1m
 timeout check 5s
 retries 3
 option dontlognull
 option redispatch
 option http-server-close # affects KA on/off

frontend http-80
 # lb-TestLoadBalancer
 mode http
 option forwardfor except 127.0.0.1
 bind 0.0.0.0:80
 log /var/lib/load-balancer-servo/haproxy.sock local2 info
 log-format httplog\ %f\ %b\ %s\ %ST\ %ts\ %Tq\ %Tw\ %Tc\ %Tr\ %Tt
 default_backend backend-http-80

backend backend-http-80
 mode http
 balance roundrobin

The ‘defaults‘ section should contain all the modifications made to the /etc/load-balancer-servo/haproxy_template.conf file.  The Eucalyptus Load Balancer will now use the updated settings needed to achieve the desired performance with the various backend applications that will be used with the load balancer.

Growing and Securing Instance Root Filesystems With Overlayroot on Eucalyptus

Background

I frequently check out Dustin Kirkland’s blog to get ideas about how to secure instances running on various cloud infrastructures.  Recently, I stumbled across one of his blog entries, where he discussed a package called ‘overlayroot‘, which is part of the cloud-initramfs-tools package for Ubuntu.  The really interesting feature I liked about this package is the ability to encrypt – using dm-crypt – the root filesystem of the instance.  What this means is that if the Node Controller (NC) that hosts the instance happens to become compromised, the information on the device associated with the instance can’t be accessed.  In addition, overlayroot allows the root filesystem of the instance to be ‘laid’ on top of another, larger device.  The advantage here is that there isn’t a need to re-create an image with a larger root filesystem.

Prerequisites

The key prerequisites for this blog were mentioned in my previous blog, which discusses how to bundle, upload and register a CoreOS EMI.  In addition to these prerequisites, the following EC2 actions are needed for the Eucalyptus IAM policy:

Overlayroot is available with any Ubuntu Cloud image from 12.10 (Quantal Quetzal) to the most recent at the time of this blog, 14.10 (Utopic Unicorn).  For this blog, Ubuntu 14.04 LTS (Trusty Tahr) will be used.
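
To confirm that the package is actually present in the image you are working with, a quick check from inside a running instance (or the mounted image chroot) should be enough; this is just a sanity-check sketch:

$ dpkg -l overlayroot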

Setting Up the EMI

As mentioned in the overlayroot blog entry, the configuration of overlayroot is located in /etc/overlayroot.conf.  This can be configured before this Ubuntu Cloud image is bundled, uploaded and registered as an EMI, or after.  For brevity, overlayroot will be configured after the Ubuntu Cloud image has been bundled, uploaded and registered.

To get started, download the Ubuntu Cloud image:

# wget http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img

Next, convert the image from qcow2 to raw:

# qemu-img convert -O raw trusty-server-cloudimg-amd64-disk1.img trusty-server-cloudimg-amd64-disk1.raw

Bundle, upload and register the image as an instance store-backed HVM EMI:

# euca-bundle-and-upload-image -i trusty-server-cloudimg-amd64-disk1.raw -b trusty-server-hvm -r x86_64
# euca-register -n trusty-server-hvm trusty-server-hvm/trusty-server-cloudimg-amd64-disk1.raw.manifest.xml --virtualization-type hvm 
IMAGE emi-348861E4

After creating a keypair using euca-create-keypair, and authorizing SSH access using euca-authorize, launch an instance from the EMI:

# euca-run-instances -k account1-user01 -t m1.medium emi-348861E4
RESERVATION r-BE317660 408396244283 default
INSTANCE i-41DA6A90 emi-348861E4 pending account1-user01 0 m1.medium 2014-06-15T22:47:03.552Z ViciousLiesAndDangerousRumors monitoring-disabled 0.0.0.0 0.0.0.0 instance-store hvm sg-A5133B59

Once the instance has reached a ‘running’ state, SSH into the instance:

# euca-describe-instances i-41DA6A90 
RESERVATION r-BE317660 408396244283 default
INSTANCE i-41DA6A90 emi-348861E4 euca-10-104-6-235.bigboi.acme.eucalyptus-systems.com euca-172-18-238-170.bigboi.internal running account1-user01 0 m1.medium 2014-06-15T22:47:03.552Z ViciousLiesAndDangerousRumors monitoring-disabled 10.104.6.235 172.18.238.170 instance-store hvm sg-A5133B59
# ssh -i account1-user01/account1-user01.priv ubuntu@euca-10-104-6-235.bigboi.acme.eucalyptus-systems.com

Encrypted Root Filesystem

Once inside the instance, since the EMI is an instance store-backed HVM EMI, the ephemeral disk is /dev/vdb:

ubuntu@euca-172-18-238-170:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 2.2G 0 disk
└─vda1 253:1 0 2.2G 0 part /
vdb 253:16 0 7.8G 0 disk

Edit the ‘overlayroot’ option in /etc/overlayroot.conf so that /dev/vdb is used to encrypt the root filesystem:

ubuntu@euca-172-18-238-170:~$ sudo vi /etc/overlayroot.conf
......
overlayroot="crypt:dev=/dev/vdb"

After editing /etc/overlayroot.conf, reboot the instance:

ubuntu@euca-172-18-238-170:~$ sudo reboot

After the instance has been rebooted, SSH back into the instance and observe the new layout of the filesystem:

ubuntu@euca-172-18-238-170:~$ mount
overlayroot on / type overlayfs (rw,lowerdir=/media/root-ro/,upperdir=/media/root-rw/overlay)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/cgroup type tmpfs (rw)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
/dev/vda1 on /media/root-ro type ext4 (ro)
/dev/mapper/secure on /media/root-rw type ext4 (rw,relatime,data=ordered)
none on /sys/fs/pstore type pstore (rw)
systemd on /sys/fs/cgroup/systemd type cgroup (rw,noexec,nosuid,nodev,none,name=systemd)

To verify the encrypted filesystem, use cryptsetup:

ubuntu@euca-172-18-238-170:~$ sudo cryptsetup luksDump /dev/vdb
LUKS header information for /dev/vdb
Version: 1
Cipher name: aes
Cipher mode: xts-plain64
Hash spec: sha1
Payload offset: 4096
MK bits: 256
MK digest: 77 70 97 81 9e d6 56 4b c5 79 60 92 23 02 18 80 9c eb 40 c8
MK salt: ed d1 69 1e d7 49 a6 a6 91 fb 0f 44 3a a2 0b b2
 2a 56 fb 82 77 3f 70 9c 70 ff c2 15 37 09 10 2c
MK iterations: 33250
UUID: 164026af-f6fa-4dd4-8042-5cd24b8c00c9
Key Slot 0: ENABLED
 Iterations: 132641
 Salt: 5e 7f 03 33 d9 af e8 a9 37 fb 5c 4e 10 db c5 38
 f7 9f 49 12 d8 c4 43 8c cb 79 4a 25 da 04 c9 73
 Key material offset: 8
 AF stripes: 4000
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED

Expand and Encrypt the Root Filesystem

If the cloud user wants to increase the size of the root filesystem and encrypt it, but needs more space than the ephemeral storage provides, the user can create a volume, attach it to the instance, then configure overlayroot to use that device.  For example:

# euca-create-volume -s 20 -z ViciousLiesAndDangerousRumors
# euca-describe-volumes vol-450936D7
VOLUME vol-450936D7 20 ViciousLiesAndDangerousRumors available 2014-06-16T19:25:36.936Z standard
# euca-attach-volume -i i-41DA6A90 -d /dev/sdf vol-450936D7
ATTACHMENT vol-450936D7 i-41DA6A90 /dev/sdf attaching 2014-06-16T19:30:29.605Z
# euca-describe-instances i-41DA6A90
RESERVATION r-BE317660 408396244283 default
INSTANCE i-41DA6A90 emi-348861E4 euca-10-104-6-235.bigboi.acme.eucalyptus-systems.com euca-172-18-238-170.bigboi.internal running account1-user01 0 m1.medium 2014-06-15T22:47:03.552Z ViciousLiesAndDangerousRumors monitoring-disabled 10.104.6.235 172.18.238.170 instance-store hvm sg-A5133B59
BLOCKDEVICE /dev/sdf vol-450936D7 2014-06-16T19:30:29.619Z false

Follow the steps above regarding editing the /etc/overlayroot.conf, except use /dev/vdc instead of /dev/vdb:

# ssh -i account1-user01/account1-user01.priv ubuntu@euca-10-104-6-235.bigboi.acme.eucalyptus-systems.com
ubuntu@euca-172-18-238-170:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 2.2G 0 disk
└─vda1 253:1 0 2.2G 0 part /
vdb 253:16 0 7.8G 0 disk
vdc 253:32 0 20G 0 disk
ubuntu@euca-172-18-238-170:~$ sudo vi /etc/overlayroot.conf
.....
overlayroot="crypt:dev=/dev/vdc"

Reboot the instance.  After the reboot completes, SSH back into the instance and notice that the root filesystem has increased in size and is encrypted:

ubuntu@euca-172-18-238-170:~$ sudo reboot
# ssh -i account1-user01/account1-user01.priv ubuntu@euca-10-104-6-235.bigboi.acme.eucalyptus-systems.com
ubuntu@euca-172-18-238-170:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 2.2G 0 disk
└─vda1 253:1 0 2.2G 0 part /media/root-ro
vdb 253:16 0 7.8G 0 disk
vdc 253:32 0 20G 0 disk
└─secure (dm-0) 252:0 0 20G 0 crypt /media/root-rw
ubuntu@euca-172-18-238-170:~$ df -ah
Filesystem Size Used Avail Use% Mounted on
overlayroot 20G 46M 19G 1% /
proc 0 0 0 - /proc
sysfs 0 0 0 - /sys
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 0 0 0 - /sys/fs/fuse/connections
none 0 0 0 - /sys/kernel/debug
none 0 0 0 - /sys/kernel/security
udev 493M 8.0K 493M 1% /dev
devpts 0 0 0 - /dev/pts
tmpfs 100M 328K 100M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 498M 0 498M 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/vda1 2.2G 754M 1.3G 37% /media/root-ro
/dev/mapper/secure 20G 46M 19G 1% /media/root-rw
none 0 0 0 - /sys/fs/pstore
systemd 0 0 0 - /sys/fs/cgroup/systemd

Bonus – Configuring Overlayroot Before Bundling, Uploading and Registering the EMI

As mentioned before, overlayroot can be configured before the image has been bundled, uploaded and registered as an instance store-backed HVM EMI.  After downloading the Ubuntu Trusty cloud image, use losetup and kpartx to aid in mounting the image in order to configure overlayroot:

# qemu-img convert -O raw trusty-server-cloudimg-amd64-disk1.img trusty-server-cloudimg-amd64-disk1.raw
# losetup /dev/loop0 trusty-server-cloudimg-amd64-disk1.raw
# kpartx -av /dev/loop0
add map loop0p1 (253:2): 0 4192256 linear /dev/loop0 2048
# mkdir /mnt/ubuntu
# mount /dev/mapper/loop0p1 /mnt/ubuntu
# chroot /mnt/ubuntu
root@odc-f-13:/# vi /etc/overlayroot.conf
(added the following)
overlayroot="crypt:dev=/dev/vdb"
root@odc-f-13:/# exit
exit
# umount /mnt/ubuntu
# kpartx -dv /dev/loop0
# losetup -d /dev/loop0

After unmounting the image, just bundle, upload and register the image as an instance store-backed HVM EMI.  When an instance is launched from the EMI, the root filesystem will automatically be encrypted.

Conclusion

Using overlayroot gives additional instance security for the cloud user.  For more information about the additional configuration options of overlayroot, please refer to the overlayroot.conf file in the cloud-initramfs-tools repository on Launchpad.

Enjoy!

CoreOS CloudInit Config for Docker Storage Management

CoreOS is a Linux distribution that allows easy deployment of Docker environments.  With CoreOS, users have the ability to deploy clustered Docker environments,  or deploy zero downtime applications.  Recently, I have blogged about how to deploy and use Docker on Eucalyptus cloud environments. This blog will focus on how to leverage cloud-init configuration with a CoreOS EMI to manage instance storage that will be used by Docker containers on Eucalyptus 4.0.  The same cloud-init configuration file can be used  on AWS with CoreOS AMIs, which is yet another example of how Eucalyptus has continued to maintain its focus on being the best on-premise AWS compatible cloud environment.

Prerequisites

Since Eucalyptus Identity and Access Management (IAM) is very similar to AWS’s IAM, at a minimum – the following Elastic Compute Cloud (EC2) actions need to be allowed:

In order to bundle, upload and register the CoreOS image, use the following AWS S3 policy (which can be generated using AWS Policy Generator):

{
  "Statement": [
    {
      "Sid": "Stmt1402675433766",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

For more information about how to use Eucalyptus IAM, please refer to the Eucalyptus 4.0 Administrator documentation regarding access concepts and policy overview.

In addition to the correct IAM policy being applied to the user, here are the other prerequisites that need to be met:

Once these prerequisites are met, the Eucalyptus user will be able to implement the topic for this blog.

CoreOS CloudInit Config for Docker Storage Management

As mentioned in the CoreOS documentation regarding how to use CoreOS with Eucalyptus, the user needs to do the following:

  • Download the CoreOS image
  • Decompress the CoreOS image
  • Bundle, upload and register the image

For example:

# wget -q http://beta.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2

# bunzip2 coreos_production_openstack_image.img.bz2

# qemu-img convert -O raw coreos_production_openstack_image.img coreos_production_openstack_image.raw
# euca-bundle-and-upload-image -i coreos_production_openstack_image.raw -b coreos-production-beta -r x86_64
# euca-register -n coreos-production coreos-production-beta/coreos_production_openstack_image.raw.manifest.xml --virtualization-type hvm
IMAGE emi-98868F66

After the image is registered, create a security group and authorize port 22 for SSH access to the CoreOS instance:

# euca-create-group coreos-testing -d "Security Group for CoreOS Cluster"
GROUP sg-C8E3B168 coreos-testing Security Group for CoreOS Cluster
# euca-authorize -P tcp -p ssh coreos-testing
GROUP coreos-testing
PERMISSION coreos-testing ALLOWS tcp 22 22 FROM CIDR 0.0.0.0/0

Next, create a keypair that will be used to access the CoreOS instance:

# euca-create-keypair coreos > coreos.priv
# chmod 0600 coreos.priv

Now we need to create the cloud-init configuration file.  CoreOS implements a subset of the cloud-init config spec with coreos-cloudinit.  The cloud-init config below will do the following:

  1. wipe the ephemeral device – /dev/vdb (since the CoreOS EMI is an instance store-backed HVM image, the ephemeral device will be /dev/vdb)
  2. format the ephemeral device with BTRFS filesystem
  3. mount /dev/vdb to /var/lib/docker (which is the location for images used by the Docker containers)

Create a cloud-init.config file with the following information:

#cloud-config
coreos:
  units:
    - name: format-ephemeral.service
      command: start
      content: |
        [Unit]
        Description=Formats the ephemeral drive
        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/sbin/wipefs -f /dev/vdb
        ExecStart=/usr/sbin/mkfs.btrfs -f /dev/vdb
    - name: var-lib-docker.mount
      command: start
      content: |
        [Unit]
        Description=Mount ephemeral to /var/lib/docker
        Requires=format-ephemeral.service
        Before=docker.service
        [Mount]
        What=/dev/vdb
        Where=/var/lib/docker
        Type=btrfs

Use euca-describe-instance-types to select the desired instance type for the CoreOS instance (in this example, c1.medium will be used).

# euca-describe-instance-types 
INSTANCETYPE Name CPUs Memory (MiB) Disk (GiB)
INSTANCETYPE t1.micro 1 256 5
INSTANCETYPE m1.small 1 512 10
INSTANCETYPE m1.medium 1 1024 10
INSTANCETYPE c1.xlarge 2 2048 10
INSTANCETYPE m1.large 2 1024 15
INSTANCETYPE c1.medium 1 1024 20
INSTANCETYPE m1.xlarge 2 1024 30
INSTANCETYPE m2.2xlarge 2 4096 30
INSTANCETYPE m3.2xlarge 4 4096 30
INSTANCETYPE m2.xlarge 2 2048 40
INSTANCETYPE m3.xlarge 2 2048 50
INSTANCETYPE cc1.4xlarge 8 3072 60
INSTANCETYPE m2.4xlarge 8 4096 60
INSTANCETYPE hi1.4xlarge 8 6144 120
INSTANCETYPE cc2.8xlarge 16 6144 120
INSTANCETYPE cg1.4xlarge 16 12288 200
INSTANCETYPE cr1.8xlarge 16 16384 240
INSTANCETYPE hs1.8xlarge 48 119808 24000

Use euca-run-instances to launch the CoreOS image as an instance, passing the cloud-init.config file using the --user-data-file option:

# euca-run-instances -k coreos -t c1.medium emi-98868F66 --user-data-file cloud-init-docker-storage.config
RESERVATION r-FC799274 408396244283 default
INSTANCE i-AF303D5D emi-98868F66 pending coreos 0 c1.medium 2014-06-12T13:38:31.008Z ViciousLiesAndDangerousRumors monitoring-disabled 0.0.0.0 0.0.0.0 instance-store hvm sg-A5133B59

Once the instance reaches the ‘running’ state, SSH into the instance to see the ephemeral storage mounted and formatted correctly:

# euca-describe-instances i-AF303D5D --region account1-user01@
RESERVATION r-FC799274 408396244283 default
INSTANCE i-AF303D5D emi-98868F66 euca-10-104-6-236.bigboi.acme.eucalyptus-systems.com euca-172-18-238-171.bigboi.internal running coreos 0 c1.medium 2014-06-12T13:38:31.008Z ViciousLiesAndDangerousRumors monitoring-disabled 10.104.6.236 172.18.238.17 instance-store hvm sg-A5133B59
# ssh -i coreos.priv core@euca-10-104-6-236.bigboi.acme.eucalyptus-systems.com
CoreOS (beta)
core@localhost ~ $ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 254:0 0 8.3G 0 disk
|-vda1 254:1 0 128M 0 part
|-vda2 254:2 0 64M 0 part
|-vda3 254:3 0 1G 0 part
|-vda4 254:4 0 1G 0 part /usr
|-vda6 254:6 0 128M 0 part /usr/share/oem
`-vda9 254:9 0 6G 0 part /
vdb 254:16 0 11.7G 0 disk /var/lib/docker
core@localhost ~ $ mount
.......
/dev/vda6 on /usr/share/oem type ext4 (rw,nodev,relatime,commit=600,data=ordered
/dev/vdb on /var/lib/docker type btrfs (rw,relatime,space_cache)

The instance is now ready for docker containers to be created.  For some docker container examples, check out the CoreOS documentation and the Docker documentation.
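
As a quick, hedged sanity check (busybox is just an example image), pulling and running a container and then looking at /var/lib/docker should show the image layers landing on the btrfs-backed ephemeral disk:

core@localhost ~ $ docker run busybox /bin/echo "hello from btrfs-backed storage"
core@localhost ~ $ docker images
core@localhost ~ $ df -h /var/lib/docker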

Enjoy!

Making FileVault Use a Disk Password

hspencer77:

Great use of FileVault on Mac OS X.

Originally posted on /dev/zero:

To unlock a disk that is encrypted with OS X’s FileVault feature one needs to type in the password that belongs to any user on the machine who is allowed to unlock the disk. The system then boots and helpfully logs you in as that user. In general that is probably a convenient little feature, but for me it just makes things awkward — I want to use different passwords for unlocking the disk and logging into my user account. To make that work I have to create a second account dedicated to unlocking the disk, get logged into that one when the system boots, then immediately log back out so I can log in as the user I actually want to use.

Or do I?

The system that powers FileVault, Core Storage, combines full disk encryption and some logical volume management features in a manner similar to LVM…


Install Eucalyptus 4.0 Using Motherbrain and Chef

hspencer77:

Great blog discussing how to use Motherbrain, Chef and Eucalyptus.

Originally posted on Testing Clouds at 128bpm:

Introduction

Installing distributed systems can be a tedious and time consuming process. Luckily there are many solutions for distributed configuration management available to the open source community. Over the past few months, I have been working on the Eucalyptus cookbook which allows for standardized deployments of Eucalyptus using Chef. This functionality has already been implemented in MicroQA using individual calls to Knife (the Chef command line interface) for each machine in the deployment. Orchestration of the deployment is rather static and thus only 3 topologies have been implemented as part of the deployment tab

Last month, Riot Games released Motherbrain, their orchestration framework that allows flexible, repeatable, and scalable deployment of multi-tiered applications. Their approach to the deployment roll out problem is simple and understandable. You configure manifests that define how your application components are split up then define the order in which they should be deployed.

For…


Yet Another AWS Compatibility Example Using Vagrant-AWS Plugin and Eucalyptus 3.4 to Deploy Docker

Something that Eucalyptus has been consistent about from the beginning is its stance on being the best open source, on-premise AWS-compatible cloud on the market.  This blog entry is just another example demonstrating this compatibility.

My most recent blog entries have been centered around Docker and how to deploy it on Eucalyptus.  This entry will show how a user can follow Docker’s own documentation on deploying Docker using Vagrant with AWS – but against Eucalyptus instead.  Before getting started, there are some prerequisites that need to be in place.

Prerequisites for Eucalyptus Cloud

In order to get started, the Eucalyptus cloud needs to have an Ubuntu Raring Cloud Image bundled, uploaded and registered before the steps below can be followed.  The previous blog entries below will help here:

In addition to the Ubuntu Raring Cloud EMI being available, the user also needs to have the following:

After these prerequisites have been met, the user is ready to set up Vagrant to interact with Eucalyptus.

Setting up Vagrant Environment

To start out, we need to set up the Vagrant environment.  The steps below will get you going:

  1. Install Vagrant from http://www.vagrantup.com/. (optional – package manager can be used here instead)
  2. Install the vagrant aws plugin:  

    vagrant plugin install vagrant-aws

After Vagrant and the vagrant aws plugin have been successfully installed, all that is left is to create a Vagrantfile to provide information to Vagrant as to how to interact with Eucalyptus.  Since the vagrant aws plugin is being used, and Eucalyptus is compatible with AWS, the configuration will be very similar to AWS.

I provided a Vagrantfile on Github to help get users up to speed quicker.  To check out the Vagrantfile, just clone the repository from Github:

$ git clone https://github.com/hspencer77/eucalyptus-docker-raring.git

After checking out the file, change directory to eucalyptus-docker-raring, and edit the following variables to match your user information for the Eucalyptus cloud that will run the Docker instance:

AWS_ENDPOINT = "<EC2_URL for Eucalyptus Cloud>"
AWS_AMI = "<ID for Ubuntu Raring EMI>"
AWS_ACCESS_KEY = "<Access Key ID>"
AWS_SECRET_KEY = "<Secret Key>"
AWS_KEYPAIR_NAME = "<Key Pair name>"
AWS_INSTANCE_TYPE = '<VM type>'
SSH_PRIVKEY_PATH = "<The path to the private key for the named keypair, for example ~/.ssh/docker.pem>"
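
For orientation, the sketch below shows how these variables are typically wired into a vagrant-aws provider block.  The Vagrantfile in the repository is the authoritative version; the attribute names here follow the vagrant-aws README, and the "dummy" box and "ubuntu" SSH username are assumptions that should match what the repository actually uses:

Vagrant.configure("2") do |config|
  config.vm.box = "dummy"   # placeholder box commonly used with vagrant-aws

  config.vm.provider :aws do |aws, override|
    aws.endpoint          = AWS_ENDPOINT      # Eucalyptus EC2 endpoint instead of AWS
    aws.ami               = AWS_AMI
    aws.access_key_id     = AWS_ACCESS_KEY
    aws.secret_access_key = AWS_SECRET_KEY
    aws.keypair_name      = AWS_KEYPAIR_NAME
    aws.instance_type     = AWS_INSTANCE_TYPE

    override.ssh.username         = "ubuntu"  # default user on the Ubuntu Raring EMI
    override.ssh.private_key_path = SSH_PRIVKEY_PATH
  end
end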

Once the Vagrantfile is populated with the correct information, we are now ready to launch the Docker instance.

Launch the Docker Instance Using Vagrant

From here, Vagrant makes this very straightforward.  There are only two steps to launch the Docker instance.

  1. Run the following command to launch the instance:

     vagrant up --provider=aws
  2. After Vagrant finishes deploying the instance, SSH into the instance:

    vagrant ssh

That's it!  Once you have SSHed into the instance, run Docker by executing the following command:

ubuntu@euca-172-17-120-212:~$ sudo docker

You have successfully launched a Docker instance on Eucalyptus using Vagrant.  Since Eucalyptus works with the vagrant aws plugin, the same Vagrantfile can be used against AWS (of course, the values for the variables above will change).  This is a perfect dev/test to production setup, whether Eucalyptus is used for dev/test and AWS for production (or vice versa).

Deploying CentOS 6.5 Image in Docker on Eucalyptus 3.4

This blog entry is a follow-up of my blog entry entitled “Step-by-Step Deployment of Docker on Eucalyptus 3.4 for the Cloud Administrator“.  In that blog entry, I covered how to deploy an Ubuntu Raring Cloud Image on Eucalyptus, and use that image to deploy Docker.  The really cool thing about Docker is that it provides the ability to deploy different operating systems within one machine – in this case, an instance.  The focus of this entry is to show how to deploy a CentOS 6.5 image in an instance running Docker on Eucalyptus 3.4.

Prerequisites

The prerequisite for this blog is to complete the steps outlined in my previous blog about Docker on Eucalyptus 3.4.  Once those steps are completed by the cloud administrator (i.e. a user associated with the ‘eucalyptus’ account), you can get started on the steps below.  One thing to note here: there is no need to be the cloud administrator to follow these steps.  This entry is entirely directed at cloud users.  Since Eucalyptus IAM is similar to AWS IAM, the following EC2 actions need to be allowed for the cloud user:

Instance Deployment

To get started, we need to launch the Ubuntu Raring EMI that has been provided by the cloud administrator.  In this example, the EMI will be emi-26403979:

$ euca-describe-images emi-26403979 --region account1-user01@
IMAGE emi-26403979 ubuntu-raring-docker-rootfs-v3/ubuntu-raring-docker-v3.manifest.xml
 441445882805 available private x86_64 machine eki-17093995 
eri-6BF033EE instance-store paravirtualized

If the --region option seems confusing, it is because I am using the nice configuration file feature in euca2ools.  It's really helpful when you are using different credentials for different users.
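
For context, euca2ools 3 reads users and regions from INI-style files under ~/.euca/, and the 'user@region' shorthand (here 'account1-user01@', with the region left to the default) selects entries from them.  The sketch below is only illustrative: the section and key names should be checked against the euca2ools documentation, and the URL and keys are placeholders:

; ~/.euca/account1.ini (sketch with placeholder values)
[user account1-user01]
key-id = <access key id>
secret-key = <secret key>

[region cloud1]
ec2-url = http://<cloud-frontend>:8773/services/Eucalyptus
user = account1-user01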

Now that we know which EMI to use, let's launch the instance.  We will be using the cloud-init config file from my previous Docker blog to configure the instance.  The VM type used here is c1.xlarge, because I wanted to make sure I had 2 CPUs and 2 GB of RAM for my instance:

$ euca-run-instances -k account1-user01 -t c1.xlarge \
--user-data-file cloud-init-docker.config emi-26403979 \
--region account1-user01@
RESERVATION r-2EE941D4 961915002812 default
INSTANCE i-503642D4 emi-26403979 euca-0-0-0-0.eucalyptus.euca-hasp.eucalyptus-systems.com
 euca-0-0-0-0.eucalyptus.internal pending account1-user01 0 
c1.xlarge 2014-02-14T00:47:15.632Z LayinDaSmackDown eki-17093995 
eri-6BF033EE monitoring-disabled 0.0.0.0 0.0.0.0 instance-store paravirtualized

Make sure the instance gets into the running state:

$ euca-describe-instances --region account1-user01@
RESERVATION r-2EE941D4 961915002812 default
INSTANCE i-503642D4 emi-26403979 euca-10-104-7-10.eucalyptus.euca-hasp.eucalyptus-systems.com
 euca-172-17-112-207.eucalyptus.internal running 
account1-user01 0 c1.xlarge 2014-02-14T00:47:15.632Z 
LayinDaSmackDown eki-17093995 eri-6BF033EE monitoring-disabled 
10.104.7.10 172.17.112.207 instance-store paravirtualized

Now that it's in the running state, let's SSH into the instance to make sure it's up and running:

$ ssh -i account1-user01.priv ubuntu@euca-10-104-7-10.eucalyptus.euca-hasp.eucalyptus-systems.com
Welcome to Ubuntu 13.04 (GNU/Linux 3.8.0-33-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Fri Feb 14 00:49:52 UTC 2014
System load: 0.21 Users logged in: 0
 Usage of /: 21.8% of 4.80GB IP address for eth0: 172.17.112.207
 Memory usage: 5% IP address for lxcbr0: 10.0.3.1
 Swap usage: 0% IP address for docker0: 10.42.42.1
 Processes: 85
Graph this data and manage this system at https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:

http://www.ubuntu.com/business/services/cloud

Use Juju to deploy your cloud instances and workloads:

https://juju.ubuntu.com/#cloud-raring

Your Ubuntu release is not supported anymore.
For upgrade information, please visit:

http://www.ubuntu.com/releaseendoflife

New release '13.10' available.
Run 'do-release-upgrade' to upgrade to it.
Last login: Tue Nov 19 00:48:23 2013 from odc-d-06-07.prc.eucalyptus-systems.com
ubuntu@euca-172-17-112-207:~$

Now it's time to prepare the instance to create your own base Docker CentOS 6.5 image.

Preparing the Instance

To prepare the instance to create the base image, install the following packages using apt-get:

ubuntu@euca-172-17-112-207:~$ sudo -s
root@euca-172-17-112-207:~# apt-get install rinse perl rpm \
rpm2cpio libwww-perl liblwp-protocol-https-perl

After the packages finish installing, let's go ahead and mount the ephemeral storage on the instance as /tmp to give Docker extra space for building the base image:

root@euca-172-17-112-207:~# curl http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0
sda2
root@euca-172-17-112-207:~# mount /dev/vda2 /tmp
root@euca-172-17-112-207:~# df -ah
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 4.8G 1.1G 3.5G 24% /
proc 0 0 0 - /proc
sysfs 0 0 0 - /sys
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 0 0 0 - /sys/fs/fuse/connections
none 0 0 0 - /sys/kernel/debug
none 0 0 0 - /sys/kernel/security
udev 993M 8.0K 993M 1% /dev
devpts 0 0 0 - /dev/pts
tmpfs 201M 240K 201M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 1002M 0 1002M 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/vda2 9.4G 150M 8.8G 2% /tmp
root@euca-172-17-112-207:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 15G 0 disk
├─vda1 253:1 0 5G 0 part /
├─vda2 253:2 0 9.5G 0 part /tmp
└─vda3 253:3 0 512M 0 part

Next, download the mkimage-rinse.sh script from the Docker repository on Github:

root@euca-172-17-112-207:~#  wget --no-check-certificate https://raw.github.com/dotcloud/docker/master/contrib/mkimage-rinse.sh
root@euca-172-17-112-207:~# chmod 755 mkimage-rinse.sh

Now we are ready to build the CentOS 6.5 Base Image.

Building the CentOS Image

The only thing left to do is build the base image.  Use mkimage-rinse.sh to build the CentOS 6.5 base image:

root@euca-172-17-112-207:~# ./mkimage-rinse.sh ubuntu/centos centos-6

After the installation is complete, test out the base image:

root@euca-172-17-112-207:~# docker run ubuntu/centos:6.5 cat /etc/centos-release
CentOS release 6.5 (Final)

Now we have a CentOS 6.5 base image.  The mkimage-rinse.sh script can also be used to build CentOS 5 base images: instead of passing centos-6, just pass centos-5, as sketched below.  For example, I have created a CentOS 5 base image in this instance as well; the output following the sketch shows the images added to Docker:
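
A hedged sketch of that CentOS 5 build, reusing the same repository name as above:

root@euca-172-17-112-207:~# ./mkimage-rinse.sh ubuntu/centos centos-5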

root@euca-172-17-112-207:~# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
ubuntu/centos 5.10 06019bdea24b 2 minutes ago 123.8 MB
ubuntu/centos 6.5 cd23a8442c8a 2 hours ago 127.3 MB
root@euca-172-17-112-207:~# docker run ubuntu/centos:5.10 cat /etc/redhat-release
CentOS release 5.10 (Final)

As you can see, Docker helps developers test their software across multiple Linux distributions.  With Eucalyptus (just as in AWS), users can use the ephemeral nature of instances to quickly stand up this test environment in one instance.