Using Boto3 Against HPE Helion Eucalyptus 4.2 Deployments

Recently, there was a blog entry posted on the AWS Developer Blog discussing how to migrate to boto3. Since HPE Helion Eucalyptus strives to provide 100% AWS-compatible APIs for implemented services, AWS SDKs – such as the AWS SDK for Python – work solidly against it. This blog entry will demonstrate how to use boto3 – the latest version of the AWS SDK for Python – with HPE Helion Eucalyptus 4.2.

At the time of the posting of this blog entry, HPE Helion Eucalyptus 4.2 supports a number of AWS service APIs, including EC2, S3, CloudFormation, and Auto Scaling, each of which is demonstrated below.

Installation

As mentioned in the boto3 documentation, install boto3 using pip:

# pip install boto3

Configuration

Again, as mentioned in the boto3 documentation, configuration can be done using the AWS CLI, or by manually creating the config and credentials files under the .aws directory. For example, here are the contents of the .aws/config and .aws/credentials files that will be used for this demonstration:

# cat .aws/config
[profile devops-admin]
output = json
region = us-east-1
# cat .aws/credentials
[devops-admin]
aws_access_key_id = XXXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXX

If you prefer not to use these files, you can instead pass the AWS Access Key ID and AWS Secret Access Key programmatically. This will be shown later in this entry.

Using Boto3

To demonstrate how to use boto3, IPython will be used. To get started, import the Session class from the boto3 library:

# ipython
Python 2.6.6 (r266:84292, Jan 22 2014, 09:42:36)
Type "copyright", "credits" or "license" for more information.
IPython 0.13.2 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: from boto3.session import Session

Next, instantiate the session:

In [2]: session = Session(region_name='us-east-1', profile_name="devops-admin")

Alternatively, as mentioned earlier, the AWS Access Key ID and AWS Secret Access Key can be passed programmatically when the session is instantiated:

In [2]: session = Session(aws_access_key_id='XXXXXXXXXXXXXX', aws_secret_access_key='XXXXXXXXXXXXXXXXXXXXXXXX', region_name='us-east-1')

Even though region_name has a value here, when the client connection is created, the service endpoint will be an HPE Helion Eucalyptus service endpoint. Any valid AWS region name can be used with HPE Helion Eucalyptus; the important piece is the endpoint URL.

From here, we can use the session to establish a client connection with a given HPE Helion Eucalyptus service endpoint.  Since the HPE Helion Eucalyptus cloud used in this example contains HTTPS endpoints, the trusted root certificate for the cloud subdomain will be passed as well.

Examples

Here is an example connecting to the EC2 service endpoint provided by the HPE Helion Eucalyptus Compute service to discover which instances are associated with the authenticated user account:

In [3]: client = session.client('ec2', endpoint_url='https://ec2.c-05.autoqa.qa1.eucalyptus-systems.com/', verify='/root/euca-ca-0.crt')
In [4]: for reservation in client.describe_instances()['Reservations']:
   ...:     for instance in reservation['Instances']:
   ...:         print instance['InstanceId']
   ...:
i-4064f4e7
i-1c8515dd
i-79e96bc1
i-d43f50f1
i-b4adc06b
i-c4025e42
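
If an account has many reservations, the same listing can also be written with a boto3 paginator. Below is a minimal sketch, assuming the endpoint honors the EC2 API's pagination tokens:

# A sketch using a boto3 paginator (assumes pagination tokens are honored).
paginator = client.get_paginator('describe_instances')
for page in paginator.paginate():
    for reservation in page['Reservations']:
        for instance in reservation['Instances']:
            print instance['InstanceId']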

Below is another example connecting to the S3 service endpoint provided by the HPE Helion Eucalyptus Object Storage Gateway (OSG) service to list the buckets owned by the authenticated user account:

In [5]: client = session.client('s3', endpoint_url='https://s3.c-05.autoqa.qa1.eucalyptus-systems.com/', verify='/root/euca-ca-0.crt')
In [6]: for bucket in client.list_buckets()['Buckets']:
   ...:     print bucket['Name']
   ...:
cfn-templates
ubuntu-trusty-x86_64-hvm-20151218
ubuntu-xenial-x86_64-hvm-20151217
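
The same client also handles object operations. Below is a hedged sketch that uploads a local file to the cfn-templates bucket shown above and then lists the bucket's contents; the local path and key name are illustrative only:

# The local path and key name below are hypothetical.
client.upload_file('/root/templates/coreos-cluster.json', 'cfn-templates',
                   'coreos-cluster.json')
for obj in client.list_objects(Bucket='cfn-templates').get('Contents', []):
    print obj['Key']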

Below is another example, this time connecting to the CloudFormation service endpoint provided by the HPE Helion Eucalyptus CloudFormation service:

In [7]: client = session.client('cloudformation', endpoint_url='https://cloudformation.c-05.autoqa.qa1.eucalyptus-systems.com/', verify='/root/euca-ca-0.crt')
In [8]: for stack in client.describe_stacks()['Stacks']:
   ...:     print "Stack Name: " + stack['StackName']
   ...:     print "Status: " + stack['StackStatus']
   ...:     print "ID: " + stack['StackId']
   ...:
Stack Name: CoreOSCluster
Status: CREATE_COMPLETE
ID: arn:aws:cloudformation::001520216600:stack/CoreOSCluster/12437fe7-8a03-4920-9e34-270764450fa0
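
The same client can also drill into a stack's resources. Here is a short sketch using the CoreOSCluster stack listed above:

# List the logical resources behind the CoreOSCluster stack.
stack_resources = client.describe_stack_resources(StackName='CoreOSCluster')
for resource in stack_resources['StackResources']:
    print resource['LogicalResourceId'] + ": " + resource['ResourceType']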

And for the last example, connecting to the AutoScaling service endpoint provided by the HPE Helion Eucalyptus AutoScaling service:

In [9]: client = session.client('autoscaling', endpoint_url='https://autoscaling.c-05.autoqa.qa1.eucalyptus-systems.com/', verify='/root/euca-ca-0.crt')
In [10]: for asg in client.describe_auto_scaling_groups()['AutoScalingGroups']:
    ...:     print "AutoScaling Group Name: " + asg['AutoScalingGroupName']
    ...:     print "Launch Config: " + asg['LaunchConfigurationName']
    ...:     print "Availability Zones:"
    ...:     for az in asg['AvailabilityZones']:
    ...:         print "\t" + az
    ...:     print "AutoScaling Group Instances:"
    ...:     for instance in asg['Instances']:
    ...:         print "\t" + instance['InstanceId']
    ...:
AutoScaling Group Name: CoreOSCluster-CoreOsGroup-JTKMRINKKMYDI
Launch Config: CoreOSCluster-CoreOsLaunchConfig-LAWHOT5X5K5PX
Availability Zones:
 us-east-1c
 us-east-1b
 us-east-1a
AutoScaling Group Instances:
 i-79e96bc1
 i-4064f4e7
 i-c4025e42
 i-d43f50f1
 i-1c8515dd
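
Mutating calls work the same way. For example, here is a sketch that resizes the group listed above; the desired capacity value is illustrative and must fall within the group's MinSize/MaxSize bounds:

# Resize the CoreOS group (capacity value is illustrative).
client.set_desired_capacity(
    AutoScalingGroupName='CoreOSCluster-CoreOsGroup-JTKMRINKKMYDI',
    DesiredCapacity=6,
    HonorCooldown=True)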

Conclusion

As demonstrated, boto3 can be used with any AWS-compatible service API implemented by HPE Helion Eucalyptus. If your team isn't ready to move to boto3 yet, boto can still be used with HPE Helion Eucalyptus.
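
For reference, here is a minimal boto sketch against the same EC2 endpoint used earlier. HTTPS certificate handling is left to boto's configuration (for example, the ca_certificates_file setting in the boto config file):

import boto
from boto.ec2.regioninfo import RegionInfo

# Point boto at the Eucalyptus EC2 endpoint instead of an AWS region.
region = RegionInfo(name='us-east-1',
                    endpoint='ec2.c-05.autoqa.qa1.eucalyptus-systems.com')
conn = boto.connect_ec2(aws_access_key_id='XXXXXXXXXXXXXXXXXXXX',
                        aws_secret_access_key='XXXXXXXXXXXXXXXXXXXXXXXX',
                        region=region)
for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        print instance.id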

As always, I hope you enjoyed this entry. Please let me know if there are any questions/suggestions/ideas regarding this blog topic.

Enjoy!

 


Cloud Image Management on Eucalyptus: Creating a CentOS 6.6 EMI With ZFS Support

ZFS is a filesystem designed by Sun Microsystems that focuses on data integrity.  What makes this such an attractive filesystem to use in the cloud is that a cloud user can easily do the following:

  • set up an LVM + RAID filesystem for storing large amounts of data (e.g. database information)
  • expand the filesystem by adding more storage (i.e. EBS volumes)
  • back up the filesystem without taking it offline or unmounting it
  • restore the filesystem

This blog entry will focus on how a cloud user can create their own Eucalyptus Machine Image (EMI) that has ZFS support.  The CentOS 6.5 EMI on the Eucalyptus Machine Image Catalog will be used as the base image.

Before Starting…

Before following the steps in this blog, make sure the following is in place:

Once these requirements have been met, everything should be ready to go.

Set Up Base Image/Instance

To begin, follow the ‘Quick Start’ instructions mentioned on the Eucalyptus Machine Image Catalog page.  This will install all the images provided by the catalog.  When the process has finished, list the CentOS 6.5 EMI.  For example:

# euca-describe-images emi-bdcec010 
IMAGE emi-bdcec010 centos-6.5-x86_64-20140917/centos.raw.manifest.xml 094999295155 available public x86_64 machine instance-store hvm

Once the CentOS 6.5 EMI has been listed, launch an instance from the EMI.  For example:

# euca-run-instances -k account2-user11 -t m1.medium emi-bdcec010 
RESERVATION r-a22f0201 325271821652 default
INSTANCE i-b9fccf9f emi-bdcec010 pending account2-user11 0 m1.medium 2014-12-03T22:52:41.522Z Honest monitoring-disabled 0.0.0.0 0.0.0.0 instance-store hvm sg-6ef9907f x86_64
# euca-describe-instances i-b9fccf9f
RESERVATION r-a22f0201 325271821652 default
INSTANCE i-b9fccf9f emi-bdcec010 euca-10-104-7-15.future.future.euca-hasp.cs.prc.eucalyptus-systems.com euca-172-17-248-178.future.internal running account2-user11 0 m1.medium 2014-12-03T22:52:41.522Z Honest monitoring-disabled 10.104.7.15 172.17.248.178 instance-store hvm sg-6ef9907f x86_64

Once the instance is running, it's ready to be customized.

Adding ZFS Support to the Instance

Now that the instance is running, SSH into the instance so the following ZFS repository can be added:

[root@odc-f-13 ~]# ssh -i account2-user11.priv root@euca-10-104-7-15.future.future.euca-hasp.cs.prc.eucalyptus-systems.com
[root@euca-172-17-248-178 ~]# yum localinstall --nogpgcheck https://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@euca-172-17-248-178 ~]# yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el6.noarch.rpm
[root@euca-172-17-248-178 ~]# yum upgrade -y
[root@euca-172-17-248-178 ~]# yum install kernel-devel zfs -y

After all the packages have been installed, reboot the instance:

[root@euca-172-17-248-178 ~]# reboot

Preparing the Instance For EMI Creation

After rebooting the instance, SSH back into the instance and prepare the instance for EMI creation.  First, load the zfs module:

[root@odc-f-13 ~]# ssh -i account2-user11.priv root@euca-10-104-7-15.future.future.euca-hasp.cs.prc.eucalyptus-systems.com
[root@euca-172-17-248-178 ~]# modprobe zfs
[root@euca-172-17-248-178 ~]# lsmod | grep zfs
zfs 1195522 0
zcommon 46278 1 zfs
znvpair 80974 2 zfs,zcommon
zavl 6925 1 zfs
zunicode 323159 1 zfs
spl 266655 5 zfs,zcommon,znvpair,zavl,zunicode

After confirming that the ZFS module is loaded, clear the network udev rules, and ensure PERSISTENT_DHCLIENT is set to “yes” in the /etc/sysconfig/network-scripts/ifcfg-eth0 file:

[root@euca-172-17-248-178 ~]# echo "" > /etc/udev/rules.d/70-persistent-net.rules
[root@euca-172-17-248-178 ~]# echo "" > /lib/udev/rules.d/75-persistent-net-generator.rules
[root@euca-172-17-248-178 ~]# echo "PERSISTENT_DHCLIENT=yes" >> /etc/sysconfig/network-scripts/ifcfg-eth0

Confirm that the instance has been upgraded to CentOS 6.6, then exit the instance.

[root@euca-172-17-248-178 ~]# cat /etc/redhat-release
CentOS release 6.6 (Final)
[root@euca-172-17-248-178 ~]# exit

Create the CentOS 6.6 EMI with ZFS Support

The instance is now ready to be bundled. Bundle the instance using the euca-bundle-instance command. This command was originally intended for bundling Windows instances; however, Eucalyptus extended it to work with Linux instances as well. Use euca-describe-bundle-tasks to monitor the bundling status:

[root@odc-f-13 ~]# euca-bundle-instance --bucket centos6.6-zfs --prefix centos6.6-zfs i-b9fccf9f
BUNDLE bun-b9fccf9f i-b9fccf9f centos6.6-zfs centos6.6-zfs 2014-12-03T23:54:51.644Z 2014-12-03T23:54:51.644Z pending 0 centos6.6-zfs/centos6.6-zfs.manifest.xml
..
[root@odc-f-13 ~]# euca-describe-bundle-tasks
BUNDLE bun-b9fccf9f i-b9fccf9f centos6.6-zfs centos6.6-zfs 2014-12-03T23:54:51.644Z 2014-12-03T23:57:37.517Z complete 0 centos6.6-zfs/centos6.6-zfs.manifest.xml

Once the bundle task completes, register the instance store-backed HVM image using the euca-register command:

[root@odc-f-13 ~]# euca-register -a x86_64 -n centos6.6-zfs centos6.6-zfs/centos6.6-zfs.manifest.xml --virtualization-type hvm 
IMAGE emi-5e63f02c

The custom image has been registered. Now let's test it out.

ZFS Test

To test the image out, we will do the following:

  • Launch an instance from the new EMI
  • Create 5 volumes and attach them to the instance
  • Create a ZFS storage pool and dataset

To launch the instance, use the euca-run-instances command. To create the 5 EBS volumes, use the euca-create-volume command. After the volumes are created, use euca-attach-volume to attach them to the instance. Once the volumes are attached, the output of euca-describe-instances should look similar to the following:

# euca-describe-instances i-0cd3b6b8
RESERVATION r-cf7c5c73 325271821652 default
INSTANCE i-0cd3b6b8 emi-5e63f02c euca-10-104-7-3.future.future.euca-hasp.cs.prc.eucalyptus-systems.com euca-172-17-248-184.future.internal running account2-user11 0 m1.medium 2014-12-04T00:16:52.887Z Honest monitoring-disabled 10.104.7.3 172.17.248.184 instance-store hvm sg-6ef9907f x86_64
BLOCKDEVICE /dev/sdd vol-a23cfb1f 2014-12-04T01:45:59.730Z false
BLOCKDEVICE /dev/sdh vol-a27b75a5 2014-12-04T01:47:31.162Z false
BLOCKDEVICE /dev/sdf vol-2a971204 2014-12-04T01:46:54.575Z false
BLOCKDEVICE /dev/sdg vol-b33e9890 2014-12-04T01:47:13.346Z false
BLOCKDEVICE /dev/sde vol-dcc8b6ac 2014-12-04T01:46:15.011Z false

SSH into the instance and check what block devices are associated with the EBS volumes using the lsblk command:

# ssh -i account2-user11.priv root@euca-10-104-7-3.future.future.euca-hasp.cs.prc.eucalyptus-systems.com
[root@euca-172-17-248-184 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 4.9G 0 disk
├─vda1 252:1 0 500M 0 part /boot
└─vda2 252:2 0 4.4G 0 part
 ├─VolGroup-lv_root (dm-0) 253:0 0 3.9G 0 lvm /
 └─VolGroup-lv_swap (dm-1) 253:1 0 500M 0 lvm [SWAP]
vdb 252:16 0 5.1G 0 disk
vdc 252:32 0 5G 0 disk
vdd 252:48 0 5G 0 disk
vde 252:64 0 5G 0 disk
vdf 252:80 0 5G 0 disk
vdg 252:96 0 5G 0 disk

The EBS volumes are /dev/vdc, /dev/vdd, /dev/vde, /dev/vdf, and /dev/vdg.  Use these devices to create the ZFS storage pool by using the zpool command:

[root@euca-172-17-248-184 ~]# zpool create -f app-pool vdc vdd vde vdf vdg
[root@euca-172-17-248-184 ~]# zpool status
  pool: app-pool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE    READ WRITE CKSUM
        app-pool    ONLINE      0     0     0
          vdc1      ONLINE      0     0     0
          vdd1      ONLINE      0     0     0
          vde1      ONLINE      0     0     0
          vdf1      ONLINE      0     0     0
          vdg1      ONLINE      0     0     0

errors: No known data errors

Next, we need to create a ZFS dataset.  For this example, this instance will end up being a MySQL server, so we will create a dataset for storing the MySQL data.

[root@euca-172-17-248-184 ~]# zfs create app-pool/mysql
[root@euca-172-17-248-184 ~]# zfs list
NAME USED AVAIL REFER MOUNTPOINT
app-pool 152K 24.5G 30K /app-pool
app-pool/mysql 30K 24.5G 30K /app-pool/mysql

The mount point of the dataset can be adjusted by setting the mountpoint option:

[root@euca-172-17-248-184 ~]# zfs set mountpoint=/opt/mysql app-pool/mysql
[root@euca-172-17-248-184 ~]# zfs list
NAME USED AVAIL REFER MOUNTPOINT
app-pool 162K 24.5G 31K /app-pool
app-pool/mysql 30K 24.5G 30K /opt/mysql

That's it! Notice that this required only 2 commands to set up an LVM + RAID-style filesystem, compared to around 7 commands using mdadm, pvcreate, vgcreate, mkfs, mkdir and mount. The instance is now ready to utilize the ZFS filesystem for the MySQL server.

Online Backup Example to OSG Bucket using s3cmd

As mentioned earlier, a slick feature of using ZFS is being able to perform backups online.  This section will show the following:

  • Set up and configure s3cmd
  • Create a ZFS snapshot, and use ZFS send with s3cmd to place the snapshot on an OSG bucket

To get started, in the instance, install the following packages:

[root@euca-172-17-248-184 ~]# yum install -y git python-dateutil.noarch xz

Next, clone the s3tools/s3cmd repository from GitHub:

[root@euca-172-17-248-184 ~]# git clone https://github.com/s3tools/s3cmd.git

If the instance had been launched with an instance profile that assumes a role with OSG (S3) API access, s3cmd would pick up the temporary credentials and token through the Eucalyptus instance metadata service, just as if the instance were launched on AWS EC2. That isn't the case here, so we need to provide the Access Key ID and Secret Key manually:

[root@euca-172-17-248-184 ~]# ./s3cmd/s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: AKIRAGCHAGFE6IIX9BYF
Secret Key: GMdrL97AqcybhfyyxOpNmVUnBtiMenag3ju82L7L

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP and can't be used if you're behind a proxy
Use HTTPS protocol [No]:

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

New settings:
 Access Key: AKIRAGCHAGFE6IIX9BYF
 Secret Key: GMdrL97AqcybhfyyxOpNmVUnBtiMenag3ju82L7L
 Encryption password:
 Path to GPG program: /usr/bin/gpg
 Use HTTPS protocol: False
 HTTP Proxy server name:
 HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] n
Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'

Edit the .s3cfg file to point to the OSG on your Eucalyptus 4.0.2 cloud. For example, change the following:

host_base = s3.amazonaws.com

to

host_base = objectstorage.future.euca-hasp.cs.prc.eucalyptus-systems.com:8773

and

host_bucket = %(bucket)s.s3.amazonaws.com

to

host_bucket = %(bucket)s.objectstorage.future.euca-hasp.cs.prc.eucalyptus-systems.com:8773

Confirm that s3cmd is configured correctly.  For example:

[root@euca-172-17-248-184 ~]# ./s3cmd/s3cmd ls
2014-11-05 21:45 s3://centos-images
2014-12-03 23:54 s3://centos6.6-zfs
2014-10-08 01:50 s3://instance-profile-testing
2014-12-01 22:27 s3://mongodb-snapshots
2014-10-10 20:01 s3://new-ubuntu-bundled-image
2014-09-17 18:31 s3://s3cmd-testing
2014-09-30 01:58 s3://ubuntu-bundled-vol
2014-10-22 14:47 s3://ubuntu-docker-template
2014-10-08 13:39 s3://ubuntu-images
2014-10-02 01:42 s3://ubuntu-trusty-imported-20141001
2014-10-30 18:25 s3://ubuntu-trusty-imported-20141030
2014-10-29 02:18 s3://ubuntu-trusty-server-10282014
2014-10-01 00:28 s3://wrong-s3-url-test

To perform a ZFS snapshot of the app-pool/mysql dataset, do the following:

[root@euca-172-17-248-184 ~]# zfs snapshot app-pool/mysql@wednesday
[root@euca-172-17-248-184 ~]# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
app-pool/mysql@wednesday 0 - 30K -

After creating a bucket for the backup, send the ZFS snapshot to the bucket:

[root@euca-172-17-248-184 ~]# ./s3cmd/s3cmd mb s3://mysql-backups
[root@euca-172-17-248-184 ~]# zfs send app-pool/mysql@wednesday | xz | ./s3cmd/s3cmd put - s3://mysql-backups/mysql-backup-wednesday.img.xz
<stdin> -> s3://mysql-backups/mysql-backup-wednesday.img.xz [part 1, 1440B]
 1440 of 1440 100% in 2s 561.67 B/s done

To confirm that the snapshot is located in the bucket, use s3cmd:

[root@euca-172-17-248-184 ~]# ./s3cmd/s3cmd ls s3://mysql-backups
2014-12-04 02:22 1440 s3://mysql-backups/mysql-backup-wednesday.img.xz

That's all, folks! We have successfully created a CentOS 6.6 EMI with ZFS support. For more information regarding ZFS (and the inspirations for this blog), check out the ZFS on Linux project at zfsonlinux.org.


CoreOS CloudInit Config for Docker Storage Management

CoreOS is a Linux distribution that allows easy deployment of Docker environments. With CoreOS, users have the ability to deploy clustered Docker environments, or deploy zero-downtime applications. Recently, I have blogged about how to deploy and use Docker on Eucalyptus cloud environments. This blog will focus on how to leverage cloud-init configuration with a CoreOS EMI to manage the instance storage used by Docker containers on Eucalyptus 4.0. The same cloud-init configuration file can be used on AWS with CoreOS AMIs, which is yet another example of how Eucalyptus has continued to maintain its focus on being the best on-premise AWS-compatible cloud environment.

Prerequisites

Since Eucalyptus Identity and Access Management (IAM) is very similar to AWS's IAM, at a minimum the user needs to be allowed the Elastic Compute Cloud (EC2) actions used in the steps below (registering images, creating security groups and keypairs, and running and describing instances).

In order to bundle, upload and register the CoreOS image, use the following AWS S3 policy (which can be generated using AWS Policy Generator):

{
  "Statement": [
    {
      "Sid": "Stmt1402675433766",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

For more information about how to use Eucalyptus IAM, please refer to the Eucalyptus 4.0 Administrator documentation regarding access concepts and policy overview.

In addition to the correct IAM policy being applied to the user, here are the other prerequisites that need to be met:

Once these prerequisites are met, the Eucalyptus user will be able to implement the topic for this blog.

CoreOS CloudInit Config for Docker Storage Management

As mentioned in the CoreOS documentation regarding how to use CoreOS with Eucalyptus, the user needs to do the following:

  • Download the CoreOS image
  • Decompress the CoreOS image
  • Bundle, upload and register the image

For example:

# wget -q http://beta.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2

# bunzip2 coreos_production_openstack_image.img.bz2

# qemu-img convert -O raw coreos_production_openstack_image.img coreos_production_openstack_image.raw
# euca-bundle-and-upload-image -i coreos_production_openstack_image.raw -b coreos-production-beta -r x86_64
# euca-register -n coreos-production coreos-production-beta/coreos_production_openstack_image.raw.manifest.xml --virtualization-type hvm
IMAGE emi-98868F66

After the image is registered, create a security group and authorize port 22 for SSH access to the CoreOS instance:

# euca-create-group coreos-testing -d "Security Group for CoreOS Cluster"
GROUP sg-C8E3B168 coreos-testing Security Group for CoreOS Cluster
# euca-authorize -P tcp -p ssh coreos-testing
GROUP coreos-testing
PERMISSION coreos-testing ALLOWS tcp 22 22 FROM CIDR 0.0.0.0/0

Next, create a keypair that will be used to access the CoreOS instance:

# euca-create-keypair coreos > coreos.priv
# chmod 0600 coreos.priv

Now we need to create the cloud-init configuration file. CoreOS implements a subset of the cloud-init config spec with coreos-cloudinit. The cloud-init config below will do the following:

  1. wipe the ephemeral device, /dev/vdb (since the CoreOS EMI is an instance store-backed HVM image, the ephemeral device will be /dev/vdb)
  2. format the ephemeral device with the Btrfs filesystem
  3. mount /dev/vdb to /var/lib/docker (which is the location for images used by the Docker containers)

Create a cloud-init config file (cloud-init-docker-storage.config in the example below) with the following information:

#cloud-config
coreos:
  units:
    - name: format-ephemeral.service
      command: start
      content: |
        [Unit]
        Description=Formats the ephemeral drive
        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/sbin/wipefs -f /dev/vdb
        ExecStart=/usr/sbin/mkfs.btrfs -f /dev/vdb
    - name: var-lib-docker.mount
      command: start
      content: |
        [Unit]
        Description=Mount ephemeral to /var/lib/docker
        Requires=format-ephemeral.service
        Before=docker.service
        [Mount]
        What=/dev/vdb
        Where=/var/lib/docker
        Type=btrfs

Use euca-describe-instance-types to select the desired instance type for the CoreOS instance (in this example, c1.medium will be used).

# euca-describe-instance-types 
INSTANCETYPE Name CPUs Memory (MiB) Disk (GiB)
INSTANCETYPE t1.micro 1 256 5
INSTANCETYPE m1.small 1 512 10
INSTANCETYPE m1.medium 1 1024 10
INSTANCETYPE c1.xlarge 2 2048 10
INSTANCETYPE m1.large 2 1024 15
INSTANCETYPE c1.medium 1 1024 20
INSTANCETYPE m1.xlarge 2 1024 30
INSTANCETYPE m2.2xlarge 2 4096 30
INSTANCETYPE m3.2xlarge 4 4096 30
INSTANCETYPE m2.xlarge 2 2048 40
INSTANCETYPE m3.xlarge 2 2048 50
INSTANCETYPE cc1.4xlarge 8 3072 60
INSTANCETYPE m2.4xlarge 8 4096 60
INSTANCETYPE hi1.4xlarge 8 6144 120
INSTANCETYPE cc2.8xlarge 16 6144 120
INSTANCETYPE cg1.4xlarge 16 12288 200
INSTANCETYPE cr1.8xlarge 16 16384 240
INSTANCETYPE hs1.8xlarge 48 119808 24000

Use euca-run-instances to launch the CoreOS image as an instance, passing the cloud-init config file using the --user-data-file option:

# euca-run-instances -k coreos -t c1.medium emi-98868F66 --user-data-file cloud-init-docker-storage.config
RESERVATION r-FC799274 408396244283 default
INSTANCE i-AF303D5D emi-98868F66 pending coreos 0 c1.medium 2014-06-12T13:38:31.008Z ViciousLiesAndDangerousRumors monitoring-disabled 0.0.0.0 0.0.0.0 instance-store hvm sg-A5133B59
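
As an aside, the same launch can be scripted with boto3 on clouds that support it (HPE Helion Eucalyptus 4.2, for example). A minimal sketch, with a hypothetical endpoint URL:

import boto3

# The endpoint URL is hypothetical; boto3 base64-encodes UserData itself.
client = boto3.client('ec2', region_name='us-east-1',
                      endpoint_url='https://ec2.your-eucalyptus-cloud.example.com/')
with open('cloud-init-docker-storage.config') as f:
    client.run_instances(ImageId='emi-98868F66', InstanceType='c1.medium',
                         KeyName='coreos', MinCount=1, MaxCount=1,
                         UserData=f.read())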

Once the instance reaches the ‘running’ state, SSH into the instance to see the ephemeral storage mounted and formatted correctly:

# euca-describe-instances i-AF303D5D --region account1-user01@
RESERVATION r-FC799274 408396244283 default
INSTANCE i-AF303D5D emi-98868F66 euca-10-104-6-236.bigboi.acme.eucalyptus-systems.com euca-172-18-238-171.bigboi.internal running coreos 0 c1.medium 2014-06-12T13:38:31.008Z ViciousLiesAndDangerousRumors monitoring-disabled 10.104.6.236 172.18.238.17 instance-store hvm sg-A5133B59
# ssh -i coreos.priv core@euca-10-104-6-236.bigboi.acme.eucalyptus-systems.com
CoreOS (beta)
core@localhost ~ $ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 254:0 0 8.3G 0 disk
|-vda1 254:1 0 128M 0 part
|-vda2 254:2 0 64M 0 part
|-vda3 254:3 0 1G 0 part
|-vda4 254:4 0 1G 0 part /usr
|-vda6 254:6 0 128M 0 part /usr/share/oem
`-vda9 254:9 0 6G 0 part /
vdb 254:16 0 11.7G 0 disk /var/lib/docker
core@localhost ~ $ mount
.......
/dev/vda6 on /usr/share/oem type ext4 (rw,nodev,relatime,commit=600,data=ordered
/dev/vdb on /var/lib/docker type btrfs (rw,relatime,space_cache)

The instance is now ready for Docker containers to be created. For some Docker container examples, check out the CoreOS documentation and the Docker documentation.

Enjoy!


Yet Another AWS Compatibility Example Using Vagrant-AWS Plugin and Eucalyptus 3.4 to Deploy Docker

Something that Eucalyptus has been consistent about from the beginning is its stance on being the best open source, on-premise AWS-compatible cloud on the market. This blog entry is just another example demonstrating this compatibility.

My most recent blog entries have been centered around Docker and how to deploy it on Eucalyptus. This entry will show how a user can follow Docker's own documentation on deploying Docker using Vagrant with AWS – but against Eucalyptus. Before getting started, there are some prerequisites that need to be in place.

Prerequisites for Eucalyptus Cloud

In order to get started, the Eucalyptus cloud needs to have an Ubuntu Raring Cloud Image bundled, uploaded and registered before the steps below can be followed; previous entries on this blog cover how to do that.

In addition to the Ubuntu Raring Cloud EMI being available, the user also needs to have the following:

After these prerequisites have been met, the user is ready to set up Vagrant to interact with Eucalyptus.

Setting up Vagrant Environment

To start out, we need to set up the Vagrant environment.  The steps below will get you going:

  1. Install Vagrant from http://www.vagrantup.com/. (optional – package manager can be used here instead)
  2. Install the vagrant-aws plugin:  

    vagrant plugin install vagrant-aws

After Vagrant and the vagrant-aws plugin have been successfully installed, all that is left is to create a Vagrantfile that tells Vagrant how to interact with Eucalyptus. Since the vagrant-aws plugin is being used, and Eucalyptus is compatible with AWS, the configuration will be very similar to that for AWS.

I provided a Vagrantfile on GitHub to help get users up to speed more quickly. To check out the Vagrantfile, just clone the repository from GitHub:

$ git clone https://github.com/hspencer77/eucalyptus-docker-raring.git

After checking out the file, change directory to eucalyptus-docker-raring, and edit the following variables to match your user information for the Eucalyptus cloud that will run the Docker instance:

AWS_ENDPOINT = "<EC2_URL for Eucalyptus Cloud>"
AWS_AMI = "<ID for Ubuntu Raring EMI>"
AWS_ACCESS_KEY = "<Access Key ID>"
AWS_SECRET_KEY = "<Secret Key>"
AWS_KEYPAIR_NAME = "<Key Pair name>"
AWS_INSTANCE_TYPE = '<VM type>'
SSH_PRIVKEY_PATH = "<The path to the private key for the named keypair, for example ~/.ssh/docker.pem>"

Once the Vagrantfile is populated with the correct information, we are now ready to launch the Docker instance.

Launch the Docker Instance Using Vagrant

From here, Vagrant makes this very straight-forward.  There are only two steps to launch the Docker instance.

  1. Run the following command to launch the instance:

     vagrant up --provider=aws
  2. After Vagrant finishes deploying the instance, SSH into the instance:

    vagrant ssh

That's it! Once you are SSHed into the instance, run Docker with the following command:

ubuntu@euca-172-17-120-212:~$ sudo docker

You have successfully launched a Docker instance on Eucalyptus using Vagrant. Since Eucalyptus works with the vagrant-aws plugin, the same Vagrantfile can be used against AWS (of course, the values for the variables above will change). This makes for a perfect dev/test-to-production setup, whether Eucalyptus is used for dev/test and AWS for production, or vice versa.


IAM Roles and Instance Profiles in Eucalyptus 3.4

IAM Roles in AWS are quite powerful – especially when users need instances to access service APIs to implement complex deployments. In the past, this could be accomplished by passing access keys and secret keys through the instance user data service, which can be cumbersome and is quite insecure. With IAM roles, instances can be launched with profiles that allow them to leverage various IAM policies provided by the user to control, in a secure manner, which service APIs instances can access. As part of its constant pursuit of AWS compatibility, one of the new features in Eucalyptus 3.4 is support for IAM roles and instance profiles (and yes, it works with tools like ec2-api-tools, and libraries like boto, which support accessing IAM roles through the instance metadata service).

This blog entry will demonstrate the following:

  • Set up a Eucalyptus IAM role
  • Create a Eucalyptus instance profile
  • Assign an instance profile when launching an instance
  • Leverage the IAM role from within the instance to access a service API (for this example, it will be the EC2 service API on Eucalyptus)

Prerequisites

To use IAM roles on Eucalyptus, the following is required:

  • A Eucalyptus 3.4 cloud – These packages can be downloaded from the Eucalyptus 3.4 nightly repo.  For additional information regarding downloading nightly builds of Eucalyptus, please refer to the Eucalyptus Install Guide (note: anywhere there is a “3.3” reference, replace it with “3.4”)
  • User Credentials – User credentials for an account administrator (admin user), and credentials of a non-admin user of a non-eucalyptus account.
  • Apply an IAM policy for the non-admin user to launch instances, and pass roles to instances launched by that user using euare-useruploadpolicy.  An example policy is below:

    {"Statement": [
     "Effect":"Allow",
     "Action":"iam:PassRole",
     "Resource":"*"
     },
     {
     "Effect":"Allow",
     "Action":"iam:ListInstanceProfiles",
     "Resource":"*"
     },
     {
     "Effect":"Allow",
     "Action":"ec2:*",
     "Resource":"*"
     }]
    }

  • AWS IAM CLI Tools and Euca2ools 3 – The AWS IAM CLI tools are for creating IAM roles and instance profiles; euca2ools is for launching instances. There will be one configuration file for the AWS IAM CLI tools that contains the credentials of the account admin user (for example, account1-admin.config). Euca2ools will only need the credentials of the non-admin user in the euca2ools.ini file (for example, creating a user section called account1-user01).

Creating a Eucalyptus IAM Role

Just as in AWS IAM, iam-rolecreate can be used with Eucalyptus IAM to create IAM roles. To create an IAM role on Eucalyptus, run the following command:

# iam-rolecreate --aws-credential-file account1-admin.config \
  --url http://10.104.10.6:8773/services/Euare/ -r ACCT1-EC2-ACTIONS \
  -s http://10.104.10.6:8773/services/Eucalyptus
# iam-rolelistbypath --aws-credential-file account1-admin.config \
  --url http://10.104.10.6:8773/services/Euare/
arn:aws:iam::735723906303:role/ACCT1-EC2-ACTIONS
IsTruncated: false

This will create an IAM role called ACCT1-EC2-ACTIONS. Next, we need to add an IAM policy to the role. As mentioned earlier, the IAM policy will allow the instance to execute an EC2 API call (in this case, ec2-describe-availability-zones). Use iam-roleuploadpolicy to upload the following IAM policy file:

{
  "Statement": [
    {
      "Sid": "Stmt1381454720306",
      "Action": [
        "ec2:DescribeAvailabilityZones"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

After the IAM policy file has been created (e.g. ec2-describe-az), upload the policy to the role:

# iam-roleuploadpolicy --aws-credential-file account1-admin.config \
  --url http://10.104.10.6:8773/services/Euare/ -p ec2-describe-az \
  -f ec2-describe-az -r ACCT1-EC2-ACTIONS
# iam-rolelistpolicies --aws-credential-file account1-admin.config \
  --url http://10.104.10.6:8773/services/Euare/ -r ACCT1-EC2-ACTIONS -v
ec2-describe-az
{
 "Statement": [
 {
 "Sid": "Stmt1381454720306",
 "Action": [
 "ec2:DescribeAvailabilityZones"
 ],
 "Effect": "Allow",
 "Resource": "*"
 }
 ]
}
IsTruncated: false

As displayed, the IAM role has been created, and an IAM policy has been added to the role successfully. Now it's time to deal with instance profiles.

Create an Instance Profile and Add a Role to the Profile

Instance profiles are used to pass the IAM role to the instance. An IAM role can be associated with many instance profiles, but an instance profile can be associated with only one IAM role. To create an instance profile, use iam-instanceprofilecreate. Since the IAM role ACCT1-EC2-ACTIONS was previously created, the role can be added as the instance profile is created:

# iam-instanceprofilecreate --aws-credential-file account1-admin.config \
  --url http://10.104.10.6:8773/services/Euare/ -r ACCT1-EC2-ACTIONS \
  -s instance-ec2-actions
# iam-instanceprofilelistbypath --aws-credential-file acct1-user1-aws-iam.config \
  --url http://10.104.10.6:8773/services/Euare/
arn:aws:iam::735723906303:instance-profile/instance-ec2-actions
IsTruncated: false

We have successfully created an instance profile and associated an IAM role to it.  All that is left to do is test it out.

Testing out the Instance Profile

Before testing out the instance profile, make sure that the euca2ools.ini file has the correct user and region information for the non-admin user of the account (for this example, the user will be user01).  For information about obtaining the credentials for the user, please refer to the section “Create Credentials” in the Eucalyptus User Guide.

After setting up the euca2ools.ini file, use euca-run-instances to launch an instance with an instance profile. The image used here is the Ubuntu Raring Cloud Image. The keypair account1-user01 was created using euca-create-keypair. To open up SSH access to the instance, use euca-authorize. Create a cloud-init user data file to enable the multiverse repository:

# cat cloud-init.config
#cloud-config
apt_sources:
 - source: deb $MIRROR $RELEASE multiverse
apt_update: true
apt_upgrade: true
disable_root: true
# euca-run-instances --key account1-user1 emi-C25538DA \
  --instance-type m1.large --user-data-file cloud-init.config \
  --iam-profile arn:aws:iam::407837561996:instance-profile/instance-ec2-actions \
  --region account1-user01@
RESERVATION r-CED1435E 407837561996 default
INSTANCE i-72F244CC emi-C25538DA 0.0.0.0 0.0.0.0 pending account1-user01 0 m1.large 2013-10-10T22:08:00.589Z Exodus eki-C9083808 eri-39BC3B99 monitoring-disabled 0.0.0.0 0.0.0.0 instance-store paravirtualized arn:aws:iam::407837561996:instance-profile/instance-ec2-actions
....
# euca-describe-instances --region account1-user01@
RESERVATION r-CED1435E 407837561996 default
INSTANCE i-72F244CC emi-C25538DA 10.104.7.22 172.17.190.157 running account1-user01 0 m1.large 2013-10-10T22:08:00.589Z Exodus eki-C9083808 eri-39BC3B99 monitoring-disabled 10.104.7.22 172.17.190.157 instance-store paravirtualized arn:aws:iam::407837561996:instance-profile/instance-ec2-actions
TAG instance i-72F244CC euca:node 10.105.10.11

Next, SSH into the instance and confirm the instance profile is accessible through the instance metadata service:

[root@odc-c-06 ~]# ssh-keygen -R 10.104.7.22
/root/.ssh/known_hosts updated.
Original contents retained as /root/.ssh/known_hosts.old
[root@odc-c-06 ~]# ssh -i euca-admin.priv ubuntu@10.104.7.22
The authenticity of host '10.104.7.22 (10.104.7.22)' can't be established.
RSA key fingerprint is a1:b2:5d:1a:be:e3:cb:0b:58:5f:bd:c1:e2:1f:e3:2d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.104.7.22' (RSA) to the list of known hosts.
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
Welcome to Ubuntu 13.04 (GNU/Linux 3.8.0-31-generic x86_64)
* Documentation: https://help.ubuntu.com/
.....
Get cloud support with Ubuntu Advantage Cloud Guest:
 http://www.ubuntu.com/business/services/cloud
Use Juju to deploy your cloud instances and workloads:
 https://juju.ubuntu.com/#cloud-raring
0 packages can be updated.
0 updates are security updates.
ubuntu@ip-172-17-190-157:~$ curl http://169.254.169.254/latest/meta-data/
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
iam/
instance-id
instance-type
kernel-id
local-hostname
local-ipv4
mac
placement/
public-hostname
public-ipv4
public-keys/
ramdisk-id
reservation-id
security-groups
### check for IAM role temporary security credentials ###
ubuntu@ip-172-17-190-157:~$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ACCT1-EC2-ACTIONS
{
 "Code": "Success",
 "LastUpdated": "2013-10-11T18:07:37Z",
 "Type": "AWS-HMAC",
 "AccessKeyId": "AKIYW7FDRV8ZG5HIM91D",
 "SecretAccessKey": "sgVOgLJoc3wXjI5mu7yrYXI3NHtiq18cJuOT7Mwh",
 "Token": "ZXVjYQABQe4E4f2NnIsnvT/5jfpauKh3dClPVwPEoMepqk0lViODSgk4axiQb9rRQyU7Qnhvxb22wO201EoT6Ay/
rg+1i3+2xQLfbkh7kqy4CmqdGM3Q7LNI1dFPSz332E6us5BsSdHpiw3VGLyMLnDAkV8BMi+6lKE5eaJ+hpFI/
KXEVPSNkFMI9R+9bKPIFZvceiBE1w+kAEJC/18uCpZ0kSNy2iFBYcZ+zTwrYTgnsqNYcEIuWzEh4z1WIA==",
 "Expiration": "2013-10-11T19:07:37Z"
}

Install the ec2-api-tools from the Ubuntu Raring multiverse repository.

ubuntu@ip-172-17-190-157:~$ sudo apt-get update
Get:1 http://security.ubuntu.com raring-security Release.gpg [933 B]
Hit http://Exodus.clouds.archive.ubuntu.com raring Release.gpg
......
Ign http://Exodus.clouds.archive.ubuntu.com raring-updates/main Translation-en_US
Ign http://Exodus.clouds.archive.ubuntu.com raring-updates/multiverse Translation-en_US
Ign http://Exodus.clouds.archive.ubuntu.com raring-updates/universe Translation-en_US
Fetched 8,015 kB in 19s (421 kB/s)
Reading package lists... Done
ubuntu@ip-172-17-190-157:~$ sudo apt-get install ec2-api-tools
Reading package lists... Done
The following extra packages will be installed:
 ca-certificates-java default-jre-headless fontconfig-config
 icedtea-7-jre-jamvm java-common libavahi-client3 libavahi-common-data 
libavahi-common3 libcups2 libfontconfig1 libjpeg-turbo8 libjpeg8 liblcms2-2
 libnspr4 libnss3 libnss3-1d openjdk-7-jre-headless openjdk-7-jre-lib 
ttf-dejavu-core tzdata-java
......
Adding debian:TDC_Internet_Root_CA.pem
Adding debian:SecureTrust_CA.pem
done.
Setting up openjdk-7-jre-lib (7u25-2.3.10-1ubuntu0.13.04.2) ...
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
Processing triggers for ca-certificates ...
Updating certificates in /etc/ssl/certs... 0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d....
done.
done.

Finally, run ec2-describe-availability-zones, using the -U option to point to the Eucalyptus cloud being used:

ubuntu@ip-172-17-190-157:~$ ec2-describe-availability-zones \
  -U http://10.104.10.6:8773/services/Eucalyptus/
AVAILABILITYZONE Legend 10.104.1.185 arn:euca:eucalyptus:Legend:cluster:IsThisLove/
AVAILABILITYZONE Exodus 10.104.10.22 arn:euca:eucalyptus:Exodus:cluster:NaturalMystic/

That's it! Notice that there was no need to pass any access key or secret key; all of that information is retrieved from the instance metadata service.
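
To illustrate, here is a minimal boto sketch of the same call from within the instance. With no keys supplied, boto falls back to the temporary credentials served by the instance metadata service:

import boto
from boto.ec2.regioninfo import RegionInfo

# No access/secret keys are supplied - boto falls back to the temporary
# IAM role credentials provided by the instance metadata service.
region = RegionInfo(name='eucalyptus', endpoint='10.104.10.6')
conn = boto.connect_ec2(region=region, is_secure=False, port=8773,
                        path='/services/Eucalyptus')
for zone in conn.get_all_zones():
    print zone.name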

IAM roles and instance profiles are quite powerful. Great use cases include enabling CloudWatch metrics and deploying ELBs on Eucalyptus.

I hope this has been helpful.  As always, any questions/suggestions/ideas/feedback are greatly appreciated.

 
