CoreOS CloudInit Config for Docker Storage Management

CoreOS is a Linux distribution designed for easy deployment of Docker environments. With CoreOS, users can deploy clustered Docker environments or run zero-downtime applications. Recently, I blogged about how to deploy and use Docker on Eucalyptus cloud environments. This post focuses on how to leverage a cloud-init configuration with a CoreOS EMI to manage the instance storage used by Docker containers on Eucalyptus 4.0. The same cloud-init configuration file can be used on AWS with CoreOS AMIs, which is yet another example of how Eucalyptus has continued to maintain its focus on being the best on-premise AWS-compatible cloud environment.

Prerequisites

Since Eucalyptus Identity and Access Management (IAM) is very similar to AWS IAM, a handful of Elastic Compute Cloud (EC2) actions need to be allowed at a minimum – registering images, creating security groups and keypairs, and running and describing instances.
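
A sketch of a policy granting those actions follows; the action names are assumptions to verify against your deployment (particularly ec2:DescribeInstanceTypes, which is Eucalyptus-specific):

{
  "Statement": [
    {
      "Sid": "StmtCoreOSEc2Example",
      "Action": [
        "ec2:RegisterImage",
        "ec2:DescribeImages",
        "ec2:CreateSecurityGroup",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateKeyPair",
        "ec2:RunInstances",
        "ec2:DescribeInstances",
        "ec2:DescribeInstanceTypes"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}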

In order to bundle, upload and register the CoreOS image, use the following AWS S3 policy (which can be generated using AWS Policy Generator):

{
  "Statement": [
    {
      "Sid": "Stmt1402675433766",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

For more information about how to use Eucalyptus IAM, please refer to the Eucalyptus 4.0 Administrator documentation regarding access concepts and policy overview.

In addition to the correct IAM policy being applied to the user, a few other prerequisites need to be met.

Once these prerequisites are met, the Eucalyptus user will be able to follow the steps in this post.

CoreOS CloudInit Config for Docker Storage Management

As mentioned in the CoreOS documentation regarding how to use CoreOS with Eucalyptus, the user needs to do the following:

  • Download the CoreOS image
  • Decompress the CoreOS image
  • Bundle, upload and register the image

For example:

# wget -q http://beta.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2

# bunzip2 coreos_production_openstack_image.img.bz2

# qemu-img convert -O raw coreos_production_openstack_image.img coreos_production_openstack_image.raw
# euca-bundle-and-upload-image -i coreos_production_openstack_image.raw -b coreos-production-beta -r x86_64
# euca-register -n coreos-production coreos-production-beta/coreos_production_openstack_image.raw.manifest.xml --virtualization-type hvm
IMAGE emi-98868F66

After the image is registered, create a security group and authorize port 22 for SSH access to the CoreOS instance:

# euca-create-group coreos-testing -d "Security Group for CoreOS Cluster"
GROUP sg-C8E3B168 coreos-testing Security Group for CoreOS Cluster
# euca-authorize -P tcp -p ssh coreos-testing
GROUP coreos-testing
PERMISSION coreos-testing ALLOWS tcp 22 22 FROM CIDR 0.0.0.0/0

Next, create a keypair that will be used to access the CoreOS instance:

# euca-create-keypair coreos > coreos.priv
# chmod 0600 coreos.priv

Now we need to create the cloud-init configuration file. CoreOS implements a subset of the cloud-init config spec with coreos-cloudinit. The cloud-init config below will do the following:

  1. wipe the ephemeral device – /dev/vdb (since the CoreOS EMI is an instance store-backed HVM image, the ephemeral device will be /dev/vdb)
  2. format the ephemeral device with the BTRFS filesystem
  3. mount /dev/vdb to /var/lib/docker (the location for images used by Docker containers)

Create a cloud-init-docker-storage.config file with the following information:

#cloud-config
coreos:
  units:
    - name: format-ephemeral.service
      command: start
      content: |
        [Unit]
        Description=Formats the ephemeral drive
        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/sbin/wipefs -f /dev/vdb
        ExecStart=/usr/sbin/mkfs.btrfs -f /dev/vdb
    - name: var-lib-docker.mount
      command: start
      content: |
        [Unit]
        Description=Mount ephemeral to /var/lib/docker
        Requires=format-ephemeral.service
        Before=docker.service
        [Mount]
        What=/dev/vdb
        Where=/var/lib/docker
        Type=btrfs

Use euca-describe-instance-types to select the desired instance type for the CoreOS instance (in this example, c1.medium will be used).

# euca-describe-instance-types 
INSTANCETYPE Name CPUs Memory (MiB) Disk (GiB)
INSTANCETYPE t1.micro 1 256 5
INSTANCETYPE m1.small 1 512 10
INSTANCETYPE m1.medium 1 1024 10
INSTANCETYPE c1.xlarge 2 2048 10
INSTANCETYPE m1.large 2 1024 15
INSTANCETYPE c1.medium 1 1024 20
INSTANCETYPE m1.xlarge 2 1024 30
INSTANCETYPE m2.2xlarge 2 4096 30
INSTANCETYPE m3.2xlarge 4 4096 30
INSTANCETYPE m2.xlarge 2 2048 40
INSTANCETYPE m3.xlarge 2 2048 50
INSTANCETYPE cc1.4xlarge 8 3072 60
INSTANCETYPE m2.4xlarge 8 4096 60
INSTANCETYPE hi1.4xlarge 8 6144 120
INSTANCETYPE cc2.8xlarge 16 6144 120
INSTANCETYPE cg1.4xlarge 16 12288 200
INSTANCETYPE cr1.8xlarge 16 16384 240
INSTANCETYPE hs1.8xlarge 48 119808 24000

Use euca-run-instances to launch the CoreOS image as an instance, passing the cloud-init-docker-storage.config file using the --user-data-file option:

# euca-run-instances -k coreos -t c1.medium emi-98868F66 --user-data-file cloud-init-docker-storage.config
RESERVATION r-FC799274 408396244283 default
INSTANCE i-AF303D5D emi-98868F66 pending coreos 0 c1.medium 2014-06-12T13:38:31.008Z ViciousLiesAndDangerousRumors monitoring-disabled 0.0.0.0 0.0.0.0 instance-store hvm sg-A5133B59

Once the instance reaches the ‘running’ state, SSH into the instance to see the ephemeral storage mounted and formatted correctly:

# euca-describe-instances i-AF303D5D --region account1-user01@
RESERVATION r-FC799274 408396244283 default
INSTANCE i-AF303D5D emi-98868F66 euca-10-104-6-236.bigboi.acme.eucalyptus-systems.com euca-172-18-238-171.bigboi.internal running coreos 0 c1.medium 2014-06-12T13:38:31.008Z ViciousLiesAndDangerousRumors monitoring-disabled 10.104.6.236 172.18.238.17 instance-store hvm sg-A5133B59
# ssh -i coreos.priv core@euca-10-104-6-236.bigboi.acme.eucalyptus-systems.com
CoreOS (beta)
core@localhost ~ $ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 254:0 0 8.3G 0 disk
|-vda1 254:1 0 128M 0 part
|-vda2 254:2 0 64M 0 part
|-vda3 254:3 0 1G 0 part
|-vda4 254:4 0 1G 0 part /usr
|-vda6 254:6 0 128M 0 part /usr/share/oem
`-vda9 254:9 0 6G 0 part /
vdb 254:16 0 11.7G 0 disk /var/lib/docker
core@localhost ~ $ mount
.......
/dev/vda6 on /usr/share/oem type ext4 (rw,nodev,relatime,commit=600,data=ordered)
/dev/vdb on /var/lib/docker type btrfs (rw,relatime,space_cache)
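
You can also ask systemd whether the two units from the cloud-config ran cleanly (a quick sanity check, not strictly necessary):

core@localhost ~ $ systemctl status format-ephemeral.service var-lib-docker.mount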

The instance is now ready for Docker containers to be created. For some Docker container examples, check out the CoreOS documentation and the Docker documentation.
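
As a quick smoke test (the image and command here are arbitrary), you can confirm that container data lands on the BTRFS-backed mount:

core@localhost ~ $ docker run busybox echo "hello from BTRFS-backed storage"
core@localhost ~ $ sudo btrfs filesystem df /var/lib/docker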

Enjoy!


OpenLDAP Sandbox in the Clouds

Background

I really enjoy OpenLDAP. I think folks don't fully appreciate its power: its robustness (e.g. the ability to use multiple back-ends), speed, and efficiency.

I think it's important to have sandboxes to test various technologies. The "cloud" is the best place for this. To test the latest builds provided by OpenLDAP (via git), I created a cloud-init script that allows me to configure, build, and install an OpenLDAP sandbox environment in the cloud (on-premise and/or public). This script has been tested on AWS and Eucalyptus using Ubuntu Precise 12.04 LTS. This blog entry is a complement to my past blog regarding overlays, MDB, and OpenLDAP.

Lean Requirements – Script, Image, and Cloud

When thinking about this setup, there were three goals in mind:

  1. Ease of configuration – this is why cloud-init was used. It's very powerful for bootstrapping instances as they boot. You could use Puppet, Chef, or others (e.g. Salt Stack, Juju, etc.), but I decided to go with cloud-init. The script does the following (a rough sketch follows this list):
    • Downloads all the prerequisites for building OpenLDAP from source, including euca2ools.
    • Downloads OpenLDAP using Git.
    • Sets up ephemeral storage to be the installation point for OpenLDAP (e.g. configuration, storage, etc.).
    • Adds information to /etc/rc.local to make sure the ephemeral storage gets re-mounted on instance reboots and the hostname is set.
    • Configures, builds, and installs OpenLDAP.
  2. Cloud image that is ready to go – Ubuntu has done a wonderful job with their cloud images.  They have made it really easy to access them on AWS. These images can be used on Eucalyptus as well.
  3. Public and Private Cloud Deployment – Since Eucalyptus follows the AWS EC2 API very closely, it makes it really easy to test on both AWS and Eucalyptus.
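
For illustration only, a stripped-down cloud-config in the same spirit might look like the following. The package list is abbreviated and the ephemeral device name is assumed from the df output shown later in this post; the real recipe lives in the repository linked in the steps below.

#cloud-config
apt_update: true
packages:
 - build-essential
 - git
 - libssl-dev
runcmd:
 # re-purpose the ephemeral disk as the OpenLDAP install point (device name assumed)
 - [ sh, -c, "umount /dev/vda2 || true; mkfs.ext4 /dev/vda2; mkdir -p /opt/openldap; mount /dev/vda2 /opt/openldap" ]
 # fetch the latest OpenLDAP source via git and build it
 - [ sh, -c, "git clone git://git.openldap.org/openldap.git /tmp/openldap" ]
 - [ sh, -c, "cd /tmp/openldap && ./configure --prefix=/opt/openldap && make depend && make && make install" ]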

Now that the background has been covered a bit, the next section will cover deploying the sandbox on AWS and/or Eucalyptus.

Deploy the Sandbox

To set up the sandbox, use the following steps:

  1. Make sure you have an account on AWS and/or Eucalyptus (and that the correct AWS/Eucalyptus IAM policies are in place so that you can bundle, upload, and register images to AWS S3 and Eucalyptus Walrus).
  2. Make sure you have access to a registered AMI/EMI that runs Ubuntu Precise 12.04 LTS.  *NOTE* If you are using AWS, you can just go to the Ubuntu Precise Cloud Image download page, and select the AMI in the region that you have access to.
  3. Download the openldap cloud-init recipe from Eucalyptus/recipes repository.
  4. Download and install the latest Euca2ools (I used the command-line tool euca-run-instances to run these instances).
  5. After you have downloaded your credentials from AWS/Eucalyptus, define your global environments by either following the documentation for AWS EC2 or the documentation for Eucalyptus.
  6. Use euca-run-instances with the --user-data-file option to launch the instance:

    euca-run-instances -k hspencer.pem ....
     --user-data-file cloud-init-openldap.config [AMI | EMI]

After the instance is launched, ssh into the instance, and you will see something similar to the following:

ubuntu@euca-10-106-69-149:~$ df -ah
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 1.4G 1.2G 188M 86% /
proc 0 0 0 - /proc
sysfs 0 0 0 - /sys
none 0 0 0 - /sys/fs/fuse/connections
none 0 0 0 - /sys/kernel/debug
none 0 0 0 - /sys/kernel/security
udev 494M 12K 494M 1% /dev
devpts 0 0 0 - /dev/pts
tmpfs 200M 232K 199M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 498M 0 498M 0% /run/shm
/dev/vda2 8.0G 159M 7.5G 3% /opt/openldap

Your sandbox environment is now set up. From here, just follow the instructions in the OpenLDAP Administrator's Guide on configuring your OpenLDAP server, or continue from the "Setup – OLC and MDB" section located in my previous blog. *NOTE* As you configure your OpenLDAP server, make sure to use euca-authorize to control access to your instance.
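
For example, to open the standard LDAP port only to a trusted network (the group name and CIDR here are illustrative):

    euca-authorize -P tcp -p 389 -s 192.168.1.0/24 openldap-sandbox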

Enjoy!


The Collaboration: Eustore with Varnish and Eucalyptus Walrus

In my last blog, I covered three ways Eucalyptus Systems uses the Varnish-Walrus architecture. This blog will cover how eustore takes advantage of this architecture.

Overview

Eustore is an image management tool developed by David Kavanagh. Its primary goal is to automate image bundling, uploading, and registration. The two commands provided by eustore are eustore-describe-images and eustore-install-image.


# eustore-describe-images --help
Usage: eustore-describe-images [options]

lists images from Eucalyptus.com

Options:
-h, --help show this help message and exit
-v, --verbose display more information about images

Standard Options:
-D, --debug Turn on all debugging output
--debugger Enable interactive debugger on error
-U URL, --url=URL Override service URL with value provided
--region=REGION Name of the region to connect to
-I ACCESS_KEY_ID, --access-key-id=ACCESS_KEY_ID
Override access key value
-S SECRET_KEY, --secret-key=SECRET_KEY
Override secret key value
--version Display version string

# eustore-install-image --help
Usage: eustore-install-image [options]

downloads and installs images from Eucalyptus.com

Options:
-h, --help show this help message and exit
-i IMAGE_NAME, --image_name=IMAGE_NAME
name of image to install
-b BUCKET, --bucket=BUCKET
specify the bucket to store the images in
-k KERNEL_TYPE, --kernel_type=KERNEL_TYPE
specify the type you're using [xen|kvm]
-d DIR, --dir=DIR specify a temporary directory for large files
--kernel=KERNEL Override bundled kernel with one already installed
--ramdisk=RAMDISK Override bundled ramdisk with one already installed

Standard Options:
-D, --debug Turn on all debugging output
--debugger Enable interactive debugger on error
-U URL, --url=URL Override service URL with value provided
--region=REGION Name of the region to connect to
-I ACCESS_KEY_ID, --access-key-id=ACCESS_KEY_ID
Override access key value
-S SECRET_KEY, --secret-key=SECRET_KEY
Override secret key value
--version Display version string

By default, eustore uses the images located on emis.eucalyptus.com, but it can be configured to use other image locations. Eustore utilizes two components for image management:

  • JSON configuration file – catalog.json
  • Location for tar-gzipped EMIs

The images found on emis.eucalyptus.com and the JSON configuration file associated with those images are all located in unique Walrus buckets. The images shown in this blog are in the starter-emis bucket. The ACLs for these buckets allow for the objects to be publicly accessible. For more information on Walrus ACLs, please reference the section “Access Control List (ACL) Overview” in the AWS S3 Developer’s Guide.

The two commands mentioned above – eustore-describe-images and eustore-install-image – significantly cut down the number of commands the user has to run. Without eustore, a user would need to run three commands (euca-bundle-image, euca-upload-bundle, and euca-register) for each of the kernel, ramdisk, and raw disk image of an EMI – nine commands in total.
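
To make that concrete, here is a sketch of the manual path versus eustore (the file, bucket, and manifest names are illustrative):

# without eustore: three commands each for the kernel, ramdisk, and root image
euca-bundle-image -i vmlinuz-2.6.28-11-generic --kernel true
euca-upload-bundle -b my-emis -m /tmp/vmlinuz-2.6.28-11-generic.manifest.xml
euca-register my-emis/vmlinuz-2.6.28-11-generic.manifest.xml
# ...repeat for the ramdisk (--ramdisk true) and the machine image itself...

# with eustore: one command covers all nine steps
eustore-install-image -i centos-x86_64-20120114 -b my-emis -k kvm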

The Collaboration

eustore-describe-images

When eustore-describe-images is run, the following occurs:

[Diagram: eustore-describe-images workflow]
  1. eustore-describe-images requests the JSON file (stored in a Walrus bucket) from emis.eucalyptus.com (the varnishd instance).
  2. ** If the JSON file – catalog.json – is not present in emis.eucalyptus.com's cache, it is pulled from the Walrus bucket.
  3. Data from the JSON file is returned to eustore-describe-images.


    # eustore-describe-images
    ....
    centos-x86_64-20120114 centos x86_64 2012.1.14 CentOS 5 1.3GB root, Single Kernel
    centos-lg-i386-20110702 centos i386 2011.07.02 CentOS 5 4.5GB root, Hypervisor-Specific Kernels
    centos-lg-x86_64-20110702 centos x86_64 2011.07.02 CentOS 5 4.5GB root, Hypervisor-Specific Kernels
    centos-lg-x86_64-20111228 centos x86_64 2011.12.28 CentOS 5 4.5GB root, Single Kernel
    centos-lg-x86_64-20120114 centos x86_64 2012.1.14 CentOS 5 4.5GB root, Single Kernel
    debian-i386-20110702 debian i386 2011.07.02 Debian 6 1.3GB root, Hypervisor-Specific Kernels
    debian-x86_64-20110702 debian x86_64 2011.07.02 Debian 6 1.3GB root, Hypervisor-Specific Kernels
    debian-x86_64-20120114 debian x86_64 2012.1.14 Debian 6 1.3GB root, Single Kernel
    .....

eustore-install-image

eustore-install-image follows the same steps as eustore-describe-images, except it uses the information stored in the JSON file for each EMI.  The following information is present for each EMI:


{
  "images": [
    {
      "name": "centos-x86_64-20120114",
      "description": "CentOS 5 1.3GB root, Single Kernel",
      "version": "2012.1.14",
      "architecture": "x86_64",
      "os": "centos",
      "url": "starter-emis/euca-centos-2012.1.14-x86_64.tgz",
      "date": "20120114150503",
      "recipe": "centos-based",
      "stamp": "28fc-4826",
      "contact": "images@lists.eucalyptus.com"
    },
    .....
    {
      "name": "debian-x86_64-20120114",
      "description": "Debian 6 1.3GB root, Single Kernel",
      "version": "2012.1.14",
      "architecture": "x86_64",
      "os": "debian",
      "url": "starter-emis/euca-debian-2012.1.14-x86_64.tgz",
      "date": "20120114152138",
      "recipe": "debian-based",
      "stamp": "3752-f34a",
      "contact": "images@lists.eucalyptus.com"
    },
    ....
  ]
}

When eustore-install-image -i centos-x86_64-20120114 -b centos_x86-64 is executed, the following occurs:

[Diagram: eustore-install-image workflow]
  1. eustore-install-image requests the image (a tar-gzipped file) from emis.eucalyptus.com.
  2. ** If the image (euca-centos-2012.1.14-x86_64.tgz) is not in the varnish cache, varnishd (emis.eucalyptus.com) pulls it from the starter-emis bucket and stores it in ephemeral space to serve future requests (see the direct-fetch example after this list).
  3. Once the tar-gzipped file is downloaded, eustore-install-image bundles, uploads, and registers the kernel (EKI), ramdisk (ERI), and image (EMI).
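
Because the buckets are public, the same object can also be fetched directly over HTTP through the varnish front end (the URL below is assembled from the catalog's url field and is illustrative):

curl -O http://emis.eucalyptus.com/starter-emis/euca-centos-2012.1.14-x86_64.tgz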

As demonstrated above, eustore definitely makes image management efficient and user-friendly. Stay tuned for upcoming blogs discussing more on how the Varnish-Walrus architecture is utilized.

For any questions, concerns, and/or suggestions, please email images@lists.eucalyptus.com or community@lists.eucalyptus.com. And as always, you can respond with comments here as well. :-)

Enjoy!

** This step won't happen if the contents are already cached on emis.eucalyptus.com.


Fun with Varnish and Walrus on Eucalyptus, Part 2

A few weeks ago, I posted a blog entitled "Fun with Varnish and Walrus on Eucalyptus, Part 1". This post follows up on that blog to showcase a few production use cases that utilize the Varnish-Walrus architecture built on top of Eucalyptus. *NOTE* This architecture can also be leveraged using AWS EC2 and S3 – one of the many benefits of Eucalyptus being AWS compatible.

The tools and web pages that take advantage of the Varnish-Walrus architecture on Eucalyptus are the following:

Eustore uses the Varnish-Walrus architecture by pulling images through emis.eucalyptus.com (the varnish instance). The data for each of the images is stored in a JSON file located in a Walrus bukkit. For more information about Eustore, please refer to David Kavanagh's Eustore blog.

The Starter Eucalyptus Machine Images (EMIs) page uses the Varnish-Walrus architecture to let users download any of the available EMIs.

[Image: Starter Eucalyptus Machine Images (EMIs) page]
Since emis.eucalyptus.com is a varnish instance, you can query its logs to get statistics on how many times each EMI has been downloaded.
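
For instance, a rough way to tally downloads per tarball from the varnish access log (tooling and log format vary by varnish version, so treat this as a sketch):

varnishncsa -d | awk '{print $7}' | grep '\.tgz$' | sort | uniq -c | sort -rn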

The Eucalyptus Machine Images page is a static web page for emis.eucalyptus.com, built from HTML, CSS, and jQuery – all stored in a Walrus bukkit.

[Image: Eucalyptus Machine Images page – http://emis.eucalyptus.com]
The web page for emis.eucalyptus.com definitely shows the power of using Walrus as a data store. It accesses the same JSON file used by Eustore; we did this to ensure consistency across all tools and web pages that provide access to the EMIs we create.

Hope you enjoyed this introduction to the use cases here at Eucalyptus. Stay tuned for the follow-up blogs that provide a more in-depth view of how each use case utilizes our Varnish-Walrus infrastructure.

Thanks to David Kavanagh and Ian Struble for helping in this endeavor. This blog would have been out sooner, but I was busy at Scale 10x working the booth for Eucalyptus Systems. To see the fun we had at the conference, check out the following tumblr posts:

Till next time…

[1] Eustore was designed by David Kavanagh, one of the many great colleagues I work with at Eucalyptus Systems. It initially started as a project idea that grew out of various image management needs discussed in the Eucalyptus Image Management group.


Fun with Varnish and Walrus on Eucalyptus, Part 1

After getting some free time to put together a high-level diagram of the Varnish/Walrus setup we are using at Eucalyptus Systems, I decided to use it as the subject of my first technical blog.

The Inspiration

Here at Eucalyptus Systems, we are really big on "drinking our own champagne". We are in the process of migrating everyday enterprise services to utilize Eucalyptus. My good friend and co-worker Graziano got our team started down this path with his blog posts on Drinking Champagne and Planet Eucalyptus.

The Problem

We needed to migrate storage of various tar-gzipped files from a virtual machine to an infrastructure running on Eucalyptus. Since Eucalyptus Walrus is compatible with Amazon's S3, it serves as a great service for storing tons of static content and data. Walrus – just like S3 – also has ACLs for stored data objects.

With all the coolness of storing data in Walrus, we needed to figure out a way to lessen the network load on Walrus caused by multiple HTTP GET requests. This is where Varnish comes to the rescue…

The Solution

[Diagram: Varnishd-Walrus architecture]

Above is the architectural diagram of how Varnishd can be set up as a caching service for objects stored in Walrus buckets.  Varnish is primarily developed as an HTTP accelerator.  In this setup, we use varnish to accomplish the following:

  • caching bucket objects requested through HTTP
  • custom URL naming for buckets
  • granular access control for Walrus buckets

Bucket Objects in Walrus

We upload the bucket objects using a patched version of s3cmd.  To allow the objects to be accessed by the varnish caching instance, we use s3cmd as follows:

  • Create the bucket:

    s3cmd mb s3://bucket-name

  • Upload the object and make sure it's publicly accessible:

    s3cmd put --acl-public --guess-mime-type object s3://bucket-name/object

And that's it. All the other configuration is done on the varnish end. Now, on to varnish…

Varnishd Setup

The instance running varnish is running Debian 6.0. We process any request for specific bucket objects that comes to the instance, and pull from the bucket where the object is located. The instance is customized to take in scripts through the user-data/user-data-file options that can be used by the euca-run-instances command. The rc.local script that enables this option in the image can be found here. The script we use for this varnish setup – along with other deployment scripts – can be found here on projects.eucalyptus.com.
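
To give a flavor of the varnish side, here is a minimal VCL sketch of the idea. The host, port, and bucket names are assumptions, and the syntax targets the Varnish 2.x that shipped with Debian 6.0:

backend walrus {
  .host = "walrus.example.com";  # Walrus endpoint (assumed)
  .port = "8773";                # default Walrus port on Eucalyptus
}

sub vcl_recv {
  # rewrite the friendly URL onto the Walrus bucket path (bucket name illustrative)
  set req.url = "/services/Walrus/starter-emis" req.url;
  set req.backend = walrus;
}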

That's it! We can bring up another instance quickly without a problem, since it's scripted. :-) We also use Walrus to store our configurations. For extra security, we don't make those objects public; we use s3cmd to download the objects, then move the configuration files to the correct location in the instance.

We hope this setup inspires other ideas that can be implemented with Eucalyptus. Please feel free to give any feedback; we are always open to improving things here at Eucalyptus. Enjoy, and be on the lookout for a follow-up post discussing how to add load balancing to this setup.
