Deploying the Eucalyptus Management Console on Eucalyptus

hspencer77:

More Eucalyptus CloudFormation goodness... this time discussing how to deploy the Eucalyptus Management Console. Solid work here!

Originally posted on Coders Like Us:

The Eucalyptus Management Console can be deployed in a variety of ways, but we’d obviously like it to be scalable, highly available and responsive. Last summer, I wrote up the details of deploying the console with Auto Scaling coupled with Elastic Load Balancing. The CloudFormation service ties this all together by capturing the details of how these services are used together in one template. This post describes an example that works well on Eucalyptus (and AWS) and may guide you with your own application as well.

Let’s tackle a fairly simple deployment for the first round. For now, we’ll set up a LaunchConfig, AS group and ELB. We’ll also set up a security group for the AS group and allow access only to the ELB. Finally, we’ll set up a self-signed SSL cert for the console. In another post, we’ll add…



EDGE Networking in Eucalyptus

Originally posted on A sysadmin born in the cloud:

We are just about to have Eucalyptus 4.1 released with a VPC implementation and some new features, but I think it is quite important to take a little time to dig into EDGE networking and networking modes in general with Eucalyptus.

For years, we mainly used three modes:

  • MANAGED
    • AWS Security Groups supported and VLANs created to give L2 separation
    • All traffic goes via the Cluster Controller for cross-groups communication
    • Requires a DHCP-clean environment
  • MANAGED-NOVLAN
    • AWS SG supported but no L2 separation
    • All traffic goes via the Cluster Controller for cross-groups communication
    • Requires a DHCP-clean environment
  • SYSTEM
    • The customer’s DHCP server assigns IP addresses to instances
    • No AWS SG compatibility

As we can see, if you needed AWS compatibility you also had to deal with the CC handling all of the instances’ traffic. But the problem is that you also had to dedicate a physical machine to…



The Case for a Policy Decision Point inside the LDAP Server

hspencer77:

Great insight into the importance of Policy Decision Points with regard to security processes.

Originally posted on iamfortress:

Why on earth would you do that?

We all understand that runtime characteristics change as processes get moved around the network.  Having problems with network I/O?  Move the database daemon to the same tier as the client process.  Problems with file I/O?  Store the data in memory as opposed to on disk.  Etc…

These same techniques apply to system architecture and security.  The location of policy enforcement, decision, and database processes hugely impacts the overall welfare of your organization’s computational systems.

With these kinds of thoughts, what happens when security processes get moved around the network?

But first, we must define the security processes:

1. Policy Enforcement Point (PEP)

The gatekeeper component.  It enforces the security policy on the client program.  PEPs come in many shapes and sizes.  Oftentimes it’s a small block of code that gets embedded directly into a client program.

2. Database (DB)

The database is used by PDPs to house…



Adding Eucalyptus Load Balancer Access Logging for Eucalyptus Cloud Users

Preface

Eucalyptus continues to strive to be the best on-premises AWS-compatible Infrastructure as a Service (IaaS) platform.  One of the great things about Eucalyptus being an open source platform is that if there is an AWS feature that any cloud administrator or developer wants to add, they have the ability to do it.  This blog entry will cover how to give cloud users access to the Eucalyptus Load Balancer access logs – similar to how this is accomplished with the Amazon Web Services Elastic Load Balancing service.

Before we dive in, I would like to give special thanks to the following members of the Eucalyptus Engineering Team.  Without their hard work, this blog would not be possible:

Special thanks to these individuals for their continued contributions to the Eucalyptus software.

Overview

Currently, when a cloud user launches a Eucalyptus Load Balancer, they will see something similar to the following:

# eulb-create-lb hasp-euca-lb --listener "lb-port=80, protocol=http, instance-port=8888, instance-protocol=http" --availability-zone Honest
# eulb-describe-lbs
LOAD_BALANCER hasp-euca-elb hasp-euca-elb-325271821652.eulb.future.euca-hasp.cs.prc.eucalyptus-systems.com 2014-12-11T23:34:35.397Z

Notice the DNS name of the load balancer.  It has the following format:

{load balancer name}-{Account ID}.{Load Balancer DNS Subdomain}.{Eucalyptus Cloud DNS Domain}

The “{load balancer name}-{Account ID}” string is the important information in this value.

From the cloud administrator’s perspective, the load balancer is an AutoScaling group.  More information can be found in the following resources:

If the cloud administrator describes the instances running under the ‘eucalyptus‘ account and the load balancer above is running, the following would be displayed:

# euca-describe-instances 
RESERVATION r-278c161e 094999295155 euca-internal-325271821652-hasp-euca-elb
INSTANCE i-135b4b0a emi-7a4367b8 euca-10-104-7-21.future.future.euca-hasp.cs.prc.eucalyptus-systems.com euca-172-17-156-121.future.internal running euca-elb 0 c1.medium 2014-12-11T23:34:44.428Z Honest monitoring-enabled 10.104.7.21 172.17.156.121 instance-store hvm c4946e25-64ed-4453-808c-9ff2ab831b47_Honest_1 sg-da911c98 arn:aws:iam::094999295155:instance-profile/internal/loadbalancer/loadbalancer-vm-325271821652-hasp-euca-elb x86_64
TAG instance i-135b4b0a Name loadbalancer-resources
TAG instance i-135b4b0a aws:autoscaling:groupName asg-euca-internal-elb-325271821652-hasp-euca-elb
TAG instance i-135b4b0a euca:node 10.104.1.218

Notice the ‘RESERVATION’ line that contains the security group that the instance is using.  If the ‘euca-internal-‘ prefix is removed, the security group has the following format:

{Account ID}-{load balancer name}

This information matches the Load Balancer launched by the cloud user and will be the basis for the solution.
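As a rough illustration of why this matters (a hypothetical snippet of my own, not the script used later in this post), a process running inside the load balancer instance can recover both the account ID and the load balancer name from the security group exposed by the instance metadata service:

import urllib2

METADATA = 'http://169.254.169.254/latest/meta-data'

def lb_owner_and_name():
    # e.g. 'euca-internal-325271821652-hasp-euca-elb'
    group = urllib2.urlopen(METADATA + '/security-groups').read().strip()
    if group.startswith('euca-internal-'):
        group = group[len('euca-internal-'):]
    account_id, lb_name = group.split('-', 1)
    return account_id, lb_name

print(lb_owner_and_name())  # ('325271821652', 'hasp-euca-elb')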

Building the Foundation

In order to get started, the solution needs to be applied from the cloud administrator’s perspective (i.e. the admin user in the ‘eucalyptus’ account).  This solution cannot be applied by any other type of cloud user.  In addition to the cloud administrator requirement, the following is needed:

Once these requirements are met, the environment is ready to go.

Create ELB Access Log User

A user (e.g. ‘elb-osg-logger’) needs to be created under the ‘eucalyptus’ account; this user will be used with the custom Python script to store the load balancer access logs in the OSG bucket.  To create the user, after sourcing the cloud administrator credentials, use euare-usercreate:

# euare-usercreate -u elb-osg-logger -k 
AKILXXXXXXXXXXXXXX
PS6nXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Store these credentials in a safe place.  Next, customize the load balancer instance.

Customize the Load Balancer

To begin, a Eucalyptus Load Balancer needs to be launched so that it can be modified.  The goal here is to build an image from this instance using euca-bundle-instance.  We will start with the load balancer mentioned earlier:

# euca-describe-instances 
RESERVATION r-5e1d4d17 094999295155 euca-internal-325271821652-hasp-euca-lb
INSTANCE i-315dd646 emi-7a4367b8 euca-10-104-7-9.future.future.euca-hasp.cs.prc.eucalyptus-systems.com euca-172-17-177-235.future.internal running euca-elb 0 c1.medium 2014-12-11T04:23:04.441Z Honest monitoring-enabled 10.104.7.9 172.17.177.235 instance-store hvm b134a0bc-cfc4-4c6e-84ba-4fd1df160407_Honest_1 sg-b6cc605e arn:aws:iam::094999295155:instance-profile/internal/loadbalancer/loadbalancer-vm-325271821652-hasp-euca-lb x86_64
TAG instance i-315dd646 Name loadbalancer-resources
TAG instance i-315dd646 aws:autoscaling:groupName asg-euca-internal-elb-325271821652-hasp-euca-lb
TAG instance i-315dd646 euca:node 10.104.1.218

To access the load balancer, authorize SSH to the instance:

# euca-authorize -P tcp -p 22 euca-internal-325271821652-hasp-euca-elb

Next, SSH into the ELB instance:

# ssh -i euca-elb.priv root@euca-10-104-7-9.future.future.euca-hasp.cs.prc.eucalyptus-systems.com

Once inside the instance, install the EPEL package repository:

# yum localinstall --nogpgcheck https://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm -y

After the package has been installed, use yum to install the python-pip package:

# yum install python-pip -y

Next, use pip to upgrade and install the ‘boto‘ and ‘argparse‘ modules:

# pip install --upgrade boto argparse

Now, it’s time to add the custom Python script.

Add Access Logs Script

The Access Log Script performs the following actions:

  • Creates a bucket with a READ bucket ACL for the account ID that launched the Eucalyptus Load Balancer
    • bucket created with the following format – s3://access_logs-{LB name}_{public-IPv4 of LB}_{LB instance numeric ID}
  • Places a copy of “/var/log/load-balancer-access.log.1” in the bucket with a READ object ACL for the account ID that owns the Eucalyptus Load Balancer
    • the file in the bucket will have the following naming format – elb-access-{timestamp DDMMYYYY-HourMinSec}.log
  • Bonus – since Eucalyptus 4.0.0, OSG has supported object lifecycle management.  If a lifecycle value is passed and it is greater than 0, the object lifecycle is applied to all objects in the bucket.
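Below is a rough sketch of that logic (an assumption on my part – the actual access-log-transfer-s3.py script is downloaded in the next step), using boto against the OSG endpoint.  The s3 argument is an S3Connection configured like the snippet shown later in this section, and the ACL grants assume the Eucalyptus account ID is accepted as the grantee:

from datetime import datetime

from boto.s3.key import Key
from boto.s3.lifecycle import Lifecycle, Expiration

def upload_access_log(s3, account_id, lb_name, public_ip, instance_num_id,
                      log_file='/var/log/load-balancer-access.log.1',
                      lifecycle_days=0):
    # Bucket name format: access_logs-{LB name}_{public-IPv4 of LB}_{LB instance numeric ID}
    bucket_name = 'access_logs-%s_%s_%s' % (lb_name, public_ip, instance_num_id)
    bucket = s3.lookup(bucket_name) or s3.create_bucket(bucket_name)
    bucket.add_user_grant('READ', account_id)    # READ bucket ACL for the LB owner's account
    key = Key(bucket)
    key.key = 'elb-access-%s.log' % datetime.now().strftime('%d%m%Y-%H%M%S')
    key.set_contents_from_filename(log_file)
    key.add_user_grant('READ', account_id)       # READ object ACL for the LB owner's account
    if lifecycle_days > 0:                       # bonus: let OSG expire old logs automatically
        lifecycle = Lifecycle()
        lifecycle.add_rule('expire-access-logs', prefix='', status='Enabled',
                           expiration=Expiration(days=lifecycle_days))
        bucket.configure_lifecycle(lifecycle)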

To add the script to the instance, use curl:

# curl http://euca-elb-access-log-blog.s3.amazonaws.com/access-log-transfer-s3.py -o access-log-transfer-s3.py

Once the script has been downloaded, edit the script and add the ‘elb-osg-logger’ user credentials, the S3_URL, and the EC2_URL in the following locations:

EC2Connection.DefaultRegionEndpoint = '<EC2_URL - Eucalyptus Cloud Compute API DNS Name>'
ec2conn = EC2Connection(aws_access_key_id="<elb-osg-logger user Access Key ID>",
                        aws_secret_access_key="<elb-osg-logger user Secret Access Key>",
                        is_secure=False, port=8773)
s3 = S3Connection(aws_access_key_id="<elb-osg-logger user Access Key ID>",
                  aws_secret_access_key="<elb-osg-logger user Secret Access Key>",
                  host="<S3_URL - Eucalyptus Cloud OSG API DNS Name>",
                  is_secure=False, port=8773, calling_format=OrdinaryCallingFormat())

Set the script to be executable using chmod:

# chmod a+x /root/access-log-transfer-s3.py

Now it’s time to configure HAProxy to log information.

Enable HAProxy Logging

The Eucalyptus Load Balancer uses HAProxy to perform load balancing.  To enable logging, the following files need to be edited:

  • /etc/load-balancer-servo/haproxy_template.conf 
    • under the ‘global’ section add – log 127.0.0.1 local3 info
    • under the ‘default’ section add – log global
  • /usr/lib/python2.6/site-packages/servo/haproxy/haproxy_conf.py
    • change the following section:
if protocol == 'http' or protocol == 'https':
    self.__content_map[section_name].append('log-format httplog\ %f\ %b\ %s\ %ST\ %ts\ %Tq\ %Tw\ %Tc\ %Tr\ %Tt')
elif protocol == 'tcp' or protocol == 'ssl':
    self.__content_map[section_name].append('log-format tcplog\ %f\ %b\ %s\ %ts\ %Tw\ %Tc\ %Tt')

to

if protocol == 'http' or protocol == 'https':
    self.__content_map[section_name].append('log-format httplog\ %f\ %b\ %s\ %ST\ %ts\ %Tq\ %Tw\ %Tc\ %Tr\ %Tt\ %{+Q}r\ %ci:%cp\ %fi:%fp\ %si:%sp\ req_size=%U\ resp_size=%B')
elif protocol == 'tcp' or protocol == 'ssl':
    self.__content_map[section_name].append('log-format tcplog\ %f\ %b\ %s\ %ts\ %Tw\ %Tc\ %Tt\ %{+Q}r\ %ci:%cp\ %fi:%fp\ %si:%sp\ req_size=%U\ resp_size=%B')

For more information about the log-format in HAProxy, reference the HAProxy documentation on log format. The information that can be logged is highly customizable.  Reference the AWS ELB documentation regarding Access Log Entries to get a better sense of the logging experience on AWS.
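As a side note, a line produced by the custom httplog format above can be broken into fields with a few lines of Python.  This parser is purely illustrative (it is not part of the solution), and the field names are my own reading of the log-format string:

import re

HTTPLOG = re.compile(
    r'haproxy\[\d+\]: httplog (?P<frontend>\S+) (?P<backend>\S+) (?P<server>\S+) '
    r'(?P<status>\d+) (?P<term_state>\S+) (?P<Tq>-?\d+) (?P<Tw>-?\d+) (?P<Tc>-?\d+) '
    r'(?P<Tr>-?\d+) (?P<Tt>-?\d+) "(?P<request>[^"]*)" (?P<client>\S+) '
    r'(?P<frontend_addr>\S+) (?P<backend_addr>\S+) req_size=(?P<req_size>\d+) '
    r'resp_size=(?P<resp_size>\d+)')

def parse_access_line(line):
    match = HTTPLOG.search(line)
    return match.groupdict() if match else None

sample = ('Dec 11 04:32:11 localhost haproxy[1070]: httplog http-80 backend-http-80 '
          'http-80 200 -- 0 0 0 1 1 "HEAD / HTTP/1.1" 10.5.1.70:49960 '
          '172.17.177.235:80 10.104.7.29:8888 req_size=142 resp_size=241')
print(parse_access_line(sample))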

Logging for HAProxy is complete.  Next, rsyslog and logrotate need to be configured.

Log Management

Storing the HAProxy logs and rotating them is very important to this solution.  The script takes the rotated log and stores it in the OSG bucket for the access logs.  The purpose of this is to make sure the file is not being written to while it is being sent to the OSG bucket.  To start out, download the load-balancer.conf file to use with logrotate using curl:

# curl http://euca-elb-access-log-blog.s3.amazonaws.com/load-balancer.conf -o load-balancer.conf

This is the logrotate configuration file that the cronjob script will call to rotate the log file and then execute the access-log-transfer-s3.py script with a 1-day object lifecycle.  To change the lifecycle, just change the value of the --lifecycle option in the load-balancer.conf file.

Next, upgrade rsyslog to make sure the latest version is running on the instance:

# yum upgrade rsyslog -y

After this has completed, add the following to the /etc/rsyslog.d/load-balancer.conf file:

local3.*       /var/log/load-balancer-access.log

Follow this up by uncommenting or adding the following lines in /etc/rsyslog.conf:

$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514

To wrap up, we need to add a script that will be kicked off by the cronjob.

Cronjob Script

To kick off the log rotation, add the ‘elb-logrotate‘ script to the instance using curl:

# curl http://euca-elb-access-log-blog.s3.amazonaws.com/elb-logrotate -o elb-logrotate

Using ‘crontab -e’, set up a cron entry to run every 5 minutes (or however often the access log information should be uploaded to the bucket):

*/5 * * * * /root/elb-logrotate

Clean Up

After completing all the customizations, the instance needs to be prepared for bundling.  Run the following commands to prepare the instance:

# echo "" > /etc/udev/rules.d/70-persistent-net.rules
# echo "" > /lib/udev/rules.d/75-persistent-net-generator.rules

If PERSISTENT_DHCLIENT is not in the  /etc/sysconfig/network-scripts/ifcfg-eth0 file, then add it:

# grep PERSISTENT_DHCLIENT /etc/sysconfig/network-scripts/ifcfg-eth0
# echo "PERSISTENT_DHCLIENT=yes" >> /etc/sysconfig/network-scripts/ifcfg-eth0

Now we can exit out of the instance.

Creating the New Eucalyptus Load Balancer EMI

After finishing with the instance customizations, the instance is ready to be bundled and registered.  First, use euca-bundle-instance to bundle and upload the instance.  Use euca-describe-bundle-tasks to check on the status of the bundling operation.  Once the bundling operation has been completed, use euca-register to register the new ELB EMI:

# euca-bundle-instance -b load-balancer-access-logs -p eucalyptus-load-balancer-image-access-log i-315dd646
BUNDLE bun-315dd646 i-315dd646 load-balancer-access-logs eucalyptus-load-balancer-image-access-log 2014-12-11T04:07:59.835Z 2014-12-11T04:07:59.835Z pending 0 load-balancer-access-logs/eucalyptus-load-balancer-image-access-log.manifest.xml
....
# euca-describe-bundle-tasks
BUNDLE bun-315dd646 i-315dd646 load-balancer-access-logs eucalyptus-load-balancer-image-access-log 2014-12-11T04:07:59.835Z 2014-12-11T04:09:57.671Z complete 0 load-balancer-access-logs/eucalyptus-load-balancer-image-access-log.manifest.xml
# euca-register -a x86_64 -n load-balancer-access-logs load-balancer-access-logs/eucalyptus-load-balancer-image-access-log.manifest.xml --virtualization-type hvm
IMAGE emi-7a4367b8

Now that the new Eucalyptus Load Balancer EMI is registered, update the cloud property ‘loadbalancing.loadbalancer_emi‘ to point to the new ELB EMI:

# euca-modify-property -p loadbalancing.loadbalancer_emi=emi-7a4367b8
PROPERTY loadbalancing.loadbalancer_emi emi-7a4367b8 was emi-cf4fb988

Now, let’s test out the changes.

Testing Out the ELB with Access Logging

To test it out, you can use either the Cloud Administrator or a user from a ‘non-eucalyptus’ account.  In the example below, a user from a ‘non-eucalyptus’ account was used.  If a ‘non-eucalyptus’ account user is used, make sure the user has the appropriate IAM access policies for EC2 (Compute), S3 (OSG), and ELB (Eucalyptus Load Balancer).

First, create the Eucalyptus Load Balancer:

# eulb-create-lb hasp-euca-lb --listener "lb-port=80, protocol=http, instance-port=80, instance-protocol=http" --availability-zone Honest --region account2-user11@

Next, launch an instance that has a web service running on port 80.  In this example, I used a cloud-init configuration file to install nginx on an Ubuntu 14.04 (Trusty Tahr) Cloud Image:

# euca-run-instances -k account2-user11 -t m1.medium emi-59a742d0 --user-data-file nginx-cloudinit.config --region account2-user11@
....
# euca-describe-instances --region account2-user11@
RESERVATION r-5c16c716 325271821652 default
INSTANCE i-45c1ebd1 emi-59a742d0 euca-10-104-7-29.future.future.euca-hasp.cs.prc.eucalyptus-systems.com euca-172-17-248-189.future.internal running account2-user11 0 m1.medium 2014-12-05T21:53:51.197Z Honest monitoring-disabled 10.104.7.29 172.17.248.189 instance-store hvm sg-6ef9907f x86_64

Register the instance with the ELB:

# eulb-register-instances-with-lb --instances i-45c1ebd1 hasp-euca-lb --region account2-user11@
INSTANCE i-45c1ebd1

Generate some traffic to the ELB using curl or some other tool to populate the HAProxy log file; a small, hypothetical Python helper for doing so is sketched below.  For information regarding s3cmd configuration files, refer to my previous blog.
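For example (a curl loop in the shell works just as well):

import time
import urllib2

# Illustrative DNS name; use the value returned by eulb-describe-lbs for your load balancer
ELB_URL = 'http://hasp-euca-lb-325271821652.eulb.future.euca-hasp.cs.prc.eucalyptus-systems.com/'

for _ in range(100):
    try:
        urllib2.urlopen(ELB_URL, timeout=5).read()
    except urllib2.URLError as err:
        print('request failed: %s' % err)
    time.sleep(0.5)

Based upon how often the cronjob was set to execute, use s3cmd to see the bucket created in the ‘eucalyptus’ account (i.e. Cloud Administrator) for the access logs: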

# ./s3cmd/s3cmd --config=.s3cfg-cloud-admin ls
2014-09-18 02:59 s3://51c700-download-manifests
2014-12-11 04:31 s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646
2014-09-18 02:43 s3://centos-6.5-x86_64-20140917
2014-09-18 02:46 s3://centos-7-x86_64-20140917
2014-11-05 22:05 s3://centos6.4-kernel
2014-11-05 21:54 s3://centos6.4-ramdisk
2014-11-05 22:08 s3://centos6.4-test
2014-09-18 02:52 s3://debian-7-x86_64-20140917
# ./s3cmd/s3cmd --config=.s3cfg-cloud-admin ls s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646
2014-12-11 04:31 817 s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646/elb-access-11122014-043122.log
2014-12-11 05:13 78764 s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646/elb-access-11122014-051353.log
2014-12-11 05:20 58202 s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646/elb-access-11122014-052002.log

Once that has been confirmed, create another s3cmd configuration file for the ‘non-eucalyptus’ user, and confirm the user can list the contents of the bucket:

# ./s3cmd/s3cmd --config=.s3cfg-acct2-user11 ls s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646
2014-12-11 04:31 817 s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646/elb-access-11122014-043122.log
2014-12-11 05:13 78764 s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646/elb-access-11122014-051353.log
2014-12-11 05:20 58202 s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646/elb-access-11122014-052002.log

After that has been confirmed, download one of the log files and confirm the contents:

# ./s3cmd/s3cmd --config=.s3cfg-acct2-user11 get s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646/elb-access-11122014-051353.log .
s3://access_logs-hasp-euca-lb_10.104.7.9_315dd646/elb-access-11122014-051353.log -> ./elb-access-11122014-051353.log [1 of 1]
 78764 of 78764 100% in 0s 238.84 kB/s done
 
# cat elb-access-11122014-051353.log
Dec 11 04:32:11 localhost haproxy[1070]: httplog http-80 backend-http-80 http-80 200 -- 0 0 0 1 1 "HEAD / HTTP/1.1" 10.5.1.70:49960 172.17.177.235:80 10.104.7.29:8888 req_size=142 resp_size=241
Dec 11 05:05:35 localhost haproxy[1070]: httplog http-80 backend-http-80 http-80 200 -- 0 0 0 1 1 "HEAD / HTTP/1.1" 10.5.1.70:50216 172.17.177.235:80 10.104.7.29:8888 req_size=142 resp_size=241
Dec 11 05:05:36 localhost haproxy[1070]: httplog http-80 backend-http-80 http-80 200 -- 4 0 1 1 6 "HEAD / HTTP/1.1" 10.5.1.70:50217 172.17.177.235:80 10.104.7.29:8888 req_size=142 resp_size=241
Dec 11 05:05:38 localhost haproxy[1070]: httplog http-80 backend-http-80 http-80 200 -- 5 0 0 1 6 "HEAD / HTTP/1.1" 10.5.1.70:50218 172.17.177.235:80 10.104.7.29:8888 req_size=142 resp_size=241
Dec 11 05:05:39 localhost haproxy[1070]: httplog http-80 backend-http-80 http-80 200 -- 0 0 0 1 1 "HEAD / HTTP/1.1" 10.5.1.70:50219 172.17.177.235:80 10.104.7.29:8888 req_size=142 resp_size=241
Dec 11 05:05:40 localhost haproxy[1070]: httplog http-80 backend-http-80 http-80 200 -- 0 0 0 1 1 "HEAD / HTTP/1.1" 10.5.1.70:50220 172.17.177.235:80 10.104.7.29:8888 req_size=142 resp_size=241
Dec 11 05:05:41 localhost haproxy[1070]: httplog http-80 backend-http-80 http-80 200 -- 0 0 0 1 1 "HEAD / HTTP/1.1" 10.5.1.70:50221 172.17.177.235:80 10.104.7.29:8888 req_size=142 resp_size=241
Dec 11 05:05:42 localhost haproxy[1070]: httplog http-80 backend-http-80 http-80 200 -- 4 0 1 1 6 "HEAD / HTTP/1.1" 10.5.1.70:50222 172.17.177.235:80 10.104.7.29:8888 req_size=142 resp_size=241
.....

How is this ‘non-eucalyptus’ user able to see and download the contents of this bucket?  This is because of the script that creates the access log bucket and uploads the logs to it.  By grabbing the account ID from the instance metadata ‘security group’ category, the script adds bucket and object READ ACLs for that account ID.  The only issue here is that the cloud administrator will still need to tell the cloud user which bucket holds the logs.  With the extra bonus of using the object lifecycle, the cloud administrator doesn’t have to worry about managing the buckets.  The objects will remove themselves after the defined period of time.

Conclusion

Even though the solution isn’t exactly like the AWS ELB Access Logs feature, it does provide something very similar.  The only thing missing is the service API interaction to enable/disable the access logging feature, set the interval, and define the bucket that will be used.  Hopefully, this will be a feature we will see in the not too distant future.  Thanks for hanging in there with me.  I hope you enjoy it!  Feedback is always welcome.

Cheers!


Cloud Image Management on Eucalyptus: Creating a CentOS 6.6 EMI With ZFS Support

ZFS is a filesystem designed by Sun Microsystems that focuses on data integrity.  What makes this such an attractive filesystem to use in the cloud is that a cloud user can easily do the following:

  • set up an LVM + RAID filesystem for storing large amounts of data (e.g. database information)
  • expand the filesystem by adding more storage (i.e. EBS volumes)
  • backup the filesystem without taking the filesystem offline/unmounting
  • restore the filesystem

This blog entry will focus on how a cloud user can create their own Eucalyptus Machine Image (EMI) that has ZFS support.  The CentOS 6.5 EMI on the Eucalyptus Machine Image Catalog will be used as the base image.

Before Starting…

Before following the steps in this blog, make sure the following is in place:

Once these requirements have been met, everything should be ready to go.

Set Up Base Image/Instance

To begin, follow the ‘Quick Start’ instructions mentioned on the Eucalyptus Machine Image Catalog page.  This will install all the images provided by the catalog.  When the process has finished, list the CentOS 6.5 EMI.  For example:

# euca-describe-images emi-bdcec010 
IMAGE emi-bdcec010 centos-6.5-x86_64-20140917/centos.raw.manifest.xml 094999295155 available public x86_64 machine instance-store hvm

Once the CentOS 6.5 EMI has been listed, launch an instance from the EMI.  For example:

# euca-run-instances -k account2-user11 -t m1.medium emi-bdcec010 
RESERVATION r-a22f0201 325271821652 default
INSTANCE i-b9fccf9f emi-bdcec010 pending account2-user11 0 m1.medium 2014-12-03T22:52:41.522Z Honest monitoring-disabled 0.0.0.0 0.0.0.0 instance-store hvm sg-6ef9907f x86_64
# euca-describe-instances i-b9fccf9f
RESERVATION r-a22f0201 325271821652 default
INSTANCE i-b9fccf9f emi-bdcec010 euca-10-104-7-15.future.future.euca-hasp.cs.prc.eucalyptus-systems.com euca-172-17-248-178.future.internal running account2-user11 0 m1.medium 2014-12-03T22:52:41.522Z Honest monitoring-disabled 10.104.7.15 172.17.248.178 instance-store hvm sg-6ef9907f x86_64

Once the instance is running, it’s ready to be customized.

Adding ZFS Support to the Instance

Now that the instance is running, SSH into the instance so that the EPEL and ZFS repositories can be added:

[root@odc-f-13 ~]# ssh -i account2-user11.priv root@euca-10-104-7-15.future.future.euca-hasp.cs.prc.eucalyptus-systems.com
[root@euca-172-17-248-178 ~]# yum localinstall --nogpgcheck https://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@euca-172-17-248-178 ~]# yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el6.noarch.rpm
[root@euca-172-17-248-178 ~]# yum upgrade -y
[root@euca-172-17-248-178 ~]# yum install kernel-devel zfs -y

After all the packages have been installed, reboot the instance:

[root@euca-172-17-248-178 ~]# reboot

Preparing the Instance For EMI Creation

After rebooting the instance, SSH back in and prepare the instance for EMI creation.  First, load the zfs kernel module:

[root@odc-f-13 ~]# ssh -i account2-user11.priv root@euca-10-104-7-15.future.future.euca-hasp.cs.prc.eucalyptus-systems.com
[root@euca-172-17-248-178 ~]# modprobe zfs
[root@euca-172-17-248-178 ~]# lsmod | grep zfs
zfs 1195522 0
zcommon 46278 1 zfs
znvpair 80974 2 zfs,zcommon
zavl 6925 1 zfs
zunicode 323159 1 zfs
spl 266655 5 zfs,zcommon,znvpair,zavl,zunicode

After confirming that the ZFS module is loaded, clear the network udev rules, and confirm PERSISTENT_DHCLIENT is set to “yes” in the /etc/sysconfig/network-scripts/ifcfg-eth0 file:

[root@euca-172-17-248-178 ~]# echo "" > /etc/udev/rules.d/70-persistent-net.rules
[root@euca-172-17-248-178 ~]# echo "" > /lib/udev/rules.d/75-persistent-net-generator.rules
[root@euca-172-17-248-178 ~]# echo "PERSISTENT_DHCLIENT=yes" >> /etc/sysconfig/network-scripts/ifcfg-eth0

Confirm that the instance has been upgraded to CentOS 6.6, then exit the instance.

[root@euca-172-17-248-178 ~]# cat /etc/redhat-release
CentOS release 6.6 (Final)
[root@euca-172-17-248-178 ~]# exit

Create the CentOS 6.6 EMI with ZFS Support

The instance is now ready to be bundled.  Bundle the instance using the euca-bundle-instance command.  This command was originally used to bundle Windows instances; however, Eucalyptus extended it to work with Linux instances as well.  Use euca-describe-bundle-tasks to monitor the bundling status:

[root@odc-f-13 ~]# euca-bundle-instance --bucket centos6.6-zfs --prefix centos6.6-zfs i-b9fccf9f
BUNDLE bun-b9fccf9f i-b9fccf9f centos6.6-zfs centos6.6-zfs 2014-12-03T23:54:51.644Z 2014-12-03T23:54:51.644Z pending 0 centos6.6-zfs/centos6.6-zfs.manifest.xml
..
[root@odc-f-13 ~]# euca-describe-bundle-tasks
BUNDLE bun-b9fccf9f i-b9fccf9f centos6.6-zfs centos6.6-zfs 2014-12-03T23:54:51.644Z 2014-12-03T23:57:37.517Z complete 0 centos6.6-zfs/centos6.6-zfs.manifest.xml

Once the bundle task completes, register the instance store-backed HVM image using the euca-register command:

[root@odc-f-13 ~]# euca-register -a x86_64 -n centos6.6-zfs centos6.6-zfs/centos6.6-zfs.manifest.xml --virtualization-type hvm 
IMAGE emi-5e63f02c

The custom image has been registered. Now let’s test it out.

ZFS Test

To test the image out, we will do the following:

  • Launch an instance from the new EMI
  • Create 5 volumes and attach them to the instance
  • Create a ZFS storage pool and dataset

To launch the instance, use the euca-run-instances command.  To create the 5 EBS volumes, use the euca-create-volume command.  After the volumes are created, use euca-attach-volume to attach the volumes to the instance (a scripted alternative is sketched after the output below).  Once the volumes are attached, the output of euca-describe-instances should look similar to the following:

# euca-describe-instances i-0cd3b6b8
RESERVATION r-cf7c5c73 325271821652 default
INSTANCE i-0cd3b6b8 emi-5e63f02c euca-10-104-7-3.future.future.euca-hasp.cs.prc.eucalyptus-systems.com euca-172-17-248-184.future.internal running account2-user11 0 m1.medium 2014-12-04T00:16:52.887Z Honest monitoring-disabled 10.104.7.3 172.17.248.184 instance-store hvm sg-6ef9907f x86_64
BLOCKDEVICE /dev/sdd vol-a23cfb1f 2014-12-04T01:45:59.730Z false
BLOCKDEVICE /dev/sdh vol-a27b75a5 2014-12-04T01:47:31.162Z false
BLOCKDEVICE /dev/sdf vol-2a971204 2014-12-04T01:46:54.575Z false
BLOCKDEVICE /dev/sdg vol-b33e9890 2014-12-04T01:47:13.346Z false
BLOCKDEVICE /dev/sde vol-dcc8b6ac 2014-12-04T01:46:15.011Z false
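As mentioned above, the volume creation and attachment can also be scripted.  The helper below is a rough boto sketch of my own (not part of the original walkthrough), mirroring the euca-create-volume and euca-attach-volume steps:

import time

from boto.ec2.connection import EC2Connection

def attach_test_volumes(conn, instance_id, zone='Honest', size_gb=5, count=5):
    # conn is a boto EC2Connection pointed at the Eucalyptus compute endpoint
    devices = ['/dev/sd%s' % letter for letter in 'defgh'][:count]
    for device in devices:
        volume = conn.create_volume(size_gb, zone)
        while volume.update() != 'available':   # wait until the volume can be attached
            time.sleep(5)
        conn.attach_volume(volume.id, instance_id, device)
        print('attached %s as %s' % (volume.id, device))

# Example usage (endpoint and credentials are placeholders):
# EC2Connection.DefaultRegionEndpoint = '<EC2_URL - Eucalyptus Cloud Compute API DNS Name>'
# conn = EC2Connection('<access key>', '<secret key>', is_secure=False, port=8773)
# attach_test_volumes(conn, 'i-0cd3b6b8')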

SSH into the instance and check which block devices are associated with the EBS volumes using the lsblk command:

# ssh -i account2-user11.priv root@euca-10-104-7-3.future.future.euca-hasp.cs.prc.eucalyptus-systems.com
[root@euca-172-17-248-184 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 4.9G 0 disk
├─vda1 252:1 0 500M 0 part /boot
└─vda2 252:2 0 4.4G 0 part
 ├─VolGroup-lv_root (dm-0) 253:0 0 3.9G 0 lvm /
 └─VolGroup-lv_swap (dm-1) 253:1 0 500M 0 lvm [SWAP]
vdb 252:16 0 5.1G 0 disk
vdc 252:32 0 5G 0 disk
vdd 252:48 0 5G 0 disk
vde 252:64 0 5G 0 disk
vdf 252:80 0 5G 0 disk
vdg 252:96 0 5G 0 disk

The EBS volumes are /dev/vdc, /dev/vdd, /dev/vde, /dev/vdf, and /dev/vdg.  Use these devices to create the ZFS storage pool by using the zpool command:

[root@euca-172-17-248-184 ~]# zpool create -f app-pool vdc vdd vde vdf vdg
[root@euca-172-17-248-184 ~]# zpool status
 pool: app-pool
 state: ONLINE
 scan: none requested
config:
 NAME STATE READ WRITE CKSUM
 app-pool ONLINE 0 0 0
 vdc1 ONLINE 0 0 0
 vdd1 ONLINE 0 0 0
 vde1 ONLINE 0 0 0
 vdf1 ONLINE 0 0 0
 vdg1 ONLINE 0 0 0
errors: No known data errors

Next, we need to create a ZFS dataset.  For this example, this instance will end up being a MySQL server, so we will create a dataset for storing the MySQL data.

[root@euca-172-17-248-184 ~]# zfs create app-pool/mysql
[root@euca-172-17-248-184 ~]# zfs list
NAME USED AVAIL REFER MOUNTPOINT
app-pool 152K 24.5G 30K /app-pool
app-pool/mysql 30K 24.5G 30K /app-pool/mysql

The mount point of the dataset can be adjusted by setting the mountpoint option:

[root@euca-172-17-248-184 ~]# zfs set mountpoint=/opt/mysql app-pool/mysql
[root@euca-172-17-248-184 ~]# zfs list
NAME USED AVAIL REFER MOUNTPOINT
app-pool 162K 24.5G 31K /app-pool
app-pool/mysql 30K 24.5G 30K /opt/mysql

That’s it!  Notice how this only required 2 commands to set up an LVM + RAID-style filesystem, compared to around 7 commands using mdadm, pvcreate, vgcreate, mkfs, mkdir and mount. The instance is now ready to utilize the ZFS filesystem for the MySQL server.

Online Backup Example to OSG Bucket using s3cmd

As mentioned earlier, a slick feature of using ZFS is being able to perform backups online.  This section will show the following:

  • Setup and configure s3cmd
  • Create a ZFS snapshot, and use ZFS send with s3cmd to place the snapshot on an OSG bucket

To get started, in the instance, install the following packages:

[root@euca-172-17-248-184 ~]# yum install -y git python-dateutil.noarch xz

Next, clone the s3tools/s3cmd repository from GitHub:

[root@euca-172-17-248-184 ~]# git clone https://github.com/s3tools/s3cmd.git

If the instance was launched with an instance profile that assumes a role with OSG (S3) API access, s3cmd will pick up the temporary credentials and token through the Eucalyptus instance metadata service, just as if the instance were launched on AWS EC2 (a rough sketch of that lookup follows the configuration transcript below).  That wasn’t the case here, so we need to provide the Access Key ID and Secret Key manually:

[root@euca-172-17-248-184 ~]# ./s3cmd/s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: AKIRAGCHAGFE6IIX9BYF
Secret Key: GMdrL97AqcybhfyyxOpNmVUnBtiMenag3ju82L7L

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP and can't be used if you're behind a proxy
Use HTTPS protocol [No]:

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

New settings:
 Access Key: AKIRAGCHAGFE6IIX9BYF
 Secret Key: GMdrL97AqcybhfyyxOpNmVUnBtiMenag3ju82L7L
 Encryption password:
 Path to GPG program: /usr/bin/gpg
 Use HTTPS protocol: False
 HTTP Proxy server name:
 HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] n
Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'
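For reference, here is a rough, hypothetical sketch of the metadata lookup mentioned above.  When an instance profile is attached, the role’s temporary credentials can be read from the metadata service (this is the lookup s3cmd relies on); it is not needed for this walkthrough:

import json
import urllib2

CREDS_URL = 'http://169.254.169.254/latest/meta-data/iam/security-credentials/'

def role_credentials():
    role = urllib2.urlopen(CREDS_URL).read().strip()               # name of the attached role
    creds = json.loads(urllib2.urlopen(CREDS_URL + role).read())   # JSON document with the keys
    return creds['AccessKeyId'], creds['SecretAccessKey'], creds['Token']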

Edit the .s3cfg file to make sure it points to the OSG on your Eucalyptus 4.0.2 cloud.  For example, change the following:

host_base = s3.amazonaws.com

to

host_base = objectstorage.future.euca-hasp.cs.prc.eucalyptus-systems.com:8773

and

host_bucket = %(bucket)s.s3.amazonaws.com

to

host_bucket = %(bucket)s.objectstorage.future.euca-hasp.cs.prc.eucalyptus-systems.com:8773

Confirm that s3cmd is configured correctly.  For example:

[root@euca-172-17-248-184 ~]# ./s3cmd/s3cmd ls
2014-11-05 21:45 s3://centos-images
2014-12-03 23:54 s3://centos6.6-zfs
2014-10-08 01:50 s3://instance-profile-testing
2014-12-01 22:27 s3://mongodb-snapshots
2014-10-10 20:01 s3://new-ubuntu-bundled-image
2014-09-17 18:31 s3://s3cmd-testing
2014-09-30 01:58 s3://ubuntu-bundled-vol
2014-10-22 14:47 s3://ubuntu-docker-template
2014-10-08 13:39 s3://ubuntu-images
2014-10-02 01:42 s3://ubuntu-trusty-imported-20141001
2014-10-30 18:25 s3://ubuntu-trusty-imported-20141030
2014-10-29 02:18 s3://ubuntu-trusty-server-10282014
2014-10-01 00:28 s3://wrong-s3-url-test

To perform a ZFS snapshot of the app-pool/mysql dataset, do the following:

[root@euca-172-17-248-184 ~]# zfs snapshot app-pool/mysql@wednesday
[root@euca-172-17-248-184 ~]# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
app-pool/mysql@wednesday 0 - 30K -

After creating a bucket for the backup, send the ZFS snapshot to the bucket:

[root@euca-172-17-248-184 ~]# ./s3cmd/s3cmd mb s3://mysql-backups
[root@euca-172-17-248-184 ~]# zfs send app-pool/mysql@wednesday | xz | ./s3cmd/s3cmd put - s3://mysql-backups/mysql-backup-wednesday.img.xz
<stdin> -> s3://mysql-backups/mysql-backup-wednesday.img.xz [part 1, 1440B]
 1440 of 1440 100% in 2s 561.67 B/s done

To confirm that the snapshot is in the bucket, use s3cmd:

[root@euca-172-17-248-184 ~]# ./s3cmd/s3cmd ls s3://mysql-backups
2014-12-04 02:22 1440 s3://mysql-backups/mysql-backup-wednesday.img.xz

That’s all, folks.  We have successfully created a CentOS 6.6 EMI with ZFS support.  For more information regarding ZFS (and the inspiration for this blog), check out the following resources:


Monitor Eucalyptus JVMs with Zabbix

Originally posted on A sysadmin born in the cloud:

Zabbix is a rich monitoring system which supports SNMP, JMX, and IPMI, and uses its own agents for more functionality.
The great advantage of Zabbix is that it is self-sufficient: all the tools you need are in the packages and connect to each other, whereas with Nagios you have to add tens of plugins.

Zabbix JMX monitoring is pretty simple: using a “JMX Gateway”, it connects to the server and collects information, which is then transferred to the Zabbix Proxy/Server.

First, we have to enable JMX monitoring in Eucalyptus, so the JVM will allow connections to its monitoring systems. Edit /etc/eucalyptus/eucalyptus.conf.

More details on further JMX options (i.e. SSL support, passwords) can be found here.

Regarding Zabbix’s configuration, something I am a really big fan of is Zabbix’s documentation. For example, click here to get all the steps to activate the…



Using Eucalyptus 4.0.1 CloudFormation to Deploy a CoreOS (Docker) Cluster

In a previous blog, I discussed how cloud-init can be used to customize a CoreOS image deployed as an instance on Eucalyptus – which happens to work in the same fashion on AWS.  This is a follow-up blog to demonstrate how to use Eucalyptus CloudFormation (which is in Tech Preview in Eucalyptus 4.0.0/4.0.1) to deploy a CoreOS cluster on Eucalyptus, customizing each instance using the cloud-config service.  This setup will allow cloud users to test out CoreOS clusters on Eucalyptus, just as CoreOS recommends on AWS EC2.

Prerequisites

Just as in the previous blog discussing the use of CoreOS, using Eucalyptus IAM is highly recommended.  In addition to the prerequisites mentioned in that blog, the following service API actions need to be allowed (at a minimum) in the IAM policy for the user(s) that want to follow this blog:

In addition to having the correct IAM policy actions authorized, the cloud user needs to be using the latest version of euca2ools with Eucalyptus 4.0.1.  Once these prerequisites are met, the Eucalyptus cloud needs to be prepared with the correct EMI for the deployment.

Adding CoreOS Image To Eucalyptus

In order to deploy a CoreOS cluster on Eucalyptus, the CoreOS image needs to be bundled, uploaded and registered.  To obtain the CoreOS image, download the image from the CoreOS Beta Release site. For example:

# wget -q http://beta.release.core-os.net/amd64-usr/current/coreos_production_ami_image.bin.bz2
 # bunzip2 -d coreos_production_ami_image.bin.bz2
 # qemu-img info coreos_production_ami_image.bin
 image: coreos_production_ami_image.bin
 file format: raw
 virtual size: 4.4G (4699717632 bytes)
 disk size: 4.4G

Once the image has been downloaded and user credentials have been sourced, use euca-install-image to bundle, upload and register the image as an instance store-backed HVM image to be used with the Cloudformation template. In addition, note the EC2_USER_ID value present in the eucarc file as it will be used with the Cloudformation template as well.

# euca-install-image -b coreos-production-ami -i coreos_production_ami_image.bin --virtualization-type hvm -n coreos-hvm -r x86_64
 ....
 /var/tmp/bundle-WsLdGB/coreos_production_ami_image.bin.part.19 100% |=================================================================| 6.08 MB 12.66 MB/s Time: 0:00:00
 /var/tmp/bundle-WsLdGB/coreos_production_ami_image.bin.manifest.xml 100% |============================================================| 6.28 kB 2.66 kB/s Time: 0:00:02
 IMAGE emi-DAB316FD

CoreOS etcd Discovery Service Token

CoreOS uses a service called etcd on each machine to handle coordination of services in a cluster.  To make sure the machines know that they are part of the same cluster, a discovery token needs to be generated and shared with each instance using the cloud-config service.  To generate a custom token, open a browser and go to the following URL:

https://discovery.etcd.io/new

A URL similar to the example below should show up in the browser:

https://discovery.etcd.io/7b67f765e2f264cf65b850a849a7da7e

Take note of the URL because it will be needed later.
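If a browser is not handy, the same token can be requested from the command line; for example, a trivial (hypothetical) Python equivalent:

import urllib2

token_url = urllib2.urlopen('https://discovery.etcd.io/new').read().strip()
print(token_url)  # e.g. https://discovery.etcd.io/7b67f765e2f264cf65b850a849a7da7e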

Select VM Type and Availability Zone on Eucalyptus

Before deploying the CoreOS cluster on Eucalyptus, the user needs to determine the instance type, and the availability zone (Eucalyptus Cluster). In order to do this, use euca-describe-instance-types to show the instance types, availability zone(s), and the capacity for each instance type available in the availability zone(s).

# euca-describe-instance-types --show-capacity --by-zone
 AVAILABILITYZONE SirLuciousLeftFoot
 INSTANCETYPE Name CPUs Memory (MiB) Disk (GiB) Used / Total Used %
 INSTANCETYPE t1.micro 1 256 5 0 / 6 0%
 INSTANCETYPE m1.small 1 512 10 0 / 6 0%
 INSTANCETYPE m1.medium 1 1024 10 0 / 6 0%
 INSTANCETYPE c1.xlarge 2 2048 10 0 / 3 0%
 INSTANCETYPE m1.large 2 1024 15 0 / 3 0%
 INSTANCETYPE c1.medium 1 1024 20 0 / 6 0%
 INSTANCETYPE m1.xlarge 2 1024 30 0 / 3 0%
 INSTANCETYPE m2.2xlarge 2 4096 30 0 / 3 0%
 INSTANCETYPE m3.2xlarge 4 4096 30 0 / 1 0%
 INSTANCETYPE m2.xlarge 2 2048 40 0 / 3 0%
 INSTANCETYPE m3.xlarge 2 2048 50 0 / 3 0%
 INSTANCETYPE cc1.4xlarge 8 3072 60 0 / 0
 INSTANCETYPE m2.4xlarge 8 4096 60 0 / 0
 INSTANCETYPE hi1.4xlarge 8 6144 120 0 / 0
 INSTANCETYPE cc2.8xlarge 16 6144 120 0 / 0
 INSTANCETYPE cg1.4xlarge 16 12288 200 0 / 0
 INSTANCETYPE cr1.8xlarge 16 16384 240 0 / 0
 INSTANCETYPE hs1.8xlarge 48 119808 24000 0 / 0
AVAILABILITYZONE ViciousLiesAndDangerousRumors
 INSTANCETYPE Name CPUs Memory (MiB) Disk (GiB) Used / Total Used %
 INSTANCETYPE t1.micro 1 256 5 4 / 12 33%
 INSTANCETYPE m1.small 1 512 10 4 / 12 33%
 INSTANCETYPE m1.medium 1 1024 10 4 / 12 33%
 INSTANCETYPE c1.xlarge 2 2048 10 2 / 6 33%
 INSTANCETYPE m1.large 2 1024 15 2 / 6 33%
 INSTANCETYPE c1.medium 1 1024 20 4 / 12 33%
 INSTANCETYPE m1.xlarge 2 1024 30 2 / 6 33%
 INSTANCETYPE m2.2xlarge 2 4096 30 0 / 2 0%
 INSTANCETYPE m3.2xlarge 4 4096 30 0 / 2 0%
 INSTANCETYPE m2.xlarge 2 2048 40 2 / 6 33%
 INSTANCETYPE m3.xlarge 2 2048 50 2 / 6 33%
 INSTANCETYPE cc1.4xlarge 8 3072 60 0 / 0
 INSTANCETYPE m2.4xlarge 8 4096 60 0 / 0
 INSTANCETYPE hi1.4xlarge 8 6144 120 0 / 0
 INSTANCETYPE cc2.8xlarge 16 6144 120 0 / 0
 INSTANCETYPE cg1.4xlarge 16 12288 200 0 / 0
 INSTANCETYPE cr1.8xlarge 16 16384 240 0 / 0
 INSTANCETYPE hs1.8xlarge 48 119808 24000 0 / 0

For this blog, the availability zone ‘ViciousLiesAndDangerousRumors’ and the instance type ‘c1.medium’ will be used as parameters for the CloudFormation template.  Now, Eucalyptus CloudFormation is ready to be used.

Deploying the CoreOS Cluster

Final Preparations

Before using the Cloudformation template for the CoreOS cluster, a keypair needs to be created.  This keypair will also be used as a parameter for the Cloudformation template.

To obtain the template, download the template from coreos-cloudformation-template bucket on AWS S3.  Once the file has been downloaded, the following edits need to happen.

The first edit is to define the ‘AvailabilityZones’ in the ‘Properties’ section of the ‘CoreOsGroup’ resource.  For example, ‘ViciousLiesAndDangerousRumors’ has been placed as the value for ‘AvailabilityZones’:

"CoreOsGroup" : {
 "Type" : "AWS::AutoScaling::AutoScalingGroup",
 "Properties" : {
 "AvailabilityZones" : [ "ViciousLiesAndDangerousRumors" ],
 "LaunchConfigurationName" : { "Ref" : "CoreOsLaunchConfig" },
 "MinSize" : { "Ref" : "ClusterSize" },
 "MaxSize" : { "Ref" : "ClusterSize" }
 }
 },

The second and final edit is to update the ‘UserData’ property to have the correct value for the discovery token that was provided earlier in this blog.  For example:

"UserData" : { "Fn::Base64" : { "Fn::Join" : ["",[
 "#cloud-config","\n",
 "coreos:","\n",
 " etcd:","\n",
 " discovery: https://discovery.etcd.io/7b67f765e2f264cf65b850a849a7da7e","\n",
 " addr: $private_ipv4:4001","\n",
 " peer-addr: $private_ipv4:7001","\n",
 " units:","\n",

Now that these values have been updated, the CoreOS cluster can be deployed.

Create the Stack

To deploy the cluster, use euform-create-stack with the parameter values filled in appropriately.  For example:

# euform-create-stack --template-file cfn-coreos-as.json --parameter "CoreOSImageId=emi-DAB316FD" --parameter "UserKeyPair=account1-user01" --parameter "AcctId=408396244283" --parameter "ClusterSize=3" --parameter "VmType=c1.medium" CoreOSClusterStack
 arn:aws:cloudformation:bigboi:408396244283:stack/CoreOSClusterStack/43d53adb-68f2-4317-bd2b-3da661977ebc

The ‘ClusterSize’ parameter depends entirely upon how big a CoreOS cluster the user would like to have, based upon the instance types supported on the Eucalyptus cloud.  Please refer to the CoreOS documentation regarding optimal cluster sizes to see what would best suit the use case of the cluster.

Check Out The Stack Resources

A few minutes after deploying the CloudFormation stack, use euform-describe-stacks to check the status of the stack. The status of the stack should return CREATE_COMPLETE.

# euform-describe-stacks
 STACK CoreOSClusterStack CREATE_COMPLETE Complete! Deploy CoreOS Cluster 2014-08-28T22:31:02.669Z
 OUTPUT AutoScalingGroup CoreOSClusterStack-CoreOsGroup-G7Y7YVWI4DOPG

To check out the resources associated with the Cloudformation stack, use euform-describe-stack-resources:

# euform-describe-stack-resources -n CoreOSClusterStack --region account1-user01@
 RESOURCE CoreOsSecurityGroupIngress2 CoreOsSecurityGroupIngress2 AWS::EC2::SecurityGroupIngress CREATE_COMPLETE
 RESOURCE CoreOsLaunchConfig CoreOSClusterStack-CoreOsLaunchConfig-FFSTY76SDQAWB AWS::AutoScaling::LaunchConfiguration CREATE_COMPLETE
 RESOURCE CoreOsSecurityGroup CoreOSClusterStack-CoreOsSecurityGroup-D3WCUH0SKHYVC AWS::EC2::SecurityGroup CREATE_COMPLETE
 RESOURCE CoreOsSecurityGroupIngress1 CoreOsSecurityGroupIngress1 AWS::EC2::SecurityGroupIngress CREATE_COMPLETE
 RESOURCE CoreOsGroup CoreOSClusterStack-CoreOsGroup-G7Y7YVWI4DOPG AWS::AutoScaling::AutoScalingGroup CREATE_COMPLETE

Check the status of the instances by using the value returned for ‘AutoScalingGroup’ from the euform-describe-stacks output:

# euscale-describe-auto-scaling-groups CoreOSClusterStack-CoreOsGroup-G7Y7YVWI4DOPG --region account1-user01@
 AUTO-SCALING-GROUP CoreOSClusterStack-CoreOsGroup-G7Y7YVWI4DOPG CoreOSClusterStack-CoreOsLaunchConfig-FFSTY76SDQAWB ViciousLiesAndDangerousRumors 3 33 Default
 INSTANCE i-E6FB62D0 ViciousLiesAndDangerousRumors InService Healthy CoreOSClusterStack-CoreOsLaunchConfig-FFSTY76SDQAWB
 INSTANCE i-2AC4CC35 ViciousLiesAndDangerousRumors InService Healthy CoreOSClusterStack-CoreOsLaunchConfig-FFSTY76SDQAWB
 INSTANCE i-442C4692 ViciousLiesAndDangerousRumors InService Healthy CoreOSClusterStack-CoreOsLaunchConfig-FFSTY76SDQAWB

Check the Status of the CoreOS Cluster

In order to check the status of the CoreOS cluster, SSH into one of the instances (the port was opened in the security group as part of the Cloudformation template), and use the fleetctl command:

# euca-describe-instances i-E6FB62D0 i-2AC4CC35 i-442C4692 --region account1-user01@
 RESERVATION r-AF98046C 408396244283 CoreOSClusterStack-CoreOsSecurityGroup-D3WCUH0SKHYVC
 INSTANCE i-2AC4CC35 emi-DAB316FD euca-10-104-6-233.bigboi.acme.eucalyptus-systems.com euca-172-18-223-111.bigboi.internal running account1-user01 0 c1.medium 2014-08-28T22:15:48.043Z ViciousLiesAndDangerousRumors monitoring-enabled 10.104.6.233 172.18.223.111 instance-store hvm d88cac3d-ce92-4c3b-98ee-7e507afc26cb_ViciousLiesAndDangerousR_1 sg-31503C69 x86_64
 TAG instance i-2AC4CC35 aws:autoscaling:groupName CoreOSClusterStack-CoreOsGroup-G7Y7YVWI4DOPG
 RESERVATION r-A24611A2 408396244283 CoreOSClusterStack-CoreOsSecurityGroup-D3WCUH0SKHYVC
 INSTANCE i-442C4692 emi-DAB316FD euca-10-104-6-235.bigboi.acme.eucalyptus-systems.com euca-172-18-223-227.bigboi.internal running account1-user01 0 c1.medium 2014-08-28T22:15:48.056Z ViciousLiesAndDangerousRumors monitoring-enabled 10.104.6.235 172.18.223.227 instance-store hvm 1281a747-69a7-4f26-8fe2-2dea6b8b858d_ViciousLiesAndDangerousR_1 sg-31503C69 x86_64
 TAG instance i-442C4692 aws:autoscaling:groupName CoreOSClusterStack-CoreOsGroup-G7Y7YVWI4DOPG
 RESERVATION r-089053BE 408396244283 CoreOSClusterStack-CoreOsSecurityGroup-D3WCUH0SKHYVC
 INSTANCE i-E6FB62D0 emi-DAB316FD euca-10-104-6-232.bigboi.acme.eucalyptus-systems.com euca-172-18-223-222.bigboi.internal running account1-user01 0 c1.medium 2014-08-28T22:15:38.146Z ViciousLiesAndDangerousRumors monitoring-enabled 10.104.6.232 172.18.223.222 instance-store hvm c0dc6cca-5fa3-4614-a4ec-8a902bf6ff66_ViciousLiesAndDangerousR_1 sg-31503C69 x86_64
 TAG instance i-E6FB62D0 aws:autoscaling:groupName CoreOSClusterStack-CoreOsGroup-G7Y7YVWI4DOPG
# ssh -i account1-user01/account1-user01.priv core@euca-10-104-6-232.bigboi.acme.eucalyptus-systems.com
 Last login: Thu Aug 28 15:32:34 2014 from 10.104.10.55
 CoreOS (beta)
 core@euca-172-18-223-222 ~ $ fleetctl list-machines -full=true
 MACHINE IP METADATA
 6f4e3de463490a7644e3d7c80d826770 172.18.223.227 -
 929c1f121860c63b506c0b951c19de7b 172.18.223.222 -
 a08155346fb55f9b53b154d6447af0fa 172.18.223.211 -
 core@euca-172-18-223-222 ~ $

The cluster status can also be checked by going to the discovery token URL that was placed in the Cloudformation template.

CoreOS etcd discovery cluster listing
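The discovery URL returns a JSON document describing the peers that have registered with the token, so the membership can also be inspected from the command line.  A rough sketch of my own, assuming the etcd v2 discovery response layout:

import json
import urllib2

DISCOVERY_URL = 'https://discovery.etcd.io/7b67f765e2f264cf65b850a849a7da7e'

doc = json.loads(urllib2.urlopen(DISCOVERY_URL).read())
for peer in doc['node']['nodes']:
    print(peer['value'])  # each entry describes one registered etcd peer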

Conclusion

Just as on AWS, CloudFormation can be used to deploy a CoreOS cluster on Eucalyptus.  Users will be able to test out different use cases, such as Cluster-Level Container Development with fleet, or get more familiar with CoreOS by going through the CoreOS documentation.  As always, feel free to ask any questions.  Feedback is always welcome.

Enjoy!
