Cloud computing is the use of computing resources (hardware and software) that are delivered as a service over a network (typically the Internet). - Wikipedia
According to Wikipedia, there are currently a few popular service models:
1. Infrastructure as a service (IaaS)
2. Platform as a service (PaaS)
3. Software as a service (SaaS)
So, I have a Eucalyptus cloud, which is great; it serves as an AWS-like IaaS platform.
A few weeks back, I was doing some testing with the guys from AppScale to get a Eucalyptus Machine Image (EMI) to run on Eucalyptus. The image that was provided to me was an EBS-backed Amazon Machine Image (AMI), using a published EC2 Lucid Ubuntu Cloud image. This blog entry describes the procedure to convert an EBS-backed AMI to a Walrus-backed EMI. The goal here is to demonstrate how easy it is to use Ubuntu Cloud images to set up AppScale on both AWS and Eucalyptus as a hybrid cloud use case. There are many other hybrid cloud use cases that can be built on this setup, but this blog entry will focus on the migration of AMI images to EMI images.
*NOTE* This entry assumes that a user is experienced with both Amazon Web Services and Eucalyptus. For additional information, please refer to the following resources:
- Amazon Elastic Compute Cloud User Guide
- AWS Identity and Access Management – Using IAM Guide
- Amazon Elastic Compute Cloud CLI Guide
- Eucalyptus 3.2 Administrator’s Guide
- Eucalyptus 3.2 User Guide
Before getting started, the following is needed:
- For Amazon Web Services
- For Eucalyptus
*NOTE* Make sure you understand the IAM policies on both AWS and Eucalyptus. These are key to ensuring that your user can perform all the steps covered in this topic.
Work in AWS…
After setting up the command-line tools for AWS EC2, and adding in the necessary EC2 and S3 IAM policies, everything is in place to get started with working with the AWS instances and images. *NOTE* To get help with setting up the IAM policies, check out the AWS Policy Generator. To make sure things look good, I tested out my EC2 access by running ec2-describe-availability-zones:
$ ec2-describe-availability-zones
AVAILABILITYZONE  us-east-1a  available  us-east-1
AVAILABILITYZONE  us-east-1b  available  us-east-1
AVAILABILITYZONE  us-east-1c  available  us-east-1
AVAILABILITYZONE  us-east-1d  available  us-east-1
After that, I set up a keypair and SSH access for any instance that is launched within the default security group:
$ ec2-create-keypair hspencer-appscale --region ap-northeast-1 > hspencer-appscale.pem
$ ec2-authorize -P tcp -p 22 -s 0.0.0.0/0 default --region ap-northeast-1
With everything looking good, I went ahead and checked out the AMI that I was asked to test. Below is the AMI that was given to me:
$ ec2-describe-images ami-2e4bf22f --region ap-northeast-1
IMAGE  ami-2e4bf22f  839953741869/appscale-lite-1.6.3-testing  839953741869  available  public  x86_64  machine  aki-d409a2d5  ebs  paravirtual  xen
BLOCKDEVICEMAPPING  EBS  /dev/sda1  snap-7953a059  8  true  standard
BLOCKDEVICEMAPPING  EPHEMERAL  /dev/sdb  ephemeral0
As you can see, the AMI given to me is an EBS-backed image, and it is in a different region (ap-northeast-1). I could have done all my work in the ap-northeast-1 region, but I wanted to test out region-to-region migration of images on AWS S3 using ec2-migrate-manifest. The keypair and security group rule created above already give me SSH access to any instance launched within the default security group.
Now that I have my image, keypair and security group access, I am ready to launch an instance, so I can use the ec2-bundle-vol command to create an image of the instance. To launch the instance, I ran the following:
$ ec2-run-instances -k hspencer-appscale ami-2e4bf22f --region ap-northeast-1
After the instance is up and running, I scp’d my EC2_PRIVATE_KEY and EC2_CERT to the instance using the keypair created (hspencer-appscale.pem). The instance already had the latest version of ec2-api-tools and ec2-ami-tools as part of the installation of AppScale. Similar to the instructions provided by AWS for creating an instance-store backed AMI from an existing AMI, I used ec2-bundle-vol to bundle a new image and used /mnt/ (which is ephemeral storage) to store the manifest information.
root@ip-10-156-123-126:~# ec2-bundle-vol -u 9xxxxxxx3 -k pk-XXXXXXXXXXXXXXXX.pem -c cert-XXXXXXXXXXXXXXXXX.pem -d /mnt/ -e /mnt/
Please specify a value for arch [x86_64]: x86_64
Copying / into the image file /mnt/image...
Excluding:
  /dev
  /sys
  /sys/kernel/security
  /sys/kernel/debug
  /proc
  /dev/pts
  /dev
  /media
  /mnt
  /proc
  /sys
  /mnt/
  /mnt/image
  /mnt/img-mnt
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00990555 s, 106 MB/s
mke2fs 1.41.11 (14-Mar-2010)
Bundling image file...
Splitting /mnt/image.tar.gz.enc...
Created image.part.000
……………..
Next, I need to update the manifest to use us-west-1 as the region to store the image, instead of ap-northeast-1. To do this, I used ec2-migrate-manifest. *NOTE* This tool can only be used in the following regions: EU, US, us-gov-west-1, us-west-1, us-west-2, ap-southeast-1, ap-southeast-2, ap-northeast-1, sa-east-1.
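Since ec2-migrate-manifest only works with that fixed set of regions, a small guard in a wrapper script can fail fast before any bundling work is wasted. This is a hypothetical helper of my own, not part of the AMI tools:

```shell
# Regions that ec2-migrate-manifest supports (from the note above).
SUPPORTED_REGIONS="EU US us-gov-west-1 us-west-1 us-west-2 ap-southeast-1 ap-southeast-2 ap-northeast-1 sa-east-1"

# region_supported REGION: return 0 if REGION is in the supported list.
region_supported() {
  for r in $SUPPORTED_REGIONS; do
    [ "$r" = "$1" ] && return 0
  done
  return 1
}

# Example guard before calling the real tool:
if region_supported us-west-1; then
  echo "us-west-1 is migratable"
fi
```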
root@ip-10-156-123-126:~# ec2-migrate-manifest -m /mnt/image.manifest.xml -c cert-XXXXXXXXX.pem -k pk-XXXXXXXXXXXX.pem -a XXXXXXXXXX -s XXXXXXXXX --region us-west-1
Backing up manifest...
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
Successfully migrated /mnt/image.manifest.xml
It is now suitable for use in us-west-1.
Time to upload the bundle to S3 using ec2-upload-bundle:
root@ip-10-156-123-126:~# ec2-upload-bundle -b appscale-lite-1.6.3-testing -m /mnt/image.manifest.xml -a XXXXXXXXXX -s XXXXXXXXXX --location us-west-1
You are bundling in one region, but uploading to another. If the kernel or ramdisk associated with this AMI are not in the target region, AMI registration will fail. You can use the ec2-migrate-manifest tool to update your manifest file with a kernel and ramdisk that exist in the target region. Are you sure you want to continue? [y/N]y
Creating bucket...
Uploading bundled image parts to the S3 bucket appscale-lite-1.6.3-testing ...
Uploaded image.part.000
Uploaded image.part.001
Uploaded image.part.002
Uploaded image.part.003
………….
After the image has been uploaded successfully, all that is left to do is register the image.
root@ip-10-156-123-126:~# export JAVA_HOME=/usr
root@ip-10-156-123-126:~# ec2-register -K pk-XXXXXXXXXXXX.pem -C cert-XXXXXXXXXX.pem --region us-west-1 appscale-lite-1.6.3-testing/image.manifest.xml --name appscale1.6.3-testing
IMAGE  ami-705d7c35

$ ec2-describe-images ami-705d7c35 --region us-west-1
IMAGE  ami-705d7c35  986451091583/appscale1.6.3-testing  986451091583  available  private  x86_64  machine  aki-9ba0f1de  instance-store  paravirtual  xen
BLOCKDEVICEMAPPING  EPHEMERAL  /dev/sdb  ephemeral0
Work in Eucalyptus…
Now that we have the image registered, we can use ec2-download-bundle and ec2-unbundle to get the machine image to an instance running on Eucalyptus, so that we can bundle, upload and register the image to Eucalyptus. On the Eucalyptus side, I used the published Ubuntu Lucid Cloud image files:
- lucid-server-cloudimg-amd64-loader (ramdisk)
- lucid-server-cloudimg-amd64-vmlinuz-virtual (kernel)
- lucid-server-cloudimg-amd64.img (root image)
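The bundle/upload/register cycle for the kernel and ramdisk isn't shown in my session, so here is a dry-run sketch of what it looks like with euca2ools. The bucket names are my own placeholders; set DRY_RUN= (empty) to actually execute, with Eucalyptus credentials sourced:

```shell
# Dry-run sketch: echo each euca2ools command instead of running it.
DRY_RUN=echo

# Kernel and ramdisk are flagged at bundle time so Eucalyptus registers
# them as eki-/eri- images rather than machine images.
$DRY_RUN euca-bundle-image -i lucid-server-cloudimg-amd64-vmlinuz-virtual --kernel true
$DRY_RUN euca-upload-bundle -b lucid-kernel -m /tmp/lucid-server-cloudimg-amd64-vmlinuz-virtual.manifest.xml
$DRY_RUN euca-register lucid-kernel/lucid-server-cloudimg-amd64-vmlinuz-virtual.manifest.xml

$DRY_RUN euca-bundle-image -i lucid-server-cloudimg-amd64-loader --ramdisk true
$DRY_RUN euca-upload-bundle -b lucid-ramdisk -m /tmp/lucid-server-cloudimg-amd64-loader.manifest.xml
$DRY_RUN euca-register lucid-ramdisk/lucid-server-cloudimg-amd64-loader.manifest.xml
```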
After bundling, uploading and registering those images, I created a keypair and authorized SSH access for any instance launched within the default security group:
euca-add-keypair hspencer-euca > hspencer-euca.pem
euca-authorize -P tcp -p 22 -s 0.0.0.0/0 default
Now, I run the EMI for the Lucid image that was registered:
euca-run-instances -k hspencer-euca --user-data-file cloud-init.config -t m1.large emi-29433329
I used the m1.large VM type so that I could use the ephemeral storage space to hold the image that I would pull from AWS.
Once the instance is running, I scp’d my EC2_PRIVATE_KEY and EC2_CERT to the instance using the keypair created (hspencer-euca.pem). After installing the ec2-ami-tools on the instance, I used ec2-download-bundle to download the bundle to /media/ephemeral0, and ec2-unbundle the image:
# ec2-download-bundle -b appscale-lite-1.6.3-testing -d /media/ephemeral0/ -a XXXXXXXXXXX -s XXXXXXXXXXXX -k pk-XXXXXXXXX.pem --url http://s3-us-west-1.amazonaws.com
# ec2-unbundle -m /media/ephemeral0/image.manifest.xml -s /media/ephemeral0/ -d /media/ephemeral0/ -k pk-XXXXXXXXXX.pem
Now that I have the root image from AWS, I just need to bundle, upload and register it to Eucalyptus. To do so, I scp'd my Eucalyptus user credentials to the instance, ssh'ed in, and sourced the credentials.
Since I had already bundled the kernel and ramdisk for the Ubuntu Cloud Lucid image, I just needed to bundle, upload and register the root image I unbundled from AWS. To do so, I did the following:
euca-bundle-image -i image
euca-upload-bundle -b appscale-1.6.3-x86_64 -m /tmp/image.manifest.xml
euca-register -a x86_64 appscale-1.6.3-x86_64/image.manifest.xml
Now the image is ready to be launched on Eucalyptus.
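Putting the Eucalyptus side together, the whole pull-and-republish flow condenses to five commands. Here it is as a dry-run recap, with the credential options (-a/-s/-k) omitted for brevity; set DRY_RUN= (empty) to execute on the instance:

```shell
# Dry-run recap of the AMI-to-EMI flow above; each command is echoed
# rather than executed. Credential flags are omitted here.
DRY_RUN=echo
$DRY_RUN ec2-download-bundle -b appscale-lite-1.6.3-testing -d /media/ephemeral0/ --url http://s3-us-west-1.amazonaws.com
$DRY_RUN ec2-unbundle -m /media/ephemeral0/image.manifest.xml -s /media/ephemeral0/ -d /media/ephemeral0/
$DRY_RUN euca-bundle-image -i /media/ephemeral0/image
$DRY_RUN euca-upload-bundle -b appscale-1.6.3-x86_64 -m /tmp/image.manifest.xml
$DRY_RUN euca-register -a x86_64 appscale-1.6.3-x86_64/image.manifest.xml
```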
As demonstrated above, the AWS fidelity that Eucalyptus provides makes it possible to set up hybrid cloud environments spanning Eucalyptus and AWS, which applications like AppScale can take advantage of.
Other examples of AMI to EMI conversions can be found here:
Introduction to EDBP
I had to come up with an acronym. This is particularly important when you're in the cloud business; I think it comes somewhere before the business plan and just after the beer.
Did you know that EUCALYPTUS itself is an acronym for Elastic Utility Computing Architecture for Linking Your Programs…
Being a system administrator is the easiest job in the building when the system is working; no one questions your presence or existence. Your tasks are highly under-appreciated in times of peace, yet you do not mind such vanity, since you'd rather be reading blogs and watching YouTube in serenity. Every once in a while, an idiot cracks an ancient joke: "hey, aren't you supposed to be working?"
After getting some free time to put together a high-level diagram of the Varnish/Walrus setup we are using at Eucalyptus Systems, I decided to use it as an opportunity to write my first technical blog.
Here at Eucalyptus Systems, we are really big on “drinking our own champagne”. We are in the process of migrating everyday enterprise services to Eucalyptus. My good friend and co-worker Graziano got our team started down this path with his blog posts on Drinking Champagne and Planet Eucalyptus.
We needed to migrate storage of various tar-gzipped files from a virtual machine to an infrastructure running on Eucalyptus. Since Eucalyptus Walrus is compatible with Amazon’s S3, it serves as a great service for storing tons of static content and data. Walrus – just like S3 – also supports ACLs on stored data objects.
With all the coolness of storing data in Walrus, we needed to figure out a way to lessen the network load on Walrus caused by multiple HTTP GET requests. This is where Varnish comes to the rescue…
Above is the architectural diagram of how varnishd can be set up as a caching service for objects stored in Walrus buckets. Varnish is primarily developed as an HTTP accelerator. In this setup, we use Varnish to accomplish the following:
- caching bucket objects requested through HTTP
- custom URL naming for buckets
- granular control to Walrus buckets
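To make those three goals concrete, here is a minimal sketch of what the Varnish configuration for this pattern can look like, written out as a heredoc so it is easy to drop into a deployment script. The endpoint and bucket name are placeholders, not our production config; the /services/Walrus/<bucket> path and port 8773 are the standard Eucalyptus Walrus REST conventions:

```shell
# Write a minimal default.vcl that fronts a single Walrus bucket
# (sketch only; host and bucket names are placeholders).
cat > default.vcl <<'EOF'
backend walrus {
    .host = "walrus.example.com";  /* Walrus front end (placeholder) */
    .port = "8773";                /* default Walrus port */
}

sub vcl_recv {
    set req.backend = walrus;
    /* Map friendly URLs onto the bucket's Walrus path (placeholder bucket) */
    set req.url = regsub(req.url, "^/", "/services/Walrus/bucket-name/");
}
EOF
```

With this in place, a GET for /logo.png on the cache becomes a backend fetch of /services/Walrus/bucket-name/logo.png, and Varnish serves repeat hits from memory.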
Bucket Objects in Walrus
We upload the bucket objects using a patched version of s3cmd. To allow the objects to be accessed by the varnish caching instance, we use s3cmd as follows:
- Create the bucket:
s3cmd mb s3://bucket-name
- Upload the object and make sure it’s publicly accessible:
s3cmd put --acl-public --guess-mime-type object s3://bucket-name/object
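The two steps above can be sanity-checked end to end: once an object is --acl-public, it is fetchable anonymously straight from its Walrus URL, which is exactly what the Varnish backend will do. A dry-run sketch (the endpoint is a placeholder; set DRY_RUN= to execute for real):

```shell
# Dry run of the publish-and-verify cycle; each command is echoed.
DRY_RUN=echo
$DRY_RUN s3cmd mb s3://bucket-name
$DRY_RUN s3cmd put --acl-public --guess-mime-type object s3://bucket-name/object
# Anonymous fetch straight from Walrus (placeholder endpoint):
$DRY_RUN curl -I http://walrus.example.com:8773/services/Walrus/bucket-name/object
```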
And that’s it. All the other configuration is done on the Varnish end. Now, on to Varnish…
The instance that runs Varnish is running Debian 6.0. We process any request for specific bucket objects that comes to the instance, and pull from the bucket where the object is located. The instance is customized to take in scripts through the user-data/user-data-file option that can be used with the euca-run-instances command. The rc.local script that enables this option in the image can be found here. The script we use for this Varnish setup – along with other deployment scripts – can be found here on projects.eucalyptus.com.
That’s it! We can bring up another instance quickly without a problem – since it’s scripted. :-) We also use Walrus to store our configurations. For extra security, we don’t make those objects public; we use s3cmd to download the objects, then move the configuration files to the correct location in the instance.
We hope this setup inspires other ideas that can be implemented with Eucalyptus. Please feel free to give any feedback; we are always open to improving things here at Eucalyptus. Enjoy, and be on the lookout for a follow-up post discussing how to add load balancing to this setup.