After getting some free time to put together a high-level diagram of the Varnish/Walrus setup we are using at Eucalyptus Systems, I decided to use it as an opportunity to write my first technical blog post.
Here at Eucalyptus Systems, we are really big on “drinking our own champagne”. We are in the process of migrating our everyday enterprise services to run on Eucalyptus. My good friend and co-worker Graziano got our team started down this path with his blog posts on Drinking Champagne and Planet Eucalyptus.
We needed to migrate storage of various tar-gzipped files from a virtual machine to an infrastructure running on Eucalyptus. Since Eucalyptus Walrus is compatible with Amazon’s S3, it is a great service for storing tons of static content and data. Walrus – just as S3 – also provides ACLs for the data objects it stores.
With all the coolness of storing data in Walrus, we needed to figure out a way to lessen the network load on Walrus caused by multiple HTTP GET requests. This is where Varnish comes to the rescue…
Above is the architectural diagram of how varnishd can be set up as a caching service for objects stored in Walrus buckets. Varnish is primarily developed as an HTTP accelerator. In this setup, we use Varnish to accomplish the following:
- caching bucket objects requested through HTTP
- custom URL naming for buckets
- granular access control to Walrus buckets
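To make those three roles concrete, here is a minimal VCL sketch of what such a configuration might look like. The backend address, port, and bucket name (`walrus.example.com`, `8773`, `bucket-name`) are placeholders, and the URL mapping is only an illustration – the actual configuration we use lives in the deployment scripts linked later in this post.

```vcl
# Hypothetical VCL sketch for fronting Walrus (Varnish 2.x syntax);
# host, port, and bucket names are assumptions, not our real values.
backend walrus {
    .host = "walrus.example.com";
    .port = "8773";
}

sub vcl_recv {
    # Custom URL naming: map a friendly path onto the real bucket path.
    if (req.url ~ "^/downloads/") {
        set req.url = regsub(req.url, "^/downloads/", "/services/Walrus/bucket-name/");
    }
    # Granular control: only allow read access through the cache.
    if (req.request != "GET" && req.request != "HEAD") {
        error 403 "Forbidden";
    }
    set req.backend = walrus;
}

sub vcl_fetch {
    # Cache public bucket objects for an hour.
    set beresp.ttl = 1h;
    return (deliver);
}
```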
Bucket Objects in Walrus
We upload the bucket objects using a patched version of s3cmd. To allow the objects to be accessed by the Varnish caching instance, we use s3cmd as follows:
- Create the bucket:
s3cmd mb s3://bucket-name
- Upload the object and make sure it is publicly accessible:
s3cmd put --acl-public --guess-mime-type object s3://bucket-name/object
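Once the object is public, any HTTP client can fetch it straight from Walrus, which is exactly what Varnish does on a cache miss. As a sanity check, you can build the object’s URL and try it with curl. This is a sketch: the Walrus host and the bucket/object names are assumptions; the Walrus REST path is `/services/Walrus/<bucket>/<object>` on port 8773.

```shell
# Assumed Walrus endpoint; replace with your cloud's front end.
WALRUS_HOST="walrus.example.com:8773"
BUCKET="bucket-name"
OBJECT="object"

# Public objects are served under /services/Walrus/<bucket>/<object>.
URL="http://${WALRUS_HOST}/services/Walrus/${BUCKET}/${OBJECT}"
echo "$URL"

# To verify public access against a live cloud, uncomment:
# curl -I "$URL"
```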
And that’s it. All the other configuration is done on the Varnish end. Now, on to Varnish…
The instance that is running Varnish runs Debian 6.0. It processes any request for specific bucket objects that comes to the instance, and pulls each object from the bucket where it is located. The instance is customized to take in scripts through the user-data/user-data-file option that can be used with the euca-run-instances command. The rc.local script that enables this option in the image can be found here. The script we use for this Varnish setup – along with other deployment scripts – can be found here on projects.eucalyptus.com.
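The idea behind that rc.local hook can be sketched as a small shell function: on boot, the instance fetches its user data (normally from the metadata service at http://169.254.169.254/latest/user-data) and executes it if it looks like a script. This is a simplified, hypothetical version – the real script linked above handles more cases.

```shell
# Hypothetical sketch of the boot-time user-data hook.
# The real instance would first fetch the payload with something like:
#   curl -s -o /tmp/user-data http://169.254.169.254/latest/user-data
run_user_data() {
    f="$1"
    # Only execute the payload if it starts with a shebang,
    # i.e. the user passed a script rather than arbitrary data.
    if head -c 2 "$f" 2>/dev/null | grep -q '#!'; then
        sh "$f"
    fi
}
```

Baking this hook into the image is what lets the same Varnish setup script be reused for every new instance.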
That’s it! Since it’s scripted, we can bring up another instance quickly without a problem. :-) We also use Walrus to store our configurations. For extra security, we don’t make those objects public; we use s3cmd to download them, then move the configuration files to the correct location in the instance.
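As a sketch, pulling a private configuration object out of Walrus and installing it might look like the following. The bucket, object, and destination names are assumptions; since the objects are not public, s3cmd authenticates with the credentials configured in ~/.s3cfg.

```shell
# Hypothetical helper: download a private object with authenticated
# s3cmd, then move it into place. Names below are placeholders.
install_config() {
    bucket="$1"; object="$2"; dest="$3"
    tmp="/tmp/${object}"
    s3cmd get "s3://${bucket}/${object}" "$tmp" && mv "$tmp" "$dest"
}

# Example (placeholder names):
# install_config config-bucket default.vcl /etc/varnish/default.vcl
```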
We hope this setup inspires other ideas that can be implemented with Eucalyptus. Please feel free to give any feedback; we are always open to improving things here at Eucalyptus. Enjoy, and be on the lookout for a follow-up post discussing how to add load balancing to this setup.