Install Eucalyptus 4.0 Using Motherbrain and Chef

hspencer77:

Great blog discussing how to use Motherbrain, Chef and Eucalyptus.

Originally posted on Testing Clouds at 128bpm:

Introduction

Installing distributed systems can be a tedious and time-consuming process. Luckily, there are many solutions for distributed configuration management available to the open source community. Over the past few months, I have been working on the Eucalyptus cookbook, which allows for standardized deployments of Eucalyptus using Chef. This functionality has already been implemented in MicroQA using individual calls to Knife (the Chef command line interface) for each machine in the deployment. Orchestration of the deployment is rather static, and thus only three topologies have been implemented as part of the deployment tab.

Last month, Riot Games released Motherbrain, their orchestration framework that allows flexible, repeatable, and scalable deployment of multi-tiered applications. Their approach to the deployment roll-out problem is simple and understandable. You configure manifests that define how your application components are split up, then define the order in which they should be deployed.

For…

View original 508 more words

12 Steps To EBS-Backed EMI Bliss on Eucalyptus

In previous posts, I shared how to use Ubuntu Cloud Images and eustore with Eucalyptus and AWS.  This blog entry will focus on how to use these assets to create EBS-backed EMIs in 12 steps.  These steps can also be used on AWS; instead of creating an instance store-backed AMI first, Ubuntu already provides AMIs that can be used as the building-block instance on AWS.  Let's get started.

Prerequisites

On both Eucalyptus and AWS, the user must have the appropriate IAM policy in order to perform these steps.  At a minimum, the policy should allow the following EC2 actions (a sample policy sketch follows the list):

  • RunInstances
  • AttachVolume
  • AuthorizeSecurityGroupEgress
  • AuthorizeSecurityGroupIngress
  • CreateKeyPair
  • CreateSnapshot
  • CreateVolume
  • DescribeImages
  • DescribeInstances
  • DescribeInstanceStatus
  • DescribeSnapshots
  • DetachVolume
  • RegisterImage
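
For reference, a minimal IAM policy granting these actions might look like the sketch below (an assumption of how it could be scoped; tighten the Resource and add conditions as appropriate for your cloud):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances",
        "ec2:AttachVolume",
        "ec2:AuthorizeSecurityGroupEgress",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateKeyPair",
        "ec2:CreateSnapshot",
        "ec2:CreateVolume",
        "ec2:DescribeImages",
        "ec2:DescribeInstances",
        "ec2:DescribeInstanceStatus",
        "ec2:DescribeSnapshots",
        "ec2:DetachVolume",
        "ec2:RegisterImage"
      ],
      "Resource": "*"
    }
  ]
}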

In addition, the user needs an access key ID and secret key.  For more information, check out the following resources:

This entry also assumes the Eucalyptus euca2ools are installed on the client machine.
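
For example, the environment variables referenced in the steps below are typically exported before running any euca2ools commands (the endpoint shown is an assumption; substitute your own cloud's values):

$ export EC2_ACCESS_KEY=<your access key ID>
$ export EC2_SECRET_KEY=<your secret key>
$ export EC2_URL=http://<front-end-host>:8773/services/Eucalyptus   # or the EC2 endpoint of your AWS region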

The 12 Steps

Although the Ubuntu Cloud Image used in this entry is Ubuntu Precise (12.04) LTS, any of the maintained Ubuntu Cloud Images can be used.

  1. Use wget to download tar-gzipped precise-server-cloudimg:
    $ wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64.tar.gz
  2. After setting the EC2_ACCESS_KEY, EC2_SECRET_KEY, and EC2_URL environment variables, use eustore-install-image to create an instance store-backed EMI:
    $ eustore-install-image -t precise-server-cloudimg-amd64.tar.gz \
    -b ubuntu-latest-precise-x86_64 --hypervisor universal \
    -s "Ubuntu Cloud Image - Precise Pangolin - 12.04 LTS"
  3. Create a keypair using euca-create-keypair, then use euca-run-instances to launch an instance from the EMI returned from eustore-install-image. For example:
    $ euca-run-instances -t m1.medium \
    -k account1-user01 emi-5C8C3909
  4. Use euca-create-volume to create a volume sized to how big you want the root filesystem to be.  The availability zone (-z option) depends on whether you are using Eucalyptus or AWS:
    $ euca-create-volume -s 6 \
    -z LayinDaSmackDown
  5. Using euca-attach-volume, attach the resulting volume to the running instance. For example:
    $ euca-attach-volume -d /dev/vdd \
    -i i-839E3FB0 vol-B5863B3B
  6. Use euca-authorize to open SSH access to the instance, SSH into the instance, then use wget to download the Ubuntu Precise Cloud Image (qcow2 format):
    $ ssh -i account1-user01.priv ubuntu@euca-10-104-7-10.eucalyptus.euca-hasp.eucalyptus-systems.com
    # sudo -s
    # wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
  7. Install qemu-utils:
    # apt-get install -y qemu-utils
  8. Use qemu-img to convert image from qcow2 to raw:
    # qemu-img convert \
    -O raw precise-server-cloudimg-amd64-disk1.img precise-server-cloudimg-amd64-disk1-raw.img
  9. Use dd to copy the raw image to the block device where the volume is attached (dmesg makes it easy to identify that device):
    # dmesg | tail
    [ 7026.943212] virtio-pci 0000:00:05.0: using default PCI settings
    [ 7026.943249] pci 0000:00:07.0: no hotplug settings from platform
    [ 7026.943251] pci 0000:00:07.0: using default PCI settings
    [ 7026.945964] virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
    [ 7026.955143] virtio-pci 0000:00:07.0: PCI INT A -> Link[LNKC] -> GSI 10 (level, high) -> IRQ 10
    [ 7026.955180] virtio-pci 0000:00:07.0: setting latency timer to 64
    [ 7026.955429] virtio-pci 0000:00:07.0: irq 45 for MSI/MSI-X
    [ 7026.955456] virtio-pci 0000:00:07.0: irq 46 for MSI/MSI-X
    [ 7026.986990] vdb: unknown partition table
    [10447.093426] virtio-pci 0000:00:07.0: PCI INT A disabled
    # dd if=/mnt/precise-server-cloudimg-amd64-disk1-raw.img of=/dev/vdb bs=1M
  10. Log out of the instance, and use euca-detach-volume to detach the volume:
    $ euca-detach-volume vol-B5863B3B
  11. Use euca-create-snapshot to create a snapshot of the volume:
    $ euca-create-snapshot vol-B5863B3B
  12. Use euca-register to register the resulting snapshot to create the EBS-backed EMI:
    $ euca-register --name ebs-precise-x86_64-sda \
    --snapshot snap-EFDB40A1 --root-device-name /dev/sda

That's it!  You have successfully created an EBS-backed EMI/AMI.  As mentioned earlier, these steps work on AWS just as well (simply skip steps 1 and 2, and use one of the Ubuntu Cloud Images in the AWS region of your choice).  Enjoy!
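
As a quick sanity check (the image ID below is an illustrative placeholder), you can confirm the new image is EBS-backed and launch a test instance from it:

$ euca-describe-images emi-XXXXXXXX   # root device type should be reported as "ebs"
$ euca-run-instances -t m1.medium -k account1-user01 emi-XXXXXXXX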

How to add second interface to Eucalyptus instances

hspencer77:

Great blog on the power of using nc-hooks in Eucalyptus

Originally posted on teemuinclouds:

and play around with eucalyptus nc-hooks while we are at it ;)

Reasoning

Do you really really need and want to do custom modifications to your working environment … think twice!

Now that you have thoroughly thought through the maintenance burden and other factors, let's do it :)

Use Case

A user needs to add hundreds or thousands of IP addresses to instances. Currently in Eucalyptus you can only add one elastic IP to an instance. In AWS VPC you can add up to 240 IP addresses to one large instance.

Modifying Stylesheet

What is a stylesheet? An XML stylesheet is used to generate the XML file that is used to start up the virtual machine. In Eucalyptus it is located on the Node Controller at /etc/eucalyptus/libvirt.xsl. The input file is generated by the NC.

You can manually test your modifications using xsltproc like this:

Copy the style sheet and use some existing instances data…
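
As a rough sketch (the instance XML path is an assumption and will vary by Eucalyptus version and node setup), testing a modified stylesheet by hand could look like this:

# cp /etc/eucalyptus/libvirt.xsl /tmp/libvirt-test.xsl
# xsltproc /tmp/libvirt-test.xsl /var/lib/eucalyptus/instances/work/<user>/<instance-id>/instance.xml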

View original 399 more words

Using Aminator with Eucalyptus

hspencer77:

More Netflix OSS goodness on Eucalyptus…Enjoy!

Originally posted on Testing Clouds at 128bpm:

Introduction to Aminator

The Netflix Cloud Platform has shown how a large-scale system can be deployed in a public cloud and maintain an extreme level of performance and reliability. As Adrian Cockcroft has said in the past, Netflix focused on having functional and scalable code rather than worrying immediately about how to make it portable.  At Eucalyptus we have been working over the past year on making sure that as many as possible of the NetflixOSS projects that interact directly with the cloud can be used on Eucalyptus. This level of portability means that anyone with even a single Linux box can use their system as a test bed for deploying NetflixOSS more broadly on a public cloud. So far at Eucalyptus we have working versions of Asgard, Simian Army, Aminator, and Edda. These tools are cornerstones for app deployment and monitoring with the NetflixOSS stack. In this post I will…

View original 677 more words

Extracting Info From Euca’s Logs

hspencer77:

Great example of how to get performance information from Eucalyptus components.

Originally posted on Testing Clouds at 128bpm:

Introduction

Throughout my tenure as a Quality Engineer, I have had a love/hate relationship with logs. On one hand, they can be my proof that a problem is occurring and possibly the key to tracking down a fix. On the other hand, they can be an endless stream of seemingly unintelligible information. In debugging a distributed system, such as Eucalyptus, logging can be your only hope in tracing down issues with operations that require the coordination of many components.

Logs are generally presented to users by applications as flat text files that rotate their contents over time in order to bound the amount of space they will take up on the filesystem. Gathering information from these files often involves terminal windows, tail, less, and timestamp correlation. The process of manually aggregating, analyzing and correlating logs can be extremely taxing on the eyes and brain. Having a centralized logging mechanism is…

View original 508 more words

Getting Started with EucaLobo

hspencer77:

Nice tool for AWS/Eucalyptus hybrid utilization.

Originally posted on Testing Clouds at 128bpm:

Initial Setup

In my previous post, I described the story behind EucaLobo, a graphical interface for managing workloads on AWS and Eucalyptus clouds through a <cliche>single pane of glass</cliche>. The tool is built using JavaScript and the XUL framework, allowing it to be used on Linux, Windows, and Mac for the following APIs:

  • EC2
  • EBS
  • S3
  • IAM
  • CloudWatch
  • AutoScaling
  • Elastic Load Balancing

To get started, download the binary for your platform:

Once installation is complete and EucaLobo starts for the first time you will be prompted to enter an endpoint. My esteemed colleague Tony Beckham has created a great intro video showing how to create and edit credentials and endpoints. The default values have been set to the Eucalyptus Community Cloud, a free and easy way to get started using Eucalyptus and clouds in general. This is a great resource for users who want to get a…

View original 653 more words

Bind DNS + OpenLDAP MDB == Dynamic Domain and Fully Delegated Sub-Domain Configuration of DNS

This blog post was driven by the need to make it easier to test Eucalyptus DNS in a lab environment.  The goal was to have a scriptable way to add/delete fully delegated sub-domains without having to reload/restart DNS when Eucalyptus clouds were being deployed/destroyed.  This was tested on a CentOS 6 instance running in a Eucalyptus 3.3 HA Cloud.

Prerequisites

This entry will not cover setting up Eucalyptus HA, creating a Eucalyptus user, using eustore to register the image, and/or opening up ports in security groups.  It's assumed the reader understands these concepts.  The focus will be on configuring and deploying Bind9 and OpenLDAP.

In addition to using a CentOS 6.4 image, the following is needed:

  • ports open for DNS (tcp and udp 53)
  • port open for OpenLDAP (tcp 389)
  • port open for SSH (tcp 22)

Now that the prereqs have been covered, let's jump into setting up the environment.

Base Software Installation

A series of packages is needed to install OpenLDAP (since we are building it from source) and Bind9.  Once the instance is launched and running, SSH into it and run the following commands:

# sudo yum -y upgrade
# sudo yum install -y git cyrus-sasl gcc glibc-devel libtool-ltdl \
db4-devel openssl-devel unixODBC-devel libtool-ltdl-devel libtool \
cyrus-sasl-devel cyrus-sasl-gssapi cyrus-sasl-lib cyrus-sasl-md5 \
make bind-dyndb-ldap

All the packages except bind-dyndb-ldap are needed to build OpenLDAP from source.  The reason we are building OpenLDAP from source is to take advantage of its powerful MDB backend.  Check out my previous blogs on this topic from this listing.

The key package for Bind9 DNS to communicate with OpenLDAP as a backend is bind-dyndb-ldap.  This plug-in is used by the FreeIPA Identity/Policy Management application to leverage 389 Directory Server (an LDAP directory server) for storing domain name information.

OpenLDAP Installation and Configuration

Since all the base packages are installed, we can now grab and install the latest source version of OpenLDAP.  While still logged into the instance, run the following command:

# git clone git://git.openldap.org/openldap.git ~/openldap

Since we are working with an instance based on the CentOS 6 image on eustore, we will use the ephemeral store (which is mounted under /media/ephemeral0) for the location of our OpenLDAP installation.  Create a directory for installing OpenLDAP on the ephemeral store by running the command below:

# mkdir /media/ephemeral0/openldap

Next, configure OpenLDAP:

# cd ~/openldap
# ./configure --prefix=/media/ephemeral0/openldap --enable-debug=yes \
--enable-syslog --enable-dynamic --enable-slapd --enable-dynacl \
--enable-spasswd --enable-modules --enable-rlookups --enable-mdb \
--enable-monitor --enable-overlays --with-cyrus-sasl --with-threads \
--with-tls=openssl CC="gcc" LDFLAGS="-L/usr/lib64/sasl2" CPPFLAGS="-I/usr/include/sasl"

After that completes successfully, compile and install OpenLDAP:

# make depend
# make
# sudo make install

After the installation is complete, create the openldap user that will be responsible for running OpenLDAP:

# sudo useradd -m -U -c "OpenLDAP User" -s /bin/bash openldap
# sudo passwd -l openldap

Since the bind-dyndb-ldap package was installed earlier, copy the schema to where OpenLDAP stores its schemas, so that it can be added to the OpenLDAP configuration:

# sudo cp /usr/share/doc/bind-dyndb-ldap-2.3/schema \
/media/ephemeral0/openldap/etc/openldap/schema/bind-dyndb-ldap.schema

Next, create the LDAP password for the cn=admin,cn=config user.  This user is responsible for managing the configuration of the OpenLDAP server using OLC:

# /media/ephemeral0/openldap/sbin/slappasswd -h {SSHA}

Modify the slapd.conf file located under /media/ephemeral0/openldap/etc/openldap/ to set up the base configuration structure (cn=config) for OpenLDAP.  When complete, it should look like the following:

#######################################################################
# Config database definitions
#######################################################################
pidfile /media/ephemeral0/openldap/var/run/slapd.pid
argsfile /media/ephemeral0/openldap/var/run/slapd.args
database config
rootdn cn=admin,cn=config
rootpw {SSHA}xxxxxxxxxxxxxxxxxxxxxxxx - (password created for cn=admin,cn=config user)
# Schemas, in order
include /media/ephemeral0/openldap/etc/openldap/schema/core.schema
include /media/ephemeral0/openldap/etc/openldap/schema/cosine.schema
include /media/ephemeral0/openldap/etc/openldap/schema/inetorgperson.schema
include /media/ephemeral0/openldap/etc/openldap/schema/collective.schema
include /media/ephemeral0/openldap/etc/openldap/schema/corba.schema
include /media/ephemeral0/openldap/etc/openldap/schema/duaconf.schema
include /media/ephemeral0/openldap/etc/openldap/schema/dyngroup.schema
include /media/ephemeral0/openldap/etc/openldap/schema/misc.schema
include /media/ephemeral0/openldap/etc/openldap/schema/nis.schema
include /media/ephemeral0/openldap/etc/openldap/schema/openldap.schema
include /media/ephemeral0/openldap/etc/openldap/schema/ppolicy.schema
include /media/ephemeral0/openldap/etc/openldap/schema/bind-dyndb-ldap.schema

Create the slapd.d directory under /media/ephemeral0/openldap/etc/openldap/.  This will hold the server's cn=config configuration database:

# sudo chown -R openldap:openldap /media/ephemeral0/openldap/* 
# su - openldap -c "mkdir /media/ephemeral0/openldap/etc/openldap/slapd.d"

Populate the slapd.d directory with the base configuration by running the following command:

# su - openldap -c "/media/ephemeral0/openldap/sbin/slaptest \
-f /media/ephemeral0/openldap/etc/openldap/slapd.conf \
-F /media/ephemeral0/openldap/etc/openldap/slapd.d"
config file testing succeeded

For this example, we will be setting up the directory to use dc=eucalyptus,dc=com as the LDAP base.  Create the cn=Directory Manager,dc=eucalyptus,dc=com LDAP password:

# /media/ephemeral0/openldap/sbin/slappasswd -h {SSHA}

Create an LDIF that defines the configuration of the database that will hold the DNS entries.  For this example, the LDIF will be called directory-layout.ldif.  It should look like the following:

#######################################################################
# MDB database definitions
#######################################################################
#
dn: olcDatabase=mdb,cn=config
changetype: add
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: mdb
olcSuffix: dc=eucalyptus,dc=com
olcRootDN: cn=Directory Manager,dc=eucalyptus,dc=com
olcRootPW: {SSHA}xxxx - (password of cn=Directory Manager,dc=eucalyptus,dc=com user)
olcDbDirectory: /media/ephemeral0/openldap/var/openldap-data/dns
olcDbIndex: objectClass eq
olcAccess: to attrs=userPassword by dn="cn=Directory Manager,dc=eucalyptus,dc=com"
 write by anonymous auth by self write by * none
olcAccess: to attrs=shadowLastChange by self write by * read
olcAccess: to dn.base="" by * read
olcAccess: to * by dn="cn=Directory Manager,dc=eucalyptus,dc=com" write by * read
olcDbMaxReaders: 0
olcDbMode: 0600
olcDbSearchStack: 16
olcDbMaxSize: 4294967296
olcAddContentAcl: FALSE
olcLastMod: TRUE
olcMaxDerefDepth: 15
olcReadOnly: FALSE
olcSyncUseSubentry: FALSE
olcMonitoring: TRUE
olcDbNoSync: FALSE
olcDbEnvFlags: writemap
olcDbEnvFlags: nometasync

Make sure to create the directory where the DB information will be stored:

# su - openldap -c "mkdir /media/ephemeral0/openldap/var/openldap-data/dns"

Start up the OpenLDAP directory:

# sudo /media/ephemeral0/openldap/libexec/slapd -h "ldap:/// ldapi:///" \
-u openldap -g openldap

After OpenLDAP has been started successfully,  upload the directory-layout.ldif as the cn=admin,cn=config user:

# /media/ephemeral0/openldap/bin/ldapadd -D cn=admin,cn=config -W \
-f directory-layout.ldif
Enter LDAP Password:
adding new entry "olcDatabase=mdb,cn=config"

To allow search access to the directory, create an LDIF called frontend.ldif that contains the following:

dn: olcDatabase={-1}frontend,cn=config
changetype: modify
replace: olcAccess
olcAccess: to dn.base="" by * read
olcAccess: to dn.base="cn=Subschema" by * read
olcAccess: to * by self write by users read by anonymous auth

Upload the LDIF using the ldapmodify command:

# /media/ephemeral0/openldap/bin/ldapmodify -D cn=admin,cn=config -W -f frontend.ldif
Enter LDAP Password:
modifying entry "olcDatabase={-1}frontend,cn=config"

To check the results of these changes, use ldapsearch:

# /media/ephemeral0/openldap/bin/ldapsearch -D cn=admin,cn=config -W -b cn=config
Enter LDAP Password:
(ldapsearch results...)

After confirming that the base configurations have been stored, create an LDIF called dns-domain.ldif that lays out the directory structure for the database.  As seen previously in the directory-layout.ldif, the base is dc=eucalyptus,dc=com. The dns-domain.ldif file should look like the following:

dn: dc=eucalyptus,dc=com
objectClass: top
objectClass: dcObject
objectclass: organization
o: Eucalyptus Systems Inc - QA DNS Domain
dc: eucalyptus
description: Test LDAP+DNS Setup

dn: ou=dns,dc=eucalyptus,dc=com
objectClass: organizationalUnit
ou: dns

After creating the dns-domain.ldif file, upload the file using the cn=Directory Manager,dc=eucalyptus,dc=com user:

# /media/ephemeral0/openldap/bin/ldapadd -H ldap://localhost \
-D "cn=Directory Manager,dc=eucalyptus,dc=com" -W -f dns-domain.ldif
Enter LDAP Password:
adding new entry "dc=eucalyptus,dc=com"
adding new entry "ou=dns,dc=eucalyptus,dc=com"

To allow the cn=Directory Manager,dc=eucalyptus,dc=com user to see what updates are being done to the Directory, enable the Access Log overlay.  To enable this option, create an LDIF file called access-log.ldif.  The contents should look like the following:

dn: olcDatabase={2}mdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: {2}mdb
olcDbDirectory: /media/ephemeral0/openldap/var/openldap-data/access
olcSuffix: cn=log
olcDbIndex: reqStart eq
olcDbMaxSize: 1073741824
olcDbMode: 0600
olcAccess: {1}to * by dn="cn=Directory Manager,dc=eucalyptus,dc=com" read

dn: olcOverlay={1}accesslog,olcDatabase={3}mdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcAccessLogConfig
olcOverlay: {1}accesslog
olcAccessLogDB: cn=log
olcAccessLogOps: all
olcAccessLogPurge: 7+00:00 1+00:00
olcAccessLogSuccess: TRUE
olcAccessLogOld: (objectclass=idnsRecord)

After creating the access-log.ldif, create the directory for storing the access database:

# su - openldap -c "mkdir /media/ephemeral0/openldap/var/openldap-data/access"

Upload the LDIF using the cn=admin,cn=config user:

# /media/ephemeral0/openldap/bin/ldapadd -D cn=admin,cn=config -W -f access-log.ldif
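
Once records begin changing, the access log can be queried as the cn=Directory Manager,dc=eucalyptus,dc=com user (a sketch; the reqType filter is just one example of the accesslog attributes that can be searched on):

# /media/ephemeral0/openldap/bin/ldapsearch -H ldap://localhost \
-D "cn=Directory Manager,dc=eucalyptus,dc=com" -W -b cn=log reqType=modify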

Now that OpenLDAP is ready to go, let’s work on configuring Bind9 DNS.

Bind9 DNS Configuration

Since bind-dyndb-ldap depends on the bind package, named has already been installed on the instance.  The only thing left to do is edit /etc/named.conf so that the dynamic LDAP backend module is used.  Edit /etc/named.conf so that it looks like the following:

options {
 listen-on port 53 { <private IP address of the instance>; };
 listen-on-v6 port 53 { ::1; };
 directory "/var/named";
 dump-file "/var/named/data/cache_dump.db";
 statistics-file "/var/named/data/named_stats.txt";
 memstatistics-file "/var/named/data/named_mem_stats.txt";
 recursion yes;

 dnssec-enable yes;
 dnssec-validation yes;
 dnssec-lookaside auto;

/* Path to ISC DLV key */
 bindkeys-file "/etc/named.iscdlv.key";
managed-keys-directory "/var/named/dynamic";
 allow-recursion { any; };
};

dynamic-db "qa_dns_test" {
 library "ldap.so";
 arg "uri ldap://localhost";
 arg "base ou=dns,dc=eucalyptus, dc=com";
 arg "auth_method none";
 arg "cache_ttl 10";
 arg "zone_refresh 1";
 arg "dyn_update yes";
};

logging {
 channel default_debug {
 file "data/named.run";
 severity debug;
 print-time yes;
 };
};

zone "." IN {
 type hint;
 file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

As seen above, the dynamic-db section is the configuration for connecting to the OpenLDAP server.  For more advanced configurations, please reference the README in the bind-dyndb-ldap repository on git.fedorahosted.org.

Now we are ready to start named (the Bind DNS server).  Before starting the server, make sure to create the rndc key, then start named:

# rndc-confgen -a -r /dev/urandom
# service named start
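
To confirm that named started and picked up the dynamic LDAP backend, something along these lines can help (the log path follows the logging channel configured above):

# rndc status
# grep -i ldap /var/named/data/named.run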

You have successfully created a Bind9 DNS + OpenLDAP deployment.  Let's run a quick test.

Test the Deployment

To test the deployment, I created an LDIF called test-cloud.ldif.  The configuration sets up a domain called eucalyptus-systems.com.  It also creates a sub-domain that forwards requests for euca-hasp.eucalyptus-systems.com to the CLCs of the Eucalyptus HA deployment that has been set up.  The contents of the file are as follows:

dn: idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com
objectClass: top
objectClass: idnsZone
objectClass: idnsRecord
idnsName: eucalyptus-systems.com
idnsUpdatePolicy: grant EUCALYPTUS-SYSTEMS.COM krb5-self * A;
idnsZoneActive: TRUE
idnsSOAmName: server.eucalyptus-systems.com
idnsSOArName: root.server.eucalyptus-systems.com
idnsAllowQuery: any;
idnsAllowDynUpdate: TRUE
idnsSOAserial: 1
idnsSOArefresh: 10800
idnsSOAretry: 900
idnsSOAexpire: 604800
idnsSOAminimum: 86400
NSRecord: ns
ARecord: 192.168.55.103 - (the public IP of the instance)

dn: idnsName=ns,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com
objectClass: idnsRecord
objectClass: top
idnsName: ns
aRecord: 192.168.55.103

dn: idnsName=server,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com
objectClass: idnsRecord
objectClass: top
idnsName: server
CNAMERecord: eucalyptus-systems.com.

dn: idnsname=39.168.192.in-addr.arpa.,ou=dns,dc=eucalyptus,dc=com
objectClass: idnszone
objectClass: idnsrecord
objectClass: top
idnsName: 39.168.192.in-addr.arpa.
idnsSOAmName: server.eucalyptus-systems.com
idnsSOArName: root.server.eucalyptus-systems.com
idnsSOAserial: 1350039556
idnsSOArefresh: 10800
idnsSOAretry: 900
idnsSOAexpire: 604800
idnsSOAminimum: 86400
idnsZoneActive: TRUE
idnsAllowDynUpdate: TRUE
idnsAllowQuery: any;
idnsAllowTransfer: none;
idnsUpdatePolicy: grant EUCALYPTUS-SYSTEMS.COM krb5-subdomain 39.168.192.in-addr.arpa. PTR;
nSRecord: server.eucalyptus-systems.com.

dn: idnsName=_ldap._tcp,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com
objectClass: idnsRecord
objectClass: top
idnsName: _ldap._tcp
SRVRecord: 0 100 389 server

dn: idnsName=_ntp._udp,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com
objectClass: idnsRecord
objectClass: top
idnsName: _ntp._udp
SRVRecord: 0 100 123 server

# The DNS entries for the CLCs of the cloud - viking-01 and viking-02

dn: idnsName=viking-02,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com
objectClass: idnsRecord
objectClass: top
idnsName: viking-02
aRecord: 192.168.39.102

dn: idnsname=102,idnsname=39.168.192.in-addr.arpa.,ou=dns,dc=eucalyptus,dc=com
objectClass: idnsrecord
objectClass: top
idnsName: 102
pTRRecord: viking-02.eucalyptus-systems.com.

dn: idnsName=viking-01,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com
objectClass: idnsRecord
objectClass: top
idnsName: viking-01
aRecord: 192.168.39.101

dn: idnsname=101,idnsname=39.168.192.in-addr.arpa.,ou=dns,dc=eucalyptus,dc=com
objectClass: idnsrecord
objectClass: top
idnsName: 101
pTRRecord: viking-01.eucalyptus-systems.com.

# The delegated zone - euca-hasp.eucalyptus-systems.com

dn: idnsName=euca-hasp,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com
objectClass: top
objectClass: idnsRecord
objectClass: idnsZone
idnsForwardPolicy: first
idnsAllowDynUpdate: FALSE
idnsZoneActive: TRUE
idnsAllowQuery: any;
idnsForwarders: 192.168.39.101
idnsForwarders: 192.168.39.102
idnsName: euca-hasp
idnsSOAmName: server.eucalyptus-systems.com
idnsSOArName: root.server.eucalyptus-systems.com
idnsSOAretry: 15
idnsSOAserial: 1
idnsSOArefresh: 80
idnsSOAexpire: 120
idnsSOAminimum: 30
nSRecord: viking-01.eucalyptus-systems.com
nSRecord: viking-02.eucalyptus-systems.com

Since this was intended for a lab environment where sub-domains (and possibly domains) would be added/deleted on a regular basis, the SOA records were not set to the RFC 1912 standards defined for production DNS use.

After creating this LDIF,  it was uploaded to the LDAP server as the cn=Directory Manager,dc=eucalyptus,dc=com user:

# /media/ephemeral0/openldap/bin/ldapadd -H ldap://localhost \
-D "cn=Directory Manager,dc=eucalyptus,dc=com" -W -f test-cloud.ldif
Enter LDAP Password:
adding new entry "idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
adding new entry "idnsName=ns,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
adding new entry "idnsName=server,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
adding new entry "idnsname=39.168.192.in-addr.arpa.,ou=dns,dc=eucalyptus,dc=com"
adding new entry "idnsName=_ldap._tcp,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
adding new entry "idnsName=_ntp._udp,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
adding new entry "idnsName=viking-02,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
adding new entry "idnsname=102,idnsname=39.168.192.in-addr.arpa.,ou=dns,dc=eucalyptus,dc=com"
adding new entry "idnsName=viking-01,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
adding new entry "idnsname=101,idnsname=39.168.192.in-addr.arpa.,ou=dns,dc=eucalyptus,dc=com"
adding new entry "idnsName=euca-hasp,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"

To test out the setup, tests were run against the public IP address of the instance to resolve the various entries:

# nslookup viking-01.eucalyptus-systems.com 192.168.55.103
Server: 192.168.55.103
Address: 192.168.55.103#53

Name: viking-01.eucalyptus-systems.com
Address: 192.168.39.101

# nslookup 192.168.39.101 192.168.55.103
Server: 192.168.55.103
Address: 192.168.55.103#53

101.39.168.192.in-addr.arpa name = viking-01.eucalyptus-systems.com.

# nslookup eucalyptus.euca-hasp.eucalyptus-systems.com 192.168.55.103
Server: 192.168.55.103
Address: 192.168.55.103#53

Non-authoritative answer:
Name: eucalyptus.euca-hasp.eucalyptus-systems.com
Address: 192.168.39.102

# nslookup walrus.euca-hasp.eucalyptus-systems.com 192.168.55.103
Server: 192.168.55.103
Address: 192.168.55.103#53

Non-authoritative answer:
Name: walrus.euca-hasp.eucalyptus-systems.com
Address: 192.168.39.101

As seen above, not only did resolution come back correctly for the machines under the eucalyptus-systems.com domain, but requests for hosts under euca-hasp.eucalyptus-systems.com were also forwarded correctly and returned the right responses.
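
For an equivalent spot check with dig (illustrative; the answers match the nslookup output above):

# dig @192.168.55.103 viking-01.eucalyptus-systems.com +short
192.168.39.101
# dig @192.168.55.103 eucalyptus.euca-hasp.eucalyptus-systems.com +short
192.168.39.102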

To delete the setup, an LDIF called delete-test-cloud.ldif was created from test-cloud.ldif as follows:

# tac test-cloud.ldif | grep dn: > delete-test-cloud.ldif

Open up delete-test-cloud.ldif and, after each dn: line, add the following (an example of the result is shown below):

changetype: delete
(empty line)
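
For example, after this edit the first two entries of delete-test-cloud.ldif (in the reversed order produced by tac) look like this:

dn: idnsName=euca-hasp,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com
changetype: delete

dn: idnsname=101,idnsname=39.168.192.in-addr.arpa.,ou=dns,dc=eucalyptus,dc=com
changetype: delete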

Now, use ldapmodify as the cn=Directory Manager,dc=eucalyptus,dc=com user to delete the entries:

# /media/ephemeral0/openldap/bin/ldapmodify -H ldap://localhost -D "cn=Directory Manager,dc=eucalyptus,dc=com" -W -f delete-test-cloud.ldif
Enter LDAP Password:
deleting entry "idnsName=euca-hasp,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
deleting entry "idnsname=101,idnsname=39.168.192.in-addr.arpa.,ou=dns,dc=eucalyptus,dc=com"
deleting entry "idnsName=viking-01,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
deleting entry "idnsname=102,idnsname=39.168.192.in-addr.arpa.,ou=dns,dc=eucalyptus,dc=com"
deleting entry "idnsName=viking-02,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
deleting entry "idnsName=_ntp._udp,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
deleting entry "idnsName=_ldap._tcp,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
deleting entry "idnsname=39.168.192.in-addr.arpa.,ou=dns,dc=eucalyptus,dc=com"
deleting entry "idnsName=server,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
deleting entry "idnsName=ns,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
deleting entry "idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"

To confirm, do a lookup against one of the entries to see if it still exists:

# nslookup viking-01.eucalyptus-systems.com 192.168.55.103
Server: 192.168.55.103
Address: 192.168.55.103#53

** server can't find viking-01.eucalyptus-systems.com: NXDOMAIN

There you have it!  A successful Bind9 DNS + OpenLDAP deployment is ready to be used.

Enjoy!  And as always, questions/suggestions/comments are always welcome.

Big Data on the Cloud using Ansible, RHadoop, AppScale, and AWS/Eucalyptus

Background

Big Data has been a hot topic over the last few years.  Big Data on public clouds, such as AWS’s Elastic MapReduce, has been gaining even more popularity as cloud computing becomes more of an industry standard.

R is an open source project for statistical computing and graphics.  It has been growing in popularity for linear and nonlinear modeling, classical statistical tests, time-series analysis, and more, at various universities and companies.

R Project

RHadoop was developed by Revolution Analytics to interface with Hadoop.  Revolution Analytics builds analytic software solutions using R.

Revolution Analytics

AppScale is an open source PaaS that implements the Google AppEngine API on top of IaaS environments.  One of the Google AppEngine APIs that is implemented is AppEngine MapReduce.  The back-end support for this API in AppScale comes from Cloudera's Distribution for Apache Hadoop.

AppScale Inc.

Ansible is open source orchestration software that uses SSH to handle configuration management for physical machines, virtual machines, and machines running in the cloud.

Ansible Works

Amazon Web Services is a public IaaS that provides infrastructure and application services in the cloud.  Eucalyptus is an open source software solution that provides the AWS APIs for EC2, S3, and IAM for on-premise cloud environments.

Amazon AWS EC2

Eucalyptus Systems Inc.

This blog entry will cover how to deploy AppScale (either on AWS or Eucalyptus), then use Ansible to configure each AppScale node with R and the RHadoop packages in order to allow programs written in R to utilize MapReduce in the cloud.

Pre-requisites

To get started, the following is needed on a desktop/laptop computer:

*NOTE:  These variables are used by AppScale Tools version 1.6.9.  Check the AWS and Eucalyptus documentation regarding obtaining user credentials. 

Deployment

AppScale

After installing AppScale Tools and Ansible, the AppScale cluster needs to be deployed.  After defining the AWS/Eucalyptus variables,  initialize the creation of the AppScale cluster configuration file – AppScalefile.

$ ./appscale-tools/bin/appscale init cloud

Edit the AppScalefile, providing information for the keypair, security group, and AppScale AMI/EMI.  The keypair and security group do not need to be pre-created. AppScale will handle this.  The AppScale AMI on AWS (us-east-1) is ami-4e472227.  The Eucalyptus EMI will be unique based upon the Eucalyptus cloud that is being used.  In this example, the AWS AppScale AMI will be used, and the AppScale cluster size will be 3 nodes.  Here is the example AppScalefile:

---
group : 'appscale-rmr'
infrastructure : 'ec2'
instance_type : 'm1.large'
keyname : 'appscale-rmr'
machine : 'ami-4e472227'
max : 3
min : 3
table : 'hypertable'

After editing the AppScalefile, start up the AppScale cluster by running the following command:

$ ./appscale-tools/bin/appscale up

Once the cluster finishes setting up, the status of the cluster can be seen by running the command below:

$ ./appscale-tools/bin/appscale status

R, RHadoop Installation Using Ansible

Now that the cluster is up and running, grab the Ansible playbook for installing R and the RHadoop rmr2 and rhdfs packages onto the AppScale nodes.  The playbook can be downloaded from GitHub using git:

$ git clone https://github.com/hspencer77/ansible-r-appscale-playbook.git

After downloading the playbook, the ansible-r-appscale-playbook/production file needs to be populated with the AppScale cluster's node information.  Grab the cluster node information by running the following command:

$ ./appscale-tools/bin/appscale status | grep amazon | grep Status | awk '{print $5}' | cut -d ":" -f 1
ec2-50-17-96-162.compute-1.amazonaws.com
ec2-50-19-45-193.compute-1.amazonaws.com
ec2-67-202-23-157.compute-1.amazonaws.com

Add those DNS entries to the ansible-r-appscale-playbook/production file.  After editing, the file will look like the following:

[appscale-nodes]
ec2-50-17-96-162.compute-1.amazonaws.com
ec2-50-19-45-193.compute-1.amazonaws.com
ec2-67-202-23-157.compute-1.amazonaws.com

Now the playbook can be executed.  The playbook requires the SSH private key to the nodes.  This key will be located under the ~/.appscale folder.  In this example, the key file is named appscale-rmr.key.  To execute the playbook, run the following command:

$ ansible-playbook -i ansible-r-appscale-playbook/production \
--private-key=~/.appscale/appscale-rmr.key -v ansible-r-appscale-playbook/site.yml

Testing Out The Deployment – Wordcount.R

Once the playbook has finished running, the AppScale cluster is ready to be used.  To test out the setup, SSH into the head node of the AppScale cluster.  To find the head node of the cluster, execute the following command:

$ ./appscale-tools/bin/appscale status

After discovering the head node, SSH into the head node using the private key located in the ~/.appscale directory:

$ ssh -i ~/.appscale/appscale-rmr.key root@ec2-50-17-96-162.compute-1.amazonaws.com

To test out the R setup on all the nodes, grab the wordcount.R program:

root@appscale-image0:~# tar zxf rmr2_2.0.2.tar.gz rmr2/tests/wordcount.R

In the wordcount.R file, the following lines are present:

rmr2:::hdfs.put("/etc/passwd", "/tmp/wordcount-test")
out.hadoop = from.dfs(wordcount("/tmp/wordcount-test", pattern = " +"))

When the wordcount.R program is executed, it will grab the /etc/passwd file from the head node, copy it to the HDFS filesystem, then run wordcount on /etc/passwd looking for the pattern " +".   NOTE: wordcount.R can be edited to use any file and pattern desired; a small illustration follows.
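
For instance, to run the same job against a different file and pattern (an illustrative tweak; the chosen path must exist on the head node), those two lines could be changed to:

rmr2:::hdfs.put("/var/log/syslog", "/tmp/wordcount-syslog")
out.hadoop = from.dfs(wordcount("/tmp/wordcount-syslog", pattern = "error"))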

Run wordcount.R:

root@appscale-image0:~# R

R version 2.15.3 (2013-03-01) -- "Security Blanket"
Copyright (C) 2013 The R Foundation for Statistical Computing
ISBN 3-900051-07-0
Platform: x86_64-pc-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

  Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

[Previously saved workspace restored]

> source('rmr2/tests/wordcount.R')
Loading required package: Rcpp
Loading required package: RJSONIO
Loading required package: digest
Loading required package: functional
Loading required package: stringr
Loading required package: plyr
13/04/05 02:33:41 INFO security.UserGroupInformation: JAAS Configuration already set up for Hadoop, not re-installing.
13/04/05 02:33:43 INFO security.UserGroupInformation: JAAS Configuration already set up for Hadoop, not re-installing.
packageJobJar: [/tmp/RtmprcYtsu/rmr-local-env19811a7afd54, /tmp/RtmprcYtsu/rmr-global-env1981646cf288, /tmp/RtmprcYtsu/rmr-streaming-map198150b6ff60, /tmp/RtmprcYtsu/rmr-streaming-reduce198177b3496f, /tmp/RtmprcYtsu/rmr-streaming-combine19813f7ea210, /var/appscale/hadoop/hadoop-unjar5632722635192578728/] [] /tmp/streamjob8198423737782283790.jar tmpDir=null
13/04/05 02:33:44 WARN snappy.LoadSnappy: Snappy native library is available
13/04/05 02:33:44 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/04/05 02:33:44 INFO snappy.LoadSnappy: Snappy native library loaded
13/04/05 02:33:44 INFO mapred.FileInputFormat: Total input paths to process : 1
13/04/05 02:33:44 INFO streaming.StreamJob: getLocalDirs(): [/var/appscale/hadoop/mapred/local]
13/04/05 02:33:44 INFO streaming.StreamJob: Running job: job_201304042111_0015
13/04/05 02:33:44 INFO streaming.StreamJob: To kill this job, run:
13/04/05 02:33:44 INFO streaming.StreamJob: /root/appscale/AppDB/hadoop-0.20.2-cdh3u3/bin/hadoop job  -Dmapred.job.tracker=10.77.33.247:9001 -kill job_201304042111_0015
13/04/05 02:33:44 INFO streaming.StreamJob: Tracking URL: http://appscale-image0:50030/jobdetails.jsp?jobid=job_201304042111_0015
13/04/05 02:33:45 INFO streaming.StreamJob:  map 0%  reduce 0%
13/04/05 02:33:51 INFO streaming.StreamJob:  map 50%  reduce 0%
13/04/05 02:33:52 INFO streaming.StreamJob:  map 100%  reduce 0%
13/04/05 02:33:59 INFO streaming.StreamJob:  map 100%  reduce 33%
13/04/05 02:34:02 INFO streaming.StreamJob:  map 100%  reduce 100%
13/04/05 02:34:04 INFO streaming.StreamJob: Job complete: job_201304042111_0015
13/04/05 02:34:04 INFO streaming.StreamJob: Output: /tmp/RtmprcYtsu/file1981524ee1a3
13/04/05 02:34:05 INFO security.UserGroupInformation: JAAS Configuration already set up for Hadoop, not re-installing.
13/04/05 02:34:07 INFO security.UserGroupInformation: JAAS Configuration already set up for Hadoop, not re-installing.
13/04/05 02:34:08 INFO security.UserGroupInformation: JAAS Configuration already set up for Hadoop, not re-installing.
13/04/05 02:34:10 INFO security.UserGroupInformation: JAAS Configuration already set up for Hadoop, not re-installing.
Deleted hdfs://10.77.33.247:9000/tmp/wordcount-test
>quit("yes")

That's it!  The AppScale cluster is ready for additional R programs that utilize MapReduce.  Enjoy the world of Big Data on public/private IaaS.

What’s new in Ansible 1.1 for AWS and Eucalyptus users?

hspencer77:

Ansible providing more AWS/Eucalyptus support!

Originally posted on Take that to the bank and cash it!:

I thought the Ansible 1.0 development cycle was busy but 1.1 is crammed full of orchestration goodness.  On Tuesday, 1.1 was released and you can read more about it here: http://blog.ansibleworks.com/2013/04/02/ansible-1-1-released/

For those working on AWS and Eucalyptus, 1.1 brings some nice module improvements as well as a new cloudformation and s3 module.  It’s great to see the AWS-related modules becoming so popular so quickly.  Here are some more details about the changes but you can find info in the changelog here: https://github.com/ansible/ansible/blob/devel/CHANGELOG.md

Security group ID support

It’s now possible to specify the security group by its ID.  This is quite typical behaviour in EC2, and Eucalyptus will support this with the pending 3.3 release.  The parameter is optional.

VPC subnet ID

VPC users can now specify a subnet ID associated with their instance.

Instance state wait timeout

In 1.0 there was no way to specify how long to wait…

View original 300 more words

Using Ansible to Deploy Neo4j HA Cluster on AWS/Eucalyptus

As a follow-up to my last Neo4j, AWS/Eucalyptus blog, this entry demonstrates another great example of AWS/Eucalyptus fidelity by using Ansible to deploy a Neo4j High Availability (HA) cluster.

Neo4j – Graph Database

Amazon AWS EC2

Eucalyptus Systems Inc.

Pre-requisites

In order to use this Ansible playbook on AWS/Eucalyptus, the following is needed:

Before deploying the cluster, a security group needs to be created that the cluster will use.  The security group must allow the following:

  • port 22 (SSH)
  • all instances that are part of the security group allowed to communicate with each other (ports 0 - 65535)

To create the security group and authorize the ports, make sure the user’s access key, secret access key, and EC2 URL are noted, and do the following:

  1. Create the security group

    ec2-create-group --aws-access-key <EC2_ACCESS_KEY> \
    --aws-secret-key <EC2_SECRET_KEY> \
    --url <EC2_URL> -g neo4j-cluster -d "Neo4j HA Cluster"

  2. Authorize port for SSH in neo4j-cluster security group

    ec2-authorize \
    --aws-access-key <EC2_ACCESS_KEY> \
    --aws-secret-key <EC2_SECRET_KEY> \
    --url <EC2_URL> -P tcp -p 22 -s 0.0.0.0/0 neo4j-cluster

  3. Authorize all port communication between cluster members 

    ec2-authorize \
    --aws-access-key <EC2_ACCESS_KEY> --aws-secret-key <EC2_SECRET_KEY> \
    --url <EC2_URL> -P tcp -o neo4j-cluster -p -1 neo4j-cluster

After completing these steps, use ec2-describe-group to view the security group:

ec2-describe-group --aws-access-key <EC2_ACCESS_KEY> \
--aws-secret-key <EC2_SECRET_KEY> --url <EC2_URL> neo4j-cluster

GROUP sg-1cbc5777 986451091583 neo4j-cluster Neo4j HA Cluster
PERMISSION 986451091583 neo4j-cluster ALLOWS tcp 0 65535 FROM USER 986451091583 NAME neo4j-cluster ID sg-1cbc5777 ingress
PERMISSION 986451091583 neo4j-cluster ALLOWS tcp 22 22 FROM CIDR 0.0.0.0/0 ingress

Neo4j HA Cluster Deployment

Once the security group is created with the correct ports authorized, the cluster can be deployed.  To deploy the cluster, do the following:

  1. Obtain Ansible from git and set up the environment by following the instructions mentioned here - http://ansible.cc/docs/gettingstarted.html#getting-ansible
  2. Obtain the Ansible Playbook for Neo4j HA Cluster using git

    git clone https://github.com/hspencer77/ansible-neo4j-cluster.git

  3. Change directory into ansible-neo4j-cluster  

    cd ansible-neo4j-cluster

  4. Set up /etc/ansible/hosts with the following information:
    [local]
    127.0.0.1
  5. Populate vars/ec2-config with either Eucalyptus or AWS information. vars/ec2-config contains the following variables:
    keypair: <EC2/Eucalyptus Keypair>
    ec2_access_key: <EC2_ACCESS_KEY>
    ec2_secret_key: <EC2_SECRET_KEY>
    ec2_url: <EC2_URL>
    instance_type: m1.small
    security_group: <AWS/Eucalyptus Security Group>
    image: <AMI/EMI>
  6. Execute the following command:

    ansible-playbook neo4j-cluster.yml \
     --private-key=<AWS/Eucalyptus Private Key file> --extra-vars "node_count=3"
  7. After the playbook finishes, a URL will be provided to access the cluster – similar to the example below:
    TASK: [Display HAProxy URL] *********************
    changed: [23.22.248.75] => {"changed": true, "cmd": 
    "echo \"HAProxy URL for Neo4j -
     http://ec2-23-22-248-75.compute-1.amazonaws.com/webadmin/#/info/org.neo4j/High%20Availability/\" ",
     "delta": "0:00:00.006835", "end": "2013-03-30 19:54:31.104320", 
    "rc": 0, "start": "2013-03-30 19:54:31.097485", "stderr": "", 
    "stdout": 
    "HAProxy URL for Neo4j - 
    http://ec2-23-22-248-75.compute-1.amazonaws.com/webadmin/#/info/org.neo4j/High%20Availability/"}

    To view the status of the cluster in the browser, open up http://ec2-23-22-248-75.compute-1.amazonaws.com/webadmin/#/info/org.neo4j/High%20Availability/.

  8. To get the status of the cluster, use curl:
    curl -H "Content-Type:application/json" -d '["org.neo4j:*"]' 
    
    http://ec2-23-22-248-75.compute-1.amazonaws.com/db/manage/server/jmx/query

That's it!  A Neo4j HA cluster with an HAProxy server serving as the endpoint is ready to be used.  If a bigger cluster is desired, just change the node_count value.  For additional information regarding this playbook, and how it handles cluster membership, please refer to the following URL - https://github.com/hspencer77/ansible-neo4j-cluster/blob/master/README.md.
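
For example, re-running the playbook with a larger value should stand up a bigger cluster (a sketch based on the command shown in step 6):

ansible-playbook neo4j-cluster.yml \
 --private-key=<AWS/Eucalyptus Private Key file> --extra-vars "node_count=5"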

Hope you enjoy!  As always, questions/comments/suggestions are always welcome.