Using Boto3 Against HPE Helion Eucalyptus 4.2 Deployments

Recently, there was a blog entry posted on the AWS Developer Blog discussing how to migrate to boto3.  Since HPE Helion Eucalyptus strives to provide 100% AWS-compatible APIs for implemented services, AWS SDKs – such as the AWS SDK for Python – work solidly.  This blog entry will demonstrate how to use boto3 – the latest version of the AWS SDK for Python – with HPE Helion Eucalyptus 4.2.

At the time of the posting of this blog entry, the following AWS service APIs are supported by HPE Helion Eucalyptus 4.2:

Installation

As mentioned in the boto3 documentation, install boto3 using pip:

# pip install boto3

Configuration

Again, as mentioned in the boto3 documentation, configuration can be done by using the AWS CLI, or by manually creating the config and credentials files under the .aws directory.  For example, here are the contents of the .aws/config and .aws/credentials files that will be used for this demonstration:

# cat .aws/config
[profile devops-admin]
output = json
region = us-east-1
# cat .aws/credentials
[devops-admin]
aws_access_key_id = XXXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXX

If you prefer not to use these files, you can instead pass the AWS Access Key ID and AWS Secret Access Key programmatically.  This will be shown later in this blog entry.

Using Boto3

To demonstrate how to use boto3, ipython will be utilized.  To get started, the Session class will be imported from the boto3 library:

# ipython
Python 2.6.6 (r266:84292, Jan 22 2014, 09:42:36)
Type "copyright", "credits" or "license" for more information.
IPython 0.13.2 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: from boto3.session import Session

Next, create the session:

In [2]: session = Session(region_name='us-east-1', profile_name="devops-admin")

Alternatively, as mentioned earlier, the AWS Access Key ID and AWS Secret Access Key can be passed programmatically when the session is created:

In [2]: session = Session(aws_access_key_id='XXXXXXXXXXXXXX', aws_secret_access_key='XXXXXXXXXXXXXXXXXXXXXXXX', region_name='us-east-1')

Even though region_name has a value here, when the client connection is created, the service endpoint will be an HPE Helion Eucalyptus service endpoint.  Any valid AWS region name can be used with HPE Helion Eucalyptus.  The important piece is the endpoint URL.

From here, we can use the session to establish a client connection with a given HPE Helion Eucalyptus service endpoint.  Since the HPE Helion Eucalyptus cloud used in this example contains HTTPS endpoints, the trusted root certificate for the cloud subdomain will be passed as well.

Examples

Here is an example connecting to the EC2 service endpoint provided by the HPE Helion Eucalyptus Compute service to discover what instances are associated with the authenticated user account:

In [3]: client = session.client('ec2', endpoint_url='https://ec2.c-05.autoqa.qa1.eucalyptus-systems.com/', verify='/root/euca-ca-0.crt')
In [4]: for reservation in client.describe_instances()['Reservations']: 
  for instance in reservation['Instances']:
    print instance['InstanceId']
 ...:
i-4064f4e7
i-1c8515dd
i-79e96bc1
i-d43f50f1
i-b4adc06b
i-c4025e42

Below is another example connecting to the S3 service endpoint provided by the HPE Helion Eucalyptus Object Storage Gateway (OSG) service to list the buckets owned by the authenticated user account:

In [5]: client = session.client('s3', endpoint_url='https://s3.c-05.autoqa.qa1.eucalyptus-systems.com/', verify='/root/euca-ca-0.crt')
In [6]: for bucket in client.list_buckets()['Buckets']: 
  print bucket['Name']
 ...:
cfn-templates
ubuntu-trusty-x86_64-hvm-20151218
ubuntu-xenial-x86_64-hvm-20151217

Another example connecting to the CloudFormation service endpoint provided by the HPE Helion Eucalyptus CloudFormation service:

In [7]: client = session.client('cloudformation', endpoint_url='https://cloudformation.c-05.autoqa.qa1.eucalyptus-systems.com/', verify='/root/euca-ca-0.crt')
In [8]: for stack in client.describe_stacks()['Stacks']:
 print "Stack Name: " + stack['StackName']
 print "Status: " + stack['StackStatus']
 print "ID: " + stack['StackId']
 ...:
Stack Name: CoreOSCluster
Status: CREATE_COMPLETE
ID: arn:aws:cloudformation::001520216600:stack/CoreOSCluster/12437fe7-8a03-4920-9e34-270764450fa0

And for the last example, connecting to the AutoScaling service endpoint provided by the HPE Helion Eucalyptus AutoScaling service:

In [9]: client = session.client('autoscaling', endpoint_url='https://autoscaling.c-05.autoqa.qa1.eucalyptus-systems.com/', verify='/root/euca-ca-0.crt')
In [10]: for asg in client.describe_auto_scaling_groups()['AutoScalingGroups']:
 print "AutoScaling Group Name: " + asg['AutoScalingGroupName']
 print "Launch Config: " + asg['LaunchConfigurationName']
 print "Availability Zones:"
 for az in asg['AvailabilityZones']:
 print "\t" + az
 print "AutoScaling Group Instances:"
 for instance in asg['Instances']:
 print "\t" + instance['InstanceId']
 ....:
AutoScaling Group Name: CoreOSCluster-CoreOsGroup-JTKMRINKKMYDI
Launch Config: CoreOSCluster-CoreOsLaunchConfig-LAWHOT5X5K5PX
Availability Zones:
 us-east-1c
 us-east-1b
 us-east-1a
AutoScaling Group Instances:
 i-79e96bc1
 i-4064f4e7
 i-c4025e42
 i-d43f50f1
 i-1c8515dd

Conclusion

As mentioned earlier, boto3 can be used with any AWS-compatible service implemented by HPE Helion Eucalyptus.  If your team isn't ready to use boto3 yet, boto can still be used with HPE Helion Eucalyptus.
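For reference, here is a minimal boto (version 2) sketch that performs the same DescribeInstances call against the EC2 endpoint used above.  The endpoint, certificate handling, and placeholder credentials are assumptions to adapt for your own cloud:

import boto
from boto.ec2.regioninfo import RegionInfo

# Hypothetical endpoint and placeholder credentials -- replace with your cloud's values.
region = RegionInfo(name='eucalyptus',
                    endpoint='ec2.c-05.autoqa.qa1.eucalyptus-systems.com')
connection = boto.connect_ec2(aws_access_key_id='XXXXXXXXXXXXXXXXXXXX',
                              aws_secret_access_key='XXXXXXXXXXXXXXXXXXXXXXXX',
                              region=region, is_secure=True, path='/')
# If the cloud's HTTPS endpoints use a private CA, certificate validation
# may need to be adjusted (for example, validate_certs=False).

for reservation in connection.get_all_instances():
    for instance in reservation.instances:
        print instance.id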

As always, I hope you enjoyed this entry.  Please let me know if there are any questions/suggestions/ideas regarding this blog topic.

Enjoy!

Updated CoreOS Cluster Cloudformation Template for HPE Helion Eucalyptus 4.2 VPC Deployments

In 2014, I created a series of blog posts that discussed using CoreOS on Eucalyptus cloud infrastructures.  This blog post is an updated version of the entry which discussed how to deploy a CoreOS cluster using a CloudFormation template on Eucalyptus 4.0.1.  It will cover how to deploy a CoreOS cluster using CloudFormation on an HPE Helion Eucalyptus 4.2 VPC environment.

In HPE Helion Eucalyptus 4.1, VPC (Virtual Private Cloud) was in a technical preview state.  With the release of Eucalyptus 4.2, VPC was promoted to a stable release.  HPE Helion Eucalyptus VPC provides features similar to AWS VPC.  For more information about what is currently supported in Eucalyptus VPC, please refer to the online documentation.

Prerequisites

Prerequisites for this blog entry are listed in the following previous blogs:

Please note the information regarding HPE Helion Eucalyptus IAM and how to obtain the CoreOS Beta AMI image in the previous listed blog entries.

CoreOS ETCD Discovery Service Token

When setting up the CoreOS cluster, cluster membership is handled using etcd discovery.  This provides a unique discovery URL that shows all the members of the cluster.  To obtain a token, request the discovery URL with the desired cluster size.  For example, if the cluster will have five members, the curl request will look like the following:

curl https://discovery.etcd.io/new?size=5

The value returned will look similar to the following:

https://discovery.etcd.io/fdd7d8ac203d2cac0c27ead148ad83ed

This URL can be referenced to see if all the members of the cluster registered successfully.
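If you would rather check membership from a script than a browser, here is a minimal Python 2 sketch (the discovery URL shown above is an example; use the token returned for your own cluster):

import json
import urllib2

# Example discovery URL -- substitute the token generated for your cluster.
discovery_url = 'https://discovery.etcd.io/fdd7d8ac203d2cac0c27ead148ad83ed'

response = json.load(urllib2.urlopen(discovery_url))
members = response['node'].get('nodes', [])
print 'Registered cluster members: %d' % len(members)
for member in members:
    # Each value has the form "<machine-id>=http://<private-ip>:2380"
    print member['value']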

Deploying the Cluster on HPE Helion Eucalyptus VPC

When deploying the cluster on a Eucalyptus VPC environment, there are additional variables that have to be taken into account.  To download the example template, use the following URL:

https://s3-us-west-1.amazonaws.com/cfn-coreos-deployment/cfn-coreos-as-vpc.json

After downloading the template, use either euca2ools or the AWS CLI to validate the template.  This will display the parameters that need to be passed when creating the CloudFormation stack on Eucalyptus.  For example:

# euform-validate-template --template-file cfn-coreos-as.json 
DESCRIPTION Deploy CoreOS Cluster on Eucalyptus VPC
PARAMETER VpcId false VpcId of your existing Virtual Private Cloud (VPC)
PARAMETER Subnets false The list of SubnetIds in your Virtual Private Cloud (VPC)
PARAMETER AZs false The list of AvailabilityZones for your Virtual Private Cloud (VPC)
PARAMETER CoreOSImageId false CoreOS Image Id
PARAMETER UserKeyPair true User Key Pair
PARAMETER ClusterSize false Desired CoreOS Cluster Size
PARAMETER VmType false Desired VM Type for Instances

Notice the template requires unique variables associated with HPE Helion Eucalyptus VPC.
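As an alternative to euform-validate-template, the template can also be validated with boto3 against the cloud's CloudFormation endpoint.  This is a sketch that reuses the endpoint, profile, and certificate from the boto3 post above – adjust those values and the template file name for your environment:

import boto3

session = boto3.session.Session(profile_name='devops-admin',
                                region_name='us-east-1')
client = session.client('cloudformation',
                        endpoint_url='https://cloudformation.c-05.autoqa.qa1.eucalyptus-systems.com/',
                        verify='/root/euca-ca-0.crt')

# Validate the downloaded template and list its parameters.
with open('cfn-coreos-as.json') as template_file:
    response = client.validate_template(TemplateBody=template_file.read())

print response['Description']
for parameter in response['Parameters']:
    print parameter['ParameterKey'], parameter.get('Description', '')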

Now that the template has been downloaded, create the CoreOS stack using euca2ools.  For example:

# euform-create-stack CoreOSCluster --template-file cfn-coreos-as.json --parameter Subnets=subnet-0814e7aa,subnet-5d816215,subnet-c3755d6c --parameter AZs=euca-east-1c,euca-east-1b,euca-east-1a --parameter CoreOSImageId=emi-dfa27782 --parameter UserKeyPair=devops-admin --parameter ClusterSize=5 --parameter VmType=m1.large --parameter VpcId=vpc-d7fcff27

Once the stack has been launched, confirm that the CloudFormation stack deployed successfully:

# euform-describe-stacks
STACK CoreOSCluster CREATE_COMPLETE Complete! Deploy CoreOS Cluster on Eucalyptus VPC 2016-01-01T21:09:10.965Z
PARAMETER VpcId vpc-d7fcff27
PARAMETER Subnets subnet-0814e7aa,subnet-5d816215,subnet-c3755d6c
PARAMETER AZs euca-east-1c,euca-east-1b,euca-east-1a
PARAMETER CoreOSImageId emi-dfa27782
PARAMETER UserKeyPair ****
PARAMETER ClusterSize 5
PARAMETER VmType m1.large
OUTPUT AutoScalingGroup CoreOSCluster-CoreOsGroup-JTKMRINKKMYDI

Check the discovery URL using curl, wget or any browser to confirm that the cluster membership completed:

# curl https://discovery.etcd.io/fdd7d8ac203d2cac0c27ead148ad83ed
{"action":"get","node":{"key":"/_etcd/registry/fdd7d8ac203d2cac0c27ead148ad83ed","dir":true,"nodes":[{"key":"/_etcd/registry/fdd7d8ac203d2cac0c27ead148ad83ed/d0a4c6d73d0d8d17","value":"8981923b54d7d7f46fabc527936a7dcf=http://172.31.4.17:2380","modifiedIndex":953833155,"createdIndex":953833155},{"key":"/_etcd/registry/fdd7d8ac203d2cac0c27ead148ad83ed/12b6e6e78c9cb70c","value":"33a3209006d2be1d5be0da6eaea007c5=http://172.31.19.215:2380","modifiedIndex":953833156,"createdIndex":953833156},{"key":"/_etcd/registry/fdd7d8ac203d2cac0c27ead148ad83ed/d5c5d93e360ba87","value":"e71b1fefcd65c43a0fbacc7103efbc2b=http://172.31.22.157:2380","modifiedIndex":953833162,"createdIndex":953833162},{"key":"/_etcd/registry/fdd7d8ac203d2cac0c27ead148ad83ed/cffd4985c990f872","value":"f047b9ff24f3d0c4e74c660709103b36=http://172.31.6.166:2380","modifiedIndex":953833167,"createdIndex":953833167},{"key":"/_etcd/registry/fdd7d8ac203d2cac0c27ead148ad83ed/8e6ccfef42f98260","value":"c48b163558b61733c1aa44dccb712406=http://172.31.47.175:2380","modifiedIndex":953833339,"createdIndex":953833339}],"modifiedIndex":953831075,"createdIndex":953831075}}

To confirm the health of the cluster, SSH into one of the cluster nodes, and use fleetctl and etcdctl:

# ssh -i devops-admin-key core@euca-10-116-131-230.eucalyptus.c-05.autoqa.qa1.eucalyptus-systems.com
Last login: Sat Jan 2 23:53:25 2016 from 10.111.1.71
CoreOS beta (877.1.0)
core@euca-172-31-22-157 ~ $ fleetctl list-machines
MACHINE IP METADATA
33a32090... 10.116.131.107 purpose=coreos-cluster,region=euca-us-east-1
8981923b... 10.116.131.121 purpose=coreos-cluster,region=euca-us-east-1
c48b1635... 10.116.131.213 purpose=coreos-cluster,region=euca-us-east-1
e71b1fef... 10.116.131.230 purpose=coreos-cluster,region=euca-us-east-1
f047b9ff... 10.116.131.197 purpose=coreos-cluster,region=euca-us-east-1
core@euca-172-31-22-157 ~ $ etcd
etcd etcd2 etcdctl
core@euca-172-31-22-157 ~ $ etcdctl cluster-health
member d5c5d93e360ba87 is healthy: got healthy result from http://10.116.131.230:2379
member 12b6e6e78c9cb70c is healthy: got healthy result from http://10.116.131.107:2379
member 8e6ccfef42f98260 is healthy: got healthy result from http://10.116.131.213:2379
member cffd4985c990f872 is healthy: got healthy result from http://10.116.131.197:2379
member d0a4c6d73d0d8d17 is healthy: got healthy result from http://10.116.131.121:2379
cluster is healthy
core@euca-172-31-22-157 ~ $ etcdctl member list
d5c5d93e360ba87: name=e71b1fefcd65c43a0fbacc7103efbc2b peerURLs=http://172.31.22.157:2380 clientURLs=http://10.116.131.230:2379
12b6e6e78c9cb70c: name=33a3209006d2be1d5be0da6eaea007c5 peerURLs=http://172.31.19.215:2380 clientURLs=http://10.116.131.107:2379
8e6ccfef42f98260: name=c48b163558b61733c1aa44dccb712406 peerURLs=http://172.31.47.175:2380 clientURLs=http://10.116.131.213:2379
cffd4985c990f872: name=f047b9ff24f3d0c4e74c660709103b36 peerURLs=http://172.31.6.166:2380 clientURLs=http://10.116.131.197:2379
d0a4c6d73d0d8d17: name=8981923b54d7d7f46fabc527936a7dcf peerURLs=http://172.31.4.17:2380 clientURLs=http://10.116.131.121:2379

That's it! The CoreOS cluster has been successfully deployed.  Given HPE Helion Eucalyptus's AWS compatibility, this template can be used on AWS as well.

As always, please let me know if there are any questions.  Enjoy!


Setting Up 3-Factor Authentication (Keypair, Password, Google Authenticator) for Eucalyptus Cloud Instances

Recently, I was logging into my AWS account, where I have multi-factor authentication (MFA) enabled, using the Google Authenticator application on my smart phone.  This inspired me to research how to enable MFA for any Linux distribution.  I ran across the following blog entries:

From there, I figured I would try to create a Eucalyptus EMI that would support three-factor authentication on a Eucalyptus 4.0 cloud.  The trick here was to figure out how to display the Google Authenticator information so users could configure Google Authenticator.  The euca2ools command ‘euca-get-console-output‘ proved to be the perfect mechanism to provide this information to the cloud user.  This blog will show how to configure an Ubuntu Trusty (14.04) Cloud image to support three-factor authentication.

Prerequisites

In order to leverage the steps mentioned in this blog, the following is needed:

Now that the prereqs have been mentioned, let's get started.

Updating the Ubuntu Image

Before we can update the Ubuntu image, let’s download the image:

[root@odc-f-13 ~]# wget http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img

After the image has been downloaded successfully, the image needs to be converted to a raw format.  Use qemu-img for this conversion:

[root@odc-f-13 ~]# qemu-img convert -O raw trusty-server-cloudimg-amd64-disk1.img trusty-server-cloudimg-amd64-disk1.raw

After converting the image to a raw format, we need to mount it in order to update the image accordingly:

[root@odc-f-13 ~]# losetup /dev/loop0 trusty-server-cloudimg-amd64-disk1.raw
[root@odc-f-13 ~]# kpartx -av /dev/loop0
add map loop0p1 (253:2): 0 4192256 linear /dev/loop0 2048
[root@odc-f-13 ~]# mkdir /mnt/ubuntu
[root@odc-f-13 ~]# mount /dev/mapper/loop0p1 /mnt/ubuntu
[root@odc-f-13 ~]# chroot /mnt/ubuntu

The 'chroot' command above allows us to edit the image as if it were the currently running Linux operating system.  We have to install a couple of packages in the image.  Before we do, use resolvconf to create the necessary information in /etc/resolv.conf.

root@odc-f-13:/# resolvconf -I

Confirm the settings are correct by running ‘apt-get update’:

root@odc-f-13:/#  apt-get update

Once that command runs successfully, install the PAM module for Google Authenticator and the whois package:

root@odc-f-13:/# apt-get install libpam-google-authenticator whois

After these packages have been installed, run the ‘google-authenticator’ command to see all the available options:

root@odc-f-13:/# google-authenticator --help
google-authenticator [<options>]
 -h, --help Print this message
 -c, --counter-based Set up counter-based (HOTP) verification
 -t, --time-based Set up time-based (TOTP) verification
 -d, --disallow-reuse Disallow reuse of previously used TOTP tokens
 -D, --allow-reuse Allow reuse of previously used TOTP tokens
 -f, --force Write file without first confirming with user
 -l, --label=<label> Override the default label in "otpauth://" URL
 -q, --quiet Quiet mode
 -Q, --qr-mode={NONE,ANSI,UTF8}
 -r, --rate-limit=N Limit logins to N per every M seconds
 -R, --rate-time=M Limit logins to N per every M seconds
 -u, --no-rate-limit Disable rate-limiting
 -s, --secret=<file> Specify a non-standard file location
 -w, --window-size=W Set window of concurrently valid codes
 -W, --minimal-window Disable window of concurrently valid codes

Updating PAM configuration

Next the PAM configuration file /etc/pam.d/common-auth needs to be updated.  Find the following line in that file:

auth [success=1 default=ignore] pam_unix.so nullok_secure

Replace it with the following lines:

auth requisite pam_unix.so nullok_secure
auth requisite pam_google_authenticator.so
auth [success=1 default=ignore] pam_permit.so

Next, we need to update SSHD configuration.

Update SSHD configuration

We need to modify the /etc/ssh/sshd_config file to help make sure the Google Authenticator PAM module works successfully.  Modify/add the following lines to the /etc/ssh/sshd_config file:

ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive

Updating Cloud-Init Configuration

The next modification involves enabling the ‘ubuntu‘ user to have a password.  By default, the account is locked (i.e. doesn’t have a password assigned) in the cloud-init configuration file.  For this exercise, we will enable it, and assign a password.  Just like the old Ubuntu Cloud images, we will assign the ‘ubuntu‘ user the password ‘ubuntu‘.

Use ‘mkpasswd‘ as mentioned in the cloud-init documentation to create the password for the user:

root@odc-f-13:/# mkpasswd --method=SHA-512
Password:
$6$8/.y8gwYT$dVmtT7jXdBrz0w1ku5mh6HOC.vngjsXpehyeEicJT4kIyhvUMV3p9VGUIDC42Z1mjXdfAaQkINcCfcFe5jEKX/

In the file /etc/cloud/cloud.cfg, find the section ‘default_user‘.  Change the following line from:

lock_passwd: True

to

lock_passwd: False
passwd: $6$8/.y8gwYT$dVmtT7jXdBrz0w1ku5mh6HOC.vngjsXpehyeEicJT4kIyhvUMV3p9VGUIDC42Z1mjXdfAaQkINcCfcFe5jEKX/

The value for the ‘passwd‘ option is the output from the mkpasswd command executed earlier.

Updating /etc/rc.local

The final update to the image is to add some bash code to the /etc/rc.local file.   The reason for this update is so that the information needed to configure Google Authenticator for the instance can be presented to the user through the output of 'euca-get-console-output'.  Add the following code to the /etc/rc.local file above the 'exit 0' line:

if [ ! -f /home/ubuntu/.google_authenticator ]; then
 /bin/su ubuntu -c "google-authenticator -t -d -f -r 3 -R 30 -w 4" > /root/google-auth.txt
 echo "############################################################"
 echo "Google Authenticator Information:"
 echo "############################################################"
 cat /root/google-auth.txt
 echo "############################################################"
fi

That's it!  Now we need to bundle, upload and register the image.

Bundle, Upload and Register the Image

Since we are using an HVM image, we don’t have to worry about the kernel and ramdisk.  We can just bundle, upload and register the image.  To do so, use the euca-install-image command.  Before we do that, we need to exit out of the chroot environment and unmount the image:

root@odc-f-13:/# exit
[root@odc-f-13 ~]# umount /mnt/ubuntu
[root@odc-f-13 ~]# kpartx -dv /dev/loop0
del devmap : loop0p1
[root@odc-f-13 ~]# losetup -d /dev/loop0

After unmounting the image, bundle, upload and register the image with the euca-install-image command:

[root@odc-f-13 ~]# euca-install-image -b ubuntu-trusty-server-google-auth-x86_64-hvm -i trusty-server-cloudimg-amd64-disk1.raw --virtualization-type hvm -n trusty-server-google-auth -r x86_64
/var/tmp/bundle-Q8yit1/trusty-server-cloudimg-amd64-disk1.raw.manifest.xml 100% |===============| 7.38 kB 3.13 kB/s Time: 0:00:02
IMAGE emi-FF439CBA

After the image is registered, launch the instance with a keypair that has been created using the ‘euca-create-keypair‘ command:

[root@odc-f-13 ~]# euca-run-instances -k account1-user01 -t m1.medium emi-FF439CBA
RESERVATION r-B79E6A59 408396244283 default
INSTANCE i-48D98090 emi-FF439CBA pending account1-user01 0 m1.medium 2014-07-21T20:23:10.285Z ViciousLiesAndDangerousRumors monitoring-disabled 0.0.0.0 0.0.0.0 instance-store hvm sg-A5133B59

Once the instance has reached the 'running' state, use 'euca-get-console-output' to grab the Google Authenticator information:

[root@odc-f-13 ~]# euca-describe-instances i-48D98090
RESERVATION r-B79E6A59 408396244283 default
INSTANCE i-48D98090 emi-FF439CBA euca-10-104-6-237.bigboi.acme.eucalyptus-systems.com euca-172-18-238-157.bigboi.internal running account1-user01 0 m1.medium 2014-07-21T20:23:10.285Z ViciousLiesAndDangerousRumors monitoring-disabled 10.104.6.237 172.18.238.157 instance-store hvm sg-A5133B59
[root@odc-f-13 ~]# euca-get-console-output i-48D98090
.......
############################################################
Google Authenticator Information:
############################################################
https://www.google.com/chart?chs=200x200&chld=M|0&cht=qr&chl=otpauth://totp/ubuntu@euca-172-18-238-157%3Fsecret%3D2MGKGDZTFLVE5LCX
Your new secret key is: 2MGKGDZTFLVE5LCX
Your verification code is 275414
Your emergency scratch codes are:
 59078604
 17425999
 89676696
 65201554
 14740079
############################################################
.....

Now we are ready to test access to the instance.

Testing Access to the Instance

To test access to the instance, make sure the Google Authenticator application is installed on your smartphone/hand-held device.  Next, copy the URL seen in the output (e.g. https://www.google.com/chart?chs=200x200&chld=M|0&cht=qr&chl=otpauth://totp/ubuntu@euca-172-18-238-157%3Fsecret%3D2MGKGDZTFLVE5LCX) from 'euca-get-console-output', and paste it into a browser:

OTPAUTH URL for Google Authenticator

Use the ‘Google Authenticator’ application on your smart phone/hand-held device, and scan the QR Code:

Google Authenticator Application

Google Authenticator Application – Set Up Account

After selecting the ‘Set up account‘ option, select ‘Scan a barcode‘, hold your smartphone/hand-held device to the screen where your browser is showing the QR code, and scan:

Google Authenticator Application – Scan Barcode

After scanning the QR code, you should see the account get added, and the verification codes begin to populate for the account:

Verification Code For Instance

Finally, SSH into the instance using the following:

  • the private key of the keypair used when launching the instance with euca-run-instances
  • the password 'ubuntu'
  • the verification code displayed in Google Authenticator for the new account added

With the information above, the SSH authentication should look similar to the following:

[root@odc-f-13 ~]# ssh -i account1-user01/account1-user01.priv ubuntu@euca-10-104-6-237.bigboi.acme.eucalyptus-systems.com
The authenticity of host 'euca-10-104-6-237.bigboi.acme.eucalyptus-systems.com (10.104.6.237)' can't be established.
RSA key fingerprint is c9:37:18:66:e3:ee:66:d2:8a:ac:a4:21:a6:84:92:08.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'euca-10-104-6-237.bigboi.acme.eucalyptus-systems.com,10.104.6.237' (RSA) to the list of known hosts.
Authenticated with partial success.
Password:
Verification code:
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-32-generic x86_64)

* Documentation: https://help.ubuntu.com/

System information as of Mon Jul 21 13:23:48 UTC 2014

System load: 0.0 Memory usage: 5% Processes: 68
 Usage of /: 56.1% of 1.32GB Swap usage: 0% Users logged in: 0

Graph this data and manage this system at:
 https://landscape.canonical.com/

Get cloud support with Ubuntu Advantage Cloud Guest:
 http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

ubuntu@euca-172-18-238-157:~$

Three-factor authentication has been successfully configured for the Ubuntu cloud image.  If cloud administrators would like to use different authentication for the instance user, I suggest investigating how to set up PAM LDAP authentication, where SSH public keys are stored in OpenLDAP.  In order to do this, the Ubuntu image would have to be updated accordingly.  I would check out the 'sss_ssh_authorizedkeys' command, and the pam-script module to potentially help get this working.

Enjoy!


Eucalyptus 4.0 Load Balancer Statistics Web UI for the Cloud Administrator

Background

From the cloud user's perspective, the Eucalyptus Load Balancer is a "black box".  The only interaction cloud users have with the Eucalyptus Load Balancer is through the eulb-* commands in euca2ools or the AWS Elastic Load Balancing API tools.   In Eucalyptus 3.4 and greater, the cloud administrator (any user under the 'eucalyptus' account) has the ability to access the instance that implements the load balancing solution used by the Eucalyptus Load Balancing service.  This access can be used to help troubleshoot the Eucalyptus Load Balancer if there are any issues reported by the cloud user.

The Eucalyptus Load Balancer utilizes HAProxy to implement the load balancing solution.  HAProxy has a cool feature that enables displaying a statistics page for the HAProxy application.  Enabling this feature on the Eucalyptus Load Balancer can help cloud administrators obtain valuable information from the load balancer in the following areas:

  • Network traffic to the backend instances registered with the load balancer
  • Network traffic to the load balancer
  • Triaging any Eucalyptus Load Balancer behavior associated with Eucalyptus CloudWatch alarms

Before getting into the details, I would like to thank Nathan Evans for his entry entitled “Cultural learnings of HA-Proxy, for make benefit…“, which helped influence this blog entry.   Now on to the fun stuff….

Prerequisites

The prerequisites for this blog entry are pretty straightforward – just read my previous entry entitled "Customizing Eucalyptus Load Balancer for Eucalyptus 4.0".  To enable the web UI stats page, we will just add information to the /etc/load-balancer-servo/haproxy_template.conf file in the load balancer image.

In addition, the cloud administrator credentials will be needed, along with euca2ools 3.1 installed.

Enabling the HAProxy Web Statistics Page

After downloading and mounting the Eucalyptus Load Balancer image (as mentioned in my previous blog entry), to enable the HAProxy web statistics page, update the /etc/load-balancer-servo/haproxy_template.conf to look like the following:

[root@odc-f-13 /]# cat etc/load-balancer-servo/haproxy_template.conf
#template
global
 maxconn 100000
 ulimit-n 655360
 pidfile /var/run/haproxy.pid

#drop privileges after port binding
 user servo
 group servo

defaults
 timeout connect 5s
 timeout client 2m
 timeout server 2m
 timeout http-keep-alive 10s
 timeout queue 1m
 timeout check 5s
 retries 3
 option dontlognull
 option redispatch
 option http-server-close # affects KA on/off

 userlist UsersFor_HAProxyStatistics
  group admin users admin
  user admin insecure-password pwd*4admin
  user stats insecure-password pwd*4stats

listen HAProxy-Statistics *:81
 mode http
 stats enable
 stats uri /haproxy?stats
 stats refresh 60s
 stats show-node
 stats show-legends
 acl AuthOkay_ReadOnly http_auth(UsersFor_HAProxyStatistics)
 acl AuthOkay_Admin http_auth_group(UsersFor_HAProxyStatistics) admin
 stats http-request auth realm HAProxy-Statistics unless AuthOkay_ReadOnly
 stats admin if AuthOkay_Admin

For more information regarding these options, please refer to the HAProxy 1.5 documentation.  The key options here are as follows:

  • The port defined in the ‘listen’ section – listen HAProxy-Statistics *:81
  • The username and passwords defined in the ‘userlist‘ subsection under the ‘defaults’ section.
  • The URI defined in the ‘listen’ section – stats uri /haproxy?stats

After making these changes, confirm that there aren’t any configuration file errors:

[root@odc-f-13 /]# /usr/sbin/haproxy -c -f etc/load-balancer-servo/haproxy_template.conf
 Configuration file is valid

Next, unmount the image, and tar-gzip the image:

[root@odc-f-13 eucalyptus-load-balancer-image]# umount /mnt/centos
[root@odc-f-13 eucalyptus-load-balancer-image]# kpartx -dv /dev/loop0
del devmap : loop0p1
[root@odc-f-13 eucalyptus-load-balancer-image]# losetup -d /dev/loop0
[root@odc-f-13 eucalyptus-load-balancer-image]# tar -zcvf eucalyptus-load-balancer-image-monitored.tgz eucalyptus-load-balancer-image.img
eucalyptus-load-balancer-image.img

Use euca-install-load-balancer to upload the new image:

[root@odc-f-13 eucalyptus-load-balancer-image]# cd
[root@odc-f-13 ~]# euca-install-load-balancer --list
Currently Installed Load Balancer Bundles:

Version 2 (enabled)
emi-F0D5828C (loadbalancer-v2/eucalyptus-load-balancer-image.img.manifest.xml)
 Installed on 2014-05-28 at 11:10:03 PDT

[root@odc-f-13 ~]# euca-install-load-balancer -t eucalyptus-lb/usr/share/eucalyptus-load-balancer-image/eucalyptus-load-balancer-image-monitored.tgz
Decompressing tarball: eucalyptus-lb/usr/share/eucalyptus-load-balancer-image/eucalyptus-load-balancer-image-monitored.tgz
Bundling and uploading image to bucket: loadbalancer-v3
Registering image manifest: loadbalancer-v3/eucalyptus-load-balancer-image.img.manifest.xml
Registered image: emi-DB150EC0
PROPERTY loadbalancing.loadbalancer_emi emi-DB150EC0 was emi-F0D5828C

Load Balancing Support is Enabled
[root@odc-f-13 ~]# euca-install-load-balancer --list
Currently Installed Load Balancer Bundles:

Version 2
emi-F0D5828C (loadbalancer-v2/eucalyptus-load-balancer-image.img.manifest.xml)
 Installed on 2014-05-28 at 11:10:03 PDT

Version 3 (enabled)
emi-DB150EC0 (loadbalancer-v3/eucalyptus-load-balancer-image.img.manifest.xml)
 Installed on 2014-07-08 at 18:38:29 PDT

Testing the Eucalyptus Load Balancer Statistics Page

To view the HAProxy statistics page, create a Eucalyptus Load Balancer instance by using eulb-create-lb:

[root@odc-f-13 ~]# eulb-create-lb TestLoadBalancer -z ViciousLiesAndDangerousRumors -l "lb-port=80, protocol=HTTP, instance-port=80, instance-protocol=HTTP"
DNS_NAME TestLoadBalancer-408396244283.elb.acme.eucalyptus-systems.com

[root@odc-f-13 ~]# euca-describe-instances
RESERVATION r-06DF089F 944786667073 euca-internal-408396244283-TestLoadBalancer
INSTANCE i-3DA342C2 emi-DB150EC0 euca-10-104-6-233.bigboi.acme.eucalyptus-systems.com euca-172-18-229-187.bigboi.internal running euca-elb 0 m1.medium 2014-07-09T01:45:11.753Z ViciousLiesAndDangerousRumors monitoring-enabled 10.104.6.233 172.18.229.187 instance-store hvm 8ba248ae-dbeb-41ce-97df-fb13b91a337b_ViciousLiesAndDangerousR_1 sg-3EA4ADEC arn:aws:iam::944786667073:instance-profile/internal/loadbalancer/loadbalancer-vm-408396244283-TestLoadBalancer
TAG instance i-3DA342C2 Name loadbalancer-resources
TAG instance i-3DA342C2 aws:autoscaling:groupName asg-euca-internal-elb-408396244283-TestLoadBalancer
TAG instance i-3DA342C2 euca:node 10.105.1.188

Since the web statistics page is configured to display on port 81, use euca-authorize to allow access to that port in the load balancer’s security group.  I recommend limiting access to the port for security reasons.  In the example below, access is limited to only the client 192.168.30.25:

[root@odc-f-13 ~]# euca-authorize -P tcp -p 81 -s 192.168.30.25/32 euca-internal-408396244283-TestLoadBalancer
 GROUP euca-internal-408396244283-TestLoadBalancer
 PERMISSION euca-internal-408396244283-TestLoadBalancer ALLOWS tcp 81 81 FROM CIDR 192.168.30.25/32

Finally, use a browser on the authorized client to view the statistics page on the load balancer.  In this example, the URL – http://testloadbalancer-408396244283.elb.acme.eucalyptus-systems.com:81/haproxy?stats – will be used.  Use the username and password credentials that were added to the HAProxy configuration file to view the page.  It should look similar to the screenshot below:

HAProxy Statistics Web Page of the Eucalyptus Load Balancer
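If a browser is not convenient, the same page can be fetched from the command line.  Below is a small Python 2 sketch that authenticates as the 'stats' user defined in haproxy_template.conf above; the load balancer DNS name is the one from this example, so substitute your own:

import base64
import urllib2

# Example load balancer DNS name -- replace with your ELB's DNS name.
stats_url = 'http://testloadbalancer-408396244283.elb.acme.eucalyptus-systems.com:81/haproxy?stats'

request = urllib2.Request(stats_url)
# Credentials from the userlist section of haproxy_template.conf.
request.add_header('Authorization',
                   'Basic ' + base64.b64encode('stats:pwd*4stats'))
print urllib2.urlopen(request).read()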

That's it!  For any load balancer that's launched on the Eucalyptus 4.0 cloud, the cloud administrator will be able to display statistics of the load balancer.  This is also something that the cloud administrator can provide to cloud users as a service.  By leveraging restrictions placed in security groups of the load balancer, cloud administrators can limit access to the statistics page based upon the source IP addresses of the cloud users' client machine(s).

Enjoy!


IAM Roles and Instance Profiles in Eucalyptus 3.4

IAM Roles in AWS are quite powerful – especially when users need instances to access service APIs to implement complex deployments.  In the past, this could be accomplished by passing access keys and secret keys through the instance user data service, which can be cumbersome and is quite insecure.  With IAM roles, instances can be launched with profiles that allow them to leverage various IAM policies provided by the user to control what service APIs instances can access in a secure manner.  As part of its constant pursuit of AWS compatibility, one of the new features in Eucalyptus 3.4 is support for IAM roles and instance profiles (and yes, it works with tools like ec2-api-tools, and libraries like boto, which support accessing IAM roles through the instance metadata service).

This blog entry will demonstrate the following:

  • Set up a Eucalyptus IAM role
  • Create a Eucalyptus instance profile
  • Assign an instance profile when launching an instance
  • Leverage the IAM role from within the instance to access a service API (for this example, it will be the EC2 service API on Eucalyptus)

Prerequisites

To use IAM roles on Eucalyptus, the following is required:

  • A Eucalyptus 3.4 cloud – These packages can be downloaded from the Eucalyptus 3.4 nightly repo.  For additional information regarding downloading nightly builds of Eucalyptus, please refer to the Eucalyptus Install Guide (note: anywhere there is a "3.3" reference, replace with "3.4")
  • User Credentials – User credentials for an account administrator (admin user), and credentials of a non-admin user of a non-eucalyptus account.
  • Apply an IAM policy for the non-admin user to launch instances, and pass roles to instances launched by that user using euare-useruploadpolicy.  An example policy is below:

    {"Statement": [
     "Effect":"Allow",
     "Action":"iam:PassRole",
     "Resource":"*"
     },
     {
     "Effect":"Allow",
     "Action":"iam:ListInstanceProfiles",
     "Resource":"*"
     },
     {
     "Effect":"Allow",
     "Action":"ec2:*",
     "Resource":"*"
     }]
    }

  • AWS IAM CLI Tools and Euca2ools 3 – The AWS IAM CLI tools are for creating IAM roles and instance profiles; euca2ools is for launching instances.  There will be one configuration file for the AWS IAM CLI tools that will contain the credentials of the account admin user (for example, account1-admin.config).  Euca2ools will only need the credentials of the non-admin user in the euca2ools.ini file (for example, in a user section called account1-user01).

Creating a Eucalyptus IAM Role

Just as in AWS IAM, iam-rolecreate can be used with Eucalyptus IAM to create IAM roles.  To create an IAM role on Eucalyptus, run the following command:

# iam-rolecreate --aws-credential-file account1-admin.config \
--url http://10.104.10.6:8773/services/Euare/ -r ACCT1-EC2-ACTIONS \
-s http://10.104.10.6:8773/services/Eucalyptus
# iam-rolelistbypath --aws-credential-file account1-admin.config \
--url http://10.104.10.6:8773/services/Euare/
arn:aws:iam::735723906303:role/ACCT1-EC2-ACTIONS
IsTruncated: false

This will create an IAM role called ACCT1-EC2-ACTIONS.  Next, we need to add an IAM policy to the role.  As mentioned earlier, the IAM policy will allow the instance to execute an EC2 API call (in this case, ec2-describe-availability-zones).  Use iam-roleuploadpolicy to upload the following IAM policy file:

{
"Statement": [
{
"Sid": "Stmt1381454720306",
"Action": [
"ec2:DescribeAvailabilityZones"
],
"Effect": "Allow",
"Resource": "*"
}
]
}

After the IAM policy file has been created (e.g. ec2-describe-az), upload the policy to the role:

# iam-roleuploadpolicy --aws-credential-file account1-admin.config \
--url http://10.104.10.6:8773/services/Euare/ -p ec2-describe-az \
-f ec2-describe-az -r ACCT1-EC2-ACTIONS
# iam-rolelistpolicies --aws-credential-file account1-admin.config \
--url http://10.104.10.6:8773/services/Euare/ -r ACCT1-EC2-ACTIONS -v
ec2-describe-az
{
 "Statement": [
 {
 "Sid": "Stmt1381454720306",
 "Action": [
 "ec2:DescribeAvailabilityZones"
 ],
 "Effect": "Allow",
 "Resource": "*"
 }
 ]
}
IsTruncated: false

As displayed, the IAM role has been created, and an IAM policy has been added to the role successfully.  Now it's time to deal with instance profiles.

Create an Instance Profile and Add a Role to the Profile

Instance profiles are used to pass the IAM role to the instance.  An IAM role can be associated with many instance profiles, but an instance profile can be associated with only one IAM role.  To create an instance profile, use iam-instanceprofilecreate.  Since the IAM role ACCT1-EC2-ACTIONS was previously created, the role can be added as the instance profile is created:

# iam-instanceprofilecreate \
--aws-credential-file account1-admin.config \
--url http://10.104.10.6:8773/services/Euare/ -r ACCT1-EC2-ACTIONS \
-s instance-ec2-actions
# iam-instanceprofilelistbypath --aws-credential-file acct1-user1-aws-iam.config \
--url http://10.104.10.6:8773/services/Euare/
arn:aws:iam::735723906303:instance-profile/instances-ec2-actions
IsTruncated: false

We have successfully created an instance profile and associated an IAM role with it.  All that is left to do is test it out.

Testing out the Instance Profile

Before testing out the instance profile, make sure that the euca2ools.ini file has the correct user and region information for the non-admin user of the account (for this example, the user will be user01).  For information about obtaining the credentials for the user, please refer to the section “Create Credentials” in the Eucalyptus User Guide.

After setting up the euca2ools.ini file, use euca-run-instances to launch an instance with an instance profile.  The image used here is the Ubuntu Raring Cloud Image.  The keypair account1-user01 was created using euca-create-keypair.  To open up SSH access to the instance, use euca-authorize.   Create a cloud-init user data file to enable the multiverse repository.

# cat cloud-init.config
#cloud-config
apt_sources:
 - source: deb $MIRROR $RELEASE multiverse
apt_update: true
apt_upgrade: true
disable_root: true
# euca-run-instances --key account1-user1 emi-C25538DA \
--instance-type m1.large --user-data-file cloud-init.config \
--iam-profile arn:aws:iam::407837561996:instance-profile/instance-ec2-actions \
--region account1-user01@
RESERVATION r-CED1435E 407837561996 default
INSTANCE i-72F244CC emi-C25538DA 0.0.0.0 0.0.0.0 pending account1-user01 0 
m1.large 2013-10-10T22:08:00.589Z Exodus eki-C9083808 eri-39BC3B99 
monitoring-disabled 0.0.0.0 0.0.0.0 instance-store paravirtualized 
arn:aws:iam::407837561996:instance-profile/instance-ec2-actions
....
# euca-describe-instances --region account1-user01@
RESERVATION r-CED1435E 407837561996 default
INSTANCE i-72F244CC emi-C25538DA 10.104.7.22 172.17.190.157 
running account1-user01 0 m1.large 2013-10-10T22:08:00.589Z Exodus eki-C9083808 
eri-39BC3B99 monitoring-disabled 10.104.7.22 172.17.190.157 
instance-store paravirtualized 
arn:aws:iam::407837561996:instance-profile/instance-ec2-actions
TAG instance i-72F244CC euca:node 10.105.10.11

Next, SSH into the instance and confirm the instance profile's credentials are accessible through the instance metadata service.

[root@odc-c-06 ~]# ssh-keygen -R 10.104.7.22
/root/.ssh/known_hosts updated.
Original contents retained as /root/.ssh/known_hosts.old
[root@odc-c-06 ~]# ssh -i euca-admin.priv ubuntu@10.104.7.22
The authenticity of host '10.104.7.22 (10.104.7.22)' can't be established.
RSA key fingerprint is a1:b2:5d:1a:be:e3:cb:0b:58:5f:bd:c1:e2:1f:e3:2d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.104.7.22' (RSA) to the list of known hosts.
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
Welcome to Ubuntu 13.04 (GNU/Linux 3.8.0-31-generic x86_64)
* Documentation: https://help.ubuntu.com/
.....
Get cloud support with Ubuntu Advantage Cloud Guest:
 http://www.ubuntu.com/business/services/cloud
Use Juju to deploy your cloud instances and workloads:
 https://juju.ubuntu.com/#cloud-raring
0 packages can be updated.
0 updates are security updates.
ubuntu@ip-172-17-190-157:~$ curl http://169.254.169.254/latest/meta-data/
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
iam/
instance-id
instance-type
kernel-id
local-hostname
local-ipv4
mac
placement/
public-hostname
public-ipv4
public-keys/
ramdisk-id
reservation-id
security-groups
### check for IAM role temporary security credentials ###
ubuntu@ip-172-17-190-157:~$ curl http://169.254.169.254/latest/meta-data/iam/
security-credentials/ACCT1-EC2-ACTIONS
{
 "Code": "Success",
 "LastUpdated": "2013-10-11T18:07:37Z",
 "Type": "AWS-HMAC",
 "AccessKeyId": "AKIYW7FDRV8ZG5HIM91D",
 "SecretAccessKey": "sgVOgLJoc3wXjI5mu7yrYXI3NHtiq18cJuOT7Mwh",
 "Token": "ZXVjYQABQe4E4f2NnIsnvT/5jfpauKh3dClPVwPEoMepqk0lViODSgk4axiQb9rRQyU7Qnhvxb22wO201EoT6Ay/
rg+1i3+2xQLfbkh7kqy4CmqdGM3Q7LNI1dFPSz332E6us5BsSdHpiw3VGLyMLnDAkV8BMi+6lKE5eaJ+hpFI/
KXEVPSNkFMI9R+9bKPIFZvceiBE1w+kAEJC/18uCpZ0kSNy2iFBYcZ+zTwrYTgnsqNYcEIuWzEh4z1WIA==",
 "Expiration": "2013-10-11T19:07:37Z"
}
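The same temporary credentials can also be read programmatically from inside the instance.  Here is a minimal Python 2 sketch using the role name from this example:

import json
import urllib2

role_name = 'ACCT1-EC2-ACTIONS'
url = ('http://169.254.169.254/latest/meta-data/iam/'
       'security-credentials/' + role_name)

# The metadata service returns the temporary credentials as JSON.
credentials = json.load(urllib2.urlopen(url))
print credentials['AccessKeyId']
print credentials['Expiration']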

Install the ec2-api-tools from the Ubuntu Raring multiverse repository.

ubuntu@ip-172-17-190-157:~$ sudo apt-get update
Get:1 http://security.ubuntu.com raring-security Release.gpg [933 B]
Hit http://Exodus.clouds.archive.ubuntu.com raring Release.gpg
......
Ign http://Exodus.clouds.archive.ubuntu.com raring-updates/main Translation-en_US
Ign http://Exodus.clouds.archive.ubuntu.com raring-updates/multiverse Translation-en_US
Ign http://Exodus.clouds.archive.ubuntu.com raring-updates/universe Translation-en_US
Fetched 8,015 kB in 19s (421 kB/s)
Reading package lists... Done
ubuntu@ip-172-17-190-157:~$ sudo apt-get install ec2-api-tools
Reading package lists... Done
The following extra packages will be installed:
 ca-certificates-java default-jre-headless fontconfig-config
 icedtea-7-jre-jamvm java-common libavahi-client3 libavahi-common-data 
libavahi-common3 libcups2 libfontconfig1 libjpeg-turbo8 libjpeg8 liblcms2-2
 libnspr4 libnss3 libnss3-1d openjdk-7-jre-headless openjdk-7-jre-lib 
ttf-dejavu-core tzdata-java
......
Adding debian:TDC_Internet_Root_CA.pem
Adding debian:SecureTrust_CA.pem
done.
Setting up openjdk-7-jre-lib (7u25-2.3.10-1ubuntu0.13.04.2) ...
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
Processing triggers for ca-certificates ...
Updating certificates in /etc/ssl/certs... 0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d....
done.
done.

Finally, run ec2-describe-availability-zones using the --url option to point to the Eucalyptus cloud being used.

ubuntu@ip-172-17-190-157:~$ ec2-describe-availability-zones \
-U http://10.104.10.6:8773/services/Eucalyptus/
AVAILABILITYZONE Legend 10.104.1.185 arn:euca:eucalyptus:Legend:cluster:IsThisLove/
AVAILABILITYZONE Exodus 10.104.10.22 arn:euca:eucalyptus:Exodus:cluster:NaturalMystic/

That's it!  Notice how there wasn't a need to pass any access key and secret key.  All that information is grabbed from the instance metadata service.
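Since boto reads the role's temporary credentials from the instance metadata service, the same call can be made from Python without supplying any keys.  A sketch, assuming boto is installed in the instance and using the same cloud front end URL as above:

import boto
from boto.ec2.regioninfo import RegionInfo

# No access or secret keys supplied -- boto falls back to the temporary
# credentials provided by the instance profile through the metadata service.
region = RegionInfo(name='eucalyptus', endpoint='10.104.10.6')
connection = boto.connect_ec2(region=region, is_secure=False, port=8773,
                              path='/services/Eucalyptus')

for zone in connection.get_all_zones():
    print zone.name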

IAM roles and instance profiles are quite powerful.  Great use cases include enabling CloudWatch metrics, and deploying ELBs on Eucalyptus.

I hope this has been helpful.  As always, any questions/suggestions/ideas/feedback are greatly appreciated.

Bind DNS + OpenLDAP MDB == Dynamic Domain and Fully Delegated Sub-Domain Configuration of DNS

This blog post was driven by the need to make it easier to test Eucalyptus DNS in a lab environment.   The goal was to have a scriptable way to add/delete fully delegated sub-domains without having to reload/restart DNS when Eucalyptus clouds were being deployed/destroyed.   This was tested on a CentOS 6 instance running in a Eucalyptus 3.3 HA Cloud.

Prerequisites

This entry will not cover setting up Eucalyptus HA, creating a Eucalyptus user, using eustore to register the image, and/or opening up ports in security groups.  It's assumed the reader understands these concepts.  The focus will be configuring and deploying Bind9 and OpenLDAP.

In addition to using a CentOS 6.4 image, the following is needed:

  • ports open for DNS (tcp and udp 53)
  • port open for OpenLDAP (tcp 389)
  • port open for SSH (tcp 22)

Now that the prereqs have been covered, let's jump into setting up the environment.

Base Software Installation

There are a series of packages needed to install OpenLDAP (since we are building from source), and Bind9.  Once the instance is launched and running, SSH into the instance, and run the following commands:

# sudo yum -y upgrade
# sudo yum install -y git cyrus-sasl gcc glibc-devel libtool-ltdl \
db4-devel openssl-devel unixODBC-devel libtool-ltdl-devel libtool \
cyrus-sasl-devel cyrus-sasl-gssapi cyrus-sasl-lib cyrus-sasl-md5 \
make bind-dyndb-ldap

All the packages except for bind-dyndb-ldap are needed to build OpenLDAP from source.  The reason we are building OpenLDAP from source is to take advantage of its powerful backend – MDB.  Check out my previous blogs on this topic from this listing.

The key package for Bind9 DNS to communicate to OpenLDAP as a backend is bind-dyndb-ldap.  This plug-in is used by the FreeIPA Identity/Policy Management application to help leverage 389 Directory Servers (which is based off OpenLDAP) for storing domain name information.

OpenLDAP Installation and Configuration

Since all the base packages are installed, we can now grab and install the latest source version of OpenLDAP.  While still logged into the instance, run the following command:

# git clone git://git.openldap.org/openldap.git ~/openldap

Since we are working with an instance based on the CentOS 6 image on eustore, we will use the ephemeral store (which is mounted under /media/ephemeral0) for the location of our OpenLDAP installation.  Create a directory for installing OpenLDAP on the ephemeral store by running the command below:

# mkdir /media/ephemeral0/openldap

Next, configure OpenLDAP:

# cd ~/openldap
# ./configure --prefix=/media/ephemeral0/openldap --enable-debug=yes \
--enable-syslog --enable-dynamic --enable-slapd --enable-dynacl \
--enable-spasswd --enable-modules --enable-rlookups --enable-mdb \
--enable-monitor --enable-overlays --with-cyrus-sasl --with-threads \
--with-tls=openssl CC="gcc" LDFLAGS="-L/usr/lib64/sasl2" CPPFLAGS="-I/usr/include/sasl"

After that completes successfully, compile and install OpenLDAP:

# make depend
# make
# sudo make install

After the installation is complete, create the openldap user that will be responsible for running OpenLDAP:

# sudo useradd -m -U -c "OpenLDAP User" -s /bin/bash openldap
# sudo passwd -l openldap

Since the bind-dyndb-ldap package was installed earlier, copy the schema to where OpenLDAP stores its schemas, so that it can be added to the OpenLDAP configuration:

# sudo cp /usr/share/doc/bind-dyndb-ldap-2.3/schema \
/media/ephemeral0/openldap/etc/openldap/schema/bind-dyndb-ldap.schema

Next, create the LDAP password for the cn=admin,cn=config user.  This user is responsible for managing the configuration of the OpenLDAP server using OLC:

# /media/ephemeral0/openldap/sbin/slappasswd -h {SSHA}

Modify the slapd.conf file located under /media/ephemeral0/openldap/etc/openldap/, to set up the base configuration structure (cn=config) for OpenLDAP.  When completed, it should look like the following:

#######################################################################
# Config database definitions
#######################################################################
pidfile /media/ephemeral0/openldap/var/run/slapd.pid
argsfile /media/ephemeral0/openldap/var/run/slapd.args
database config
rootdn cn=admin,cn=config
rootpw {SSHA}xxxxxxxxxxxxxxxxxxxxxxxx - (password created for cn=admin,cn=config user)
# Schemas, in order
include /media/ephemeral0/openldap/etc/openldap/schema/core.schema
include /media/ephemeral0/openldap/etc/openldap/schema/cosine.schema
include /media/ephemeral0/openldap/etc/openldap/schema/inetorgperson.schema
include /media/ephemeral0/openldap/etc/openldap/schema/collective.schema
include /media/ephemeral0/openldap/etc/openldap/schema/corba.schema
include /media/ephemeral0/openldap/etc/openldap/schema/duaconf.schema
include /media/ephemeral0/openldap/etc/openldap/schema/dyngroup.schema
include /media/ephemeral0/openldap/etc/openldap/schema/misc.schema
include /media/ephemeral0/openldap/etc/openldap/schema/nis.schema
include /media/ephemeral0/openldap/etc/openldap/schema/openldap.schema
include /media/ephemeral0/openldap/etc/openldap/schema/ppolicy.schema
include /media/ephemeral0/openldap/etc/openldap/schema/bind-dyndb-ldap.schema

Create the slapd.d directory under /media/ephemeral0/openldap/etc/openldap/.  This will contain all the directory information:

# sudo chown -R openldap:openldap /media/ephemeral0/openldap/* 
# su - openldap -c "mkdir /media/ephemeral0/openldap/etc/openldap/slapd.d"

Populate the slapd.d directory with the base configuration by running the following command:

su - openldap -c "/media/ephemeral0/openldap/sbin/slaptest \
-f /media/ephemeral0/openldap/etc/openldap/slapd.conf \
-F /media/ephemeral0/openldap/etc/openldap/slapd.d"
config file testing succeeded

For this example, we will be setting up the directory to use dc=eucalyptus,dc=com as the LDAP base.  Create the cn=Directory Manager,dc=eucalyptus,dc=com LDAP password:

# /media/ephemeral0/openldap/sbin/slappasswd -h {SSHA}

Create an LDIF that will define the configuration of the DB associated with the information regarding the DNS entries.  For this example, the LDIF will be called directory-layout.ldif.  It should look like the following:

#######################################################################
# MDB database definitions
#######################################################################
#
dn: olcDatabase=mdb,cn=config
changetype: add
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: mdb
olcSuffix: dc=eucalyptus,dc=com
olcRootDN: cn=Directory Manager,dc=eucalyptus,dc=com
olcRootPW: {SSHA}xxxx - (password of cn=Directory Manager,dc=eucalyptus,dc=com user)
olcDbDirectory: /media/ephemeral0/openldap/var/openldap-data/dns
olcDbIndex: objectClass eq
olcAccess: to attrs=userPassword by dn="cn=Directory Manager,dc=eucalyptus,dc=com"
 write by anonymous auth by self write by * none
olcAccess: to attrs=shadowLastChange by self write by * read
olcAccess: to dn.base="" by * read
olcAccess: to * by dn="cn=Directory Manager,dc=eucalyptus,dc=com" write by * read
olcDbMaxReaders: 0
olcDbMode: 0600
olcDbSearchStack: 16
olcDbMaxSize: 4294967296
olcAddContentAcl: FALSE
olcLastMod: TRUE
olcMaxDerefDepth: 15
olcReadOnly: FALSE
olcSyncUseSubentry: FALSE
olcMonitoring: TRUE
olcDbNoSync: FALSE
olcDbEnvFlags: writemap
olcDbEnvFlags: nometasync

Make sure and create the directory where the DB information will be stored:

# su - openldap -c "mkdir /media/ephemeral0/openldap/var/openldap-data/dns"

Start up the OpenLDAP directory:

# sudo /media/ephemeral0/openldap/libexec/slapd -h "ldap:/// ldapi:///" \
-u openldap -g openldap

After OpenLDAP has been started successfully,  upload the directory-layout.ldif as the cn=admin,cn=config user:

# /media/ephemeral0/openldap/bin/ldapadd -D cn=admin,cn=config -W \
-f directory-layout.ldif
Enter LDAP Password:
adding new entry "olcDatabase=mdb,cn=config"

To allow search access to the directory, create an LDIF called frontend.ldif, that contains the following:

dn: olcDatabase={-1}frontend,cn=config
changetype: modify
replace: olcAccess
olcAccess: to dn.base="" by * read
olcAccess: to dn.base="cn=Subschema" by * read
olcAccess: to * by self write by users read by anonymous auth

Upload the LDIF using the ldapmodify command:

# /media/ephemeral0/openldap/bin/ldapmodify -D cn=admin,cn=config -W -f frontend.ldif
Enter LDAP Password:
modifying entry "olcDatabase={-1}frontend,cn=config"

To check the results of these changes, use ldapsearch:

# /media/ephemeral0/openldap/bin/ldapsearch -D cn=admin,cn=config -W -b cn=config
Enter LDAP Password:
(ldapsearch results...)

After confirming that the base configurations have been stored, create an LDIF called dns-domain.ldif that lays out the directory structure for the database.  As seen previously in the directory-layout.ldif, the base is dc=eucalyptus,dc=com. The dns-domain.ldif file should look like the following:

dn: dc=eucalyptus,dc=com
objectClass: top
objectClass: dcObject
objectclass: organization
o: Eucalyptus Systems Inc - QA DNS Domain
dc: eucalyptus
description: Test LDAP+DNS Setup
 
dn: ou=dns,dc=eucalyptus,dc=com
objectClass: organizationalUnit
ou: dns

After creating the dns-domain.ldif file, upload the file using the cn=Directory Manager,dc=eucalyptus,dc=com user:

# /media/ephemeral0/openldap/bin/ldapadd -H ldap://localhost \
-D "cn=Directory Manager,dc=eucalyptus,dc=com" -W -f dns-domain.ldif
Enter LDAP Password:
adding new entry "dc=eucalyptus,dc=com"
adding new entry "ou=dns,dc=eucalyptus,dc=com"

To allow the cn=Directory Manager,dc=eucalyptus,dc=com user to see what updates are being done to the Directory, enable the Access Log overlay.  To enable this option, create an LDIF file called access-log.ldif.  The contents should look like the following:

dn: olcDatabase={2}mdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: {2}mdb
olcDbDirectory: /media/ephemeral0/openldap/var/openldap-data/access
olcSuffix: cn=log
olcDbIndex: reqStart eq
olcDbMaxSize: 1073741824
olcDbMode: 0600
olcAccess: {1}to * by dn="cn=Directory Manager,dc=eucalyptus,dc=com" read

dn: olcOverlay={1}accesslog,olcDatabase={3}mdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcAccessLogConfig
olcOverlay: {1}accesslog
olcAccessLogDB: cn=log
olcAccessLogOps: all
olcAccessLogPurge: 7+00:00 1+00:00
olcAccessLogSuccess: TRUE
olcAccessLogOld: (objectclass=idnsRecord)

After creating the access-log.ldif, create the directory for storing the access database:

# su - openldap -c "mkdir /media/ephemeral0/openldap/var/openldap-data/access"

Upload the LDIF using the cn=admin,cn=config user:

# /media/ephemeral0/openldap/bin/ldapadd -D cn=admin,cn=config -W -f access-log.ldif

Now that OpenLDAP is ready to go, let’s work on configuring Bind9 DNS.

Bind9 DNS Configuration

Since bind-dyndb-ldap depends on the bind package, named has already been installed on the instance.  The only thing left to do is edit /etc/named.conf so that we are able to use the dynamic LDAP backend module.  Edit /etc/named.conf so that it looks like the following:

options {
 listen-on port 53 { <private IP address of the instance>; };
 listen-on-v6 port 53 { ::1; };
 directory "/var/named";
 dump-file "/var/named/data/cache_dump.db";
 statistics-file "/var/named/data/named_stats.txt";
 memstatistics-file "/var/named/data/named_mem_stats.txt";
 recursion yes;

 dnssec-enable yes;
 dnssec-validation yes;
 dnssec-lookaside auto;

 /* Path to ISC DLV key */
 bindkeys-file "/etc/named.iscdlv.key";
 managed-keys-directory "/var/named/dynamic";
 allow-recursion { any; };
};

dynamic-db "qa_dns_test" {
 library "ldap.so";
 arg "uri ldap://localhost";
 arg "base ou=dns,dc=eucalyptus, dc=com";
 arg "auth_method none";
 arg "cache_ttl 10";
 arg "zone_refresh 1";
 arg "dyn_update yes";
};

logging {
 channel default_debug {
 file "data/named.run";
 severity debug;
 print-time yes;
 };
};

zone "." IN {
 type hint;
 file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

As seen above, the dynamic-db section is the configuration for connecting to the OpenLDAP server.  For more advanced configurations, please reference the README in the bind-dyndb-ldap repository on git.fedorahosted.org.

Now we are ready to start named (the bind DNS server).  Before starting the server, make sure to create the rndc key, then start named:

# rndc-confgen -a -r /dev/urandom
# service named start
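
To verify that named came up cleanly and is listening on port 53, something like the following can be used (assuming rndc and netstat are available on the instance):

# rndc status
# netstat -tulpn | grep :53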

You have successfully created a Bind9 DNS + OpenLDAP deployment.  Let’s run a quick test.

Test the Deployment

To test the deployment, I created an LDIF called test-cloud.ldif.  The configuration sets up a domain called eucalyptus-systems.com.  It also creates a sub-domain that will be forwarding requests for euca-hasp.eucalyptus-systems.com to the CLCs of the Eucalyptus HA deployment that has been set up.  The contents of the file are as follows:

dn: idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com
objectClass: top
objectClass: idnsZone
objectClass: idnsRecord
idnsName: eucalyptus-systems.com
idnsUpdatePolicy: grant EUCALYPTUS-SYSTEMS.COM krb5-self * A;
idnsZoneActive: TRUE
idnsSOAmName: server.eucalyptus-systems.com
idnsSOArName: root.server.eucalyptus-systems.com
idnsAllowQuery: any;
idnsAllowDynUpdate: TRUE
idnsSOAserial: 1
idnsSOArefresh: 10800
idnsSOAretry: 900
idnsSOAexpire: 604800
idnsSOAminimum: 86400
NSRecord: ns
ARecord: 192.168.55.103 - (the public IP of the instance)

dn: idnsName=ns,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com
objectClass: idnsRecord
objectClass: top
idnsName: ns
aRecord: 192.168.55.103

dn: idnsName=server,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com
objectClass: idnsRecord
objectClass: top
idnsName: server
CNAMERecord: eucalyptus-systems.com.

dn: idnsname=39.168.192.in-addr.arpa.,ou=dns,dc=eucalyptus,dc=com
objectClass: idnszone
objectClass: idnsrecord
objectClass: top
idnsName: 39.168.192.in-addr.arpa.
idnsSOAmName: server.eucalyptus-systems.com
idnsSOArName: root.server.eucalyptus-systems.com
idnsSOAserial: 1350039556
idnsSOArefresh: 10800
idnsSOAretry: 900
idnsSOAexpire: 604800
idnsSOAminimum: 86400
idnsZoneActive: TRUE
idnsAllowDynUpdate: TRUE
idnsAllowQuery: any;
idnsAllowTransfer: none;
idnsUpdatePolicy: grant EUCALYPTUS-SYSTEMS.COM krb5-subdomain 39.168.192.in-addr.arpa. PTR;
nSRecord: server.eucalyptus-systems.com.

dn: idnsName=_ldap._tcp,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com
objectClass: idnsRecord
objectClass: top
idnsName: _ldap._tcp
SRVRecord: 0 100 389 server

dn: idnsName=_ntp._udp,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com
objectClass: idnsRecord
objectClass: top
idnsName: _ntp._udp
SRVRecord: 0 100 123 server

# The DNS entries for the CLCs of the cloud - viking-01 and viking-02

dn: idnsName=viking-02,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com
objectClass: idnsRecord
objectClass: top
idnsName: viking-02
aRecord: 192.168.39.102

dn: idnsname=102,idnsname=39.168.192.in-addr.arpa.,ou=dns,dc=eucalyptus,dc=com
objectClass: idnsrecord
objectClass: top
idnsName: 102
pTRRecord: viking-02.eucalyptus-systems.com.

dn: idnsName=viking-01,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com
objectClass: idnsRecord
objectClass: top
idnsName: viking-01
aRecord: 192.168.39.101

dn: idnsname=101,idnsname=39.168.192.in-addr.arpa.,ou=dns,dc=eucalyptus,dc=com
objectClass: idnsrecord
objectClass: top
idnsName: 101
pTRRecord: viking-01.eucalyptus-systems.com.

# The delegated zone - euca-hasp.eucalyptus-systems.com

dn: idnsName=euca-hasp,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com
objectClass: top
objectClass: idnsRecord
objectClass: idnsZone
idnsForwardPolicy: first
idnsAllowDynUpdate: FALSE
idnsZoneActive: TRUE
idnsAllowQuery: any;
idnsForwarders: 192.168.39.101
idnsForwarders: 192.168.39.102
idnsName: euca-hasp
idnsSOAmName: server.eucalyptus-systems.com
idnsSOArName: root.server.eucalyptus-systems.com
idnsSOAretry: 15
idnsSOAserial: 1
idnsSOArefresh: 80
idnsSOAexpire: 120
idnsSOAminimum: 30
nSRecord: viking-01.eucalyptus-systems.com
nSRecord: viking-02.eucalyptus-systems.com

Since this was intended for a lab environment where sub-domains (and possibly domains) would be added/deleted on a regular basis, the SOA records were not set to the RFC 1912 standards defined for production DNS use.

After creating this LDIF,  it was uploaded to the LDAP server as the cn=Directory Manager,dc=eucalyptus,dc=com user:

# /media/ephemeral0/openldap/bin/ldapadd -H ldap://localhost \
-D "cn=Directory Manager,dc=eucalyptus,dc=com" -W -f test-cloud.ldif
Enter LDAP Password:
adding new entry "idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
adding new entry "idnsName=ns,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
adding new entry "idnsName=server,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
adding new entry "idnsname=39.168.192.in-addr.arpa.,ou=dns,dc=eucalyptus,dc=com"
adding new entry "idnsName=_ldap._tcp,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
adding new entry "idnsName=_ntp._udp,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
adding new entry "idnsName=viking-02,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
adding new entry "idnsname=102,idnsname=39.168.192.in-addr.arpa.,ou=dns,dc=eucalyptus,dc=com"
adding new entry "idnsName=viking-01,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
adding new entry "idnsname=101,idnsname=39.168.192.in-addr.arpa.,ou=dns,dc=eucalyptus,dc=com"
adding new entry "idnsName=euca-hasp,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"

To test out the setup, tests were run against the public IP address of the instance to resolve the various records that were configured:

# nslookup viking-01.eucalyptus-systems.com 192.168.55.103
Server: 192.168.55.103
Address: 192.168.55.103#53

Name: viking-01.eucalyptus-systems.com
Address: 192.168.39.101

# nslookup 192.168.39.101 192.168.55.103
Server: 192.168.55.103
Address: 192.168.55.103#53

101.39.168.192.in-addr.arpa name = viking-01.eucalyptus-systems.com.

# nslookup eucalyptus.euca-hasp.eucalyptus-systems.com 192.168.55.103
Server: 192.168.55.103
Address: 192.168.55.103#53

Non-authoritative answer:
Name: eucalyptus.euca-hasp.eucalyptus-systems.com
Address: 192.168.39.102

# nslookup walrus.euca-hasp.eucalyptus-systems.com 192.168.55.103
Server: 192.168.55.103
Address: 192.168.55.103#53

Non-authoritative answer:
Name: walrus.euca-hasp.eucalyptus-systems.com
Address: 192.168.39.101

As seen above, not only did resolution come back correctly for the machines under the eucalyptus-systems.com domain, but requests for the hosts under euca-hasp.eucalyptus-systems.com were also forwarded and answered correctly.
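
The SRV records defined in test-cloud.ldif can be spot-checked the same way; for example, assuming dig from bind-utils is available:

# dig @192.168.55.103 _ldap._tcp.eucalyptus-systems.com SRV +short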

To delete the setup, an LDIF called delete-test-cloud.ldif was created from test-cloud.ldif as follows:

# tac test-cloud.ldif | grep dn: > delete-test-cloud.ldif

Open up the delete-test-cloud.ldif and add the following lines between each dn:

changetype: delete
(empty line)
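
If you would rather not edit the file by hand, here is one possible way to do it with awk (a quick sketch; it prints each dn followed by changetype: delete and a blank line):

# awk '{ print; if ($0 ~ /^dn:/) print "changetype: delete\n" }' delete-test-cloud.ldif > tmp.ldif && mv tmp.ldif delete-test-cloud.ldif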

Now, use ldapmodify as the cn=Directory Manager,dc=eucalyptus,dc=com user to delete the entries:

# /media/ephemeral0/openldap/bin/ldapmodify -H ldap://localhost -D "cn=Directory Manager,dc=eucalyptus,dc=com" -W -f delete-test-cloud.ldif
Enter LDAP Password:
deleting entry "idnsName=euca-hasp,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
deleting entry "idnsname=101,idnsname=39.168.192.in-addr.arpa.,ou=dns,dc=eucalyptus,dc=com"
deleting entry "idnsName=viking-01,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
deleting entry "idnsname=102,idnsname=39.168.192.in-addr.arpa.,ou=dns,dc=eucalyptus,dc=com"
deleting entry "idnsName=viking-02,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
deleting entry "idnsName=_ntp._udp,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
deleting entry "idnsName=_ldap._tcp,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
deleting entry "idnsname=39.168.192.in-addr.arpa.,ou=dns,dc=eucalyptus,dc=com"
deleting entry "idnsName=server,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
deleting entry "idnsName=ns,idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"
deleting entry "idnsName=eucalyptus-systems.com,ou=dns,dc=eucalyptus,dc=com"

To confirm, do a lookup against one of the entries to see if it still exists:

# nslookup viking-01.eucalyptus-systems.com 192.168.55.103
Server: 192.168.55.103
Address: 192.168.55.103#53

** server can't find viking-01.eucalyptus-systems.com: NXDOMAIN

There you have it!  A successful Bind9 DNS + OpenLDAP deployment is ready to be used.

Enjoy!  And as always, questions/suggestions/comments are always welcome.


OpenLDAP: A comparison of back-mdb and back-hdb performance

Great insight into how much performance improvement you get with OpenLDAP when you use back-mdb instead of back-hdb.

Quanah's LDAP Blog

One of the biggest changes to OpenLDAP in years has made its way into the latest OpenLDAP 2.4 releases, and that is a brand new backend named “back-mdb”.  This new backend leverages the Lightning Memory-Mapped Database from Symas.  To see why this new backend was introduced, it is useful to look at the differences in performance and resource utilization between old BDB based back-hdb and the new LMDB based back-mdb.

Hardware details

  • Dell PowerEdge R710
  • 36GB of RAM
  • ESXi 5.1 hypervisor
  • 2 CPU, 4 cores per CPU, with hyperthreading (16 vCPUs)
  • 1.2 TB RAID array from 4x SEAGATE ST9300603SS 300GB 10kRPM drives
  • Ubuntu12 64-bit OS (3.5.0-28-generic #48~precise1-Ubuntu SMP kernel)
  • LDAP data is stored on its own /ldap partition, using ext2 as the filesystem type
  • ext2 options: noatime,defaults

Software details

  • OpenLDAP 2.4 Engineering from 5/10/2013
  • Berkeley DB 5.2.36 for the back-hdb backend
  • For read tests, slamd 2.0.1 was used to…



Advanced Configuration of DRBD: Eucalyptus 3.2 Walrus High Availability

On December 18, 2012, Eucalyptus v3.2 was released.  One of the main focuses of this release was to harden the High Availability design of Eucalyptus.  I have recently been looking at additional configuration features of DRBD – which is used by Eucalyptus Walrus HA – that can be used in the enterprise to add robustness and more efficiency to disaster recovery efforts.  *NOTE* This blog entry’s main focus is DRBD, which is separate from Eucalyptus.  The goal is to shed light on the additional configuration options that help DRBD be the robust and reliable product that it is, AND to show how Eucalyptus works with various open source products.

The Baseline

Before getting into the resource configuration options with DRBD, let’s talk about the disk setup that was used.  I recommend using LVM for the backing device used with DRBD.  The features that LVM provides allow a cloud admin to add additional backup measures (e.g. LVM snapshots) and recover from outages more efficiently – minimizing end-user-perceived outages.  For more information regarding what LVM is, and all its features, CentOS/RHEL provide great documentation around this application.  Once you feel comfortable with LVM, check out DRBD’s LVM Primer to see how you can leverage LVM with DRBD.  For this blog entry, the LVM/DRBD setup used a Logical Volume as the DRBD backing device.

Additional Resource Configuration Options in DRBD

The additional resource configuration options cover the following areas:

  • Traffic integrity checking
  • Efficient (checksum-based) synchronization
  • Automated LVM snapshots during DRBD synchronization

All of these options, except the last, are covered in the DRBD 8.3 User Guide.  Although the scripts associated with the automated LVM snapshots are mentioned in the DRBD 8.4 User Guide, the scripts are available and can be used in DRBD 8.3.x. *NOTE* When enabling these options, make sure that Eucalyptus Walrus is stopped.  Also, make sure the configuration changes are done on both nodes.  After the configurations have been done, just run drbdadm adjust [resource name] on both nodes for them to take effect.  Typically, I like to test out the configuration changes, then test failover of the DRBD nodes, before starting Eucalyptus Walrus.
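
For example, with the resource named r0 (the resource used in the configuration file shown later in this entry), the adjust and a quick status check would look something like this:

# drbdadm adjust r0
# cat /proc/drbd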

Traffic Integrity Checking

Making sure that all the data replicated between the DRBD nodes arrives intact is very important.  DRBD has a resource configuration option to use cryptographic message digest algorithms such as MD5, SHA-1 or CRC-32C for end-to-end message integrity checking.  If verification of a replicated block against the digest fails, the peer requests retransmission.

To enable this option for SHA-1 integrity checking, add the following entry to the resource configuration file:

.....
net {
......
......
 data-integrity-alg sha1;
 }
......

For more information regarding this resource configuration option, please refer to the section “Configuring replication traffic integrity checking” in the DRBD 8.3 User Guide.

Efficient Synchronization

DRBD offers checksum-based synchronization to help with making syncing between the DRBD nodes more efficient.  As mentioned in the section “Efficient Synchronization” in the DRBD 8.3 User Guide:

When using checksum-based synchronization, then rather than performing a brute-force overwrite of blocks marked out of sync, DRBD reads blocks before synchronizing them and computes a hash of the contents currently found on disk.  It then compares this hash with one computed from the same sector on the peer, and omits re-writing this block if the hashes match. This can dramatically cut down synchronization times in situations where a filesystem re-writes a sector with identical contents while DRBD is in disconnected mode.

To enable this configuration option, add the following to the resource configuration file:

........
syncer {
........
 csums-alg sha1;
 }
.........

To learn more about this option, please refer to the “Configuring checksum-based synchronization”  in the DRBD 8.3 User Guide.

Automated LVM Snapshots During DRBD Synchronization

When doing DRBD synchronization between nodes, there is a chance that if the SyncSource fails, the result will be a dead node with good data and a surviving node with bad data.  When serving DRBD off an LVM Logical Volume, you can mitigate this problem by creating an automated snapshot when synchronization starts, and automatically removing that same snapshot once synchronization has completed successfully.

There are a couple of things to keep in mind when configuring this option:

  • Make sure the volume group has enough space on each node to handle the LVM snapshot (a quick way to check this is shown after this list)
  • You should review dangling snapshots as soon as possible. A full snapshot causes both the snapshot itself and its origin volume to fail.
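
How much free space the volume group has can be checked on each node with the standard LVM tools (a quick sketch; vg02 is the volume group used in the example resource file below – substitute your own):

# vgs vg02
# lvs vg02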

To enable this configuration option, do the following to the resource configuration file:

.......
handlers {
 before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh";
 after-resync-target "/usr/lib/drbd/unsnapshot-resync-target-lvm.sh";
 }
.........

To learn more about this option, please refer to the section “Using automated LVM snapshots during DRBD synchronization” in the DRBD User Guide.

Conclusion

After enabling these options, the Walrus DRBD resource configuration file will look similar to the following:

resource r0 {

 on viking-01.eucalyptus-systems.com {
 device /dev/drbd1;
 disk /dev/vg02/lv_srv;
 address 192.168.39.101:7789;
 meta-disk internal;
 }
on viking-02.eucalyptus-systems.com {
 device /dev/drbd1;
 disk /dev/vg02/lv_srv;
 address 192.168.39.102:7789;
 meta-disk internal;
 }
syncer {
 rate 40M;
 csums-alg sha1;
 }
net {
 after-sb-0pri discard-zero-changes;
 after-sb-1pri discard-secondary;
 data-integrity-alg sha1;
 }
handlers {
 before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh";
 after-resync-target "/usr/lib/drbd/unsnapshot-resync-target-lvm.sh";
 }
}

As mentioned earlier, after making these changes – and making sure both DRBD resource files look the same – just run drbdadm adjust [resource name].

Enjoy!


OpenLDAP Sandbox in the Clouds

Background

I really enjoy OpenLDAP.  I think folks really don’t understand the power of OpenLDAP, concerning its robustness (e.g. the use of multiple back-ends), speed, and efficiency.

I think it’s important to have sandboxes to test various technologies.  The “cloud” is the best place for this.  To test out the latest builds provided by OpenLDAP (via git), I created a cloud-init script that allows me to configure, build, and install an OpenLDAP sandbox environment in the cloud (on-premise and/or public).  This script has been tested on AWS and Eucalyptus using Ubuntu Precise 12.04 LTS.  This blog entry is a complement to my past blog regarding overlays, MDB and OpenLDAP.

Lean Requirements – Script, Image, and Cloud

When thinking about this setup, there were three goals in mind:

  1. Ease of configuration – this is why cloud-init was used.  It’s very powerful with regard to bootstrapping instances as they boot up.  You can use Puppet, Chef or others (e.g. Salt Stack, Juju, etc.), but I decided to go with cloud-init.  The script does the following:
    • Downloads all the prerequisites for building OpenLDAP from source, including euca2ools.
    • Downloads OpenLDAP using Git.
    • Sets up ephemeral storage to be the installation point for OpenLDAP (e.g. configuration, storage, etc.).
    • Adds information to /etc/rc.local to make sure the ephemeral storage gets re-mounted on reboots of the instance, and the hostname is set.
    • Configures, builds and installs OpenLDAP.
  2. Cloud image that is ready to go – Ubuntu has done a wonderful job with their cloud images.  They have made it really easy to access them on AWS. These images can be used on Eucalyptus as well.
  3. Public and Private Cloud Deployment – Since Eucalyptus follows the AWS EC2 API very closely, it makes it really easy to test on both AWS and Eucalyptus.

Now that the background has been covered a bit, the next section will cover deploying the sandbox on AWS and/or Eucalyptus.

Deploy the Sandbox

To set up the sandbox, use the following steps:

  1. Make sure you have an account on AWS and/or Eucalyptus (and the correct AWS/Eucalyptus IAM policies are in place so that you can bundle, upload and register images to AWS S3 and Eucalyptus Walrus).
  2. Make sure you have access to a registered AMI/EMI that runs Ubuntu Precise 12.04 LTS.  *NOTE* If you are using AWS, you can just go to the Ubuntu Precise Cloud Image download page, and select the AMI in the region that you have access to.
  3. Download the openldap cloud-init recipe from Eucalyptus/recipes repository.
  4. Download and install the latest Euca2ools (I used  the command-line tool euca-run-instances to run these instances).
  5. After you have downloaded your credentials from AWS/Eucalyptus, define your global environments by either following the documentation for AWS EC2 or the documentation for Eucalyptus.
  6. Use euca-run-instances with the --user-data-file option to launch the instance:

    euca-run-instances -k hspencer.pem ....
     --user-data-file cloud-init-openldap.config [AMI | EMI]

After the instance is launched, ssh into the instance, and you will see something similar to the following:

ubuntu@euca-10-106-69-149:~$ df -ah
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 1.4G 1.2G 188M 86% /
proc 0 0 0 - /proc
sysfs 0 0 0 - /sys
none 0 0 0 - /sys/fs/fuse/connections
none 0 0 0 - /sys/kernel/debug
none 0 0 0 - /sys/kernel/security
udev 494M 12K 494M 1% /dev
devpts 0 0 0 - /dev/pts
tmpfs 200M 232K 199M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 498M 0 498M 0% /run/shm
/dev/vda2 8.0G 159M 7.5G 3% /opt/openldap

Your sandbox environment is now set up.  From here, just follow the instructions in the OpenLDAP Administrator’s Guide on configuring your OpenLDAP server, or continue from the “Setup – OLC and MDB” section located in my previous blog.  *NOTE* As you configure your OpenLDAP server, make sure to use euca-authorize to control access to your instance.
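
For example, to open the standard LDAP port only to a single trusted address (a sketch assuming the instance was launched in the default security group – adjust the group name and CIDR for your environment):

# euca-authorize -P tcp -p 389 -s <your-ip-address>/32 default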

Enjoy!


Another Great Example of AWS Fidelity – Neo4j, Cloud-Init and Eucalyptus

I recently ran across a blog entry entitled Neo4j 1.9.M01 – Self-managed HA.  I found the concept of graph databases storing data really interesting and reached out to the guys at Neo4j to get some insight on how to deploy their HA solution on Eucalyptus.  Amongst the resources that they provided, they shared this little gem – how to deploy Neo4j on EC2.  In order to run, you first need to know how to walk – so before going down the path of standing up HA Neo4j, I decided to be influenced by the DIY on EC2 article provided by Neo4j and deploy Neo4j on Eucalyptus – with a little help from Cloud-Init.  The follow-up blog will show how to use the same setup to deploy an HA Neo4j environment.

The Setup

Eucalyptus

The Eucalyptus cloud I used is configured using Eucalyptus High Availability.  It’s running on CentOS 6.3 with KVM.  It’s also running in Managed networking mode, so that we can take advantage of network isolation of the VMs and the use of security groups – interacting very much in the same way as security groups work in AWS EC2.

Ubuntu Cloud Image – 12.04 LTS Precise Pangolin

The image that we will use is the Ubuntu 12.04 LTS Cloud image.  The reasons for using this image are as follows:

  • Ubuntu cloud images come pre-packaged with cloud-init, which helps with bootstrapping the instance.
  • I wanted to have the solution work on AWS EC2 and Eucalyptus; since Ubuntu cloud images work on both, it’s a great choice.

Registering the Ubuntu Cloud Image with Eucalyptus

In order for us to get started, we need to get the Ubuntu Cloud image into Eucalyptus so that we can use it for our instance.  To bundle, upload and register the Ubuntu Cloud image, ramdisk and kernel, do the following:

  1. Download the current version of Ubuntu Precise Server AMD64 from the Ubuntu Cloud Image – Precise page, then unpack (ungzip, unarchive) the tar-gzipped file.

    $ tar -zxvf precise-server-cloudimg-amd64.tar.gz
    x precise-server-cloudimg-amd64.img
    x precise-server-cloudimg-amd64-vmlinuz-virtual
    x precise-server-cloudimg-amd64-loader
    x precise-server-cloudimg-amd64-floppy
    x README.files

  2. Make sure to download and source your Eucalyptus credentials.
  3. We need to bundle, upload, and register precise-server-cloudimg-amd64-loader (ERI), precise-server-cloudimg-amd64-vmlinuz-virtual (EKI), and precise-server-cloudimg-amd64.img (EMI).  For more information regarding this, please refer to the “Image Overview” section of the Eucalyptus 3.1 User Guide.  

    $ euca-bundle-image -i precise-server-cloudimg-amd64-loader --ramdisk true
    $ euca-upload-bundle -b latest-ubuntu-precise -m /tmp/precise-server-cloudimg-amd64-loader.manifest.xml
    $ euca-register -a x86_64 latest-ubuntu-precise/precise-server-cloudimg-amd64-loader.manifest.xml
    $ euca-bundle-image -i precise-server-cloudimg-amd64-vmlinuz-virtual --kernel true
    $ euca-upload-bundle -b latest-ubuntu-precise -m /tmp/precise-server-cloudimg-amd64-vmlinuz-virtual.manifest.xml
    $ euca-register -a x86_64 latest-ubuntu-precise/precise-server-cloudimg-amd64-vmlinuz-virtual.manifest.xml
    $ euca-bundle-image -i precise-server-cloudimg-amd64.img
    $ euca-upload-bundle -b latest-ubuntu-precise -m /tmp/precise-server-cloudimg-amd64.img.manifest.xml
    $ euca-register -a x86_64 latest-ubuntu-precise/precise-server-cloudimg-amd64.img.manifest.xml

After bundling, uploading and registering the ramdisk, kernel and image, the latest-ubuntu-precise bucket in Walrus should have the following images:

$ euca-describe-images | grep latest-ubuntu-precise
IMAGE eki-0F3937E9 latest-ubuntu-precise/precise-server-cloudimg-amd64-vmlinuz-virtual.manifest.xml 345590850920 available public x86_64 kernel instance-store

IMAGE emi-C1613E67 latest-ubuntu-precise/precise-server-cloudimg-amd64.img.manifest.xml 345590850920 available public x86_64 machine instance-store

IMAGE eri-0BE53BFD latest-ubuntu-precise/precise-server-cloudimg-amd64-loader.manifest.xml 345590850920 available public x86_64 ramdisk instance-store

Cloud-init Config File

Now that we have the image ready to go, we need to create a cloud-init config file to pass in using the --user-data-file option that is part of euca-run-instances.  For more examples of different cloud-init files, please refer to the cloud-init-dev/cloud-init repository on bazaar.launchpad.net.  Below is the cloud-init config file I created for bootstrapping the instance with an install of Neo4j, using the ephemeral disk for the application storage, and installing some other packages (e.g. the latest euca2ools, mlocate, less, etc.). The script can also be accessed from GitHub – under the eucalyptus/recipes repo.

#cloud-config
apt_update: true
apt_upgrade: true
disable_root: true
package_reboot_if_required: true
packages:
 - less
 - bind9utils
 - dnsutils
 - mlocate
cloud_config_modules:
 - ssh
 - [ apt-update-upgrade, always ]
 - updates-check
 - runcmd
runcmd:
 - [ sh, -xc, "if [ -b /dev/sda2 ]; then tune2fs -L ephemeral0 /dev/sda2;elif [ -b /dev/vda2 ]; then tune2fs -L ephemeral0 /dev/vda2;elif [ -b /dev/xvda2 ]; then tune2fs -L ephemeral0 /dev/xvda2;fi" ]
 - [ sh, -xc, "mkdir -p /var/lib/neo4j" ]
 - [ sh, -xc, "mount LABEL=ephemeral0 /var/lib/neo4j" ]
 - [ sh, -xc, "if [ -z `ls /var/lib/neo4j/*` ]; then sed --in-place '$ iMETA_HOSTNAME=`curl -s http://169.254.169.254/latest/meta-data/local-hostname`\\nMETA_IP=`curl -s http://169.254.169.254/latest/meta-data/local-ipv4`\\necho ${META_IP}   ${META_HOSTNAME} >> /etc/hosts; hostname ${META_HOSTNAME}; sysctl -w kernel.hostname=${META_HOSTNAME}\\nif [ -d /var/lib/neo4j/ ]; then mount LABEL=ephemeral0 /var/lib/neo4j; service neo4j-service restart; fi' /etc/rc.local; fi" ] 
 - [ sh, -xc, "META_HOSTNAME=`curl -s http://169.254.169.254/latest/meta-data/local-hostname`; META_IP=`curl -s http://169.254.169.254/latest/meta-data/local-ipv4`; echo ${META_IP}   ${META_HOSTNAME} >> /etc/hosts" ]
 - [ sh, -xc, "META_HOSTNAME=`curl -s http://169.254.169.254/latest/meta-data/local-hostname`; hostname ${META_HOSTNAME}; sysctl -w kernel.hostname=${META_HOSTNAME}" ]
 - [ sh, -xc, "wget -O c1240596-eucalyptus-release-key.pub http://www.eucalyptus.com/sites/all/files/c1240596-eucalyptus-release-key.pub" ]
 - [ apt-key, add, c1240596-eucalyptus-release-key.pub ]
 - [ sh, -xc, "echo 'deb http://downloads.eucalyptus.com/software/euca2ools/2.1/ubuntu precise main' > /etc/apt/sources.list.d/euca2ools.list" ]
 - [ sh, -xc, "echo 'deb http://debian.neo4j.org/repo stable/' > /etc/apt/sources.list.d/neo4j.list" ]
 - [ apt-get, update ]
 - [ apt-get, install, -y, --force-yes, euca2ools ]
 - [ apt-get, install, -y, --force-yes, neo4j ]
 - [ sh, -xc, "sed --in-place 's/#org.neo4j.server.webserver.address=0.0.0.0/org.neo4j.server.webserver.address=0.0.0.0/' /etc/neo4j/neo4j-server.properties" ]
 - [ sh, -xc, "service neo4j-service restart" ]
 - [ sh, -xc, "export LANGUAGE=en_US.UTF-8" ]
 - [ sh, -xc, "export LANG=en_US.UTF-8" ]
 - [ sh, -xc, "export LC_ALL=en_US.UTF-8" ]
 - [ locale-gen, en_US.UTF-8 ]
 - [ dpkg-reconfigure, locales ]
 - [ updatedb ]
mounts:
 - [ ephemeral0, /var/lib/neo4j, auto, "defaults,noexec" ]

Now, we are ready to launch the instance.

Putting It All Together

Before launching the instance, we need to set up our keypair and security group that we will use with the instance.

  1. To create a keypair, run euca-create-keypair.  *NOTE* Make sure you change the permissions of the keypair to 0600 after it’s been created.

    euca-create-keypair  neo4j-user > neo4j-user.priv; chmod 0600 neo4j-user.priv

  2. Next, we need to create a security group for our instance.  To create a security group, use euca-create-group.  To open any ports you need for the application, use euca-authorize.  The ports we will open up for the Neo4j application are SSH (22), ICMP, HTTP (7474), and HTTPS (7473).
    • Create security group:

      # euca-create-group neo4j-test -d "Security for Neo4j Instances"

    • Authorize SSH:

      # euca-authorize -P tcp -p 22 -s 0.0.0.0/0 neo4j-test

    • Authorize HTTP:

      # euca-authorize -P tcp -p 7474 -s 0.0.0.0/0 neo4j-test

    • Authorize HTTPS:

      # euca-authorize -P tcp -p 7473 -s 0.0.0.0/0 neo4j-test

    • Authorize ICMP:

      # euca-authorize -P icmp -t -1:-1 -s 0.0.0.0/0 neo4j-test

  3. Finally, we use euca-run-instances to launch the Ubuntu Precise image, and use cloud-init to install Neo4j:

    # euca-run-instances -k neo4j-user --user-data-file cloud-init-neo4j.config emi-C1613E67 --kernel eki-0F3937E9 --ramdisk eri-0BE53BFD --group neo4j-test

To check the status of the instance, use euca-describe-instances.

# euca-describe-instances i-A9EF448C
RESERVATION r-ED8E4699 345590850920 neo4j-test
INSTANCE i-A9EF448C emi-C1613E67 euca-192-168-55-104.wu-tang.euca-hasp.eucalyptus-systems.com 
euca-10-106-69-154.wu-tang.internal running admin 0 m1.small 2012-12-04T03:13:13.869Z 
enter-the-wu eki-0F3937E9 eri-0BE53BFD monitoring-disable 
euca-192-168-55-104.wu-tang.euca-hasp.eucalyptus-systems.com euca-10-106-69-154.wu-tang.internal instance-store

Because the cloud-init config file does an “apt-get upgrade”, it takes about 5 to 7 minutes until the instance is fully configured and Neo4j is running.  Once it is running, go to https://<ip-address of instance>:7473.  It will direct you to the web administration page for monitoring and management of the Neo4j instance.  In this example, the URL will be https://euca-192-168-55-104.wu-tang.euca-hasp.eucalyptus-systems.com:7473
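
If you prefer to check from the command line first, a quick curl against the instance should return the REST service root (a sketch: /db/data/ is the path used by the Neo4j 1.x series, and -k skips validation of the self-signed certificate):

# curl -k https://euca-192-168-55-104.wu-tang.euca-hasp.eucalyptus-systems.com:7473/db/data/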

Neo4j Monitoring and Management Tool

That’s it!  The cool thing about this, too, is that you can find an Ubuntu Precise AMI on AWS EC2, use the same cloud-init script and euca2ools, and follow these instructions to get the same deployment on AWS EC2.

As mentioned before, the follow-up blog will show how to deploy the HA solution of Neo4j on Eucalyptus. Enjoy!
