Using Boto3 Against HPE Helion Eucalyptus 4.2 Deployments

Recently, there was a blog entry posted on the AWS Developer Blog discussing how to migrate to boto3.  Since HPE Helion Eucalyptus strives to provide 100% AWS-compatible APIs for implemented services, AWS SDKs – such as the AWS SDK for Python – work solidly.  This blog entry will demonstrate how to use boto3 – the latest version of the AWS SDK for Python – with HPE Helion Eucalyptus 4.2.

At the time of the posting of this blog entry, the following AWS service APIs are supported by HPE Helion Eucalyptus 4.2:

  • EC2 (Elastic Compute Cloud)
  • S3 (Simple Storage Service)
  • IAM (Identity and Access Management)
  • STS (Security Token Service)
  • CloudWatch
  • CloudFormation
  • Auto Scaling
  • ELB (Elastic Load Balancing)

Installation

As mentioned in the boto3 documentation, install boto3 using pip:

# pip install boto3

Configuration

Again, as mentioned in the boto3 documentation, configuration can be done using the AWS CLI, or by manually creating the config and credentials files under the .aws directory.  For example, here are the contents of the .aws/config and .aws/credentials files that will be used for this demonstration:

# cat .aws/config
[profile devops-admin]
output = json
region = us-east-1
# cat .aws/credentials
[devops-admin]
aws_access_key_id = XXXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXX

If you would rather not use these files, you can alternatively pass the AWS Access Key ID and AWS Secret Access Key programmatically.  This will be referenced later in this blog entry.

Using Boto3

To demonstrate how to use boto3, IPython will be utilized.  To get started, the Session class will be imported from the boto3 library:

# ipython
Python 2.6.6 (r266:84292, Jan 22 2014, 09:42:36)
Type "copyright", "credits" or "license" for more information.
IPython 0.13.2 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: from boto3.session import Session

Next, create a session:

In [2]: session = Session(region_name='us-east-1', profile_name="devops-admin")

Alternatively, as mentioned earlier, the AWS Access Key ID and AWS Secret Access Key can be passed programmatically when the session is created:

In [2]: session = Session(aws_access_key_id='XXXXXXXXXXXXXX', aws_secret_access_key='XXXXXXXXXXXXXXXXXXXXXXXX', region_name='us-east-1')

Even though region_name has a value here, when the client connection is created, the service endpoint will be an HPE Helion Eucalyptus service endpoint.  Any valid AWS region name can be used with HPE Helion Eucalyptus; the important piece is the endpoint URL.

From here, we can use the session to establish a client connection with a given HPE Helion Eucalyptus service endpoint.  Since the HPE Helion Eucalyptus cloud used in this example exposes HTTPS endpoints, the trusted root certificate for the cloud subdomain will be passed as well.

Examples

Here is an example connecting to the EC2 service endpoint provided by the HPE Helion Eucalyptus Compute service to discover what instances are associated with the authenticated user account:

In [3]: client = session.client('ec2', endpoint_url='https://ec2.c-05.autoqa.qa1.eucalyptus-systems.com/', verify='/root/euca-ca-0.crt')
In [4]: for reservation in client.describe_instances()['Reservations']:
   ...:     for instance in reservation['Instances']:
   ...:         print instance['InstanceId']
   ...:
i-4064f4e7
i-1c8515dd
i-79e96bc1
i-d43f50f1
i-b4adc06b
i-c4025e42
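
In addition to the low-level client interface, boto3 also offers a higher-level resource interface.  As a minimal sketch (assuming the same endpoint URL and trusted certificate as above), the same instance listing could be written as:

ec2 = session.resource('ec2', endpoint_url='https://ec2.c-05.autoqa.qa1.eucalyptus-systems.com/', verify='/root/euca-ca-0.crt')
# each Instance object lazily loads its attributes from DescribeInstances
for instance in ec2.instances.all():
    print(instance.id)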

Below is another example connecting to the S3 service endpoint provided by the HPE Helion Eucalyptus Object Storage Gateway (OSG) service to list the buckets owned by the authenticated user account:

In [5]: client = session.client('s3', endpoint_url='https://s3.c-05.autoqa.qa1.eucalyptus-systems.com/', verify='/root/euca-ca-0.crt')
In [6]: for bucket in client.list_buckets()['Buckets']:
   ...:     print bucket['Name']
   ...:
cfn-templates
ubuntu-trusty-x86_64-hvm-20151218
ubuntu-xenial-x86_64-hvm-20151217
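
The same client can also create objects.  Below is a minimal hedged sketch that uploads a local file to the cfn-templates bucket listed above (the file and key names are just examples):

with open('example-template.json', 'rb') as body:
    # put_object streams the file body into the bucket under the given key
    client.put_object(Bucket='cfn-templates', Key='example-template.json', Body=body)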

Here is another example, connecting to the CloudFormation service endpoint provided by the HPE Helion Eucalyptus CloudFormation service:

In [7]: client = session.client('cloudformation', endpoint_url='https://cloudformation.c-05.autoqa.qa1.eucalyptus-systems.com/', verify='/root/euca-ca-0.crt')
In [8]: for stack in client.describe_stacks()['Stacks']:
   ...:     print "Stack Name: " + stack['StackName']
   ...:     print "Status: " + stack['StackStatus']
   ...:     print "ID: " + stack['StackId']
   ...:
Stack Name: CoreOSCluster
Status: CREATE_COMPLETE
ID: arn:aws:cloudformation::001520216600:stack/CoreOSCluster/12437fe7-8a03-4920-9e34-270764450fa0
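
The same client can also drill into a stack's resources.  Here is a hedged sketch using the stack listed above:

# list_stack_resources maps each logical resource in the template to its physical ID
for res in client.list_stack_resources(StackName='CoreOSCluster')['StackResourceSummaries']:
    print(res['LogicalResourceId'] + ': ' + res['PhysicalResourceId'])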

And for the last example, here is a connection to the AutoScaling service endpoint provided by the HPE Helion Eucalyptus AutoScaling service:

In [9]: client = session.client('autoscaling', endpoint_url='https://autoscaling.c-05.autoqa.qa1.eucalyptus-systems.com/', verify='/root/euca-ca-0.crt')
In [10]: for asg in client.describe_auto_scaling_groups()['AutoScalingGroups']:
   ....:     print "AutoScaling Group Name: " + asg['AutoScalingGroupName']
   ....:     print "Launch Config: " + asg['LaunchConfigurationName']
   ....:     print "Availability Zones:"
   ....:     for az in asg['AvailabilityZones']:
   ....:         print "\t" + az
   ....:     print "AutoScaling Group Instances:"
   ....:     for instance in asg['Instances']:
   ....:         print "\t" + instance['InstanceId']
   ....:
AutoScaling Group Name: CoreOSCluster-CoreOsGroup-JTKMRINKKMYDI
Launch Config: CoreOSCluster-CoreOsLaunchConfig-LAWHOT5X5K5PX
Availability Zones:
 us-east-1c
 us-east-1b
 us-east-1a
AutoScaling Group Instances:
 i-79e96bc1
 i-4064f4e7
 i-c4025e42
 i-d43f50f1
 i-1c8515dd
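
Since the Auto Scaling API is writable as well, the same client can resize the group.  A hedged one-liner using the group name from the output above:

# request that the group scale to six instances (honored within the group's min/max size limits)
client.set_desired_capacity(AutoScalingGroupName='CoreOSCluster-CoreOsGroup-JTKMRINKKMYDI', DesiredCapacity=6)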

Conclusion

As mentioned earlier, boto3 can be used with any AWS-compatible service implemented by HPE Helion Eucalyptus.  If your team isn’t ready to use boto3 yet, boto can still be used with HPE Helion Eucalyptus.
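
For reference, a minimal boto (boto2) sketch against the same EC2 endpoint might look like the following (credentials and endpoint assumed from the examples above):

import boto
from boto.ec2.regioninfo import RegionInfo

# boto2 takes the endpoint as a RegionInfo object instead of an endpoint_url
region = RegionInfo(name='eucalyptus', endpoint='ec2.c-05.autoqa.qa1.eucalyptus-systems.com')
conn = boto.connect_ec2(aws_access_key_id='XXXXXXXXXXXXXXXXXXXX',
                        aws_secret_access_key='XXXXXXXXXXXXXXXXXXXXXXXX',
                        region=region, is_secure=True, path='/')
for reservation in conn.get_all_reservations():
    for instance in reservation.instances:
        print(instance.id)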

As always, I hope you enjoyed this entry.  Please let me know if there are any questions, suggestions, or ideas regarding this blog topic.

Enjoy!

Updated CoreOS Cluster Cloudformation Template for HPE Helion Eucalyptus 4.2 VPC Deployments

In 2014, I created a series of blog posts discussing the use of CoreOS on Eucalyptus cloud infrastructures.  This blog post is an updated version of the entry that discussed how to deploy a CoreOS cluster using a CloudFormation template on Eucalyptus 4.0.1.  It will cover how to deploy a CoreOS cluster using CloudFormation on an HPE Helion Eucalyptus 4.2 VPC environment.

In HPE Helion Eucalyptus 4.1, VPC (Virtual Private Cloud) was in a technical preview state.  With the release of Eucalyptus 4.2, VPC was promoted to a stable release.  HPE Helion Eucalyptus VPC provides features similar to AWS VPC.  For more information about what is currently supported in Eucalyptus VPC, please refer to the online documentation.

Prerequisites

The prerequisites for this blog entry are listed in the following previous blog entries:

Please note the information regarding HPE Helion Eucalyptus IAM and how to obtain the CoreOS Beta AMI image in the previously listed blog entries.

CoreOS ETCD Discovery Service Token

When setting up the CoreOS cluster, cluster membership is handled using etcd discovery.  This provides a unique discovery URL that shows all the members of the cluster.  To obtain a token, request the following URL with the desired cluster size appended.  For example, if the cluster will have five members, the curl request will look like the following:

curl https://discovery.etcd.io/new?size=5

The value returned will look similar to the following:

https://discovery.etcd.io/fdd7d8ac203d2cac0c27ead148ad83ed

This URL can be referenced to see if all the members of the cluster registered successfully.
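
The same request can be made programmatically.  Here is a hedged sketch using only the Python 2 standard library:

import urllib2

# request a discovery URL for a five-member cluster
discovery_url = urllib2.urlopen('https://discovery.etcd.io/new?size=5').read()
print(discovery_url)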

Deploying the Cluster on HPE Helion Eucalyptus VPC

When deploying the cluster in a Eucalyptus VPC environment, there are additional variables that have to be taken into account.  To download the example template, use the following URL:

https://s3-us-west-1.amazonaws.com/cfn-coreos-deployment/cfn-coreos-as-vpc.json

After downloading the template, use either euca2ools or the AWS CLI to validate the template.  This will display the parameters that need to be passed when creating the CloudFormation stack on Eucalyptus.  For example:

# euform-validate-template --template-file cfn-coreos-as.json 
DESCRIPTION Deploy CoreOS Cluster on Eucalyptus VPC
PARAMETER VpcId false VpcId of your existing Virtual Private Cloud (VPC)
PARAMETER Subnets false The list of SubnetIds in your Virtual Private Cloud (VPC)
PARAMETER AZs false The list of AvailabilityZones for your Virtual Private Cloud (VPC)
PARAMETER CoreOSImageId false CoreOS Image Id
PARAMETER UserKeyPair true User Key Pair
PARAMETER ClusterSize false Desired CoreOS Cluster Size
PARAMETER VmType false Desired VM Type for Instances

Notice the template requires unique variables associated with HPE Helion Eucalyptus VPC.
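
Since the CloudFormation endpoint is AWS-compatible, the template can also be validated with boto3.  Below is a hedged sketch, assuming the CloudFormation endpoint and trusted certificate for the cloud used in the previous entry:

from boto3.session import Session

session = Session(region_name='us-east-1', profile_name='devops-admin')
cfn = session.client('cloudformation',
                     endpoint_url='https://cloudformation.c-05.autoqa.qa1.eucalyptus-systems.com/',
                     verify='/root/euca-ca-0.crt')
# validate_template returns the template's parameters on success
with open('cfn-coreos-as.json') as f:
    for param in cfn.validate_template(TemplateBody=f.read())['Parameters']:
        print(param['ParameterKey'])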

Now that the template has been downloaded, create the CoreOS stack using euca2ools.  For example:

# euform-create-stack CoreOSCluster --template-file cfn-coreos-as.json --parameter Subnets=subnet-0814e7aa,subnet-5d816215,subnet-c3755d6c --parameter AZs=euca-east-1c,euca-east-1b,euca-east-1a --parameter CoreOSImageId=emi-dfa27782 --parameter UserKeyPair=devops-admin --parameter ClusterSize=5 --parameter VmType=m1.large --parameter VpcId=vpc-d7fcff27

Once the cluster has been deployed, confirm that the CloudFormation stack completed successfully:

# euform-describe-stacks
STACK CoreOSCluster CREATE_COMPLETE Complete! Deploy CoreOS Cluster on Eucalyptus VPC 2016-01-01T21:09:10.965Z
PARAMETER VpcId vpc-d7fcff27
PARAMETER Subnets subnet-0814e7aa,subnet-5d816215,subnet-c3755d6c
PARAMETER AZs euca-east-1c,euca-east-1b,euca-east-1a
PARAMETER CoreOSImageId emi-dfa27782
PARAMETER UserKeyPair ****
PARAMETER ClusterSize 5
PARAMETER VmType m1.large
OUTPUT AutoScalingGroup CoreOSCluster-CoreOsGroup-JTKMRINKKMYDI

Check the discovery URL using curl, wget or any browser to confirm that the cluster membership completed:

# curl https://discovery.etcd.io/fdd7d8ac203d2cac0c27ead148ad83ed
{"action":"get","node":{"key":"/_etcd/registry/fdd7d8ac203d2cac0c27ead148ad83ed","dir":true,"nodes":[{"key":"/_etcd/registry/fdd7d8ac203d2cac0c27ead148ad83ed/d0a4c6d73d0d8d17","value":"8981923b54d7d7f46fabc527936a7dcf=http://172.31.4.17:2380","modifiedIndex":953833155,"createdIndex":953833155},{"key":"/_etcd/registry/fdd7d8ac203d2cac0c27ead148ad83ed/12b6e6e78c9cb70c","value":"33a3209006d2be1d5be0da6eaea007c5=http://172.31.19.215:2380","modifiedIndex":953833156,"createdIndex":953833156},{"key":"/_etcd/registry/fdd7d8ac203d2cac0c27ead148ad83ed/d5c5d93e360ba87","value":"e71b1fefcd65c43a0fbacc7103efbc2b=http://172.31.22.157:2380","modifiedIndex":953833162,"createdIndex":953833162},{"key":"/_etcd/registry/fdd7d8ac203d2cac0c27ead148ad83ed/cffd4985c990f872","value":"f047b9ff24f3d0c4e74c660709103b36=http://172.31.6.166:2380","modifiedIndex":953833167,"createdIndex":953833167},{"key":"/_etcd/registry/fdd7d8ac203d2cac0c27ead148ad83ed/8e6ccfef42f98260","value":"c48b163558b61733c1aa44dccb712406=http://172.31.47.175:2380","modifiedIndex":953833339,"createdIndex":953833339}],"modifiedIndex":953831075,"createdIndex":953831075}}

To confirm the health of the cluster, SSH into one of the cluster nodes, and use fleetctl and etcdctl:

# ssh -i devops-admin-key core@euca-10-116-131-230.eucalyptus.c-05.autoqa.qa1.eucalyptus-systems.com
Last login: Sat Jan 2 23:53:25 2016 from 10.111.1.71
CoreOS beta (877.1.0)
core@euca-172-31-22-157 ~ $ fleetctl list-machines
MACHINE IP METADATA
33a32090... 10.116.131.107 purpose=coreos-cluster,region=euca-us-east-1
8981923b... 10.116.131.121 purpose=coreos-cluster,region=euca-us-east-1
c48b1635... 10.116.131.213 purpose=coreos-cluster,region=euca-us-east-1
e71b1fef... 10.116.131.230 purpose=coreos-cluster,region=euca-us-east-1
f047b9ff... 10.116.131.197 purpose=coreos-cluster,region=euca-us-east-1
core@euca-172-31-22-157 ~ $ etcd
etcd etcd2 etcdctl
core@euca-172-31-22-157 ~ $ etcdctl cluster-health
member d5c5d93e360ba87 is healthy: got healthy result from http://10.116.131.230:2379
member 12b6e6e78c9cb70c is healthy: got healthy result from http://10.116.131.107:2379
member 8e6ccfef42f98260 is healthy: got healthy result from http://10.116.131.213:2379
member cffd4985c990f872 is healthy: got healthy result from http://10.116.131.197:2379
member d0a4c6d73d0d8d17 is healthy: got healthy result from http://10.116.131.121:2379
cluster is healthy
core@euca-172-31-22-157 ~ $ etcdctl member list
d5c5d93e360ba87: name=e71b1fefcd65c43a0fbacc7103efbc2b peerURLs=http://172.31.22.157:2380 clientURLs=http://10.116.131.230:2379
12b6e6e78c9cb70c: name=33a3209006d2be1d5be0da6eaea007c5 peerURLs=http://172.31.19.215:2380 clientURLs=http://10.116.131.107:2379
8e6ccfef42f98260: name=c48b163558b61733c1aa44dccb712406 peerURLs=http://172.31.47.175:2380 clientURLs=http://10.116.131.213:2379
cffd4985c990f872: name=f047b9ff24f3d0c4e74c660709103b36 peerURLs=http://172.31.6.166:2380 clientURLs=http://10.116.131.197:2379
d0a4c6d73d0d8d17: name=8981923b54d7d7f46fabc527936a7dcf peerURLs=http://172.31.4.17:2380 clientURLs=http://10.116.131.121:2379

That's it!  The CoreOS cluster has been successfully deployed.  Given HPE Helion Eucalyptus’s AWS compatibility, this template can be used on AWS as well.

As always, please let me know if there are any questions.  Enjoy!

The Case for a Policy Decision Point inside the LDAP Server

Great insight into the importance of Policy Decision Points with regard to security processes.

iamfortress

Why on earth would you do that?

We all understand that runtime characteristics change as processes get moved around the network.  Having problems with network io?  Move the database daemon to the same tier as the client process.  Problems with file io?  Store the data in memory as opposed to disk.  etc…

These same techniques apply for system architecture and security.  Location of policy enforcement, decision, and database processes hugely impact the overall welfare of your organization’s computational systems.

With these kinds of thoughts, what happens when security processes get moved around the network?

But first, we must define the security processes:

1. Policy Enforcement Point (PEP)

The gatekeeper component.  It enforces the security policy on the client program.  PEPs come in many shapes and sizes.  Often times it’s a small block of code that gets embedded directly into a client program.

2. Database (DB)

The database is used by PDPs to house…


Setting Up 3-Factor Authentication (Keypair, Password, Google Authenticator) for Eucalyptus Cloud Instances

Recently, I was logging into my AWS account, where I have multi-factor authentication (MFA) enabled, using the Google Authenticator application on my smart phone.  This inspired me to research how to enable MFA for any Linux distribution.  I ran across the following blog entries:

From there, I figured I would try to create a Eucalyptus EMI that would support three-factor authentication on a Eucalyptus 4.0 cloud.  The trick was to figure out how to display the Google Authenticator information so users could configure the Google Authenticator application.  The euca2ools command ‘euca-get-console-output‘ proved to be the perfect mechanism to provide this information to the cloud user.  This blog will show how to configure an Ubuntu Trusty (14.04) Cloud image to support three-factor authentication.

Prerequisites

In order to leverage the steps mentioned in this blog, the following is needed:

Now that the prereqs have been mentioned, let's get started.

Updating the Ubuntu Image

Before we can update the Ubuntu image, let’s download the image:

[root@odc-f-13 ~]# wget http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img

After the image has been downloaded successfully, the image needs to be converted to a raw format.  Use qemu-img for this conversion:

[root@odc-f-13 ~]# qemu-img convert -O raw trusty-server-cloudimg-amd64-disk1.img trusty-server-cloudimg-amd64-disk1.raw

After converting the image to a raw format, we need to mount it in order to update the image accordingly:

[root@odc-f-13 ~]# losetup /dev/loop0 trusty-server-cloudimg-amd64-disk1.raw
[root@odc-f-13 ~]# kpartx -av /dev/loop0
add map loop0p1 (253:2): 0 4192256 linear /dev/loop0 2048
[root@odc-f-13 ~]# mkdir /mnt/ubuntu
[root@odc-f-13 ~]# mount /dev/mapper/loop0p1 /mnt/ubuntu
[root@odc-f-13 ~]# chroot /mnt/ubuntu

The ‘chroot’ command above allows us to edit the image as if it were the currently running Linux operating system.  We have to install a couple of packages in the image.  Before we do, use resolvconf to create the necessary information in /etc/resolv.conf.

root@odc-f-13:/# resolvconf -I

Confirm the settings are correct by running ‘apt-get update’:

root@odc-f-13:/#  apt-get update

Once that command runs successfully, install the PAM module for Google Authenticator and the whois package:

root@odc-f-13:/# apt-get install libpam-google-authenticator whois

After these packages have been installed, run the ‘google-authenticator’ command to see all the available options:

root@odc-f-13:/# google-authenticator --help
google-authenticator [<options>]
 -h, --help Print this message
 -c, --counter-based Set up counter-based (HOTP) verification
 -t, --time-based Set up time-based (TOTP) verification
 -d, --disallow-reuse Disallow reuse of previously used TOTP tokens
 -D, --allow-reuse Allow reuse of previously used TOTP tokens
 -f, --force Write file without first confirming with user
 -l, --label=<label> Override the default label in "otpauth://" URL
 -q, --quiet Quiet mode
 -Q, --qr-mode={NONE,ANSI,UTF8}
 -r, --rate-limit=N Limit logins to N per every M seconds
 -R, --rate-time=M Limit logins to N per every M seconds
 -u, --no-rate-limit Disable rate-limiting
 -s, --secret=<file> Specify a non-standard file location
 -w, --window-size=W Set window of concurrently valid codes
 -W, --minimal-window Disable window of concurrently valid codes

Updating PAM configuration

Next, the PAM configuration file /etc/pam.d/common-auth needs to be updated.  Find the following line in that file:

auth [success=1 default=ignore] pam_unix.so nullok_secure

Replace it with the following lines:

auth requisite pam_unix.so nullok_secure
auth requisite pam_google_authenticator.so
auth [success=1 default=ignore] pam_permit.so

Next, we need to update the SSHD configuration.

Update SSHD configuration

We need to modify the /etc/ssh/sshd_config file to help make sure the Google Authenticator PAM module works successfully.  Modify/add the following lines to the /etc/ssh/sshd_config file:

ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive

Updating Cloud-Init Configuration

The next modification involves enabling the ‘ubuntu‘ user to have a password.  By default, the account is locked (i.e. it doesn’t have a password assigned) in the cloud-init configuration file.  For this exercise, we will enable it and assign a password.  Just like the old Ubuntu Cloud images, we will assign the ‘ubuntu‘ user the password ‘ubuntu‘.

Use ‘mkpasswd‘ as mentioned in the cloud-init documentation to create the password hash for the user:

root@odc-f-13:/# mkpasswd --method=SHA-512
Password:
$6$8/.y8gwYT$dVmtT7jXdBrz0w1ku5mh6HOC.vngjsXpehyeEicJT4kIyhvUMV3p9VGUIDC42Z1mjXdfAaQkINcCfcFe5jEKX/

In the file /etc/cloud/cloud.cfg, find the section ‘default_user‘.  Change the following line from:

lock_passwd: True

to

lock_passwd: False
passwd: $6$8/.y8gwYT$dVmtT7jXdBrz0w1ku5mh6HOC.vngjsXpehyeEicJT4kIyhvUMV3p9VGUIDC42Z1mjXdfAaQkINcCfcFe5jEKX/

The value for the ‘passwd‘ option is the output from the mkpasswd command executed earlier.
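
If ‘mkpasswd‘ is not available in the build environment, a SHA-512 crypt hash of the same form can be generated with Python's standard crypt module.  A hedged equivalent (the salt below is purely illustrative):

import crypt

# '$6$' selects SHA-512 crypt on glibc-based systems; substitute your own salt
print(crypt.crypt('ubuntu', '$6$examplesalt'))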

Updating /etc/rc.local

The final update to the image is to add some bash code to the /etc/rc.local file, so the information for configuring Google Authenticator can be presented to the user through the output of ‘euca-get-console-output‘.  Add the following code to the /etc/rc.local file above the ‘exit 0‘ line:

if [ ! -f /home/ubuntu/.google_authenticator ]; then
 /bin/su ubuntu -c "google-authenticator -t -d -f -r 3 -R 30 -w 4" > /root/google-auth.txt
 echo "############################################################"
 echo "Google Authenticator Information:"
 echo "############################################################"
 cat /root/google-auth.txt
 echo "############################################################"
fi

That's it!  Now we need to bundle, upload and register the image.

Bundle, Upload and Register the Image

Since we are using an HVM image, we don’t have to worry about the kernel and ramdisk.  We can just bundle, upload and register the image.  To do so, use the euca-install-image command.  Before we do that, we need to exit the chroot environment and unmount the image:

root@odc-f-13:/# exit
[root@odc-f-13 ~]# umount /mnt/ubuntu
[root@odc-f-13 ~]# kpartx -dv /dev/loop0
del devmap : loop0p1
[root@odc-f-13 ~]# losetup -d /dev/loop0

After unmounting the image, bundle, upload and register the image with the euca-install-image command:

[root@odc-f-13 ~]# euca-install-image -b ubuntu-trusty-server-google-auth-x86_64-hvm -i trusty-server-cloudimg-amd64-disk1.raw --virtualization-type hvm -n trusty-server-google-auth -r x86_64
/var/tmp/bundle-Q8yit1/trusty-server-cloudimg-amd64-disk1.raw.manifest.xml 100% |===============| 7.38 kB 3.13 kB/s Time: 0:00:02
IMAGE emi-FF439CBA

After the image is registered, launch an instance with a keypair that has been created using the ‘euca-create-keypair‘ command:

[root@odc-f-13 ~]# euca-run-instances -k account1-user01 -t m1.medium emi-FF439CBA
RESERVATION r-B79E6A59 408396244283 default
INSTANCE i-48D98090 emi-FF439CBA pending account1-user01 0 m1.medium 2014-07-21T20:23:10.285Z ViciousLiesAndDangerousRumors monitoring-disabled 0.0.0.0 0.0.0.0 instance-store hvm sg-A5133B59

Once the instance has reached the ‘running’ state, use ‘euca-get-console-output’ to grab the Google Authenticator information:

[root@odc-f-13 ~]# euca-describe-instances i-48D98090
RESERVATION r-B79E6A59 408396244283 default
INSTANCE i-48D98090 emi-FF439CBA euca-10-104-6-237.bigboi.acme.eucalyptus-systems.com euca-172-18-238-157.bigboi.internal running account1-user01 0 m1.medium 2014-07-21T20:23:10.285Z ViciousLiesAndDangerousRumors monitoring-disabled 10.104.6.237 172.18.238.157 instance-store hvm sg-A5133B59
[root@odc-f-13 ~]# euca-get-console-output i-48D98090
.......
############################################################
Google Authenticator Information:
############################################################
https://www.google.com/chart?chs=200x200&chld=M|0&cht=qr&chl=otpauth://totp/ubuntu@euca-172-18-238-157%3Fsecret%3D2MGKGDZTFLVE5LCX
Your new secret key is: 2MGKGDZTFLVE5LCX
Your verification code is 275414
Your emergency scratch codes are:
 59078604
 17425999
 89676696
 65201554
 14740079
############################################################
.....

Now we are ready to test access to the instance.

Testing Access to the Instance

To test access to the instance, make sure the Google Authenticator application is installed on your smartphone/hand-held device.  Next, copy the URL seen in the output (e.g. https://www.google.com/chart?chs=200x200&chld=M|0&cht=qr&chl=otpauth://totp/ubuntu@euca-172-18-238-157%3Fsecret%3D2MGKGDZTFLVE5LCX) from ‘euca-get-console-output’, and paste it into a browser:

[Image: OTPAUTH URL for Google Authenticator]

Use the ‘Google Authenticator’ application on your smart phone/hand-held device, and scan the QR Code:

[Image: Google Authenticator Application]

[Image: Google Authenticator Application – Set Up Account]

After selecting the ‘Set up account‘ option, select ‘Scan a barcode‘, hold your smartphone/hand-held device up to the screen where your browser is showing the QR code, and scan:

[Image: Google Authenticator Application – Scan Barcode]

After scanning the QR code, you should see the account get added, and the verification codes begin to populate for the account:

[Image: Verification Code For Instance]

Finally, SSH into the instance using the following:

  • the private key of the keypair used when launching the instance with euca-run-instances
  • the password ‘ubuntu‘
  • the verification code displayed in Google Authenticator for the new account added

With the information above, the SSH authentication should look similar to the following:

[root@odc-f-13 ~]# ssh -i account1-user01/account1-user01.priv ubuntu@euca-10-104-6-237.bigboi.acme.eucalyptus-systems.com
The authenticity of host 'euca-10-104-6-237.bigboi.acme.eucalyptus-systems.com (10.104.6.237)' can't be established.
RSA key fingerprint is c9:37:18:66:e3:ee:66:d2:8a:ac:a4:21:a6:84:92:08.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'euca-10-104-6-237.bigboi.acme.eucalyptus-systems.com,10.104.6.237' (RSA) to the list of known hosts.
Authenticated with partial success.
Password:
Verification code:
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-32-generic x86_64)

* Documentation: https://help.ubuntu.com/

System information as of Mon Jul 21 13:23:48 UTC 2014

System load: 0.0 Memory usage: 5% Processes: 68
 Usage of /: 56.1% of 1.32GB Swap usage: 0% Users logged in: 0

Graph this data and manage this system at:
 https://landscape.canonical.com/

Get cloud support with Ubuntu Advantage Cloud Guest:
 http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

ubuntu@euca-172-18-238-157:~$

Three-factor authentication has been successfully configured for the Ubuntu cloud image.  If cloud administrators would like to use different authentication for the instance user, I suggest investigating how to set up PAM LDAP authentication, where SSH public keys are stored in OpenLDAP.  The Ubuntu image would have to be updated accordingly.  I would check out the ‘sss_ssh_authorizedkeys‘ command and the pam-script module to potentially help get this working.

Enjoy!

Eucalyptus 4.0 Load Balancer Statistics Web UI for the Cloud Administrator

Background

From the cloud user’s perspective, the Eucalyptus Load Balancer is a “black box“.  The only interaction cloud users have with the Eucalyptus Load Balancer is through the eulb-* commands in euca2ools or the AWS Elastic Load Balancing API tools.   In Eucalyptus 3.4 and greater, the cloud administrator (any user under the ‘eucalyptus’ account) has the ability to access the instance that implements the load balancing solution used by the Eucalyptus Load Balancing service.  This access can be used to help troubleshoot the Eucalyptus Load Balancer if there are any issues reported by the cloud user.

The Eucalyptus Load Balancer utilizes HAProxy to implement the load balancing solution.  HAProxy has a cool feature that can display a statistics page for the HAProxy application.  Enabling this feature on the Eucalyptus Load Balancer can help cloud administrators obtain valuable information from the load balancer in the following areas:

  • Network traffic to the backend instances registered with the load balancer
  • Network traffic to the load balancer
  • Triaging any Eucalyptus Load Balancer behavior associated with Eucalyptus CloudWatch alarms

Before getting into the details, I would like to thank Nathan Evans for his entry entitled “Cultural learnings of HA-Proxy, for make benefit…“, which helped influence this blog entry.   Now on to the fun stuff….

Prerequisites

The prerequisites for this blog entry are straightforward – just read my previous entry entitled “Customizing Eucalyptus Load Balancer for Eucalyptus 4.0“.  To enable the web UI stats page, we will just add information to the /etc/load-balancer-servo/haproxy_template.conf file in the load balancer image.

In addition, the cloud administrator credentials will be needed, along with euca2ools 3.1 installed.

Enabling the HAProxy Web Statistics Page

After downloading and mounting the Eucalyptus Load Balancer image (as mentioned in my previous blog entry), enable the HAProxy web statistics page by updating /etc/load-balancer-servo/haproxy_template.conf to look like the following:

[root@odc-f-13 /]# cat etc/load-balancer-servo/haproxy_template.conf
#template
global
 maxconn 100000
 ulimit-n 655360
 pidfile /var/run/haproxy.pid

#drop privileges after port binding
 user servo
 group servo

defaults
 timeout connect 5s
 timeout client 2m
 timeout server 2m
 timeout http-keep-alive 10s
 timeout queue 1m
 timeout check 5s
 retries 3
 option dontlognull
 option redispatch
 option http-server-close # affects KA on/off

 userlist UsersFor_HAProxyStatistics
  group admin users admin
  user admin insecure-password pwd*4admin
  user stats insecure-password pwd*4stats

listen HAProxy-Statistics *:81
 mode http
 stats enable
 stats uri /haproxy?stats
 stats refresh 60s
 stats show-node
 stats show-legends
 acl AuthOkay_ReadOnly http_auth(UsersFor_HAProxyStatistics)
 acl AuthOkay_Admin http_auth_group(UsersFor_HAProxyStatistics) admin
 stats http-request auth realm HAProxy-Statistics unless AuthOkay_ReadOnly
 stats admin if AuthOkay_Admin

For more information regarding these options, please refer to the HAProxy 1.5 documentation.  The key options here are as follows:

  • The port defined in the ‘listen’ section – listen HAProxy-Statistics *:81
  • The usernames and passwords defined in the ‘userlist‘ subsection under the ‘defaults’ section
  • The URI defined in the ‘listen’ section – stats uri /haproxy?stats

After making these changes, confirm that there aren’t any configuration file errors:

[root@odc-f-13 /]# /usr/sbin/haproxy -c -f etc/load-balancer-servo/haproxy_template.conf
 Configuration file is valid

Next, unmount the image and tar-gzip it:

[root@odc-f-13 eucalyptus-load-balancer-image]# umount /mnt/centos
[root@odc-f-13 eucalyptus-load-balancer-image]# kpartx -dv /dev/loop0
del devmap : loop0p1
[root@odc-f-13 eucalyptus-load-balancer-image]# losetup -d /dev/loop0
[root@odc-f-13 eucalyptus-load-balancer-image]# tar -zcvf eucalyptus-load-balancer-image-monitored.tgz eucalyptus-load-balancer-image.img
eucalyptus-load-balancer-image.img

Use euca-install-load-balancer to upload the new image:

[root@odc-f-13 eucalyptus-load-balancer-image]# cd
[root@odc-f-13 ~]# euca-install-load-balancer --list
Currently Installed Load Balancer Bundles:

Version 2 (enabled)
emi-F0D5828C (loadbalancer-v2/eucalyptus-load-balancer-image.img.manifest.xml)
 Installed on 2014-05-28 at 11:10:03 PDT

[root@odc-f-13 ~]# euca-install-load-balancer -t eucalyptus-lb/usr/share/eucalyptus-load-balancer-image/eucalyptus-load-balancer-image-monitored.tgz
Decompressing tarball: eucalyptus-lb/usr/share/eucalyptus-load-balancer-image/eucalyptus-load-balancer-image-monitored.tgz
Bundling and uploading image to bucket: loadbalancer-v3
Registering image manifest: loadbalancer-v3/eucalyptus-load-balancer-image.img.manifest.xml
Registered image: emi-DB150EC0
PROPERTY loadbalancing.loadbalancer_emi emi-DB150EC0 was emi-F0D5828C

Load Balancing Support is Enabled
[root@odc-f-13 ~]# euca-install-load-balancer --list
Currently Installed Load Balancer Bundles:

Version 2
emi-F0D5828C (loadbalancer-v2/eucalyptus-load-balancer-image.img.manifest.xml)
 Installed on 2014-05-28 at 11:10:03 PDT

Version 3 (enabled)
emi-DB150EC0 (loadbalancer-v3/eucalyptus-load-balancer-image.img.manifest.xml)
 Installed on 2014-07-08 at 18:38:29 PDT

Testing the Eucalyptus Load Balancer Statistics Page

To view the HAProxy statistics page, create a Eucalyptus Load Balancer instance by using eulb-create-lb:

[root@odc-f-13 ~]# eulb-create-lb TestLoadBalancer -z ViciousLiesAndDangerousRumors -l "lb-port=80, protocol=HTTP, instance-port=80, instance-protocol=HTTP"
DNS_NAME TestLoadBalancer-408396244283.elb.acme.eucalyptus-systems.com

[root@odc-f-13 ~]# euca-describe-instances
RESERVATION r-06DF089F 944786667073 euca-internal-408396244283-TestLoadBalancer
INSTANCE i-3DA342C2 emi-DB150EC0 euca-10-104-6-233.bigboi.acme.eucalyptus-systems.com euca-172-18-229-187.bigboi.internal running euca-elb 0 m1.medium 2014-07-09T01:45:11.753Z ViciousLiesAndDangerousRumors monitoring-enabled 10.104.6.233 172.18.229.187 instance-store hvm 8ba248ae-dbeb-41ce-97df-fb13b91a337b_ViciousLiesAndDangerousR_1 sg-3EA4ADEC arn:aws:iam::944786667073:instance-profile/internal/loadbalancer/loadbalancer-vm-408396244283-TestLoadBalancer
TAG instance i-3DA342C2 Name loadbalancer-resources
TAG instance i-3DA342C2 aws:autoscaling:groupName asg-euca-internal-elb-408396244283-TestLoadBalancer
TAG instance i-3DA342C2 euca:node 10.105.1.188

Since the web statistics page is configured to display on port 81, use euca-authorize to allow access to that port in the load balancer’s security group.  I recommend limiting access to the port for security reasons.  In the example below, access is limited to only the client 192.168.30.25:

[root@odc-f-13 ~]# euca-authorize -P tcp -p 81 -s 192.168.30.25/32 euca-internal-408396244283-TestLoadBalancer
 GROUP euca-internal-408396244283-TestLoadBalancer
 PERMISSION euca-internal-408396244283-TestLoadBalancer ALLOWS tcp 81 81 FROM CIDR 192.168.30.25/32

Finally, use a browser on the authorized client to view the statistics page on the load balancer.  In this example, the URL – http://testloadbalancer-408396244283.elb.acme.eucalyptus-systems.com:81/haproxy?stats – will be used.  Use the username and password credentials that were added to the HAProxy configuration file to view the page.  It should look similar to the screenshot below:

[Image: HAProxy Statistics Web Page of the Eucalyptus Load Balancer]

That's it!  For any load balancer that's launched on the Eucalyptus 4.0 cloud, the cloud administrator will be able to display statistics for the load balancer.  This is also something that the cloud administrator can provide to cloud users as a service.  By leveraging restrictions placed in the security groups of the load balancer, cloud administrators can limit access to the statistics page based upon the source IP addresses of the cloud users’ client machine(s).
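
The statistics page can also be scraped for automation, since appending ';csv' to the stats URI returns the same data in CSV form.  Below is a hedged Python 2 sketch using the read-only ‘stats‘ user defined earlier (the URL is assumed from the example above):

import urllib2

url = 'http://testloadbalancer-408396244283.elb.acme.eucalyptus-systems.com:81/haproxy?stats;csv'
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, url, 'stats', 'pwd*4stats')
opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(password_mgr))
# the first CSV row is a header naming each statistics column
print(opener.open(url).read())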

Enjoy!

Making FileVault Use a Disk Password

Great use of FileVault on Mac OS X.

/dev/zero

To unlock a disk that is encrypted with OS X’s FileVault feature one needs to type in the password that belongs to any user on the machine who is allowed to unlock the disk. The system then boots and helpfully logs you in as that user. In general that is probably a convenient little feature, but for me it just makes things awkward — I want to use different passwords for unlocking the disk and logging into my user account. To make that work I have to create a second account dedicated to unlocking the disk, get logged into that one when the system boots, then immediately log back out so I can log in as the user I actually want to use.

Or do I?

The system that powers FileVault, Core Storage, combines full disk encryption and some logical volume management features in a manner similar to LVM…


Install Eucalyptus 4.0 Using Motherbrain and Chef

Great blog discussing how to use Motherbrain, Chef and Eucalyptus.

Testing Clouds at 128bpm

Introduction

Installing distributed systems can be a tedious and time consuming process. Luckily there are many solutions for distributed configuration management available to the open source community. Over the past few months, I have been working on the Eucalyptus cookbook which allows for standardized deployments of Eucalyptus using Chef. This functionality has already been implemented in MicroQA using individual calls to Knife (the Chef command line interface) for each machine in the deployment. Orchestration of the deployment is rather static and thus only 3 topologies have been implemented as part of the deployment tab

Last month, Riot Games released Motherbrain, their orchestration framework that allows flexible, repeatable, and scalable deployment of multi-tiered applications. Their approach to the deployment roll out problem is simple and understandable. You configure manifests that define how your application components are split up then define the order in which they should be deployed.

For…
