Again, as mentioned in the boto3 documentation, configuration can be done using the AWS CLI, or by manually creating the config and credentials files under the .aws directory. For example, here are the contents of the .aws/config and .aws/credentials files that will be used for this demonstration:
If you prefer not to use these files, you can alternatively pass the AWS Access Key ID and AWS Secret Access Key programmatically. This will be referenced later in this blog entry.
To demonstrate how to use boto3, IPython will be utilized. To get started, the Session class will be imported from the boto3 library:
Python 2.6.6 (r266:84292, Jan 22 2014, 09:42:36)
Type "copyright", "credits" or "license" for more information.
IPython 0.13.2 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: from boto3.session import Session
Next, create the session:
In [2]: session = Session(region_name='us-east-1', profile_name='devops-admin')
Alternatively, as mentioned earlier, the AWS Access Key ID and AWS Secret Access Key can be passed programmatically when the session is created:
In [3]: session = Session(aws_access_key_id='XXXXXXXXXXXXXX', aws_secret_access_key='XXXXXXXXXXXXXXXXXXXXXXXX', region_name='us-east-1')
Even though region_name has a value here, when the client connection is created, the service endpoint will be an HPE Helion Eucalyptus service endpoint. Any valid AWS region name can be used with HPE Helion Eucalyptus; the important piece is the endpoint URL.
From here, we can use the session to establish a client connection to a given HPE Helion Eucalyptus service endpoint. Since the HPE Helion Eucalyptus cloud used in this example exposes HTTPS endpoints, the trusted root certificate for the cloud subdomain will be passed in as well.
Here is an example connecting to the EC2 service endpoint provided by the HPE Helion Eucalyptus Compute service to discover which instances are associated with the authenticated user account:
In [4]: client = session.client('ec2', endpoint_url='https://ec2.c-05.autoqa.qa1.eucalyptus-systems.com/', verify='/root/euca-ca-0.crt')
In [5]: for reservation in client.describe_instances()['Reservations']:
            for instance in reservation['Instances']:
                print "Instance ID: " + instance['InstanceId']
Below is another example connecting to the S3 service endpoint provided by the HPE Helion Eucalyptus Object Storage Gateway (OSG) service to list the buckets owned by the authenticated user account:
In [6]: client = session.client('s3', endpoint_url='https://s3.c-05.autoqa.qa1.eucalyptus-systems.com/', verify='/root/euca-ca-0.crt')
In [7]: for bucket in client.list_buckets()['Buckets']:
            print "Bucket Name: " + bucket['Name']
Here is another example, connecting to the CloudFormation service endpoint provided by the HPE Helion Eucalyptus CloudFormation service:
In [8]: client = session.client('cloudformation', endpoint_url='https://cloudformation.c-05.autoqa.qa1.eucalyptus-systems.com/', verify='/root/euca-ca-0.crt')
In [9]: for stack in client.describe_stacks()['Stacks']:
            print "Stack Name: " + stack['StackName']
            print "Status: " + stack['StackStatus']
            print "ID: " + stack['StackId']
Stack Name: CoreOSCluster
And for the last example, connecting to the AutoScaling service endpoint provided by the HPE Helion Eucalyptus AutoScaling service:
In [10]: client = session.client('autoscaling', endpoint_url='https://autoscaling.c-05.autoqa.qa1.eucalyptus-systems.com/', verify='/root/euca-ca-0.crt')
In [11]: for asg in client.describe_auto_scaling_groups()['AutoScalingGroups']:
             print "AutoScaling Group Name: " + asg['AutoScalingGroupName']
             print "Launch Config: " + asg['LaunchConfigurationName']
             print "Availability Zones:"
             for az in asg['AvailabilityZones']:
                 print "\t" + az
             print "AutoScaling Group Instances:"
             for instance in asg['Instances']:
                 print "\t" + instance['InstanceId']
AutoScaling Group Name: CoreOSCluster-CoreOsGroup-JTKMRINKKMYDI
Launch Config: CoreOSCluster-CoreOsLaunchConfig-LAWHOT5X5K5PX
AutoScaling Group Instances:
In HPE Helion Eucalyptus 4.1, VPC (Virtual Private Cloud) was in technical preview. With the release of Eucalyptus 4.2, VPC was promoted to a stable release. HPE Helion Eucalyptus VPC provides features similar to AWS VPC. For more information about what is currently supported in Eucalyptus VPC, please refer to the online documentation.
Prerequisites for this blog entry are listed in the following previous blogs:
When setting up the CoreOS cluster, cluster membership is handled using etcd discovery, which provides a unique discovery URL that lists all the members of the cluster. To obtain a token, request a new discovery URL with the desired cluster size. For example, if the cluster will have five members, the curl request will look like the following:
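Assuming the public etcd discovery service at discovery.etcd.io, the request is:
# curl -s "https://discovery.etcd.io/new?size=5"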
The value returned will look similar to the following:
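A discovery URL with a generated token appended, along these lines (the token shown is a placeholder, not the value from the original run):
https://discovery.etcd.io/<generated-cluster-token>
This URL is then embedded in the CoreOS cloud-config user-data so each node can register itself and discover its peers.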
After downloading the template, use either euca2ools or the AWS CLI to validate the template. This will display the arguments that need to be passed when creating the CloudFormation stack on Eucalyptus. For example:
# euform-validate-template --template-file cfn-coreos-as.json
DESCRIPTION Deploy CoreOS Cluster on Eucalyptus VPC
PARAMETER VpcId false VpcId of your existing Virtual Private Cloud (VPC)
PARAMETER Subnets false The list of SubnetIds in your Virtual Private Cloud (VPC)
PARAMETER AZs false The list of AvailabilityZones for your Virtual Private Cloud (VPC)
PARAMETER CoreOSImageId false CoreOS Image Id
PARAMETER UserKeyPair true User Key Pair
PARAMETER ClusterSize false Desired CoreOS Cluster Size
PARAMETER VmType false Desired VM Type for Instances
Notice that the template requires parameters unique to HPE Helion Eucalyptus VPC (VpcId, Subnets, and AZs).
Now that the template has been downloaded, create the CoreOS stack using euca2ools. For example:
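(The exact command is not reproduced here; the following sketch uses placeholder values that must be replaced with the IDs from your own cloud, with parameter names taken from the euform-validate-template output above.)
# euform-create-stack --template-file cfn-coreos-as.json \
    -p VpcId=vpc-XXXXXXXX \
    -p Subnets=subnet-XXXXXXXX \
    -p AZs=euca-us-east-1a \
    -p CoreOSImageId=emi-XXXXXXXX \
    -p UserKeyPair=devops-admin-key \
    -p ClusterSize=5 \
    -p VmType=m1.medium \
    CoreOSCluster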
To confirm the health of the cluster, SSH into one of the cluster nodes, and use fleetctl and etcdctl:
# ssh -i devops-admin-key email@example.com
Last login: Sat Jan 2 23:53:25 2016 from 10.111.1.71
CoreOS beta (877.1.0)
core@euca-172-31-22-157 ~ $ fleetctl list-machines
MACHINE IP METADATA
33a32090... 10.116.131.107 purpose=coreos-cluster,region=euca-us-east-1
8981923b... 10.116.131.121 purpose=coreos-cluster,region=euca-us-east-1
c48b1635... 10.116.131.213 purpose=coreos-cluster,region=euca-us-east-1
e71b1fef... 10.116.131.230 purpose=coreos-cluster,region=euca-us-east-1
f047b9ff... 10.116.131.197 purpose=coreos-cluster,region=euca-us-east-1
core@euca-172-31-22-157 ~ $ etcd
etcd etcd2 etcdctl
core@euca-172-31-22-157 ~ $ etcdctl cluster-health
member d5c5d93e360ba87 is healthy: got healthy result from http://10.116.131.230:2379
member 12b6e6e78c9cb70c is healthy: got healthy result from http://10.116.131.107:2379
member 8e6ccfef42f98260 is healthy: got healthy result from http://10.116.131.213:2379
member cffd4985c990f872 is healthy: got healthy result from http://10.116.131.197:2379
member d0a4c6d73d0d8d17 is healthy: got healthy result from http://10.116.131.121:2379
cluster is healthy
core@euca-172-31-22-157 ~ $ etcdctl member list
d5c5d93e360ba87: name=e71b1fefcd65c43a0fbacc7103efbc2b peerURLs=http://172.31.22.157:2380 clientURLs=http://10.116.131.230:2379
12b6e6e78c9cb70c: name=33a3209006d2be1d5be0da6eaea007c5 peerURLs=http://172.31.19.215:2380 clientURLs=http://10.116.131.107:2379
8e6ccfef42f98260: name=c48b163558b61733c1aa44dccb712406 peerURLs=http://172.31.47.175:2380 clientURLs=http://10.116.131.213:2379
cffd4985c990f872: name=f047b9ff24f3d0c4e74c660709103b36 peerURLs=http://172.31.6.166:2380 clientURLs=http://10.116.131.197:2379
d0a4c6d73d0d8d17: name=8981923b54d7d7f46fabc527936a7dcf peerURLs=http://172.31.4.17:2380 clientURLs=http://10.116.131.121:2379
That's it! The CoreOS cluster has been successfully deployed. Given HPE Helion Eucalyptus's AWS compatibility, this template can be used on AWS as well.
As always, please let me know if there are any questions. Enjoy!
We all understand that runtime characteristics change as processes get moved around the network. Having problems with network I/O? Move the database daemon to the same tier as the client process. Problems with file I/O? Store the data in memory instead of on disk. And so on.
These same techniques apply to system architecture and security. The location of policy enforcement, decision, and database processes hugely impacts the overall welfare of your organization's computational systems.
With these kinds of thoughts, what happens when security processes get moved around the network?
But first, we must define the security processes:
1. Policy Enforcement Point (PEP)
The gatekeeper component. It enforces the security policy on the client program. PEPs come in many shapes and sizes; oftentimes it's a small block of code embedded directly into a client program.
From there, I figured I would try to create a Eucalyptus EMI that would support three-factor authentication on a Eucalyptus 4.0 cloud. The trick was to figure out how to display the information users need to configure Google Authenticator. The euca2ools command 'euca-get-console-output' proved to be the perfect mechanism for providing this information to the cloud user. This blog will show how to configure an Ubuntu Trusty (14.04) cloud image to support three-factor authentication.
In order to leverage the steps mentioned in this blog, the following is needed:
The 'chroot' command above allows us to edit the image as if it were the currently running Linux operating system. We have to install a couple of packages in the image. Before we do, use resolvconf to create the necessary information in /etc/resolv.conf.
root@odc-f-13:/# resolvconf -I
Confirm the settings are correct by running ‘apt-get update’:
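root@odc-f-13:/# apt-get update

Next, install the needed packages. On Ubuntu Trusty the PAM module ships as libpam-google-authenticator (ntp is assumed here as well, since time-based verification codes depend on an accurate clock):

root@odc-f-13:/# apt-get install -y ntp libpam-google-authenticator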
After these packages have been installed, run the ‘google-authenticator’ command to see all the available options:
root@odc-f-13:/# google-authenticator --help
-h, --help Print this message
-c, --counter-based Set up counter-based (HOTP) verification
-t, --time-based Set up time-based (TOTP) verification
-d, --disallow-reuse Disallow reuse of previously used TOTP tokens
-D, --allow-reuse Allow reuse of previously used TOTP tokens
-f, --force Write file without first confirming with user
-l, --label=<label> Override the default label in "otpauth://" URL
-q, --quiet Quiet mode
-r, --rate-limit=N Limit logins to N per every M seconds
-R, --rate-time=M Limit logins to N per every M seconds
-u, --no-rate-limit Disable rate-limiting
-s, --secret=<file> Specify a non-standard file location
-w, --window-size=W Set window of concurrently valid codes
-W, --minimal-window Disable window of concurrently valid codes
Updating PAM configuration
Next, the PAM configuration file /etc/pam.d/common-auth needs to be updated. Find the following line in that file:
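auth    [success=1 default=ignore]      pam_unix.so nullok_secure

That is the stock Trusty entry. Below it, add the Google Authenticator module so that a TOTP verification code is required in addition to the password (the placement shown is an assumption, not the post's exact diff):

auth    required                        pam_google_authenticator.so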
The next modification involves enabling the 'ubuntu' user to have a password. By default, the account is locked (i.e., it doesn't have a password assigned) in the cloud-init configuration file. For this exercise, we will enable it and assign a password. Just like the old Ubuntu Cloud images, we will assign the 'ubuntu' user the password 'ubuntu'.
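The cloud-init snippet itself is not reproduced here; a sketch of the relevant keys in /etc/cloud/cloud.cfg looks like this (the passwd value is a placeholder for a real hash):

system_info:
  default_user:
    name: ubuntu
    lock_passwd: False
    passwd: <SHA-512 hash produced by 'mkpasswd --method=SHA-512'>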
The value for the ‘passwd‘ option is the output from the mkpasswd command executed earlier.
The final update to the image is to add some bash code to the /etc/rc.local file, so that the information for configuring Google Authenticator with the instance can be presented to the user through the output of 'euca-get-console-output'. Add the following code to the /etc/rc.local file above the 'exit 0' line:
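The original block is not reproduced here; below is a minimal sketch of the idea, using only flags shown in the --help output above (the file paths and banner format are assumptions, not the post's exact code):

# Generate a TOTP secret for the 'ubuntu' user on first boot, then print the
# secret, verification code, and scratch codes so they show up in the output
# of 'euca-get-console-output'.
if [ ! -f /home/ubuntu/.google_authenticator ]; then
    su - ubuntu -c "google-authenticator --time-based --disallow-reuse --force \
        --rate-limit=3 --rate-time=30 --window-size=3" > /root/google-auth.txt 2>&1
    echo "########## GOOGLE AUTHENTICATOR INFORMATION ##########"
    cat /root/google-auth.txt
    echo "######################################################"
fi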
That's it! Now we need to bundle, upload, and register the image.
Bundle, Upload and Register the Image
Since we are using an HVM image, we don’t have to worry about the kernel and ramdisk. We can just bundle, upload and register the image. To do so, use the euca-install-image command. Before we do that, we need to exit out of the chroot environment and unmount the image:
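(The exact transcript is not reproduced here; assuming the image was mounted at /mnt/ubuntu, the steps look roughly like this, with the image file, name, and bucket values being illustrative.)

root@odc-f-13:/# exit
[root@odc-f-13 ~]# umount /mnt/ubuntu
[root@odc-f-13 ~]# euca-install-image -i trusty-server-cloudimg-amd64-disk1.img \
    -n ubuntu-trusty-3fa -b ubuntu-trusty-3fa -r x86_64 --virtualization-type hvm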
the verification code displayed in Google Authenticator for the new account added
With the information above, the SSH authentication should look similar to the following:
[root@odc-f-13 ~]# ssh -i account1-user01/account1-user01.priv firstname.lastname@example.org
The authenticity of host 'euca-10-104-6-237.bigboi.acme.eucalyptus-systems.com (10.104.6.237)' can't be established.
RSA key fingerprint is c9:37:18:66:e3:ee:66:d2:8a:ac:a4:21:a6:84:92:08.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'euca-10-104-6-237.bigboi.acme.eucalyptus-systems.com,10.104.6.237' (RSA) to the list of known hosts.
Authenticated with partial success.
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-32-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Mon Jul 21 13:23:48 UTC 2014
System load: 0.0 Memory usage: 5% Processes: 68
Usage of /: 56.1% of 1.32GB Swap usage: 0% Users logged in: 0
Graph this data and manage this system at:
Get cloud support with Ubuntu Advantage Cloud Guest:
0 packages can be updated.
0 updates are security updates.
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by applicable law.
Three-factor authentication has been successfully configured for the Ubuntu cloud image. If cloud administrators would like to use different authentication for the instance user, I suggest investigating how to set up PAM LDAP authentication, where SSH public keys are stored in OpenLDAP. The Ubuntu image would have to be updated for this to work; I would check out the 'sss_ssh_authorizedkeys' command and the pam-script module to help get it working.
The Eucalyptus Load Balancer utilizes HAProxy to implement its load balancing solution. HAProxy has a handy feature that can display a statistics page for the HAProxy application. Enabling this feature on the Eucalyptus Load Balancer can help cloud administrators obtain valuable information from the load balancer in the following areas:
Network traffic to the backend instances registered with the load balancer
The prerequisites for this blog entry are pretty straightforward: just read my previous entry entitled "Customizing Eucalyptus Load Balancer for Eucalyptus 4.0". To enable the web UI stats page, we will just add information to the /etc/load-balancer-servo/haproxy_template.conf file in the load balancer image.
After downloading and mounting the Eucalyptus Load Balancer image (as mentioned in my previous blog entry), to enable the HAProxy web statistics page, update the /etc/load-balancer-servo/haproxy_template.conf to look like the following:
[root@odc-f-13 /]# cat etc/load-balancer-servo/haproxy_template.conf
defaults
    timeout connect 5s
    timeout client 2m
    timeout server 2m
    timeout http-keep-alive 10s
    timeout queue 1m
    timeout check 5s
    option http-server-close # affects KA on/off

userlist UsersFor_HAProxyStatistics
    group admin users admin
    user admin insecure-password pwd*4admin
    user stats insecure-password pwd*4stats

listen HAProxy-Statistics *:81
    mode http
    stats uri /haproxy?stats
    stats refresh 60s
    acl AuthOkay_ReadOnly http_auth(UsersFor_HAProxyStatistics)
    acl AuthOkay_Admin http_auth_group(UsersFor_HAProxyStatistics) admin
    stats http-request auth realm HAProxy-Statistics unless AuthOkay_ReadOnly
    stats admin if AuthOkay_Admin
For more information regarding these options, please refer to the HAProxy 1.5 documentation. The key options here are as follows:
The port defined in the ‘listen’ section – listen HAProxy-Statistics *:81
The usernames and passwords defined in the 'userlist' section – userlist UsersFor_HAProxyStatistics
The URI defined in the ‘listen’ section – stats uri /haproxy?stats
After making these changes, confirm that there aren’t any configuration file errors:
[root@odc-f-13 /]# /usr/sbin/haproxy -c -f etc/load-balancer-servo/haproxy_template.conf
Configuration file is valid
Next, unmount the image and tar-gzip it:
[root@odc-f-13 eucalyptus-load-balancer-image]# umount /mnt/centos
[root@odc-f-13 eucalyptus-load-balancer-image]# kpartx -dv /dev/loop0
del devmap : loop0p1
[root@odc-f-13 eucalyptus-load-balancer-image]# losetup -d /dev/loop0
[root@odc-f-13 eucalyptus-load-balancer-image]# tar -zcvf eucalyptus-load-balancer-image-monitored.tgz eucalyptus-load-balancer-image.img
Use euca-install-load-balancer to upload the new image:
[root@odc-f-13 eucalyptus-load-balancer-image]# cd
[root@odc-f-13 ~]# euca-install-load-balancer --list
Currently Installed Load Balancer Bundles:
Version 2 (enabled)
Installed on 2014-05-28 at 11:10:03 PDT
[root@odc-f-13 ~]# euca-install-load-balancer -t eucalyptus-lb/usr/share/eucalyptus-load-balancer-image/eucalyptus-load-balancer-image-monitored.tgz
Decompressing tarball: eucalyptus-lb/usr/share/eucalyptus-load-balancer-image/eucalyptus-load-balancer-image-monitored.tgz
Bundling and uploading image to bucket: loadbalancer-v3
Registering image manifest: loadbalancer-v3/eucalyptus-load-balancer-image.img.manifest.xml
Registered image: emi-DB150EC0
PROPERTY loadbalancing.loadbalancer_emi emi-DB150EC0 was emi-F0D5828C
Load Balancing Support is Enabled
[root@odc-f-13 ~]# euca-install-load-balancer --list
Currently Installed Load Balancer Bundles:
Version 2
Installed on 2014-05-28 at 11:10:03 PDT
Version 3 (enabled)
Installed on 2014-07-08 at 18:38:29 PDT
Testing the Eucalyptus Load Balancer Statistics Page
To view the HAProxy statistics page, create a Eucalyptus Load Balancer instance by using eulb-create-lb:
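(A representative call; the availability zone and listener values here are illustrative.)

[root@odc-f-13 ~]# eulb-create-lb -z one -l "lb-port=80, protocol=HTTP, instance-port=80, instance-protocol=HTTP" TestLB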
Since the web statistics page is configured to display on port 81, use euca-authorize to allow access to that port in the load balancer’s security group. I recommend limiting access to the port for security reasons. In the example below, access is limited to only the client 192.168.30.25:
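(The security group name below is a placeholder; each load balancer gets its own internal security group.)

[root@odc-f-13 ~]# euca-authorize -P tcp -p 81 -s 192.168.30.25/32 euca-internal-<account-id>-TestLB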
That's it! For any load balancer that's launched on the Eucalyptus 4.0 cloud, the cloud administrator will be able to display statistics for the load balancer. This is also something the cloud administrator can provide to cloud users as a service. By leveraging restrictions placed in the load balancer's security groups, cloud administrators can limit access to the statistics page based on the source IP addresses of the cloud users' client machine(s).
To unlock a disk that is encrypted with OS X's FileVault feature, one needs to type in the password of any user on the machine who is allowed to unlock the disk. The system then boots and helpfully logs you in as that user. In general that is probably a convenient little feature, but for me it just makes things awkward: I want to use different passwords for unlocking the disk and logging into my user account. To make that work, I have to create a second account dedicated to unlocking the disk, get logged into that account when the system boots, then immediately log back out so I can log in as the user I actually want to use.
Or do I?
The system that powers FileVault, Core Storage, combines full disk encryption and some logical volume management features in a manner similar to LVM…
Installing distributed systems can be a tedious and time-consuming process. Luckily, there are many solutions for distributed configuration management available to the open source community. Over the past few months, I have been working on the Eucalyptus cookbook, which allows for standardized deployments of Eucalyptus using Chef. This functionality has already been implemented in MicroQA using individual calls to Knife (the Chef command line interface) for each machine in the deployment. Orchestration of the deployment is rather static, and thus only three topologies have been implemented as part of the deployment tab.
Last month, Riot Games released Motherbrain, their orchestration framework that allows flexible, repeatable, and scalable deployment of multi-tiered applications. Their approach to the deployment rollout problem is simple and understandable: you configure manifests that define how your application components are split up, then define the order in which they should be deployed.