I give thee, cronlib and puppet-cron-analyzer

Posted: June 11th, 2012 | Filed under: DevOps

I’ve been working on a puppet cron analyzer tool, which is coming along nicely:

Its original goal was to provide an analysis/map of cron runtimes, but it turns out the most useful part is simply being able to regex search across all the crons in an infrastructure (and that part works now).

Also, to build this, I had to create a library, cronlib, to convert cron entries (like what you’d see on-disk) into normalized entries containing only lists of numbers. Cronlib also supports dumping a list of all the timestamps a cron will run at (a huge list!), based on a days argument. See for a nice way to create a time_map, to avoid storing duplicates of these huge lists.
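To illustrate what that normalization involves (this is a hypothetical sketch, not cronlib’s actual API), a single cron field like */15 or 1-5 can be expanded into a plain list of integers:

```python
# Hypothetical sketch of cron-field normalization (not cronlib's real API):
# expand one cron field (e.g. "*/15", "1-5", "0,30") into a sorted list of ints
# bounded by the field's valid range (0-59 for minutes, 0-23 for hours, etc.).
def expand_cron_field(field, minimum, maximum):
    values = set()
    for part in field.split(","):
        part, _, step = part.partition("/")
        step = int(step) if step else 1
        if part == "*":
            start, end = minimum, maximum
        elif "-" in part:
            start, end = (int(x) for x in part.split("-"))
        else:
            start = end = int(part)
        values.update(range(start, end + 1, step))
    return sorted(values)
```

For example, expand_cron_field("*/15", 0, 59) gives [0, 15, 30, 45], and doing this for every field gives you the “only lists of numbers” form.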
More to come as puppet-cron-analyzer progresses.


Connecting to existing buckets in S3 with boto, the right way

Posted: January 23rd, 2012 | Filed under: DevOps

Here’s another interesting tidbit.

If you have scripts that connect to S3, and you run out of buckets (Amazon only allows 100 buckets per account), you might get a nasty surprise.

See, you may have been using create_bucket(name-of-bucket) to get your bucket object. It’s undocumented as far as I can see, but apparently if you call create_bucket() on a bucket that already exists, it returns the Bucket object. That’s handy! Except it breaks once you’re unable to create more buckets (even though you aren’t really trying to create any). Sigh, so I refactored as such:

# old and busted: bucket = s3_conn.create_bucket(bucket_name)
# new hotness:
# iterate over Bucket objects and return the first whose name matches string:
def find_s3_bucket(s3_conn, string):
    for i in s3_conn.get_all_buckets():
        if string in i.name:
            return i
Used as: bucket = find_s3_bucket(s3_conn, bucket_name)

There is likely a more elegant way, but hey this works.
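One possibly more elegant option: boto’s S3Connection also has a lookup() method, which returns the existing Bucket or None if it doesn’t exist, without attempting a create. A sketch of using it, only falling through to create_bucket() when the bucket is genuinely missing:

```python
# Sketch: use boto's lookup() to fetch an existing bucket (returns None
# rather than raising when the bucket doesn't exist), and only call
# create_bucket() when we actually need a new one.
def get_or_create_bucket(s3_conn, bucket_name):
    bucket = s3_conn.lookup(bucket_name)
    if bucket is None:
        # only hit the create path when the bucket truly doesn't exist
        bucket = s3_conn.create_bucket(bucket_name)
    return bucket
```

This avoids the create-on-fetch surprise entirely, since lookup() never counts against the 100-bucket limit.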


Finding instances by name with boto in python

Posted: January 22nd, 2012 | Filed under: DevOps

OK, I know I need to blog more. Rather than think I don’t have anything useful to say, I’ll start adding quick entries of what-I-learned.

Random tidbits from today:
I got annoyed with EC2 failures and having to manually terminate and redeploy instances today, so I finally worked on a script I’ve been meaning to write. One thing I had to figure out (which isn’t all that complex) is how to discover an instance by name.

If you tag an instance with the hostname you’re using in your deployment script, you don’t need to fumble in the AWS console to find an instance ID. Ever. I don’t find it acceptable to manually click around or run scripts to discover information that’s available from an API 🙂
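For that to work, the deploy script needs to set the Name tag when it launches the box. In boto that’s an add_tag() call on the instance object; a tiny sketch (the hostname shown is hypothetical):

```python
# Sketch: tag a freshly launched boto instance with its hostname so later
# scripts can find it by Name tag instead of by instance ID.
def name_instance(instance, hostname):
    instance.add_tag("Name", hostname)
    return instance
```

Called like name_instance(new_instance, "web01.example.com") right after run_instances() returns.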

So, to “find” the instance using python and boto (assume AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are defined in your shell environment):

import os
import boto
ec2conn = boto.connect_ec2(os.environ["AWS_ACCESS_KEY_ID"], os.environ["AWS_SECRET_ACCESS_KEY"])
reservations = ec2conn.get_all_instances()
instances = [i for r in reservations for i in r.instances]
my_fqdn = "" # trailing part of my domain

Now, ‘instances’ can be iterated over to find instances with the name you desire. I wrote a little wrapper function to do this, and it returns an instance object (which I call instance.terminate() on, for this purpose). Code:

import sys

def find_instance_by_nametag(instances, name):
    # support short or full hostname usage
    if my_fqdn not in name:
        name = name + my_fqdn
    for i in instances:
        if "Name" in i.tags and name in i.tags['Name']:
            return i
    sys.exit("sorry, I couldn't find an instance with that name!")

Easy as that!
