The Rocket Ship to Havana – OpenStack Summit Spring 2013

I started this blog post as “The Road to Havana”, but immediately it struck me that the term “road” just doesn’t do this summit justice.

Day 1 was full of Heat for me. As a recent addition to the Heat core reviewer team, it was quite helpful and a pleasure to meet most of the other developers in person. This happened about 10 minutes before our first session together. It never ceases to impress me how easy it is to meet somebody in real life whom you’ve been corresponding with over only IRC and email. In this case, I felt face-to-face contact just added warmth and depth to already warm and friendly professional relationships.

Heat is on a path toward being a really great solution for managing large application deployments in OpenStack clouds. Six months ago I was focused on Juju as a part of that discussion, and Heat was just this little incubating AWS CloudFormation compatibility engine. Juju has been focusing on a rewrite, which I think is a mistake the project will likely regret for a long time. Meanwhile, Heat has turned into a project to gather effort around making orchestration and higher-order control services built-in features of OpenStack.

We had a few discussions about scaling and performance, including concurrent launching of resources and scaling out the Heat engine. These were pretty low-level discussions involving mostly the developers already engaged with the code, and, as at any good summit, we had contributions of ideas from many attendees, and solutions seem clear.

One thing that was clear to me before the summit was that a storm was brewing with regards to how Heat users would express their application deployments.

From some of the larger “enterprise” focused vendors, my own employer (HP) included, comes a recommendation to support TOSCA. This is a really large, wide-ranging standard that only partially applies to Heat’s current scope. However, it seems like a natural fit for TOSCA users to use something built into the cloud to deploy their applications.

Rackspace contributed a spec and will likely contribute some existing code for a native format that Heat users can use. This format is narrower in focus than TOSCA, and I think it has real potential. It was in need of a good name, so I dubbed it “Heat Orchestration Template” during a session. HOT seems to have stuck for at least a while, though “Heat DSL” may also end up being its name.
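For context on what HOT would be replacing: at this point Heat consumed AWS CloudFormation-compatible JSON templates. The sketch below is only an illustration of that CFN-style shape, built as a Python dict; the image name and flavor are hypothetical placeholders, not from the post or any real deployment.

```python
import json

# Minimal CloudFormation-compatible template of the kind Heat's engine
# accepted before a native format existed. ImageId and InstanceType are
# hypothetical placeholder values.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Single-instance example in the CFN-compatible format",
    "Resources": {
        "MyServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "F17-x86_64-cfntools",  # hypothetical image name
                "InstanceType": "m1.small",
            },
        }
    },
}

# Serialize to the JSON document a user would hand to Heat.
print(json.dumps(template, indent=2))
```

A HOT/Heat DSL would cover the same ground with a friendlier, OpenStack-native syntax rather than the AWS resource namespace shown here.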

There were also some interesting discussions around auto scaling going into Heat. I think it is understood by everyone that this is a different interface to similar, but not identical, control services. Because having competing control services can be problematic, it makes sense to have them all live in Heat for now. Rackspace has committed some developers to getting the problem solved and code published during this cycle, so we are all excited to see their work.
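To make the auto scaling discussion concrete: the interface under discussion mirrored the AWS autoscaling resources Heat already modeled. Here is a hedged sketch, again as a Python dict in the CFN-compatible format; the resource names, sizes, and availability zone are illustrative assumptions, not details from the summit sessions.

```python
import json

# Illustrative autoscaling resources in Heat's CFN-compatible format.
# The resource Type strings follow the AWS names Heat mirrored; the
# specific property values here are hypothetical.
scaling_resources = {
    "WebGroup": {
        "Type": "AWS::AutoScaling::AutoScalingGroup",
        "Properties": {
            "AvailabilityZones": ["nova"],  # hypothetical zone name
            "LaunchConfigurationName": {"Ref": "WebLaunchConfig"},
            "MinSize": "1",
            "MaxSize": "3",
        },
    },
    "WebLaunchConfig": {
        "Type": "AWS::AutoScaling::LaunchConfiguration",
        "Properties": {
            "ImageId": "F17-x86_64-cfntools",  # hypothetical image name
            "InstanceType": "m1.small",
        },
    },
}

print(json.dumps(scaling_resources, indent=2))
```

The point of the sessions was that this kind of scaling logic, whatever its final syntax, should live in one place (Heat) rather than as competing control services.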

On a larger scale, the “TripleO (OpenStack on OpenStack)” program that our team at HP has been driving under the tutelage of Robert Collins got a great deal of exposure. With bare-metal Nova landing in OpenStack for Grizzly, and Heat fully integrated, nearly all of our components are blessed and thus prepped for the usual community contribution fire hose that OpenStack brings. The other pieces, diskimage-builder, os-config-applier, and os-refresh-config, are all in StackForge and will be absorbed into OpenStack as we flesh out the TripleO effort.

After the day 1 “Heat firehose” of sessions, I spent a lot of time just communicating to various interested parties about what it all means and where Heat and TripleO fit in with the OpenStack ecosystem. My talk about using Heat to manage OpenStack was well attended and there were some great questions. Slides are available here. Update 2013-04-26: videos have been posted!

One big surprise was to see the tool chain of TripleO mentioned in the keynote by Mark Shuttleworth. The concept we subscribe to is the Unix tool method: to support incremental adoption, each tool should do one job and do it well. To demonstrate this, I have used this image a few times:

This is not a controversial pattern, and has proven successful for solving large problems in computing before. So, imagine my surprise when I saw that Mark Shuttleworth was arguing against it in a slide, showing Juju trying to do all of these jobs, and suggesting this was better than our (unfinished) effort to break the problem up and write a single tool for each distinct task. I spoke with Mark afterwards, and I think we will just agree to disagree on the approach. Having been quite involved with Juju since very early in its existence, I am still rooting for Juju to accomplish what it has set out to do. However, I am troubled by the lack of focus and unclear integration path.

I want to end on a happy note. This was my first OpenStack summit as an employee of HP Cloud Services. I want to thank HP for the opportunities provided to me, and also for sponsoring OpenStack. There are so many talented and focused people at HP, I think we’re going to do some really amazing work together.

Overall, this was a great OpenStack Summit. The Havana cycle will see OpenStack growing more features and improving the deployment story, which is good for everyone. I am particularly excited about the proposition of gathering for “OpenStack ‘I’”, which will be in Hong Kong! So, rock on Stackers, can’t wait to see you all again in the fall!

4 thoughts on “The Rocket Ship to Havana – OpenStack Summit Spring 2013”

  1. Clint, I share your sentiments. I’ve been championing Juju within our organization precisely for its cloud vendor agnosticism, as I anticipated (correctly) that we would wish to transition from public to private cloud as the latter matured. I’ve done so in spite of the distribution agnosticism found throughout the rest of our toolchain, and Juju’s present lack thereof. I have excused this of Juju, since I have anticipated the Go rewrite might solve this problem, as some blog posts have suggested, although not promised.

    I’ve also excused Juju for not supporting the concurrent deployment of services to hosts, however, as I am attempting the HA deployment of OpenStack Grizzly on MaaS, I have run into a situation where Jitsu (to the uninformed: a companion tool of Juju) works around the service concurrency issue, but only if running a single instance of each service, which is contrary to my HA efforts. But this is somewhat excusable because of a feature request which will allegedly address this issue in the rewrite. Plus, deployment to MaaS is my only use case for concurrent services, so this isn’t a big deal except for the rollout of OpenStack.

    I’ve also excused Juju for not providing the means to scale my services dynamically due to the lack of an internal API, because once again that’s something that will be available in the rewrite.

    Unfortunately due to the timing of the rewrite, I’m finding myself transitioning some of my efforts from Juju to other tools in our toolchain, like SaltStack, to try to address some of these shortcomings. The transition itself hasn’t been all wine and roses, since there are currently no tools available that address all cloud related issues head on, nor can they due to the rapid evolution of IaaS itself.

    I hope that the Go rewrite will enable the flexibility that was lacking in the original Python implementation, but I’m losing hope that will happen any time soon considering the bug reports I see filed against the Go rewrite, which more often than not relate to core functionality that was available in the Python version, as opposed to new features to address the shortcomings that I have been anticipating.

    Cheers

