On Wednesday of last week I hopped on a flight to New York City to meet with several of my OpenStack Infrastructure colleagues and potential contributors for the first “Infrastructure Bootcamp” ever held! Thursday morning the bootcamp began with a quick overview of expectations and topics we wished to cover, and then we went around the room for introductions. Of the 20 folks present, we had a very diverse crowd representing several companies (including HP, Dell, IBM, DreamHost, Red Hat, Citrix and VMware), both development and operations backgrounds, and a range of interests in learning more about the infrastructure that keeps OpenStack development going. Next time we’ll have to be sure we have materials to make name tags.
After introductions, we got to what was dubbed “habits of successful infra-core members,” where we dove into the communication workflow we use in the project. What we stressed most was how important IRC (and our channel #openstack-infra on Freenode) is to our workflow, and that many of us maintain a persistent connection to it. The team has a mailing list, but as a matter of team culture all the core folks are used to using IRC for discussing and working through almost everything, reserving the mailing list primarily for announcements. The team also has meetings in #openstack-meeting every Tuesday at 17:00 (details here).
Next up on the agenda was a shift into some of the practical philosophies that govern the technical decisions made in the project. One of the things that really drew me to this team was the realization that, in addition to being the team that makes OpenStack’s development infrastructure tick, the infrastructure is an Open Source project unto itself. Configuration of our infrastructure is publicly available and goes through the code review process just like other projects in OpenStack. We strive to use tools which are Open Source and self-hosted, and when a tool isn’t, we actively seek alternatives (see, the irony of saying this after using a GitHub link was not lost on me; I’m working on that one). All the tools we create to work on the infrastructure are open sourced; both Zuul and Jenkins Job Builder, for instance, were developed for our project but are also used by other projects.
Since we do have such an open infrastructure with code review, it has really allowed us to encourage autonomy for new contributors and foster a “just do it” attitude when it comes to contributing. Everyone should feel free to browse our bug list (or find an issue to fix themselves) and submit patches for review.
The afternoon shifted to actually walking everyone through a more detailed overview, as the current core infrastructure team (Monty Taylor, Jim Blair, Clark Boylan, and Jeremy Stanley) and various other committers worked together to write out and explain as much as they could about the infrastructure on a pair of white paper boards. It quickly became apparent why we all needed to come together at a bootcamp to do this – it’s not simple!
The result by the end of the day:
We had one volunteer offer to actually put this together as a more formal SVG, which would be a significant improvement over the much more limited one I wrote for the InfraTeam wiki. I’m looking forward to seeing that.
A majority of attendees spent the evening at an outstanding dinner at PUBLIC Restaurant, where Monty had arranged a private room for us.
Sean Dague, who could only join us for the first day, also wrote about the day here: OpenStack Infrastructure Bootcamp
Friday morning began with another pile of bagels, cream cheese and lox (my favorite!) as we dove into the specifics of many of the services we had discussed in the overview the previous day. The first stop of the day was a look at the recently reformatted Infrastructure Documentation at ci.openstack.org.
From there we talked about the public configuration of our infrastructure, and then there was a demonstration of how we go about making and testing our puppet patches, documented here. Jim selected the puppet configuration for our paste service for the demonstration and ended up finding a couple of bugs, which he was able to submit a patch for; it made for a really great live demonstration of testing.
The next major infrastructure piece we looked at was Zuul, our pipeline-oriented project gating system, which “facilitates running tests and automated tasks in response to Gerrit events.” This demonstration covered how to go about testing Zuul itself when developing for it (which I hope to see documented in a simple way soon), and also took a deeper look at how Zuul is configured and why certain pieces work the way they do in the various pipelines it manages.
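To give a flavor of that configuration, here’s a minimal sketch of the sort of layout file Zuul reads; the project and job names are made up for illustration, not our actual configuration:

```yaml
pipelines:
  - name: check
    manager: IndependentPipelineManager
    trigger:
      gerrit:
        # Run whenever a new patchset is uploaded to Gerrit
        - event: patchset-created
    success:
      gerrit:
        verified: 1
    failure:
      gerrit:
        verified: -1

projects:
  - name: example-org/example-project
    check:
      - example-unit-tests
```

Each pipeline maps Gerrit events to the jobs that should run for each project, and to the review votes Zuul reports back on success or failure.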
Going through these two topics touched upon most of the more complicated pieces of the infrastructure, so the rest of the day was spent going through more minor portions and answering questions. We were able to review the IRC-based services we maintain (docs), discuss Jenkins Job Builder (docs), and show where we track bugs (here) and how we typically manage them.
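As a taste of Jenkins Job Builder, here’s a minimal sketch of the YAML it consumes to generate Jenkins jobs; again, the job and project names here are illustrative, not our real configuration:

```yaml
- job-template:
    name: 'gate-{name}-unit-tests'
    builders:
      # Run the project's unit tests via tox
      - shell: 'tox -e py27'

- project:
    name: example-project
    jobs:
      - 'gate-{name}-unit-tests'
```

Jenkins Job Builder expands the template for each project and pushes the resulting job definitions to Jenkins, so jobs live in version control instead of being edited by hand in the web UI.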
More photos from the event here: http://www.flickr.com/photos/pleia2/sets/72157634407480547/
All in all it was a really great event and I was glad to be a part of it. I was able to fill in some gaps in my own knowledge about the infrastructure (particularly when it comes to pieces like Zuul, which I haven’t really dug into yet). The loose event structure, with meals delivered and plenty of breaks, allowed me to sit down and share what I know with other attendees as topics arose. The food for the event was quite accommodating (I don’t eat pork and at least one attendee was a vegetarian), and the Manhattan venues for each day gave us really great spaces to work in that were easy to get to. Huge props to Monty for putting this together!