Liberty OpenStack Summit days 3-5

Summiting continued! The final three days of the conference brought two days of OpenStack Design Summit discussions and working sessions on specific topics, while Friday was spent in a contributors meetup so we could have face time with the people we're working with on projects.

Wednesday began with a team breakfast, where over 30 of us descended upon a breakfast restaurant and had a lively morning. Unfortunately it ran a bit long and made us a bit late for the start of the day's summit sessions, but the morning's Infrastructure work session was fully attended! The session sought to take some next steps with our activity tracking mechanisms, none of which are currently part of the OpenStack Infrastructure. Several different types of stats are currently being collected: reviewstats, which is hosted by a community member and focuses specifically on reviews; the reports produced by Bitergia (here), which are somewhat generic but help compare OpenStack to other open source projects; and Stackalytics, which is crafted specifically for the OpenStack community. There seems to be value in hosting multiple metric types, mostly so comparisons can be made across platforms where they differ. The consensus of the session was to first move forward with bringing Stackalytics into our infrastructure, since so many projects find such value in it. Etherpad here: YVR-infra-activity-tracking
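For a sense of the kind of aggregation these dashboards do, here's a rough sketch that counts Code-Review votes per reviewer for one project using Gerrit's REST API. This is purely illustrative and not how reviewstats or Stackalytics are actually implemented; the project name and query parameters are assumptions.

```python
# Illustrative only: count Code-Review votes per reviewer via Gerrit's REST API.
import json
from collections import Counter
from urllib.request import urlopen

URL = ("https://review.openstack.org/changes/"
       "?q=project:openstack-infra/system-config+status:merged"
       "&o=DETAILED_LABELS&n=100")

raw = urlopen(URL).read().decode("utf-8")
# Gerrit prefixes its JSON responses with ")]}'" to defeat XSSI; strip that line.
changes = json.loads(raw.split("\n", 1)[1])

reviews = Counter()
for change in changes:
    for vote in change.get("labels", {}).get("Code-Review", {}).get("all", []):
        if vote.get("value"):  # skip reviewers with no actual vote recorded
            reviews[vote.get("name", "unknown")] += 1

for name, count in reviews.most_common(10):
    print(f"{count:4d}  {name}")
```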


With this view from the work session room, it’s amazing we got anything done

Next up was QA: Testing Beyond the Gate. In OpenStack there is a test gate that all changes must pass in order to be merged. In the past cycle periodic and post-merge tests have also been added, but it's been found that when merging code doesn't depend on these tests passing, not many people pay attention to them. The result of the session is a proposed dashboard for tracking these tests, so that there's an easier view into what they're doing and whether they're failing, empowering developers to fix them up. Tracking third party testing in this, or a similar, tracker was also discussed as a proposal for once the infra-run tests are being accounted for. Etherpad here: YVR-QA-testing-beyond-the-gate

The QA: DevStack Roadmap session covered some of the general cleanup that typically needs to be done in DevStack, but then also went into some broader action items, including improving the reliability of the CentOS tests run against it (currently non-voting), pulling some things out of DevStack to support them as plugins as we move into a Big Tent world, and working out how to move forward with Grenade. Etherpad here: YVR-QA-Devstack-Roadmap

I then attended QA: QA in the Big Tent. In the past cycle, OpenStack dropped the long process projects went through to be accepted as official OpenStack projects and streamlined it so that competing technologies are now all in the mix; we're calling it the Big Tent, since we're now including everyone. This session focused on how to support QA needs now that OpenStack is not just a slim core of a few projects. The general idea from a QA perspective is that the QA team can continue to support the things-everyone-uses (nova, neutron, glance… an organically evolving list) and improve pluggable support for projects beyond that, so those projects can help themselves to the QA tools at their disposal. Etherpad here: YVR-QA-in-the-big-tent

With sessions behind me, I boarded a bus for the Core Reviewer Party, hosted at the Museum of Anthropology at UBC. As party venues go, this was a great one. The museum was open for us to explore, and they also offered tours. The main event took place outside, where they served design-your-own curry seafood dishes, bison, cheeses and salmon. Of course no OpenStack event would be complete without a few bars around serving various wines and beers. There was an adjacent small building where live music was playing, and there was a lot of space to walk around, catch the sunset and enjoy some gardens. I spent much of the early evening with friends from Time Warner Cable and rounded things off with several of my buddies from HP. This ended up being a get-back-after-midnight event for me, but it was totally worth it to spend such a great time with everyone.

Thursday morning kicked off with a series of fishbowl sessions where the Infrastructure team discussed projects we have in the works. First up was Infrastructure: Zuul v3. Zuul is our pipeline-oriented project gating system, which currently works by facilitating the running of tests and automated tasks in response to Gerrit events. Right now it sends jobs off to Gearman for launching via Jenkins on our fleet of waiting nodes, but we're really using Jenkins as a shim here rather than taking advantage of the built-in features Jenkins offers. We're also in need of a system that better supports multi-tenancy and multi-node jobs and that can scale as OpenStack continues to grow, particularly with the Big Tent. This session discussed the end game of phasing out Jenkins in favor of a more Zuul-driven workflow, as well as more immediate changes that may be made to Nodepool and smaller projects like Zuul-merger to drive that vision. Etherpad here: YVR-infra-zuulv3
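To make the "in response to Gerrit events" part concrete: Gerrit can stream events as one JSON object per line over SSH, and that stream is what drives the pipelines. Here's a toy sketch of the pattern; the hostname and the scheduling stub are placeholders, not Zuul's actual code.

```python
# Toy sketch of the event-driven model a gating system like Zuul builds on.
import json
import subprocess

def schedule_jobs(change_id):
    # Placeholder: the real system would enqueue the change into a pipeline
    # and hand jobs to workers (via Gearman/Jenkins today, Zuul-native later).
    print(f"would enqueue jobs for {change_id}")

# "gerrit stream-events" emits one JSON event per line over Gerrit's SSH port.
proc = subprocess.Popen(
    ["ssh", "-p", "29418", "review.example.org", "gerrit", "stream-events"],
    stdout=subprocess.PIPE, text=True)

for line in proc.stdout:
    event = json.loads(line)
    if event.get("type") == "patchset-created":
        schedule_jobs(event["change"]["id"])
```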

Everyone loves bug reporting and task tracking, right? In the next session, Infrastructure: Task tracking, that was our topic. We experimented with creating Storyboard as our homebrewed bug and task tracking solution, but in spite of valiant efforts by the small team working on it, they were unable to attract more contributors and the job was simply too big for a team that size. As a result, we're now back to looking at alternatives to Canonical's hosted Launchpad (which is what we currently use). The session went through a basic evaluation of a few tools, and at the end there was some consensus to work toward bringing up a more battle-hardened and Puppetized instance of Maniphest (from Phabricator) so that teams can see if it fits their needs. Etherpad here: YVR-infra-task-tracking

The morning continued with an Infrastructure: Infra-cloud session. The Infrastructure team has about 150 machines in a datacenter, donated to us by HP. The session focused on how we can put these to use as Nodepool instances by running OpenStack on them ourselves and adding that "infra-cloud" to the providers in Nodepool. I'm particularly interested in this, given some of my history with getting TripleO into testing (so I have deployed OpenStack many, many times!), and I'm generally eager to learn even more about production OpenStack deployments. So it looks like I'll be lending infra brains to Clint Byrum, who is otherwise taking the lead here. To keep in sync with the other things we host, we'll be using Puppet to deploy OpenStack, so I'm thankful for the expertise of people like Colleen Murphy, who just joined our team to help with that. Etherpad here: YVR-infra-cloud
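What "adding infra-cloud to the providers in Nodepool" boils down to is Nodepool talking to a cloud's OpenStack APIs to boot and recycle test nodes. Here's a rough sketch of that interaction using the shade library from the OpenStack community; the cloud, image and flavor names are made-up placeholders, and this is not Nodepool's actual code.

```python
# Rough sketch: boot a test node the way a Nodepool provider would.
import shade

# Credentials for the hypothetical "infra-cloud" entry come from clouds.yaml.
cloud = shade.openstack_cloud(cloud="infra-cloud")

image = cloud.get_image("ubuntu-trusty")   # base image for test nodes (placeholder)
flavor = cloud.get_flavor("m1.large")      # sizing for test jobs (placeholder)

# Boot a node a test job could run on, waiting until it's active.
server = cloud.create_server(
    name="nodepool-node-0001", image=image, flavor=flavor, wait=True)
print("booted:", server.status)
```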

Next up was the Infrastructure: Puppet testing session. It was great to have some of the OpenStack Puppet folks in the room so they could talk about how they're using beaker-rspec in our infra for testing the OpenStack modules themselves. Much of the discussion centered on whether we want to follow their lead or do something else, leveraging our current system of node allocation to do our own module testing. We also have a much-commented-on spec up for proposal here. The result of the discussion was that we'll likely just follow the lead of the OpenStack Puppet team. Etherpad here: kilo-infra-puppet-testing

That afternoon we had another Infrastructure work session, where we focused on refactoring portions of the system-config OpenStack module puppet scripts, and some folks worked on getting the testing infrastructure discussed earlier off the ground. I took the opportunity to review some of the related patches and to help a new contributor do some reviews of her own. She even submitted a patch that was merged the next morning! Etherpad for the work session here: YVR-infra-puppet-openstackci

The last session I attended that day was QA: Liberty Priorities. It wasn't one I strictly needed to be in, but I hadn't attended a session in room 306 yet, and it was the famous gosling room! The room had a glass wall that looked out onto a roof where a couple of geese had their babies; they would routinely walk by and interrupt the session because everyone would stop, coo and take pictures of them. So I finally got to see the babies! The actual session collected and prioritized the pile of to-do list items generated at the summit, which I got roped into helping with. Oh, and they gave me a task to help with. I just wanted to see the geese! Etherpad with the priorities is here: YVR-QA-Liberty-Priorities


Photo by Thierry Carrez (source)

Thursday night I ended up having dinner with the moderator of our Women of OpenStack panel, Beth Cohen. We went down to Gastown to enjoy a dinner of oysters and seafood and had a wonderful time. It was great to swap tech (and women in tech) stories and chat about our work.

Friday! The OpenStack conference itself ended on Thursday, so only ATCs (Active Technical Contributors) attended the final day of the Design Summit. Things were much quieter and the agenda was full of contributors meetups. I spent the day in the combined Infrastructure, QA and Release management contributors meetup. We had a long list of things to work on, but I focused on the election tooling, which I followed up on with a thread on the mailing list and, later, a chat with the author of the proposed tooling. My afternoon was spent working on the translations infrastructure with Steve Kowalik, who works with me on OpenStack infra, and Carlos Munoz of the Zanata team. We were able to work through the outstanding Zanata bugs and make some progress on how we're going to tackle everything. It was a productive afternoon, and it's always a pleasure to get together with the folks I work with online every day.

That evening, as we left the conference center at closing time, I met up with several colleagues for an amazing sushi dinner in downtown Vancouver. A perfect, low-key ending to an amazing event!