Wednesday morning started off with a Women of OpenStack breakfast where I happened to sit down next to Leslie Hawthorn without realizing it until we introduced ourselves, so that was a pleasant surprise! We’d both been intending to meet for some time. The table we were at also had a couple of engineers working on OpenStack at Yahoo!, a UX engineer from Red Hat, a tech reporter and more, so it was quite the interesting mix and we all swapped business cards. It’s been really cool to meet the GNOME Outreach Program for Women interns from this past cycle too; they’ve been doing really impressive work and writing about it in blogs I’ve enjoyed reading. I was also able to work with Anita Kuno directly while she was helping out with the infrastructure (-infra) team.
It was then over to the conference for more inspiring keynotes. The OpenStack Foundation brought in Randall Sobie to talk about clouds in high energy physics, including a 5,000-core OpenStack cloud at CERN. Predictably, dealing with massive amounts of data is one of the challenges for scientific organizations like these, which collect it to solve some of the most interesting problems of and in the universe.
Next up was a representative from the NSA who, while not able to tell us much at all about their private cloud deployment of OpenStack, did let slip that their numbers had reached the 1000s at a certain point in the process and that their staff was seeking to collaborate with other government organizations to deploy similar infrastructures elsewhere. Again, predictably, an organization like the NSA needs to handle not only big data, but also analysis and manipulation of that data. Then came HP’s keynote, where they dove into some of the projects HP supports, including the one I’m employed by them to work on: the continuous integration team! They also talked about OpenStack on OpenStack (TripleO), Heat and related tools. It was also interesting to learn that HP ships a Linux server every minute.
Off to sessions! The Adding new integrated projects to Tempest session shared some best practices from longer-established OpenStack projects as the newly integrated Ceilometer and Heat projects now seek to be included in Tempest testing (also check out these slides by Sean Dague). The testr / testtools feedback/next-steps session began with a description of exactly what testr and Test Repository are and are for, and then moved on to how more tests can migrate to them.
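For readers who haven’t run into them, testr is the command-line front end to Test Repository: it discovers and runs Python test suites written against the unittest/testtools APIs and records the results so it can parallelize runs and re-run recent failures. Here’s a minimal sketch of the kind of test it works with, assuming only the standard testtools API (the class and test names are made up for illustration, not from any OpenStack project):

```python
# Hypothetical example: a minimal testtools-based test case, the style of
# Python unit test that testr / Test Repository discovers and runs.
import testtools


class TestStringReverse(testtools.TestCase):
    """A trivial test written against the testtools API."""

    def test_reverse(self):
        # testtools.TestCase builds on unittest.TestCase, so the familiar
        # assertions are all available.
        self.assertEqual("cba", "abc"[::-1])
```

Roughly speaking, `testr init` creates the results repository and `testr run` executes the suite while recording outcomes, which is what lets it parallelize and re-run failures on later runs.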
Then there was the Tempest Best Practice Guide session, where they discussed some ideas for what the QA team should develop in terms of best practices, along with steps the team can take to improve its organization and productivity, including leveraging milestones, hosting regular bug triage days and improving documentation with regular documentation weeks.
I went to Devananda van der Veen’s Provisioning Bare Metal with OpenStack talk (slides here). Although I am quite familiar with it now, it was cool to see it presented in context and to learn about some of the future plans. The Tempest – Gap Analysis – Identify new tests session took a look at some of the gaps in test coverage across projects; there are a number of them, and there currently isn’t the manpower to write the missing tests, so a few new blueprints will be created to help track progress.
One of my favorite sessions of the day was Vulnerability management: infra needs, scoring…. The topic was the handling of security patches and how our team could facilitate that work, preferably without completely duplicating our entire development infrastructure in private exclusively for security patches. Ideas tossed around included pointing security testers at the git repositories for the tests we use so they could run them, plus offering to run a stand-alone, Gerrit-only server (rather than the full infrastructure) so they could use a familiar code review system when handling the changes with key team members. Discussion then moved into vulnerability handling in general and it was really interesting to learn about the teams involved. The final session of the day was Technical Committee membership evolution. Given my work with Ubuntu, this governance discussion regarding the mechanisms for handling the growth of the TC was an interesting one, even if they didn’t come to any formal conclusions.
The day wrapped up for me with an enjoyable and geeky evening out with some of my -infra colleagues. I work with some really fun, open source committed folks and it’s really cool to be able to meet up in person during these summits.
Thursday was the final day of the conference! Moving on from QA, it was a day of -infra discussions. The first session of the day was Continuous deployment for upstream OpenStack, where the team discussed some of the goals and intentions for such a deployment model (alongside the standard 6 month release cycle) and acknowledged that there are already organizations doing this in various ways. Next up was Dependency Management, where most of the discussion centered on Python modules and the requirements project that the -infra team now maintains for the new PyPI mirror. The Sorting out test runners, wrappers and venvs session focused on the fact that a lot of people are confused about how to properly run the same tests that are run in the CI infrastructure, so they’re left puzzled when their commits fail. The last session before lunch was another -infra meeting, centered around Failure management in the gate. The concern was the few days in the past cycle when it could take several hours for a change to merge due to a couple of issues with the various moving parts in the gating infrastructure. There was some discussion around how to optimize these jobs (and failures), and also about improvements to the notification system for such slowness or failures beyond the IRC bot in the #openstack-infra channel, which was added this past cycle.
After lunch it was off to the Bare Metal Testing session. We had already talked about virtualized testing earlier in the week (essentially used to quickly test the logic of the process), so the focus of this session was how to handle bare metal testing on actual bare metal. Some of the challenges and risks of testing on bare metal were identified, and the session ended with action items to put together a document to give to any potential donor of hardware and data center space explaining the requirements and expectations. OpenStack CI logging / Review of non-infra-managed tooling was an interesting session reviewing the OpenStack core tooling that isn’t yet under the control of the -infra team and the plans to move it over; I took a couple of action items in this session to help draft some puppet modules to pull some things over. The OpenStack project has had to stop using the code name for the networking project due to a trademark issue (details here), which led to the Projects (re)naming session. It turns out the code names for the projects aren’t protected under OpenStack trademarks and it would be very costly to do so, particularly as the project grows, so they feel it’s inevitable that a situation like the networking software’s will arise again. The consensus tended to be to try to reduce the dependence upon code names in official spaces and in public spaces beyond development, but it was acknowledged that it’s a hard human problem to get around. The last session I attended was on the Havana release schedule, where Thierry Carrez presented the proposed release cycle for the next 6 months and took some feedback about it and about the summit in general.
Right after that last session I headed back to the hotel to grab my suitcase and then it was off on the uneventful train + plane trip home.