
Quotes for community.ubuntu.com

Earlier this year I worked with the team that launched our shiny new community.ubuntu.com site.

Hooray!

community.ubuntu.com

I had a couple of tasks aside from general review. The first was getting more images on the site prior to launch to make it more engaging to people. I reached out to LoCo teams and got a lot of great photos in, but then got very busy. My second task, finding quotes from community members, got stuck on my poor, long to-do list.

But now I shall continue my quest! We need short, 1-2 sentence quotes from community members.

What do you work on and why?

Or perhaps:

What inspires you about Ubuntu?

Email me: lyz@ubuntu.com

Please include the name you want to be credited under for the quote, and what you work on if it’s not part of the quote.

We’d like to scatter these quotes throughout the site, so feedback from folks from a variety of teams will be super valuable. Once I’ve collected all the quotes I’ll submit the full list to the team and get to work adding them to the site.

Virtual Ubuntu Developer Summit 1311

As is tradition, this virtual Ubuntu Developer Summit kicked off with an introduction by Jono Bacon and a keynote from Mark Shuttleworth. It was at 6AM my time, so I shut off my 5:45AM alarm and proceeded to sleep until my first session at 8:05AM. Hah!

Fortunately it was available on youtube immediately following the broadcast and I was able to Chromecast it up to my TV a few days later: Intro by Jono Bacon, Keynote by Mark Shuttleworth

At 8AM I joined my trusty …tahr at my desk to kick off sessions for the week.


Trusty has a pink dragon friend, I call her Zuul

– Ubuntu Documentation Team Roundtable –

I spent a considerable amount of time with the Ubuntu Documentation team this past cycle, so I was really proud that several of us could get together to have a session and outline what we need to do in the next 6 months.

The focus was primarily on onboarding new contributors. It’s clear that there are portions of our process documentation that still need clean-up, and there remains some confusion in the community over what exactly we have for documentation and the focus of each. Defining those more succinctly in all our resources is important, but for reference…

Ubuntu Desktop Guide

  • Managed in bzr on launchpad, lp:ubuntu-docs
  • Written in Mallard
  • Official and ships with the desktop
  • Committed to updating for every release
  • Lives at help.ubuntu.com/$release-number/ubuntu-help/

Ubuntu Server Guide

  • Managed in bzr on launchpad, lp:serverguide
  • Written in DocBook
  • Official and is published as html and PDF
  • Committed to updating for every release
  • Lives at help.ubuntu.com/$release-number/serverguide/
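
If you want a local copy of either bzr-managed guide to poke at, branching is quick (a minimal sketch, assuming bzr is installed; the branch names are the ones listed above):

bzr branch lp:ubuntu-docs    # Ubuntu Desktop Guide sources (Mallard)
bzr branch lp:serverguide    # Ubuntu Server Guide sources (DocBook)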

Community Help Wiki

  • A MoinMoin wiki, anyone can edit
  • Not strictly versioned, no solid commitment to updating per release
  • Lives at help.ubuntu.com/community/

Then we have flavor documentation. Xubuntu and Kubuntu manage shipped documentation in DocBook.

Oh there’s also this thing called wiki.ubuntu.com that we should only be using for notes related to Ubuntu teams, not documentation. And then there is the Ubuntu Manual which is a completely different project.

All clear? No more confusion? If only it were that easy :) We need some clicky buttons or something on our DocumentationTeam wiki page to make this all easier on the brain.

We came out of the session with several action items for continuing to improve things for new contributors.

Video: http://www.youtube.com/watch?v=Vb1AIVAsGkE
IRC Log: /2013/11/19/%23ubuntu-uds-community-1.html#t16:17
Notes: uds-1311-community-1311-docteam-roundtable.txt
Blueprint: community-1311-docteam-roundtable

– LoCo projects –

I was really excited about this session. There are always “tips” and encouragement going around for LoCo events, but many of us still spend time putting together packs of materials for things like Global Jams (as I did in September last year for our QA Jam), writing presentations for each new release to present at the local LUG (how many of us are doing this same work every cycle?) and more. It would be great if there were defined projects with materials, instructions and desired outcomes that teams could use to take some of the work out of planning events. And so it shall be! Stephen Michael Kellat of Ubuntu Ohio and the LoCo Council is now working with David Planella to begin putting this project of projects together.

Video: http://www.youtube.com/watch?v=e99_s2rWJbk
IRC Log: /2013/11/19/%23ubuntu-uds-community-2.html#t18:02
Notes: uds-1311-community-1311-loco-projects.txt
Blueprint: community-1311-loco-projects


Stephen Michael Kellat talks about LoCo Projects!

– Ubuntu Women Trusty Goals –

I already wrote about this over on the Ubuntu Women blog, so I won’t repeat myself here; visit: Ubuntu Women at vUDS 1311 session summary

Video: http://www.youtube.com/watch?v=AS22FRgrKe0
IRC Log: /2013/11/20/%23ubuntu-uds-community-1.html#t15:00
Notes: uds-1311-community-1311-ubuntu-women.txt
Blueprint: community-1311-ubuntu-women

– Community IRC Workshops and Classrooms for Trusty –

In spite of the rise of Ubuntu On-Air, my heart still belongs to text and IRC-based sessions in Ubuntu Classroom. In this session Daniel Holbach and I talked through some of the events we had planned for the cycle and lamented the inability to get a timely Ubuntu Open Week out the door for Saucy. We sketched out some plans based on our own schedules and now each have a list of folks to contact to firm up the schedule for our events. I’ve also taken some action items to follow up with teams who I hope will host sessions this cycle, including QA and Documentation.

I did land on a proposed date for Ubuntu User Days though: Saturday, January 25th 2014

Ubuntu User Days

Video: http://www.youtube.com/watch?v=T_Eyhu6JDuo
Notes: uds-1311-community-1311-classroom.txt
Blueprint: community-1311-classroom

Unfortunately I slept through the Community Council session due to a scheduling snafu; I could have sworn it was scheduled for later! But you can see what my fellow Community Council members Daniel Holbach, Laura Czajkowski, Elfy, Michael Hall, Scott Ritchie and Mark Shuttleworth got up to by checking out the video here: Community Council meeting


I watched the CC session on my TV too

To wrap up vUDS, Jono met with track leads to present results from each of the tracks. It gives a nice overview of the whole summit, check it out here: UDS Nov 2013 – Summaries

All the videos from the summit are available by browsing the schedule here. Click on the title of the session you want to watch; the videos are youtube videos embedded in the page, with links on the page to notes and blueprints.

This is the second virtual UDS I’ve attended, the first being vUDS 1305, which took place at the same time the in-person UDS would have. As someone who had the opportunity to attend the physical summits, I still find these virtual summits greatly lacking. Many folks who used to go don’t take the time off of work for them anymore (myself included), so we only target the small subset of sessions we specifically care about rather than the ones we may have otherwise wandered into. I’ve also found that attendance in the community sessions I was in was significantly lower than at any sessions we had at a physical UDS, probably due to the loss of the “wander in if it looks interesting since I’m here already” effect. The Ubuntu Women session is one which has perhaps suffered the most; several of our ideas over the years came from women who had never heard of us but happened to be at the summit and joined our session to offer new ideas and perspectives. So for the sessions I was in, these virtual UDSes have only managed to attract a subset of existing contributors who could attend at the scheduled time, and as a result they felt like just any other team meeting. Sadly, I don’t feel inspired following these new UDSes; instead I feel “wow, my to do list is very long, and I’m sick of meetings.”

That said, I understand Canonical is doing the best they can with their resources so I’ve done my best to take what value I can from this new format. It was great to see the schedule firmed up over a week in advance this time so I was able to adjust my work schedule accordingly. I’m also happy that they made it easier to join hangouts, as in the past it seemed like you had to scramble at the beginning of the session and know who to talk to in order to be a part of the video portion. I had no trouble submitting my blueprints this time around and found they had landed on the schedule through no actions of my own, hooray! Having recordings of every session has also been valuable, as in the past only a handful of sessions were recorded during each time slot and it was always somewhat unclear to attendees whether their session would be one of those select few or what the rationale was behind what got recorded or not.

Oh, and with virtual UDS we can bring our cats!

You may notice that popey did too, and I saw one walk behind Elfy in the Community Council session!

Taming Lubuntu on my PowerBook G4

About a year ago I adopted a MacBook Pro and a PowerBook G4. The MacBook Pro now happily runs OSX, when we turn it on (mostly MJ uses it for photo processing). In spite of being quite the nice machine for 2004, the poor PowerBook G4 had long been abandoned by Apple due to its PPC nature, and this made me quite sad. My plan had been to load it up with Lubuntu, use it here and there and help out with ISO testing. Then 2013 happened. New job! Wedding! I was much busier than anticipated, and after loading up Lubuntu 12.04 on the PowerBook I didn’t have a whole lot of time to spend with it.

This week I pulled my PowerBook G4 off the shelf and loaded up Lubuntu 13.10. I quickly learned that it’s one of the more finicky PowerBooks when it comes to sound, and my initial cleverness during install led me to have some graphical pain. Here’s what I had to fix:

First up, when I loaded up the Lubuntu LiveCD the graphics were a mess. The installer advised that if problems existed you could try passing the LiveCD a video option of “video=ofonly”, which I did, and then happily ran the installer. Unfortunately this later caused my installed system to lock up pretty quickly after booting it. Sad. I got my clue to what I should be using for video via this page. I made a quick change to /etc/yaboot.conf to swap out the video option:

append="quiet splash video=radeonfb:1024x768-32@60"

Ran `sudo ybin` and rebooted. Voila! No more crashes.
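
For context, here’s roughly what the relevant stanza in /etc/yaboot.conf looks like with that option in place (a sketch only; the kernel, initrd and root paths are illustrative and will differ on your install):

image=/boot/vmlinux
        label=Linux
        root=/dev/hda4
        read-only
        initrd=/boot/initrd.img
        append="quiet splash video=radeonfb:1024x768-32@60"

(Editing /etc/yaboot.conf alone isn’t enough; the `sudo ybin` run is what actually writes the change out to the bootstrap partition.)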

Now wifi.

elizabeth@r2g4:~$ lspci | grep Network
0001:10:12.0 Network controller: Broadcom Corporation BCM4306 802.11b/g Wireless LAN Controller (rev 03)

That’s a job for the firmware-b43-installer package. Easy enough. I now had wifi!
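
For anyone following along, the install is a one-liner (you’ll want a wired connection or the .deb on hand first, since the wifi obviously isn’t up yet):

sudo apt-get install firmware-b43-installer

The package fetches and extracts the proprietary Broadcom firmware; after that, reloading the b43 module (or just rebooting) should bring the interface up.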

The trickiest part of this install was audio. It turns out that most PowerBook G4s have sound working out of the box, but I got unlucky.

elizabeth@r2g4:~$ alsamixer
cannot open mixer: No such file or directory

And no /proc/asound at all. That won’t do!

Back to the wiki, PowerPC FAQ: Why do I have no sound?

After a whole lot of trial and error, I learned that this bug was indeed my culprit. I had to remove the snd_aoa entries from /etc/modprobe.d/blacklist.local.conf and then edit: /etc/modules

Removing: snd_powerbook

And adding: snd_aoa_i2sbus

I manually rmmod’d and modprobe’d the modules to test, and was finally able to run alsamixer to adjust the volume settings. Then a reboot to confirm all was well. Hooray, sound!
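
Roughly, the whole fix boiled down to something like this (a sketch of the edits described above, using the module names as given; your blacklist file path may differ):

# stop blacklisting the snd_aoa driver stack
sudo sed -i '/snd_aoa/d' /etc/modprobe.d/blacklist.local.conf
# swap the module that gets loaded at boot
sudo sed -i 's/^snd_powerbook$/snd_aoa_i2sbus/' /etc/modules
# load it now to test without rebooting
sudo modprobe snd_aoa_i2sbus
alsamixer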

I was pretty happy at this point, but I had to go a bit further: the backlight for the keyboard wasn’t working! The island-style keyboards are the bane of my existence; I hate that all laptops have gone to them. But this old PowerBook G4? Beautiful keyboard. Aside from the awkward placement of the alt key, it’s a pleasure to type on.

Backlight functionality would be the icing on the cake; fortunately for me it was as easy as:

sudo modprobe i2c-dev

This made my Mac hotkeys and the keyboard backlight work. Winner! I added the module to /etc/modules for persistence.
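
Making that stick across reboots is just a matter of appending the module name to /etc/modules, for example:

echo i2c-dev | sudo tee -a /etc/modules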

The system isn’t exactly zippy, but with a gig of RAM it’s quite usable, particularly for writing. I’m happy to learn that the sensitivity problems I encountered with the touchpad in 12.04 have been resolved, and I’m looking forward to loading it up with tests this cycle so maybe less manual labor will be required for the 14.04 release.

Trusty attends first Ubuntu Hour

For each Ubuntu release I spend a little time finding a toy or other representation of the codename animal to use at booths, Ubuntu Hours and other events. See my previous posts about critters for K, L, M, N and O, Quetzals and Pangolins, Raring and Saucy Salamander.

Trusty Tahr was tricky and took some tracking. In the end I determined that it would be unlikely to find an actual tahr toy; since they are a type of mountain goat, I decided to find a stuffed mountain goat that looks like the critter I see on the Tahr wikipedia page:

I ended up with the Hansa 15″ mountain goat.

And tonight she visited her first Ubuntu Hour in San Francisco! Joining Grant, James, Eric and myself.

Holiday cards 2013!

Every year I send out a big batch of wintertime holiday cards to friends and acquaintances online.

Reading this? That means you! Even if you’re outside the United States!

Just drop me an email at lyz@princessleia.com with your postal address, and please put “Holiday Card” in the subject so I can filter it appropriately. Please do this even if I’ve sent you a card in the past; I won’t be reusing the list from last year.

Typical disclaimer: I’m not religious and the cards will be secular in nature.

Hong Kong Central and the Big Buddha

With the summit all wrapped up, I had Saturday and much of Sunday to do a bit more exploring in Hong Kong before catching my flight home on Sunday night.

Saturday morning I met up with Anita Kuno to do some eating and shopping in Hong Kong Central. We retraced some of the steps I took on Monday, but this time stopped at more shops to pick up gifts. I also enjoyed an early lunch at Tsim Chai Kee Noodle, where I went with the King Prawn Wonton Noodle, yum!

Saturday afternoon I headed back to the hotel and got some much needed down time. Caught up on email, read a bit (finished both Flowers for Algernon and The NeverEnding Story over the weekend!) and at 7PM headed down to the hotel spa for a manicure.

Sunday it rained, but as it was my last day in Hong Kong I wasn’t going to let that keep me in! I spent the afternoon visiting the Tian Tan Buddha (locally known as the “Big Buddha”). The hotel had a convenient shuttle that went right to the station where the cable car starts to take you up to the mountain, so I picked that up around 1PM after checking out of the hotel.

I had been warned that the wait would be long, so I brought along my NOOK to read during the 1+ hour wait in line for the 35 minute cable car ride to the Buddha. Fortunately it was covered, so we didn’t get rained on. Then I was ready to go! And I picked up the $13 (USD) tourist photo of me in the car:

In spite of the rain and fog, the ride up was beautiful. From the car you first get a view of the airport, but once over the first hill it’s all beautiful forest and ocean views. In several spots you could also see the optional walking trail that can be taken (one summit attendee said it took about 3 hours and a lot of perseverance to take this hike).

Once at the Buddha village you walk through a number of shops and eateries before getting to the area where there are entrances to the Monastery and steps to the Big Buddha. The drizzle, heat and 240-step climb up to the Buddha were worth the trouble: you get spectacular views again and get up close to the truly giant Buddha at the top of the hill.

I did some more gift shopping in the village before I left. The cable car ride back down to the station had a shorter wait (only about 40 minutes) and by then the rain had picked up a bit so it really was time to go. I got back to the hotel around 6PM, in time to head to the airport.

In all, a great final two days in Hong Kong, even if I am paying for it this week with less time to adjust to my home time zone before going back to work. Which also explains why I’m wide awake writing this after midnight my time.

Now that I’m home I got photos from my whole trip uploaded here: http://www.flickr.com/photos/pleia2/sets/72157637585878834/

OpenStack Design Summit Day 4

First off this morning I had an enjoyable “Monty’s team” HP breakfast at the Marriott. There was a bit less fog today than on previous days, so throughout the day we were able to head into the dev lounge and enjoy a lovely view!

– Python3 and Pypy support –

The first session I attended of the day was about continuing and expanding support for Python3 in the infrastructure. There was a review of the current status and then a walk through some of the current roadblocks. From there the issue of PyPy support was discussed; the action items were largely to continue tracking projects with blockers this cycle, and new projects should probably start with tests enabled for PyPy. It was also noted that Python 2.6 support will continue through this cycle to prevent breaking RHEL6 and Debian Wheezy.

Copy of notes from this session available here: IcehousePypyPy3.txt

– Release artifact management –

Currently we release libraries to Pypi, and servers as tarballs to tarballs.openstack.org and Launchpad at release and milestone time. This session discussed the possibility of leveraging Python Wheel to also make pre-release library versions available. Discussion then moved on to releases of the servers (not just libraries) and how version numbering should work. It was nice to also see discussion about doing a better job of handling tagged release announcements and release notes.

Copy of notes from this session available here: IcehouseReleaseArtifacts.txt

– Keystone needs on the QA Pipeline –

The session walked through some of what is being tested in Keystone (either directly or as a result of other tests) and how this needs to be formally expanded with different types of tests in Tempest. It was very interesting for me to also learn about the Tempest Field Guides, which define the different types of tests (Scenario, API, Unit, CLI, etc) and help shape these discussions.

Copy of notes from this session available here: icehouse-summit-qa-keystone.txt

– Coverage analysis tooling –

The team isn’t happy with the current tooling in place for code testing coverage in nova, due to the fundamental way it runs and the fact that it regularly fails in production (action item to look into this). So much of the discussion was around the best policy for improving coverage testing and trying to figure out who has the interest and time available to move it forward.

Copy of notes from this session available here: icehouse-summit-qa-coverage-tooling.txt

Midday I ended up having a great lunch with Derek Higgins and Dan Prince to discuss the TripleO Test Framework (toci) and the status of setting up the TripleO Cloud(s!).

– Negative Testing Strategy –

Negative testing confirms that the failure notice/response is correct. The session was about policy around what should be accepted in terms of negative testing, and it was decided that there should essentially be a moratorium on them unless they are of high value, and that many of them should probably move to unit tests.

Copy of notes from this session available here: icehouse-summit-qa-negative-tests.txt

– Enhancing Debugability in the Gate –

As the session name indicates, this session focused on ways to improve the tools around debugging failures in the gate as our testing continues to grow, parallelize and get more complicated. The first part of the discussion centered around coming up with guidelines for harmonizing log policy between projects, cross-service IDs, and how to make it easier for developers to replicate the gate locally (a devstack-gate README exists; make it easier? More discoverable? Add a note to failures with a link?).

Copy of notes from this session available here: icehouse-summit-qa-gate-debugability.txt

– Icehouse release schedule & coordination –

In this session it was decided that Icehouse will be released on April 17th, 3 weeks before the J summit in Atlanta. Milestones and RC timing were then discussed and decided upon, while also considering the concerns of translators. The session also covered the format of the weekly release meetings and the release tooling and process.

Copy of notes from this session available here: IcehouseReleaseSchedule.txt

– The future of Design Summits –

This was a feedback session where we discussed room size and shape, food and other parts of the summit, with consensus that most of it was really great. We also discussed staggering the next event slightly so that the conference runs Monday-Thursday and the Design Summit Tuesday-Friday, conversations which I’m sure will continue in the coming weeks as plans for Atlanta and Paris are worked out.

Copy of notes from this session available here: IcehouseFutureDesignSummits.txt (very detailed! worth a browse)

And then it was over! In celebration of the end of the summit, National Geographic’s picture of the day today ;) Fireworks in China

I met up with Anita, Ghe and Clark for a quick dinner at the hotel’s Chinese restaurant, where I had some fried eel and for the first time got to enjoy some sea cucumber, and enjoy it I did!

Tomorrow I’ll be meeting up with Anita to explore Hong Kong central a bit more before her flight Saturday night. I fly home on Sunday.

OpenStack Design Summit Day 3

Thursday morning! No keynotes today so we went directly into design sessions.

– Integration Testing for Horizon –

Currently the Horizon team depends on manual testing of its integration with the APIs; this session sought to address this and other tests.

A lot of great notes were taken during this session, available here: icehouse-summit-Integration-Testing-for-Horizon.txt

– Ceilometer integration testing –

Now over to visit the Ceilometer team! The project has matured to a point where they want to start putting together Tempest tests. The session covered current blockers, including current issues with using devstack and the database backends. On the Infrastructure side Clark Boylan was able to recommend use of the experimental queue to get tests going slowly and shake out issues as the tests evolve. The session wrapped up with discussion about options for stress testing, but that’s a more long term goal.

Copy of notes from this session available here: icehouse-summit-ceilometer-integration-tests.txt

During the 10:30 break several of us got together for a GPG keysigning organized by Jeremy Stanley in order to begin to establish a web of trust with OpenStack contributors. I’m happy to say that I was able to check the IDs and keys of 19 folks at the keysigning.

– Translation tools and scripts –

Transifex is currently used to handle translations of documentation, but it recently became closed source and so is no longer an appropriate option. Aside from the typical concerns (vendor lock-in, lack of control, implied promotion of a proprietary tool) there was also concern that over time the service itself will continue to degrade for open source projects, leaving us scrambling for alternatives. In this session Ying Chun Guo (Daisy) presented her analysis of Pootle, TranslateWiki and Zanata as alternatives. It was also noteworthy that the TranslateWiki community is actively interested in helping make their platform useful for OpenStack, so ease of contribution to projects is an important consideration.

A public copy of the analysis spreadsheet is available on Google Docs: Translation Management Comparison. I’ve also downloaded a copy in .ods format (Translation_Management_Comparison.ods).

Copy of notes from this session available here: icehouse-summit-translation-tools-and-scripts.txt

– elastic-recheck –

Back in September the elastic-recheck service was launched (see post from Joe Gordon). This has been a very successful project and this session was set up to discuss some of the next steps forward, including improved notification of when logs are ready, improvements to the dashboard, adding new search queries, getting metrics into graphite, documentation and possible future use of Bayesian analysis.

Copy of notes from this session available here: icehouse-summit-elastic-recheck.txt

I had lunch with Anita Kuno and our infrastructure resident Salt guru David Boucha, after which I did a final loop around the expo.

– Testing Rolling Upgrades –

I’d write a lovely, concise description of this session but I don’t actually know that it’s possible because the matrix of upgrades being addressed is a bit tricky to grok. Maybe with pictures. Or maybe the session notes are good enough! Check them here: icehouse-summit-qa-rolling-upgrades.txt

– Grenade Update and Futures –

This session outlined some of the future plans for Grenade, the test harness used to exercise the upgrade process between OpenStack releases. Ongoing work includes getting Grenade set up to test Havana to trunk (it’s still Grizzly to trunk at the moment) and there was some discussion about how to make that less painful now that it’s gating. The requirements discussed in the previous session were acknowledged and discussed at length, including variables that may be used to do service-specific testing. The session wrapped up by talking about the need to test other projects; neutron, ceilometer and heat were mentioned specifically.

Copy of notes from this session available here: icehouse-summit-qa-grenade.txt

– Enablement for multiple nodes test –

The discussion in this session centered around how to enable a multi-node test environment to test services that require it (VM HA, migrate, evacuate, etc). First there was some talk about the non-public cloud hardware available (ie the TripleO cloud), and then it migrated into how we can use the public clouds for this, focusing on how networking would work between the cloud VMs (vpn) and then how the testing infrastructure would allocate/assign sets of machines (gearman? Heat?). Then there was discussion about how to handle logs that are generated from the multiple nodes.

Copy of notes from this session available here: icehouse-summit-qa-multi-node.txt

It was then on to the final Infrastructure sessions of the week.

– Requirements project redux –

OpenStack now has a global Requirements project, so it’s clear to everyone which Python dependencies are required. The session reviewed some of the current pain points and plans to improve it. Issues this past cycle included poor communication between projects when requirements change, dependency issues, and the need to develop a freezing policy for requirements.

Copy of notes from this session available here: icehouse-summit-requirements-project-redux.txt

– Preemptively Integrate the Universe –

Fun session name for the last session of the day! The problem currently is that upstream Python package updates sometimes break things. The proposal in this session was to do a better job of preemptively making sure that changes in upstream projects we are friendly with are not going to break us, and automatically notifying them when a proposed change does.

Copy of notes from this session available here: icehouse-summit-preemptively-integrate-the.txt

Tonight is a quiet night in for me, I’ll be hitting the hotel gym before grabbing some dinner and turning in to relax before the last day tomorrow.

OpenStack Design Summit Day 2

Day 2 of the OpenStack summit here in Hong Kong began with a series of really great keynotes. First up were three major Chinese companies, iQIYI (online video), Qihoo 360 (Internet platform company) and Ctrip (travel services), talking about how they all use OpenStack in their companies (video here). We also learned several OpenStack statistics, including that there are more Active Technical Contributors (ATCs) in Beijing than in any other city in the world, and that Shanghai is pretty big too. This introduction also fed a theme of passion for Open Source and OpenStack that was evident throughout the keynotes that followed.

I was then really impressed with the Red Hat keynote, particularly Mark McLoughlin’s segment. Having been actively working on Open Source for over 10 years myself, I found his words about the success we’ve had from Linux to OpenStack really resonated with me. For years all of us passionate Open Source folks have been talking about (and giving presentations on) the benefits of open solutions, so seeing the success and growth today really does feel like validation: we had it right, and that feeds us to continue (literally, many of us are getting paid for this now; I love open source and used to do it all for free). He also talked about TripleO, yeah! (video here)

Next up was Monty Taylor’s keynote for HP where he got to announce the formal release of Horizon Dashboard for use with HP Cloud at horizon.hpcloud.com. It was great to hear Monty echo some of the words of Mark when discussing the success of OpenStack and then diving into the hybrid testing infrastructure we now have between the public HP Cloud and Rackspace testing infrastructures and the new “private” TripleO clouds we’re deploying (admittedly, of course I enjoyed this, it’s what I’m working on!). He also discussed much of what customers had been asking for when approaching OpenStack, including questions around openness (is it really open?), maturity, security, complexity and upgrade strategies. (video here)

– Neutron QA and Testing –

Neutron is tested much less than other portions of OpenStack and the team has recognized that this is a problem, so the session began by discussing the current state of testing and the limitations they’ve run into. One of the concerns discussed early in the session was recruiting contributors to work on testing. They then dove into discussing some of the specific test cases that are failing in order to find solutions and assign tasks, with in-depth discussion of Tenant Isolation & Parallel Testing, which is one of their major projects. There are also several test concerns that there wasn’t time to address and will have to be tackled in a later meeting, including: Full Tempest Test Runs, Grenade Support, API Tests and Scenario Tests.

Copy of notes from this session available here: icehouse-summit-qa-neutron.txt

It’s interesting to learn in these QA sessions how many companies do their own testing. It seems that this is partially an artifact of Open Source projects historically being poor at public automation testing and largely being beholden to companies to do this behind the scenes and submit bugs and patches. I’m sure there will always be needs internally for companies to run their own testing infrastructures, but I do look forward to a time when more companies become interested in testing the common things in the shared community space.

– Tempest Policy in Icehouse –

Retrospective of successes and failures from work this past cycle. Kicked off by mentioning that they have now documented the Tempest Design Principles so all contributors are on the same page, and a suggestion was made to add time budget and scope of negative tests to the principles. Successes included the large ops and parallel testing, and usage of elastic search to help bug triaging. The weaker parts included use of and follow-up with (or not) blueprints, onboarding new contributors (need more documentation!) and prioritizing reviews (perhaps leverage reviewday more) and in general encouraging all reviewers.

Copy of notes from this session available here: icehouse-summit-qa-tempest-policy.txt

After lunch I did some wandering around the expo hall where I had a wonderful chat with Stephen Spector at the HP booth. I also got to chat with Robbie Williamson of Canonical and totally cheated on my Ubuntu ice cream by just asking him for banana with brownies instead of checking out their juju demo.

– Moving Trove integration tests to Tempest –

Trove is currently being tested independently of the core OpenStack CI system and they’ve been working to bring it in, so this session walked through the plans to do this. One step identified was moving the Trove diskimage-elements into a different repo; the pros and cons of adding them to tripleo-image-elements were discussed, and the pros won. Built images from the job will then be pushed to tarballs.openstack.org for caching. The session then discussed more of what Trove integration testing does today and what needs to be done to update Tempest to run the tests on the devstack-gate using said cached instances.

Copy of notes from this session available here: TroveTempestTesting.txt

– Tempest Stress Test – Overview and Outlook –

The overall goal of Tempest stress testing is to find race conditions and simulate real-life load. The session walked through the current status of the tests and began outlining some of the steps to move forward, including defining and writing more stress tests. Beyond that, using stress tests in the gate was also reviewed; the time tests take (can valuable tests be done in under 45 minutes?) was considered and some of the timing-related pain points were noted. There was also discussion around scenario tests and enhancing the documentation to include examples of unit/scenario tests and defining what makes a good test, to make development of stress tests more straightforward.

Copy of notes from this session available here: icehouse-summit-qa-stress-tests.txt

– Parallel tempest moving forward –

Parallel testing in Tempest currently exists and speed of testing has greatly improved as a result, hooray! So this session was a review of some of the improvements needed to move forward. Topics included improving reliability, further speed improvements (first step: increase number of test runners. Eliminate setupClass? Distributed testing?) and testr UI vs Tempest UI.

Copy of notes from this session available here: icehouse-summit-qa-parallel.txt

– Zuul job runners and log management –

The first part of this session discussed log management for the logs produced from test runs, continuing an infrastructure mailing list thread from October: [OpenStack-Infra] Log storage/serving.

Next up: We use a limited number of features from Jenkins these days due to our workflow, so there has been discussion about writing a new/different job runner for Zuul that has several requirements:

  • Distributed (no centralized ‘master’ architecture)
  • Secure (should be able to run untrusted jobs)
  • Should be able to publish artifacts appropriate to a job’s security context
  • Lightweight (should do one job, and do it simply)

Copy of notes from this session available here: icehouse-summit-zuul-job-runners-and-log.txt

– More Salt in Infra –

Much of the OpenStack Infrastructure is currently managed by Puppet, but there are some things, like event-based dependencies, that are non-trivial to do in Puppet but which Salt has built-in support for. The primary example that inspired this was manage_projects.py, which tends to have race/failure problems due to event dependencies.

Copy of notes from this session available here: icehouse-summit-more-salt-in-infra.txt

My evening wrapped up by heading down to Kowloon to enjoy dinner with several of my Infrastructure colleagues from HP, Wikimedia and the OpenStack Foundation.

OpenStack Design Summit Day 1

Today, Tuesday here in Hong Kong, the OpenStack Summit began!

It kicked off with a beautiful and colorful performance of the Lion dance, followed by some words of welcome from Daniel Lai, CIO of the Hong Kong Office of the Government (video of both here).

Then we had the keynotes. Jonathan Bryce, Executive Director of the OpenStack Foundation, began with an introduction to the event, noting that there were over 3000 attendees from 50 countries and satisfying everyone’s curiosity by announcing that the next summit would take place in Atlanta, and the one following that in Paris! He then welcomed a series of presenters from Shutterstock, DigitalFilm Tree and Concur to each talk about the way that they use OpenStack components in their companies (video here). These were followed up by a great keynote by Mark Shuttleworth, founder of Canonical and Ubuntu (video here), and a keynote from IBM (video here).

Directly following the keynotes the Design Summit sessions began. I spent much of my day in TripleO sessions.

– TripleO: Icehouse scaling design & deployment scaling/topologies –

This session included a couple of blueprints, starting off with one discussing the scaling design of the current TripleO: what is not automated (tailing of log files, etc), what is automated but slow (bootstrapping, avoidance of race conditions, etc), where we hit scaling/perf limits (network, disk i/o, database, etc), and measuring and tracking tools (measure latency, collectd+graphite on the undercloud, logstash+elastic search in the undercloud). From there the second half of the session discussed the needs and possibilities regarding a Heat template repository for Tuskar.

Copy of notes from this session available here: tripleo-icehouse-scaling-design.txt and tripleo-deployment-scaling-topologies.txt

– TripleO: HA/production configuration –

This session provided a venue for reviewing the components of TripleO and determining what needs to be HA, including: rabbit, qpid, db, api instances, glance store, heat-engine, neutron, nova-compute & scheduler & conductor, cinder-volume, horizon. Once defined, attendees were able to discuss the targeted solution for HA for each component which were captured in the session notes.

Copy of notes from this session available here: tripleo-icehouse-ha-production-configuration.txt

– TripleO: stable branch support and updates futures –

The discussion during this session centered around whether the TripleO project should maintain stable branches so TripleO can be deployed using non-trunk OpenStack and what components would need to be attended to to make this happen. Consensus seemed to be that this should be a goal with a support term similar to the rest of OpenStack, but more discussions will come when the project itself has grown up a bit.

Copy of notes from this session available here: icehouse-updates-stablebranches.txt

– TripleO: CI and CD automation –

This was the last TripleO session I attended. It began with an offer from Red Hat to provide a second physical cluster to complement the current HP rack that we’re using for TripleO testing. Consensus was that this new rack would be identical to the current one in case one of the providers has problems or goes away, and it was noted that having multiple “TripleO Clouds” was essential for gating. Discussion then went into what should be running test-wise and timelines for when we expect each step to be done. Then Robert Collins did a quick walkthrough of the tripleo-test-cluster document that steps through our plans for putting TripleO into the Infrastructure CI system. This is my current focus in TripleO and I have a lot of work to do when I get home!

Copy of notes from this session available here: icehouse-tripleo-deployment-ci-and-cd-automation.txt

– Publishing translated documentation –

I headed over to this session due to my curiosity regarding how translations are handled and the OpenStack Infrastructure’s role in making sure the translations teams have the resources they need. The focus of this session was the formalization of getting translations published on the official OpenStack docs page and when these should be published (only when translations reach 100%? If so, should a partially translated “in progress” version be published on a dev server?). There were some Infrastructure pain points that Clark Boylan was able to work with them on after the session.

Copy of notes from this session available here: icehouse-doc-translation.txt

– Infrastructure: Storyboard – Basic concepts and next steps –

Thierry Carrez led this session about Storyboard, a bug and task tracking Django application he created a Proof of Concept for to replace Launchpad.net, which we currently use. He did a thorough review of the current limitations of Launchpad as justification for replacing it, and quickly mentioned the pool of other bug trackers that were reviewed. From there he broke into a workflow discussion in Storyboard and wrapped up the session by outlining some priorities to move it forward. I’m really excited about this project; while it’s quite the cliche to write your own bug and task tracker, and I find Launchpad to be a pretty good bug tracker (as these things go), the pain points of slowness and lack of active feature development will only continue to get worse as time goes on, so working on a solution now is important.

Copy of notes from this session available here: icehouse-summit-storyboard-basic-concepts-and-next.txt

– Infrastructure: External replications and hooks –

Last session of the day! The point of this session was to discuss our git replication strategy, and specifically our current policy of mirroring our repositories to GitHub. Concerns centered around casual developers getting confused or put off by our mirror there (not realizing that it’s just a mirror, not something you can do pull requests against), the benefits of discoverability and workflow for contributors used to GitHub even if they have to use Gerrit for actual submission of code, the implicit “blessing” of GitHub that mirroring our repositories there conveys (we don’t mirror to other 3rd party services, is this fair?) and the ingrained use of the GitHub URLs by many projects. The most practical concern with this replication was the amount of work it adds for the Infrastructure team when creating new projects if GitHub is misbehaving.

Consensus was to keep the GitHub mirrors, but to do more work to replace references to cloning repos, etc., so that they point to our new (as of August) git://git.openstack.org and https://git.openstack.org/cgit/ addresses.

Copy of notes from this session available here: icehouse-summit-external-replications-and-hooks.txt

And before I wrap up, Etherpad held up today! Clark did work this past month to deploy a new instance that closed several bugs related to our past deployment. Throughout sessions today we kept an eye on how it was doing; the graph from Cacti very clearly showed when sessions were happening (and when we were having lunch!):