Hong Kong Central and the Big Buddha

With the summit all wrapped up, I had Saturday and much of Sunday to do a bit more exploring in Hong Kong before catching my flight home on Sunday night.

Saturday morning I met up with Anita Kuno to do some eating and shopping in Hong Kong Central. We retraced some of the steps I took on Monday, but this time stopped at more shops to pick up gifts. I also enjoyed an early lunch at Tsim Chai Kee Noodle, where I went with the King Prawn Wonton Noodle, yum!

Saturday afternoon I headed back to the hotel and got some much-needed down time. I caught up on email, read a bit (finished both Flowers for Algernon and The NeverEnding Story over the weekend!) and at 7PM headed down to the hotel spa for a manicure.

Sunday it rained, but as it was my last day in Hong Kong I wasn’t going to let that keep me in! I spent the afternoon visiting the Tian Tan Buddha (locally known as the “Big Buddha”). The hotel had a convenient shuttle that went right to the station where the cable car up the mountain starts, so I picked that up around 1PM after checking out of the hotel.

I had been warned that the wait would be long, so I brought along my NOOK to read during the 1+ hour wait in line for the 35-minute cable car ride to the Buddha. Fortunately the line was covered, so we didn’t get rained on. Then I was ready to go! And I picked up the $13 (USD) tourist photo of me in the car:

In spite of the rain and fog, the ride up was beautiful. From the car you first get a view of the airport, but once over the first hill it’s all beautiful forest and ocean views. In several spots you could also see the optional walking trail (one summit attendee said it took about 3 hours and a lot of perseverance to do this hike).

Once at the Buddha village you walk through a number of shops and eateries before reaching the area with the entrances to the Monastery and the steps to the Big Buddha. The drizzle, heat and 240-step climb up to the Buddha were worth the trouble: you get spectacular views again and can get up close to the truly giant Buddha at the top of the hill.

I did some more gift shopping in the village before I left. The cable car ride back down to the station had a shorter wait (only about 40 minutes) and by then the rain had picked up a bit so it really was time to go. I got back to the hotel around 6PM, in time to head to the airport.

In all, a great final two days in Hong Kong, even if I’m paying for it this week with less time to adjust to my home time zone before going back to work. Which also explains why I’m wide awake writing this after midnight my time.

Now that I’m home, I’ve uploaded photos from my whole trip here: http://www.flickr.com/photos/pleia2/sets/72157637585878834/

OpenStack Design Summit Day 4

First off this morning I had an enjoyable “Monty’s team” HP breakfast at the Marriott. There was a bit less fog today than on previous days, so throughout the day we were able to head into the dev lounge and enjoy a lovely view!

– Python3 and PyPy support –

The first session I attended today covered continuing and expanding support for Python3 in the infrastructure. There was a review of the current status and then a walk through some of the current roadblocks. From there the issue of PyPy support was discussed; the main action items were to continue tracking projects with blockers this cycle and to have new projects start with PyPy tests enabled. It was also noted that Python 2.6 support will continue through this cycle to avoid breaking RHEL6 and Debian Wheezy.
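
For those unfamiliar with how this looks in practice, enabling Python 3 or PyPy testing on a project is mostly a tox configuration change. Here is a minimal sketch, with a hypothetical environment list and a placeholder test command rather than any particular project’s actual config:

    [tox]
    # py26 stays this cycle for RHEL6 / Debian Wheezy; py33 and pypy are the newer targets
    envlist = py26,py27,py33,pypy,pep8

    [testenv]
    deps = -r{toxinidir}/requirements.txt
           -r{toxinidir}/test-requirements.txt
    # placeholder test command; real projects wire this up to their own runner
    commands = nosetests {posargs}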

Copy of notes from this session available here: IcehousePypyPy3.txt

– Release artifact management –

Currently we release libraries to PyPI, and servers are released at milestones and releases as tarballs to tarballs.openstack.org and Launchpad. This session discussed the possibility of leveraging Python Wheel to make pre-release library versions available as well. Discussion then moved on to releases of the servers (not just libraries) and how version numbering should work. It was also nice to see discussion about doing a better job of handling tagged release announcements and release notes.
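
For anyone curious about the mechanics of the Wheel piece, building a wheel alongside the usual sdist tarball is a one-liner once the wheel package is installed. This is a generic sketch, not our actual release tooling:

    # build both a source tarball and a binary wheel into dist/
    pip install wheel
    python setup.py sdist bdist_wheel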

Copy of notes from this session available here: IcehouseReleaseArtifacts.txt

– Keystone needs on the QA Pipeline –

The session walked through some of what is being tested in Keystone (either directly or as a result of other tests) and how this needs to be formally expanded with different types of tests in Tempest. It was very interesting for me to also learn about the Tempest Field Guides, which define the different types of tests (Scenario, API, Unit, CLI, etc) and help shape these discussions.

Copy of notes from this session available here: icehouse-summit-qa-keystone.txt

– Coverage analysis tooling –

The team isn’t happy with the current tooling in place for code testing coverage in nova due to the fundamental way it runs and the fact that it regularly fails in production (there’s an action item to look into this). So much of the discussion was around the best policy for improving coverage testing and trying to figure out who has the interest and time available to move it forward.

Copy of notes from this session available here: icehouse-summit-qa-coverage-tooling.txt

Midday I ended up having a great lunch with Derek Higgins and Dan Prince to discuss the TripleO Test Framework (toci) and the status of setting up the TripleO Cloud(s!).

– Negative Testing Strategy –

Negative testing confirms that the failure notice/response is correct. The session was about policy around what should be accepted in terms of negative testing, and it was decided that there should essentially be a moratorium on new negative tests unless they are of high value, and that many of the existing ones should probably move to unit tests.
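
To illustrate what a negative test looks like (this is a generic Python sketch with a made-up client class, not actual Tempest code): you deliberately make a bad request and assert that it fails with the documented error, rather than succeeding or failing some other way.

    import unittest


    class FakeServerClient(object):
        """Stand-in client used only for this illustration."""
        def get_server(self, server_id):
            if server_id != "existing-id":
                raise LookupError("404: server %s not found" % server_id)
            return {"id": server_id}


    class TestGetServerNegative(unittest.TestCase):
        def test_get_nonexistent_server_returns_not_found(self):
            client = FakeServerClient()
            # The "negative" part: we expect this call to fail, and we check
            # that it fails with the documented error.
            self.assertRaises(LookupError, client.get_server, "no-such-id")


    if __name__ == "__main__":
        unittest.main()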

Copy of notes from this session available here: icehouse-summit-qa-negative-tests.txt

– Enhancing Debugability in the Gate –

As the session name indicates, this session focused on ways to improve the tools around debugging failures in the gate as our testing continues to grow, parallelize and get more complicated. The first part of the discussion centered around coming up with guidelines for harmonizing log policy between projects, cross-service IDs and how to make it easier for developers to replicate the gate locally (a devstack-gate README exists; make it easier? More discoverable? Add a note to failures with a link?).

Copy of notes from this session available here: icehouse-summit-qa-gate-debugability.txt

– Icehouse release schedule & coordination –

In this session it was decided that Icehouse will be released on April 17th, 3 weeks before the J summit in Atlanta. Milestones and RC timing were then discussed and decided upon, while also considering the concerns of translators. The session also covered the format of the weekly release meetings and the release tooling and process.

Copy of notes from this session available here: IcehouseReleaseSchedule.txt

– The future of Design Summits –

This was a feedback session about the summit where we discussed room size and shape, food and other aspects of the event, with consensus that most of the summit was really great. We also discussed staggering the next event slightly so that the conference runs Monday-Thursday and the Design Summit runs Tuesday-Friday, conversations which I’m sure will continue in the coming weeks as plans for Atlanta and Paris are worked out.

Copy of notes from this session available here: IcehouseFutureDesignSummits.txt (very detailed! worth a browse)

And then it was over! Fittingly, in celebration of the end of the summit, National Geographic’s picture of the day today was Fireworks in China ;)

I met up with Anita, Ghe and Clark for a quick dinner at the hotel’s Chinese restaurant, where I had some fried eel and for the first time got to enjoy some sea cucumber, and enjoy it I did!

Tomorrow I’ll be meeting up with Anita to explore Hong Kong Central a bit more before her flight Saturday night. I fly home on Sunday.

OpenStack Design Summit Day 3

Thursday morning! No keynotes today so we went directly into design sessions.

– Integration Testing for Horizon –

Currently the Horizon team depends on manual testing of its integration with the APIs; this session sought to address this and other testing needs.

A lot of great notes were taken during this session, available here: icehouse-summit-Integration-Testing-for-Horizon.txt

– Ceilometer integration testing –

Now over to visit the Ceilometer team! The project has matured to a point where they want to start putting together Tempest tests. The session covered current blockers, including issues with using devstack and the database backends. On the Infrastructure side, Clark Boylan was able to recommend use of the experimental queue to get tests going slowly and shake out issues as the tests evolve. The session wrapped up with discussion about options for stress testing, but that’s a longer-term goal.

Copy of notes from this session available here: icehouse-summit-ceilometer-integration-tests.txt

During the 10:30 break several of us got together for a GPG keysigning organized by Jeremy Stanley in order to begin to establish a web of trust with OpenStack contributors. I’m happy to say that I was able to check the IDs and keys of 19 folks at the keysigning.

– Translation tools and scripts –

Transifex is currently used to handle translations of documentation, but it recently became closed source and so is no longer an appropriate option. Aside from the typical concerns (vendor lock-in, lack of control, implied promotion of a proprietary tool) there was also concern that over time the service itself will continue to degrade for open source projects, leaving us scrambling for alternatives. In this session Ying Chun Guo (Daisy) presented her analysis of Pootle, TranslateWiki and Zanata as alternatives. It was also noteworthy that the TranslateWiki community is actively interested in helping make their platform useful for OpenStack, and ease of contribution to projects is an important consideration.

A public copy of the analysis spreadsheet is available on Google Docs: Translation Management Comparison. I’ve also downloaded a copy in .ods format (Translation_Management_Comparison.ods).

Copy of notes from this session available here: icehouse-summit-translation-tools-and-scripts.txt

– elastic-recheck –

Back in September the elastic-recheck service was launched (see the post from Joe Gordon). This has been a very successful project and this session was set up to discuss some of the next steps forward, including improved notification of when logs are ready, improvements to the dashboard, adding new search queries, getting metrics into graphite, documentation and possible future use of Bayesian analysis.
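
For context, the heart of elastic-recheck is a set of human-curated queries that map a known bug to an elasticsearch search over the test logs. A hypothetical entry looks roughly like this (the bug number and query string are made up for illustration, and the exact file layout in the project may differ):

    # hypothetical elastic-recheck query for a known intermittent gate failure
    - bug: 1234567
      query: >
        message:"Connection to neutron failed" AND filename:"console.html"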

Copy of notes from this session available here: icehouse-summit-elastic-recheck.txt

I had lunch with Anita Kuno and our infrastructure resident Salt guru David Boucha, after which I did a final loop around the expo.

– Testing Rolling Upgrades –

I’d write a lovely, concise description of this session but I don’t actually know that it’s possible because the matrix of upgrades being addressed is a bit tricky to grok. Maybe with pictures. Or maybe the session notes are good enough! Check them here: icehouse-summit-qa-rolling-upgrades.txt

– Grenade Update and Futures –

This session outlined some of the future plans for Grenade, the test harness used to exercise the upgrade process between OpenStack releases. Ongoing work includes getting Grenade set up to test Havana to trunk (it’s still Grizzly to trunk at the moment) and there was some discussion about how to make that less painful now that it’s gating. The requirements discussed in the previous session were acknowledged and discussed at length, including variables that may be used to do service-specific testing. The session wrapped up by talking about the need to test other projects; neutron, ceilometer and heat were mentioned specifically.

Copy of notes from this session available here: icehouse-summit-qa-grenade.txt

– Enablement for multiple nodes test –

The discussion in this session centered around how to enable a multi-node test environment to test services that require it (VM HA, migrate, evacuate, etc). First there was some talk about the non-public cloud hardware available (i.e. the TripleO cloud), and then the discussion moved on to how we can use the public clouds for this, focusing on how networking would work between the cloud VMs (VPN) and then how the testing infrastructure would allocate/assign sets of machines (gearman? Heat?). Then there was discussion about how to handle logs that are generated from the multiple nodes.

Copy of notes from this session available here: icehouse-summit-qa-multi-node.txt

It was then on to the final Infrastructure sessions of the week.

– Requirements project redux –

OpenStack now has a global Requirements project so it’s clear to everyone which Python dependencies are required. The session reviewed some of the current pain points and plans to improve it. Issues this past cycle included poor communication between projects when requirements change, dependency issues, and the need to develop a freezing policy for requirements.
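
The mechanics are simple: every project draws its dependency versions from one shared list rather than pinning independently, and entries are ordinary pip requirement specifiers. The lines below are illustrative, not the actual contents of the requirements repository:

    # illustrative entries in the style of a shared global requirements list
    pbr>=0.5.21,<1.0
    six>=1.4.1
    SQLAlchemy>=0.7.8,<=0.8.99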

Copy of notes from this session available here: icehouse-summit-requirements-project-redux.txt

– Preemptively Integrate the Universe –

Fun session name for the last session of the day! The problem currently is that updates to upstream Python packages sometimes break things. The proposal in this session was to do a better job of preemptively making sure changes in upstream projects we are friendly with are not going to break us, and of automatically notifying them when a proposed change does.

Copy of notes from this session available here: icehouse-summit-preemptively-integrate-the.txt

Tonight is a quiet night in for me, I’ll be hitting the hotel gym before grabbing some dinner and turning in to relax before the last day tomorrow.

OpenStack Design Summit Day 2

Day 2 of the OpenStack summit here in Hong Kong began with a series of really great keynotes. First up were three major Chinese companies, iQIYI (online video), Qihoo 360 (Internet platform company) and Ctrip (travel services), talking about how they each use OpenStack (video here). We also learned several OpenStack statistics, including that there are more Active Technical Contributors (ATCs) in Beijing than in any other city in the world, with Shanghai ranking pretty high too. This introduction also fed a theme of passion for Open Source and OpenStack that was evident throughout the keynotes that followed.

I was then really impressed with the Red Hat keynote, particularly Mark McLoughlin’s segment. Having been actively working on Open Source for over 10 years myself, I found that his words about the success we’ve had from Linux to OpenStack really resonated with me. For years all of us passionate Open Source folks have been talking about (and giving presentations on) the benefits of open solutions, so seeing the success and growth today really does feel like validation: we had it right, and that feeds us to continue (literally, many of us are getting paid for this now; I love open source and used to do it all for free). He also talked about TripleO, yeah! (video here)

Next up was Monty Taylor’s keynote for HP where he got to announce the formal release of Horizon Dashboard for use with HP Cloud at horizon.hpcloud.com. It was great to hear Monty echo some of the words of Mark when discussing the success of OpenStack and then diving into the hybrid testing infrastructure we now have between the public HP Cloud and Rackspace testing infrastructures and the new “private” TripleO clouds we’re deploying (admittedly, of course I enjoyed this, it’s what I’m working on!). He also discussed much of what customers had been asking for when approaching OpenStack, including questions around openness (is it really open?), maturity, security, complexity and upgrade strategies. (video here)

– Neutron QA and Testing –

Neutron is tested much less than other portions of OpenStack and the team has recognized that this is a problem, so the session began by discussing the current state of testing and the limitations they’ve run into. One of the concerns discussed early in the session was recruiting contributors to work on testing. They then dove into some of the specific test cases that are failing in order to find solutions and assign tasks, with in-depth discussion of Tenant Isolation & Parallel Testing, which is one of their major projects. There are also several test concerns that there wasn’t time to address and that will have to be tackled in a later meeting, including Full Tempest Test Runs, Grenade Support, API Tests and Scenario Tests.

Copy of notes from this session available here: icehouse-summit-qa-neutron.txt

It’s interesting to learn in these QA sessions how many companies do their own testing. It seems that this is partially an artifact of Open Source projects historically being poor at public automated testing and largely being beholden to companies to do this behind the scenes and submit bugs and patches. I’m sure there will always be internal needs for companies to run their own testing infrastructures, but I do look forward to a time when more companies become interested in testing the common things in the shared community space.

– Tempest Policy in Icehouse –

This was a retrospective of successes and failures from work this past cycle. It kicked off by mentioning that they have now documented the Tempest Design Principles so all contributors are on the same page, and a suggestion was made to add the time budget and the scope of negative tests to the principles. Successes included the large ops and parallel testing, and the use of elasticsearch to help with bug triaging. The weaker parts included use of (and follow-up with, or lack thereof) blueprints, onboarding new contributors (more documentation needed!), prioritizing reviews (perhaps leverage reviewday more) and in general encouraging all reviewers.

Copy of notes from this session available here: icehouse-summit-qa-tempest-policy.txt

After lunch I did some wandering around the expo hall where I had a wonderful chat with Stephen Spector at the HP booth. I also got to chat with Robbie Williamson of Canonical and totally cheated on my Ubuntu ice cream by just asking him for banana with brownies instead of checking out their juju demo.

– Moving Trove integration tests to Tempest –

Trove is currently being tested independently of the core OpenStack CI system and they’ve been working to bring it in, so this session walked through the plans to do this. One step identified was moving the Trove disk image elements into a different repo; the pros and cons of adding them to tripleo-image-elements were discussed, and the pros won. Built images from the job will then be pushed to tarballs.openstack.org for caching. The session then covered more of what Trove integration testing does today and what needs to be done to update Tempest to run the tests on devstack-gate using those cached images.

Copy of notes from this session available here: TroveTempestTesting.txt

– Tempest Stress Test – Overview and Outlook –

The overall goal of Tempest stress testing is to find race conditions and simulate real-life load. The session walked through the current status of the tests and began outlining some of the steps to move forward, including defining and writing more stress tests. Beyond that, using stress tests in the gate was also reviewed; the time tests take was considered (can valuable tests be done in under 45 minutes?) and some of the timing-related pain points were noted. There was also discussion around scenario tests and enhancing the documentation to include examples of unit/scenario tests and to define what makes a good test, to make development of stress tests more straightforward.

Copy of notes from this session available here: icehouse-summit-qa-stress-tests.txt

– Parallel tempest moving forward –

Parallel testing in Tempest currently exists and the speed of testing has greatly improved as a result, hooray! So this session was a review of some of the improvements needed to move forward. Topics included improving reliability, further speed improvements (first step: increase the number of test runners; eliminate setupClass? Distributed testing?) and the testr UI vs the Tempest UI.
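
For those who haven’t used it, the parallel runs are driven by testrepository. The commands below are a generic sketch of running a test suite in parallel with testr, not Tempest’s exact invocation:

    # set up the test repository once, then run the suite across multiple
    # workers (testr defaults to roughly one worker per CPU)
    testr init
    testr run --parallel
    # or cap the number of concurrent runners explicitly
    testr run --parallel --concurrency=4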

Copy of notes from this session available here: icehouse-summit-qa-parallel.txt

– Zuul job runners and log management –

The first part of this session discussed log management for the logs produced from test runs, continuing an infrastructure mailing list thread from October: [OpenStack-Infra] Log storage/serving.

Next up: we use only a limited number of Jenkins features these days due to our workflow, so there has been discussion about writing a new/different job runner for Zuul, with several requirements:

  • Distributed (no centralized ‘master’ architecture)
  • Secure (should be able to run untrusted jobs)
  • Should be able to publish artifacts appropriate to a job’s security context
  • Lightweight (should do one job and do it simply)

Copy of notes from this session available here: icehouse-summit-zuul-job-runners-and-log.txt

– More Salt in Infra –

Much of the OpenStack Infrastructure is currently managed by Puppet, but there are some things, like event-based dependencies, that are non-trivial to do in Puppet but which Salt has built-in support for. The primary example that inspired this was manage_projects.py, which tends to have race/failure problems due to event dependencies.
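
For the curious, the event-driven piece Salt brings is its reactor system: the master maps an event tag to a state file that runs when that event fires. A minimal sketch, where the event tag, target and script path are hypothetical rather than our actual configuration:

    # /etc/salt/master (snippet): run a reactor file when a matching event arrives
    reactor:
      - 'openstack/project/created':
        - /srv/reactor/manage_projects.sls

    # /srv/reactor/manage_projects.sls: react by running a command on a minion
    run_manage_projects:
      local.cmd.run:
        - tgt: 'review.example.org'
        - arg:
          - /usr/local/bin/manage-projects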

Copy of notes from this session available here: icehouse-summit-more-salt-in-infra.txt

My evening wrapped up by heading down to Kowloon to enjoy dinner with several of my Infrastructure colleagues from HP, Wikimedia and the OpenStack Foundation.

OpenStack Design Summit Day 1

Today, Tuesday here in Hong Kong, the OpenStack Summit began!

It kicked off with a beautiful and colorful performance of the Lion dance, followed by some words of welcome from Daniel Lai, the Government CIO of Hong Kong (video of both here).

Then we had the keynotes. Jonathan Bryce, Executive Director of the OpenStack Foundation, began with an introduction to the event, announcing that there were over 3000 attendees from 50 countries and satisfying attendees’ curiosity by revealing that the next summit would take place in Atlanta, and the one following that in Paris! He then welcomed a series of presenters from Shutterstock, DigitalFilm Tree and Concur to each talk about the way they use OpenStack components in their companies (video here). These were followed up by a great keynote from Mark Shuttleworth, founder of Canonical and Ubuntu (video here), and a keynote from IBM (video here).

Directly following the keynotes the Design Summit sessions began. I spent much of my day in TripleO sessions.

– TripleO: Icehouse scaling design & deployment scaling/topologies –

This session included a couple of blueprints, starting off with one discussing the scaling design of the current TripleO: what is not automated (tailing of log files, etc), what is automated but slow (bootstrapping, avoidance of race conditions, etc), where we hit scaling/perf limits (network, disk I/O, database, etc) and measuring and tracking tools (latency measurement, collectd+graphite on the undercloud, logstash+elasticsearch in the undercloud). From there the second half of the session discussed the needs and possibilities regarding a Heat template repository for Tuskar.

Copy of notes from this session available here: tripleo-icehouse-scaling-design.txt and tripleo-deployment-scaling-topologies.txt

– TripleO: HA/production configuration –

This session provided a venue for reviewing the components of TripleO and determining what needs to be HA, including: rabbit, qpid, db, api instances, glance store, heat-engine, neutron, nova-compute & scheduler & conductor, cinder-volume, horizon. Once these were defined, attendees were able to discuss the targeted HA solution for each component, which was captured in the session notes.

Copy of notes from this session available here: tripleo-icehouse-ha-production-configuration.txt

– TripleO: stable branch support and updates futures –

The discussion during this session centered around whether the TripleO project should maintain stable branches so TripleO can be deployed using non-trunk OpenStack and what components would need to be attended to to make this happen. Consensus seemed to be that this should be a goal with a support term similar to the rest of OpenStack, but more discussions will come when the project itself has grown up a bit.

Copy of notes from this session available here: icehouse-updates-stablebranches.txt

– TripleO: CI and CD automation –

This was the last TripleO session I attended. It began with an offer from Red Hat to provide a second physical cluster to complement the current HP rack that we’re using for TripleO testing. Consensus was that this new rack would be identical to the current one in case one of the providers has problems or goes away, and it was noted that having multiple “TripleO Clouds” is essential for gating. Discussion then went into what should be running test-wise and timelines for when we expect each step to be done. Then Robert Collins did a quick walkthrough of the tripleo-test-cluster document that steps through our plans for putting TripleO into the Infrastructure CI system. This is my current focus in TripleO and I have a lot of work to do when I get home!

Copy of notes from this session available here: icehouse-tripleo-deployment-ci-and-cd-automation.txt

– Publishing translated documentation –

I headed over to this session due to my curiosity regarding how translations are handled and the OpenStack Infrastructure’s role in making sure the translations teams have the resources they need. The focus of this session was formalizing how translations get published on the official OpenStack docs page and when they should be published (only when translations reach 100%? If so, should a partially translated “in progress” version be published on a dev server?). There were some Infrastructure pain points that Clark Boylan was able to work with them on after the session.

Copy of notes from this session available here: icehouse-doc-translation.txt

– Infrastructure: Storyboard – Basic concepts and next steps –

Thierry Carrez led this session about Storyboard, a bug and task tracking Django application that he created as a Proof of Concept to replace Launchpad.net, which we currently use. He did a thorough review of the current limitations of Launchpad as justification for replacing it, and quickly mentioned the pool of other bug trackers that were reviewed. From there he moved into a discussion of workflow in Storyboard and wrapped up the session by outlining some priorities to move it forward. I’m really excited about this project. While it’s quite the cliché to write your own bug and task tracker, and I find Launchpad to be a pretty good bug tracker (as these things go), the pain points of slowness and lack of active feature development will only get worse as time goes on, so working on a solution now is important.

Copy of notes from this session available here: icehouse-summit-storyboard-basic-concepts-and-next.txt

– Infrastructure: External replications and hooks –

Last session of the day! The point of this session was to discuss our git replication strategy, and specifically our current policy of mirroring our repositories to GitHub. Concerns centered around casual developers getting confused or put off by our mirror there (not realizing that it’s just a mirror, not something you can open pull requests against), the benefits of discoverability and a familiar workflow for contributors used to GitHub even if they have to use Gerrit for actual submission of code, the implicit “blessing” of GitHub that mirroring our repositories there conveys (we don’t mirror to other 3rd party services; is this fair?) and the ingrained use of the GitHub URLs by many projects. The most practical concern with this replication was the amount of work it adds for the Infrastructure team when creating new projects if GitHub is misbehaving.

Consensus was to keep the GitHub mirrors, but to work on replacing references to cloning (and similar operations) so they point to our new (as of August) git://git.openstack.org and https://git.openstack.org/cgit/ addresses.
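
In practice that just means pointing clones and existing remotes at the canonical addresses rather than the GitHub mirror, for example (using nova as the example project):

    # clone from the canonical git farm rather than the GitHub mirror
    git clone git://git.openstack.org/openstack/nova
    # or repoint an existing checkout that was originally cloned from GitHub
    git remote set-url origin git://git.openstack.org/openstack/nova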

Copy of notes from this session available here: icehouse-summit-external-replications-and-hooks.txt

And before I wrap up, Etherpad held up today! Clark did work this past month to deploy a new instance that closed several bugs related to our past deployment. Throughout sessions today we kept an eye on how it was doing; the graph from Cacti very clearly showed when sessions were happening (and when we were having lunch!):

Tourist in Hong Kong

This week I’m in Hong Kong! It’s my first time in Asia so I wanted to make sure I got some tourist time in. I arrived on a Singapore Airlines flight on Sunday and spent the day catching up on some things and attempting to stay awake so I could adjust to the time zone. Sunday evening I met up with some of my colleagues for dinner and we made plans to go down to Kowloon on Monday.

A rainy Monday arrived as I met several colleagues for breakfast before splitting off into groups to head out for the day. I went out with Khai Do, Clark Boylan and Ghe Rivero down to Kowloon to check out their electronics street market.

I didn’t buy anything, but I did learn about the popularity of dashboard-mounted cameras here. I don’t think I’ve ever even seen one prominently for sale in the US, but they were everywhere we looked in the market. While we were wandering around the street markets I let my travel companions know of my quest for the day: tourist trinkets, postcards, and eating an egg custard (MJ told me I had to try one while I was in Hong Kong!).

First up was the egg custard, which didn’t disappoint:

We then hopped back on the subway and headed over to Hong Kong Central for my tourist goodies. While we were there we also found Ice House Street, after which the next OpenStack release is named. We got the obligatory photo-near-the-sign (thanks to Ghe for taking our photo):

I found my postcards and tourist trinkets there and we spent a lot of time walking around Central (my Fitbit tells me we walked about 5 miles before going back to the hotel) and finally settled on a place to eat. I went with the soy chicken and rice and realized I’m terribly spoiled by the boneless meat I usually have in Chinese food at home. Bone-in is much trickier to eat! But the meal was enjoyable and really hit the spot after all the walking in the hot, soggy weather.

We wrapped up lunch around 2PM and headed back to the hotel and expo center, arriving just in time for the opening of summit registration at 3PM. I then swung by my room quickly to drop off my conference goodies before taking the train back down to Hong Kong Central for a Women of OpenStack boat tour of the harbor.

Unfortunately the rain had picked up a bit by the evening and we ended up with quite the soggy and choppy boat ride, but the boat was very cool and I had some nice conversations, in spite of my shyness.

The evening wrapped up with all of us heading to a small bar where we enjoyed small plates and drinks before splitting off for the evening. And that wrapped up my day! The rest of this week will be spent on the OpenStack Summit. I then have Saturday and much of Sunday to squeeze some more tourist stuff in before I fly home. I’d love to see the Tian Tan Buddha, and if the weather cooperates it’s terribly tempting to go to Disneyland.

Cinematic Titanic, classes & events and upcoming travel

It’s been a busy month. Fortunately in that time my ankle has pretty much healed from the sprain I got last month and I plan on heading back to the gym full force soon.

I realized that I never mentioned it here, but I was interviewed on a podcast earlier in the month about Xubuntu, available here: Frostcast Episode 084. While the 13.10 release for Xubuntu didn’t quite make a big splash feature-wise, it was great talking about some of the work we did this cycle around XMir testing.

On October 19th I had the pleasure of going with my friend Steve (recently imported from Boston) to see a Cinematic Titanic double feature, which was sadly part of their farewell tour. It was a fun night, and since Steve and I met in a Mystery Science Theater 3000 chat room years ago, it felt fitting as our first adventure together in this city since his move.

I also began taking an Islam and Judaism course at the synagogue this month. Unfortunately, of the six classes I’ve only been able to attend two so far, and I will attend the final one in a couple weeks, but it’s still been a fascinating class given my very limited knowledge of Islam. I missed the class that talked about the holy books specifically, so I might need to do a bit more reading myself about the Quran (I’m now the proud owner of a copy).

Last week I ended up with a cold that slowed me down for a few days, but after recovering over the weekend I was able to spend this week running around to events in the evenings. Monday it was off to the CNET offices to hear Lavabit’s Ladar Levison talk about the actions by the FBI that led up to him shutting down the Lavabit email service. Story-wise it was very similar to his recounting for NANOG earlier in the month that MJ attended (video here, worth the watch!), but it was good to get out and I had some interesting conversations that night. Tuesday was the Islam & Judaism class, and then last night I attended an event put on by Double Union, the new feminist hackerspace opening up here in San Francisco. There were a lot of interesting topics throughout the night, and although I long ago realized that my own feminism is much more passive than that of many of the strong women at these events, I’m always happy to see folks working for equality.

Tonight, on Halloween, I’m staying in. I finished off the 13th episode of the Hammer House of Horror television series, which I had expected to be really campy but turned out to actually be quite well-paced and good, particularly for a series produced in 1980. I’ve also spent some time lately going through the classic Doctor Who episodes on Netflix. Like many casual fans my age, I’ve seen all the new episodes, but my classic Doctor Who exposure came from the late-night runs on PBS channels and, in my case, my father’s interest in the show when I was young (hello Dalek nightmares as a kid!). As such I don’t remember a lot and I lack continuity. It’s been interesting watching the sampling of shows on Netflix in order, and it’s setting me up for exploring beyond their collection in the future.

In free moments I’ve been making time for reading more. Carrying around my NOOK in my purse even when I don’t plan on reading has led to me reading things I want to rather than idly checking Facebook/Twitter/G+ during spare moments, which has been a healthy change. My stack of magazines has a nice dent in it too, and I hope to improve on that further on some upcoming flights. I’ve also been watching the fascinating lectures from A Brief History of Humankind by Dr. Yuval Noah Harari. I’ll be loading up the ones I don’t finish onto my tablet to take along on my trip to Hong Kong.

Which brings me to Hong Kong! I’m leaving for the OpenStack Design Summit tomorrow night. It will be my first time in Asia so I’m really looking forward to it, even if there is some “new place” stress building up. I got a direct San Francisco to Hong Kong flight that will take about 15 hours, putting me in Hong Kong on Sunday morning. I have plans lined up for much of the time prior to the summit, which starts on Tuesday, including a Women of OpenStack boat tour on Monday. Here’s hoping the other-side-of-the-world jet lag doesn’t hit me too hard.

Finally, I’ve also booked my trip to Perth for linux.conf.au in January. In another first, this will be my first time in Australia. I’ll be speaking on Wednesday on Systems Administration in the Open and have also submitted proposals for short talks to two of the miniconfs on Monday and Tuesday so I can make the most of my trip there.

And now, time to finish up laundry and get packing!

ZHackers in the Ubuntu Software Center

Last week I was approached by the author of the ZHackers series, David Jordan, to see if I wanted to review volumes 1 and 2 in preparation for the release of volume 2 in the Ubuntu Software Center. With his promise of “It’s got awesome geeks of both genders as well as downloading the linux kernel for the purpose of surviving the zombie apocalypse” how could I resist? He sent copies my way and I loaded the duo up on my NOOK.

ZHackers

They were both a lot of fun. I was very amused to see that the story takes place on the campus of the University of Illinois Urbana-Champaign, where I just spoke at a conference. It was fun to watch the characters navigate the reality of a zombie outbreak, as they are all aware of zombie popular culture and make many references to it, particularly during their struggle to convince people that it was real and in making plans (don’t go to the grocery store!). I absolutely did appreciate the geekiness of the volumes too: they’re using Ubuntu and have a very college-geek way of handling themselves, so I often found myself saying “why on earth would they…? Actually, that’s what I’d expect from college geeks.”

However, it’s good to be aware that they aren’t complete stories: volume 1 will leave you wondering what happens next in volume 2, which continues the story, and volume 2 ends the same way. I did find some minor grammatical issues, which made me wish the book had a bug tracker (hey, it’s a .deb!), but reporting directly to the author via email was easy enough. I’d also mention that as a geek I appreciated the technical references, and some were explained, but my average cousin wouldn’t know what to make of the phrase “Daniel logged off of IRC” even though it’s obvious to me.

You can purchase both volumes via the Ubuntu Software Center:

And they are generously licensed Creative Commons – Attribution Share Alike, so you can share with your friends!

It was also cool to see that volume 2 came with some bonus features, including a 3D version of the cover and a short story. It got me thinking a bit more about self-publishing in the Ubuntu Software Center and how it’s been opening doors for niche authors and giving them the opportunity to expand the content that’s shipped with an ebook.

A Little San Francisco 13.10 Release Party

I was finally in my home city of San Francisco for an Ubuntu release (why are Octobers so crazy?) so I was able to put together a small event for the release of Ubuntu 13.10 (Saucy Salamander).

The night before I pulled out the nail polish and nail decals from System76 to get into the spirit of things.

At 6:30 I arrived at Panera Bread to get set up.

I brought along my pair of salamanders, one of which would be auctioned off, as well as a copy of the Official Ubuntu Book to give away.

In all we had 5 attendees, which made for the smallest turnout I’ve ever had for a release party, but it was a great number for conversation. I was able to show off the features of the new Smart Scopes in Unity on my laptop and verbally share some updates from the world of Xubuntu. We also got to learn about some of the latest improvements in MoinMoin from a developer who joined us, and about some of the other recent projects being worked on by attendees.

Saucy got a birthday cupcake

It was also great to see interest in the Ubuntu Phone from a couple of folks who happened to be having coffee nearby. They asked us about the progress of the Ubuntu Phone codebase (released today!) and we commiserated over the inability of the Ubuntu Edge to reach its funding goal.

For 14.04 we hope to do something bigger; we’re seeking to partner with one of the many businesses in the area that use Ubuntu to do more hands-on demos and a more formal demonstration of the new features. Stay tuned for updates on that coming in the spring.

Code Review for Sysadmins talk at BALUG

On the heels of my trip to Illinois to speak at ACM Reflections | Projections on building a career in open source, I had the opportunity on Tuesday to speak at the Bay Area Linux Users Group (BALUG) on the system the OpenStack Infrastructure team uses to do systems administration. It was a happy coincidence to be the presenter on Ada Lovelace Day, and it gave me a fine opportunity to promote the work of the Ada Initiative.

The talk was an updated version of the one I gave at OSCON in July (video here). I was able to update the talk to include our use of gearman and multiple jenkinses, elasticsearch, the new git.openstack.org and some of the latest things I’ve learned as we continue to refine our infrastructure and ability to share it with other projects and organizations.

It was also great to have an engaging audience and to have my comments on the merits of code review for infrastructure changes echoed by attendees who also have experience with it in their day jobs. There was also discussion about the general lack of fully public, open source-based software testing in the open source community, and people were quite pleased to see OpenStack leading the way here. It’s certainly something I’m proud to be a part of.

Thanks to everyone who came out!

Slides here: Code Review for Systems Administrators