ubuntu planet – pleia2's blog https://princessleia.com/journal Elizabeth Krumbach Joseph's public journal about open source, mainframes, beer, travel, pink gadgets and her life near the city where little cable cars climb halfway to the stars.

SCaLE 23x https://princessleia.com/journal/2026/03/scale-23x/ Mon, 16 Mar 2026 20:47:36 +0000 https://princessleia.com/journal/?p=18286 Last week I had the pleasure of attending SCaLE 23x in Pasadena, California. I love SCaLE, it’s probably my favorite conference. It’s big, but it still feels so local, and I always walk away having met new, exceptional people, and with the warmth of connection I feel from seeing some of my closest friends in the open source world. The weather is almost always gorgeous around this time of year, and there are a ton of places that are easy to walk to for lunch and dinner. My arrival ritual these days is taking a walk south to the Whole Foods nearby to pick up some breakfast foods and coffees to enjoy each morning before the conference kicks off at 10AM. 10AM! What a glorious time to start!

The flight down from San Francisco was a quick regional flight into Burbank, my go-to airport for this conference so I can avoid LAX. And then I spent Wednesday evening getting settled in and putting some finishing touches on my talk, based on some feedback I had requested from community members working on projects I had an interest in learning about.

Thursday is when the magic began! I spent the morning picking up my badge and immediately seeing several familiar faces. It didn’t take long to meet up with long-time friends from our time in the Ubuntu community, Jorge and Amber, and we all went out to lunch.

Talk-wise on Thursday I found myself attending several AI talks as part of the Kwaai Summit.

So, AI. We are in the middle of an AI revolution in the tech industry and things are moving fast. A year ago a lot of the AI being used in tech was being marketed as helpers for developers. At SCaLE I heard someone suggest that we treat AI like a junior developer. We’re now replacing junior developers. But I had an experience over the holidays where I was in the trenches with code these “AI junior developers” were spitting out, and it needs a lot of guidance. Without that, the code, documentation, and even commit messages can come out nonsensical, solving things in ways that aren’t so much “clever” as legitimately senseless when viewed in the correct context. It also took away the thoughtful collaboration that I love about development: How do we solve this? Can you explain what I’m reviewing? Why did you make this decision? When you’re met with a series of shrugs and a finger pointed at the AI, the job of thinking falls solely on the person reviewing the change, and that means the more experienced developers doing the reviews are being buried in AI slop code.

The technology will get better, and I anticipate an absolute decimation of our industry job-wise. I’m not exempt from this. Plus, there are real environmental concerns about power consumption and resources being used to build out all the data centers to implement these AI solutions, and I worry that it will be painful for our society in a way that may not be ethical. But we aren’t going back, that’s not how technology works in our world.

Add in that so much of this space is flush with more money than the world has ever seen, and decisions driven by greed and a horrifying lack of consideration for humanity seem to be winning.

So, why did I run off on this terrifying, negative AI rant? I wanted to share what head space I was in when I walked into SCaLE. I’ve used AI tooling, and I’m constantly learning, but I’m deeply worried about it.

Thankfully, there are still good people doing good things in AI and some of those people were speaking at SCaLE.

As I strolled into the Kwaai Summit it was refreshing to be reminded of some of the more optimistic views of AI, and how success doesn’t necessarily have to follow the money. AI can be used in ways that benefit us all. There are tedious tasks and “impossible” problems that are starting to be solved by AI. Can I actually get a good handle on a big chunk of open source projects on GitHub supporting s390x? Possibly! Can we finally cure some of the most dangerous forms of cancer? Maybe! And there are people building communities around things like Beneficial General Intelligence (BGI, a play on AGI, the Artificial General Intelligence that tends to be the holy grail of AI) where things like ethics and sustainability are considered. These are my people. These are the people who built the first online social networks and open source projects. This is the messaging that I found so inspiring when I first got into open source software and what made me so fully devote my life’s work to it. It was nice to be there.

On Friday I attended Guinevere Saenger’s talk on building out developer infrastructures, which brought up a lot of points one might not necessarily think about when doing so. From there I went to Jon “maddog” Hall’s talk on “Open Source In Computer Higher Education – Past, Present and Future” which was definitely a highlight for me. I don’t need to learn how to teach computer science in higher education, but I do love hearing whatever he has to talk about because he has so many wonderful stories. He took us on a tour of his career with an eye toward education, dropping references to everything from the IBM System/360 to learning assembly from difficult textbooks that he read solo and then went on to teach from. He’s a strong proponent of learning topics deeply, and of teaching students how to learn so they can thrive in an industry that requires continuous learning. We’re all in agreement there.

That evening I joined a bunch of folks from The Software Freedom Conservancy for dinner and software freedom discussions. It was a lovely evening and I had the pleasure of meeting some new people, including a fellow from Oakland Privacy who told me about the StrayCap Multispace in Hayward that I’ll have to check out some time soon!

I’ve supported the Conservancy for many years, and have known several of their staff for even longer. Need a free software license violation acted upon? This is the group that does that. They get a remarkable amount done with the staff and budget they have, and I’m incredibly grateful for that. Please consider donating.

The keynote from Cindy Cohn, Executive Director of the EFF, on Saturday morning was wonderful. I’ve been a supporter of the EFF for years, and am closely aligned with most of their views. It was fascinating to hear about her work in this space, and the fundamental protections that she’s worked to help pioneering technologists secure over the years. I vaguely knew that encryption was restricted by the US government in the earliest days of the internet, but I didn’t realize it was classified as munitions, which ultimately meant that encryption algorithms couldn’t be shared and collaborated on online. Wow. Could you imagine the internet without encryption? Our world? Cindy, with an army of early free software hackers, argued in federal court in San Francisco, and for over a decade beyond that, to make sure encryption was freed from this classification. This story is the first of three that she dives into in her new book Privacy’s Defender: My Thirty-Year Fight Against Digital Surveillance, which I promptly pre-ordered. Her call to action to us hackers today was to stay engaged in this fight so that we show up for all the future legal battles that have the potential to threaten the future of our world and lives with regard to digital freedom. I had the pleasure of running into her later in the conference to thank her for her talk, and I had the presence of mind to pull out a piece of paper to have her sign so I could put it in my book when it arrives; I’ll have a signed copy, kind of! Then I went to the EFF booth to do my annual contribution.

After the keynote I was able to meet up with Kaitlyn Davis, a new colleague at IBM who joined us from HashiCorp and whom I just started working with a couple of weeks ago. She happens to live in southern California! So I made the case for her to come out to SCaLE. She has some really helpful ideas around leveraging AI for open source contribution tracking, so we were able to sit down for about an hour to chat about IBM in general and drill down into some of the problems I’ve been focused on to see where she wants to jump in. It’s not every day that I have the pleasure of working with someone like her, so I’m really eager to see what we come up with together in the coming months. See? I’m not negative on all uses of AI.

From there, it was time for my talk on Open Source in Closed Ecosystems. Originally I was planning on just drawing from my experience in the mainframe world, but after a chat with John Mertic of The Open Mainframe Project I was convinced to draw from a broader pool of expertise and to look into Automotive and Motion Picture industry use cases. I was fortunate that Alison Chaiken of Automotive Grade Linux (AGL) and Emily Olin, who has worked on both AGL and the Academy Software Foundation (ASWF), were able to get back to me quickly regarding questions I had about the initiatives. I was also thankful to get time with Nithya Ruff, whose expertise in running open source software programs across the industry has been incredibly valuable to my own work, and to the broader community through her extensive work over the years and her direct contributions to the TODO Group.

The talk had some rough edges flow-wise, and I’d like to flesh it out with more examples and talk to more people in industries where open source hasn’t taken a firm hold yet to see what barriers they’re encountering in their organizations. But I had some great conversations after my talk and I think it generally went well. Slides from the talk are available here: /presentations/2026/Open_Source_in_Closed_Ecosystems_-_SCALE_23x.pdf (1.3M pdf)

It was nice to run into Dave Neary, whose Open Source in Business series on YouTube gave me some clues I needed for my talk too. He gave a couple of multiarch talks, and though they were focused on ARM64 it was still nice to hear someone talk about multiarch manifests and containers, since I bump into some confusion from community members about them. He gave some nice demos using Argo CD and Argo Rollouts that I’d like to take a closer look at.
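Since manifests are one of the things people get confused about, here’s a rough sketch of what’s going on: one registry tag points at a small JSON index (a “manifest list”), and the client picks the entry matching its own platform. The JSON below is hand-written for illustration (the digests are fake placeholders), not pulled from a real registry; `docker manifest inspect <image>` shows the real thing.

```shell
# Write a trimmed, hand-written example of a manifest list (an OCI
# image index); the digests are fake placeholders:
cat > /tmp/manifest-list.json <<'EOF'
{
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "manifests": [
    { "digest": "sha256:aaa", "platform": { "architecture": "amd64", "os": "linux" } },
    { "digest": "sha256:bbb", "platform": { "architecture": "arm64", "os": "linux" } },
    { "digest": "sha256:ccc", "platform": { "architecture": "s390x", "os": "linux" } }
  ]
}
EOF

# On pull, the client matches its own platform against these entries,
# which is how one tag serves every architecture. List the
# architectures this tag would cover:
archs=$(grep -o '"architecture": "[a-z0-9]*"' /tmp/manifest-list.json | cut -d '"' -f 4)
echo "$archs"
```

Publishing an image like this is typically a single `docker buildx build --platform linux/amd64,linux/arm64,linux/s390x -t <tag> --push .`, with QEMU user-mode emulation quietly doing the foreign-architecture builds underneath.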

Speaking of multiarch, I then enjoyed a talk by Amy Parker focused on QEMU user mode. I’ve used QEMU on and off over the years, but honestly, since I’ve shifted my focus to bare metal testing, I’ve used it a lot less. I don’t have a lot of experience with the user mode emulation that she covered, which made the talk a fascinating dive through binfmt_misc, LD_PRELOAD, and chroots to accomplish a lot of interesting work across architectures. She also talked about using FatELF to create universal binaries, which wasn’t even on my radar. So many fun things to dig into!
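For anyone who hasn’t poked at this before, binfmt_misc is the glue that makes user-mode emulation feel transparent. A small sketch of what to look for on a Debian/Ubuntu-style system with qemu-user-static installed (this only inspects; it doesn’t run any foreign binaries, and `hello-s390x` in the comments is a hypothetical cross-compiled binary):

```shell
# binfmt_misc is how the kernel hands a foreign-architecture ELF binary
# to an interpreter such as qemu-s390x-static instead of refusing to
# exec it. Registrations, when present, live under this directory:
reg_dir=/proc/sys/fs/binfmt_misc

if [ -d "$reg_dir" ]; then
    # Each file there describes one format: the ELF magic bytes to
    # match and the interpreter to invoke for it.
    registered=$(ls "$reg_dir" 2>/dev/null | grep -c '^qemu-')
else
    registered=0
fi
echo "qemu binfmt registrations: $registered"

# With an s390x handler registered, either of these works on an
# x86_64 host:
#   qemu-s390x-static ./hello-s390x   # invoke the interpreter directly
#   ./hello-s390x                     # kernel routes it via binfmt_misc
```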

Saturday evening I had the pleasure of joining Nathan Handler for dinner at a sushi place nearby. I’ve now had my first sake bomb. But just one!

Happy Sunday! The opening keynote was presented by Mark Russinovich of Microsoft, who, poor guy, spent the first 15 minutes of his talk convincing us that in spite of being known for Windows Internals, both he and Microsoft have a lot of Linux credibility. With that taken care of, he dove straight into a great tour of open source security solutions and how they relate to the growing interest in secure supply chains today. I was happy to see the security components of my own talk from the previous day reiterated, but more broadly, I’m glad SCaLE brought someone in to talk about all of this. Open source has made tremendous strides in security in recent years, but that work doesn’t get as much attention as I believe it deserves, whether in usage, awareness, or people to work on it, and its importance is only increasing.

The expo hall at SCaLE is always a delightful place to walk through, and this year was no exception. They have a wonderful mix of big, paid booth areas for larger companies, and smaller booths for non-profits, so it always brings a great assortment of people. I had a lovely time catching up with my friends from the Ubuntu community. It’s always a pleasure to catch up with Nathan Haines and George Mulak who’ve been quite involved in the Los Angeles computing scene for years. It was nice to get some time to chat with Erich Eickmeyer, lead for Ubuntu Studio, and I was pleased to learn that his wife, Amy Eickmeyer, is a professional educator and actually got the Edubuntu flavor off the ground again back in 2022! I had planned on a DIY lockdown of Ubuntu for our kids this year, but I’ll have to take a look at Edubuntu now.

And the RISC-V booth was on my list too, as I’m always eager to learn the latest (you know me and architectures!). That’s where I met a fellow IBMer who was involved with the Works with RISC-V community which I didn’t even know existed. Cool. I was able to ask about HDMI support on my VisionFive 2 and learn that there should be mainline kernel support soon, and learned that people are saying good things about the latest RISC-V mainboard for the Framework laptop. My kids like to remind me I have a lot of laptops, so I’ve held off on Framework for now, but I’ll have to take a closer look at this one.

I also went to a talk by Brendan O’Leary on From COBOL to Claude: What Hopper Knew (actually, his slides swapped “Claude” for “Cursor”, the AI coding environment; things in AI move fast). I enjoyed this talk and his premise, given all I’ve said above about the inevitability of AI in our industry. He began by talking about Rear Admiral Grace Hopper’s desire to make “programming” computers a more human-language driven endeavor, and how that began with her FLOW-MATIC and ultimately COBOL, which is still widely used today. His belief is that she’d be happy that anyone today can vibe code their own application, and he made the same comparison I tend to make about the evolution of coding. AI does have very, very important things that differentiate it from the previous major evolutionary steps of computer programming, but I just don’t believe that things like being nondeterministic are enough to so forcefully push back on it. Most of the talk was about how the software engineering practices professionals use will simply need to be adjusted to include a lot more planning and a lot less hands-on coding, and with these research, plan, and implement frameworks in place we’ll be able to trust the results AI comes up with a lot more. I think he’s right.

The conference concluded with a walk down network memory lane with Professor Douglas Comer. I love computer history, so I was familiar with a lot of the general touch points he discussed, but since his focus was on networking there were a few things I’d missed along the way. He talked about magnetic tape mailers, how Endianness caused problems with computers communicating in the early days, and how a technology like TCP/IP or even the client/server model was not obvious. His stories around how phone companies charged for transit and the sordid path to getting households connected to the internet were really insightful, especially when paired with the observations from Cindy Cohn the day before.

Huge thanks to all the volunteers who make SCaLE happen. I’m really happy I could make it down this year, and after seeing how many kids were there, I’m going to make plans to at least bring our eldest down for the weekend next year.

Finally back at OLF! https://princessleia.com/journal/2025/12/finally-back-at-olf/ Thu, 18 Dec 2025 00:39:08 +0000 https://princessleia.com/journal/?p=18136 Back in 2018 I spoke at Ohio LinuxFest and had a wonderful time with the community there. It’s a great mix of folks who are very local, and open source experts from across the country who come in for the event. Beth Lynn Eicher, who leads the event, is a champion in getting more folks involved in open source, and I’ve heard so many stories of how encouraging she always is to newcomers. There are key folks today you may have interacted with in open source communities who can thank Beth Lynn for encouragement in the early days that got them on the path to where they are today. Personally, I’ve also worked with her on some non-profit work with Computer Reach, most notably going to Ghana together for a few weeks back in 2012 to support a deployment they were doing with a Ghanaian NGO.

So first, thanks to Beth Lynn, Vance Kochenderfer, Susan Rose Dudenhoefer, and the other volunteers who brought the event together this year on a tight deadline. I’m so grateful I thought to include the conference on my quarterly event requests in spite of it not being announced yet!

I’ll also mention that I keep calling it “Ohio LinuxFest” but they rebranded as “OLF Conference” to reflect “Open Libre Free” and their goal to include operating systems beyond Linux, mea culpa!

The event itself was a lot of fun. It was smaller than in years past, as they went with one track. They mentioned at closing that doing it in December is too late in the year, and that, along with the short runway for the conference, likely impacted attendance. Still, if I had to guess I’d say there were a couple hundred people there.

I saw a lot of familiar faces. My friend Scott came out from Pittsburgh, and though we still chat regularly in a group cobbled together from our Ubuntu Pennsylvania days, we hadn’t seen each other in person in years. It was really cool to catch up, and to laugh about kid stories, since we’ve both become parents in the intervening years. I also got to spend a bunch of time with Amber Graner, who I also got to know very well during our time in the Ubuntu project. We’ve stayed in touch, so we’re still pretty close, but this was the first time in a while that we had more than 20 minutes to catch up. And new people! I got to chat with a student who was attending an open source conference for his first time, and met several folks who have been working in open source for decades. It really was a great mix of folks.

I really enjoyed the opening keynote from Don Vosburg on Passion and Pragmatism. He tugged on a familiar thread in the open source world around the fact that a lot of folks got into open source software “for fun” or the passion of it, but most of us eventually had to get professional jobs that may have tested our fundamental commitment to open source, or other things have arisen in our lives that require us to make a choice. I’ve definitely had to walk a line throughout my career, but consider myself quite lucky to have found myself a series of good positions that have allowed me to follow my passion and make a living.

My talk was just after the keynote, and I was very happy that most people stayed! It’s a re-working of a talk I gave last year, but I notably added an architecture and made some adjustments to my slides about software testing. I was amused to learn that my closing keynote back in 2018 was about doing software testing on your open source project, and that this could be seen as an expansion of that. I joked at the beginning that I was very glad everyone listened to me last time, and now that they all have software testing, it was time to add non-x86_64 hardware architectures into that testing matrix. The slides are available here: Will_your_open_source_project_run_on_a_mainframe_Or_a_watch_OLF_2025.pdf (1.2M)


Thanks to Scott for taking a picture during my talk!

Catherine Devlin’s “Graph Data for Heroes II: Rise of the Bot” was an interesting one. A large chunk of it had her scraping web data, and as I was live-posting about it on Mastodon and Bluesky I was speculating about how web scraping is one tech that hasn’t gotten a whole lot better in 25 years, then mused that it was actually a good use for AI/ML technologies. Indeed, that’s where her talk went!

Scattered throughout the conference foyer were a few tables from supporters and sponsors, and I was delighted to see a series of ChromeBooks that had been repurposed to run various Linux distributions. Xubuntu made the cut, and when I walked over to check it out I was presented with our shiny new website. Lovely!

In the afternoon I enjoyed seeing Steven Pritchard’s “The Great Open Source Rug-Pull” where he talked about open source software license changes, which have caused a lot of disruption and contention in the open source world these past few years. And although I had heard of Hacker Public Radio before, it wasn’t until murph’s talk on the topic, along with a bunch of great tips, that I got a serious look into what it was and how the episodes are crowd sourced. These folks are doing great work.

Amber Graner concluded the day with the closing keynote “Bless Their Hearts: Open Source, AI, and Southern Survival Skills.” She took us on a personal, funny journey through some of the characters and situations in the open source world. I particularly loved at the end where she shared a list of things she wished people had told her when she started contributing to open source. I’ll be keeping some of these things in mind as I continue working with students, who need more than just their basic misconceptions about contributing corrected before they can contribute effectively.

The end of the event crept up quickly! The group hosted a small closing after party in the hotel lobby with pizza and my favorite, cake!

In what is perhaps one of my shortest conference trips, I flew out at 5AM the next morning to get home by midday on Sunday. It left me pretty tired, but it was worth it.

A VisionFive 2 and a Raspberry Pi 1 B https://princessleia.com/journal/2025/04/a-visionfive-2-and-a-raspberry-pi-1-b/ Thu, 03 Apr 2025 20:43:31 +0000 https://princessleia.com/journal/?p=17828 A couple weeks ago I was playing around with a multiple architecture CI setup with another team, and that led me to pull out my StarFive VisionFive 2 SBC again to see where I could make it this time with an install.

I left off about a year ago when I succeeded in getting an older version of Debian on it, but attempts to get the tooling to install a more broadly supported version of U-Boot to the SPI flash were unsuccessful. Then I got pulled away to other things, effectively just bringing my VF2 around to events as a prop for my multiarch talks – which it did beautifully! I even had one conference attendee buy one to play with while sitting in the audience of my talk. Cool.

I was delighted to learn how much progress had been made since I last looked. Canonical has published more formalized documentation, Install Ubuntu on the StarFive VisionFive 2, in place of what had been a rather cluttered wiki page. So I got all hooked up and began my latest attempt.

My first step was to grab the pre-installed server image. I got that installed, but struggled a little with persistence once I unplugged the USB UART adapter and rebooted. I then decided just to move forward with the Install U-Boot to the SPI flash instructions. I struggled a bit here for two reasons:

  1. The documentation today leads off with having you download the livecd, but you actually want the pre-installed server image to flash U-Boot; the livecd step doesn’t come until later. Admittedly, the instructions do say this, but I wasn’t reading carefully enough and was more focused on the steps.
  2. I couldn’t get the 24.10 pre-installed image to work for flashing U-Boot, but once I went back to the 24.04 pre-installed image it worked.

And then I had to fly across the country. We’re spending a couple weeks around spring break here at our vacation house in Philadelphia, but the good thing about SBCs is that they’re incredibly portable and I just tossed my gear into my backpack and brought it along.

Thanks to Emil Renner Berthing (esmil) on the Ubuntu Matrix server for providing me with enough guidance to figure out where I had gone wrong above, and getting me on my way just a few days after we arrived in Philly.

With the newer U-Boot installed, I was able to use the Ubuntu 24.04 livecd image on a micro SD Card to install Ubuntu 24.04 on an NVMe drive! That’s another new change since I last looked at installation, using my little NVMe drive as a target was a lot simpler than it would have been a year ago. In fact, it was rather anticlimactic, hah!

And with that, I was fully logged in to my new system.

elizabeth@r2kt:~$ cat /proc/cpuinfo
processor : 0
hart : 2
isa : rv64imafdc_zicntr_zicsr_zifencei_zihpm_zba_zbb
mmu : sv39
uarch : sifive,u74-mc
mvendorid : 0x489
marchid : 0x8000000000000007
mimpid : 0x4210427
hart isa : rv64imafdc_zicntr_zicsr_zifencei_zihpm_zba_zbb

It has 4 cores, so here’s the full output: vf2-cpus.txt
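(A quick aside, since “hart” throws people: it’s the RISC-V term for a hardware thread. Each one shows up as its own `processor` stanza, so counting stanzas gives you the core count:)

```shell
# Each hart appears as its own "processor" stanza in /proc/cpuinfo;
# counting those stanzas gives the core count (4 on the VisionFive 2):
cores=$(grep -c '^processor' /proc/cpuinfo)
echo "cores: $cores"

# coreutils' nproc reports the CPUs available to the current process,
# which normally matches:
nproc
```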

What will I do with this little single board computer? I don’t know yet. I joked with my husband that I’d “install Debian on it and forget about it like everything else” but I really would like to get past that. I have my little multiarch demo CI project in the wings, and I’ll probably loop it into that.

Since we were in Philly, I had a look over at my long-neglected Raspberry Pi 1B that I have here. When we first moved in, I used it as an ssh tunnel to get to this network from California. It was great for that! But now we have a more sophisticated network setup between the houses with a VLAN that connects them, so the ssh tunnel is unnecessary. In fact, my poor Raspberry Pi fell off the WiFi network when we switched to 802.1X just over a year ago and I never got around to getting it back on the network. I connected it to a keyboard and monitor and started some investigation. Honestly, I’m surprised the little guy was still running, but it’s doing fine!
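For anyone wondering what that tunnel looked like, it was the classic reverse-forward trick. This is a from-memory sketch with placeholder names (jump.example.com and the port are hypothetical, not my actual config):

```shell
# On the Pi: hold open a reverse tunnel so port 2222 on a reachable
# jump host forwards back to the Pi's own sshd. -N runs no remote
# command; -R sets up the remote-to-local forward.
#   ssh -N -R 2222:localhost:22 me@jump.example.com
#
# From California: ssh to the jump host, then hop through the forward
# (which binds to the jump host's loopback by default):
#   ssh me@jump.example.com
#   ssh -p 2222 pi@localhost

# ssh -G resolves and prints the client configuration for a host
# without connecting, which is a handy way to sanity-check options:
if command -v ssh >/dev/null 2>&1; then
    port=$(ssh -G -p 2222 jump.example.com | awk '$1 == "port" { print $2 }')
else
    port=2222   # no ssh client on this machine; nothing to resolve
fi
echo "resolved port: $port"
```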

And it had been chugging along running Raspbian based on Debian 9. Well, that’s worth an upgrade. But not just an upgrade: I didn’t want to stress the device and SD card, so I figured flashing it with the latest version of Raspberry Pi OS was the right way to go. It turns out it’s been a long time since I’ve done a Raspberry Pi install.

I grabbed the Raspberry Pi Imager and went on my way. It’s really nice. I went with the Raspberry Pi OS Lite install since it’s the Pi 1 and I didn’t want a GUI. The imager asked the usual installation questions, loaded up my SSH key, and I was ready to boot it up in my Pi.

The only thing I need to finish sorting out is networking. The old USB WiFi adapter I have in it doesn’t initialize until after the system boots, so wpa_supplicant can’t negotiate with the access point during startup. I’ll have to play around with it. And what will I use this for once I do, now that it’s not an SSH tunnel? I’m not sure yet.
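My current plan of attack, sketched out but not yet tested on the Pi (so treat the details as assumptions, including the wlan0 interface name): wait for the adapter to appear, then ask wpa_supplicant to try again.

```shell
# Write a small boot-time helper (written to /tmp here for
# illustration; it would really live somewhere like /usr/local/sbin
# and get hooked in via a systemd oneshot unit or /etc/rc.local):
cat > /tmp/wait-for-wlan.sh <<'EOF'
#!/bin/sh
# Poll for up to 30 seconds for the late-initializing USB adapter
# to register its network interface.
for _ in $(seq 1 30); do
    [ -d /sys/class/net/wlan0 ] && break
    sleep 1
done
# Now that the hardware exists, tell the running wpa_supplicant to
# re-read its config and reassociate with the access point.
wpa_cli -i wlan0 reconfigure
EOF
chmod +x /tmp/wait-for-wlan.sh
```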

I realize this blog post isn’t very deep or technical, but I guess that’s the point. We’ve come a long way in recent years in support for non-x86 architectures, so installation has gotten a lot easier across several of them. If you’re new to playing around with architectures, I’d say it’s a really good time to start. You can hit the ground running with some wins, and then play around as you go with various things you want to help get working. It’s a lot of fun, and the years I spent playing around with Debian on Sparc back in the day definitely laid the groundwork for the job I have at IBM working on mainframes. You never know where a bit of technical curiosity will get you.

A jellyfish and a mainframe https://princessleia.com/journal/2022/04/a-jellyfish-and-a-mainframe/ Thu, 21 Apr 2022 17:39:03 +0000 https://princessleia.com/journal/?p=16387 Happy Ubuntu 22.04 LTS (Jammy Jellyfish) release day!

April has been an exciting month. On April 5th, the IBM z16 was released. For those of you who aren’t aware, this is the IBM zSystems class of mainframes that I’ve been working on at IBM for the past three years. As a Developer Advocate, I’ve been able to spend a lot of time digging into the internals, learning about the implementation of DevOps practices and incorporation of Linux into environments, and so much more. I’ve also had the opportunity to work with dozens of open source projects in the Linux world as they get their software to run on the s390x architecture. This includes working with several Linux distributions, and most recently forming the Open Mainframe Project Linux Distributions Working Group with openSUSE’s Sarah Julia Kriesch.

As a result, I’m delighted to continue to spend a little time with Ubuntu!

For the Ubuntu 22.04 release, the team at Canonical has already been working hard to incorporate key features of the IBM z16, which Frank Heimes has gone into detail about on a technical level on the Ubuntu on Big Iron Blog, IBM z16 launches with Ubuntu 22.04 (beta) support, and also over on Ubuntu.com with IBM z16 is here, and Ubuntu 22.04 LTS beta is ready. Finally, Frank published: Ubuntu 22.04 LTS got released

Indeed, timing was fortuitous, as Frank notes:

“Since the development of the new IBM z16 happened in parallel with the development of the upcoming Ubuntu Server release, Canonical was able to ensure that Ubuntu Server 22.04 LTS (beta) already includes support for new IBM z16 capabilities.

And this is not limited to the support for the core system, but also includes its peripherals and special facilities”

Now that it’s release day, I wanted to celebrate with the community by sharing a few details of the IBM z16 and some highlights from those blog posts.

So first – the IBM z16 is so pretty! It comes in one to four frames, depending on the needs of the client. Inside the maximum configuration it has up to 200 Processor Units, featuring 5.2Ghz IBM Telum Processors, 40 TB of memory, and 85 LPARs.

As for how Ubuntu was able to leverage improvements to 22.04 to take advantage of everything from the AI Accelerator on the IBM Telum processor to new Quantum-Safe technologies, Frank goes on to share:

“Since we constantly improve Ubuntu, 22.04 was updated and modified for IBM z16 and other platforms in the following areas:

  • virtually the entire cryptography stack was updated, due to the switch to openssl 3
  • some Quantum-safe options are available: library for quantum-safe cryptographic algorithms (liboqs), post-quantum encryption and signing tool (codecrypt), implementation of public-key encryption scheme NTRUEncrypt (libntru)
  • Secure Execution got refined and the virtualization stack updated
  • the chacha20 in-kernel stream cipher (RFC 7539) was hardware optimized using SIMD
  • the kernel zcrypt device driver is now able to exploit the new IBM zSystems crypto hardware, especially Crypto Express8S (CEX8S)
  • and finally a brand new protected key crypto library package (libzpc) was added”
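A couple of these pieces are easy to poke at from a shell on a 22.04 system. This is my own quick sketch; the quantum-safe library names above are upstream project names, and the exact apt package names may differ:

```shell
# The crypto stack update is visible immediately: Ubuntu 22.04 ships
# the openssl 3 series.
if command -v openssl >/dev/null 2>&1; then
    osslver=$(openssl version)
else
    osslver="openssl not installed"
fi
echo "$osslver"

# On s390x hardware specifically, lszcrypt from s390-tools lists the
# Crypto Express adapters (e.g. CEX8S) that the zcrypt driver can use:
#   lszcrypt -V
```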

This is a really interesting time to be a Linux distribution in this ecosystem. Beyond these fantastic strides made with Ubuntu, the collaboration that’s already taking place across distributions in our new Working Group has been exciting to watch.

Keep up the good work, everyone! And Ubuntu friends, pause a bit today to celebrate, you’ve earned it.


Jellyfish earrings!

Side note: I haven’t mentioned the IBM LinuxONE. As some background, the IBM z16 can have Integrated Facility for Linux (IFL) processors, so you can already run Linux on this generation of mainframes! But the LinuxONE product line only has IFLs, meaning they exclusively run Linux. As a separate product, it can have different release dates, and the current timeline that’s been published is “second half of 2022” for the announcement of the next LinuxONE. Stay tuned, and know that everything I’ve shared about Ubuntu 22.04 for the IBM z16 will also be true of the next LinuxONE.

The Big Iron Hippo https://princessleia.com/journal/2021/05/the-big-iron-hippo/ Mon, 10 May 2021 20:07:41 +0000 https://princessleia.com/journal/?p=15963 It’s been about a year since I last wrote about an Ubuntu release on IBM Z (colloquially known as “mainframes” and nicknamed “Big Iron”). In my first year at IBM my focus really was Linux on Z, along with other open source software like KVM and how that provides support for common tools via libvirt to make management of VMs on IBM Z almost trivial for most Linux folks. Last year I was able to start digging a little into the more traditional systems for IBM Z: z/OS and z/VM. While I’m far from an expert, I have gotten a glimpse into just how powerful these operating systems are, and it’s impressive.

This year, with this extra background, I’m coming back with a hyper focus on Linux, and that’s making me appreciate the advancements in every Linux kernel and distribution release. Engineers at IBM, SUSE, Red Hat, and Canonical have invested in IBM Z, and are backing that investment with kernel and distribution support for IBM Z hardware.

So it’s always exciting to see the Ubuntu release blog post from Frank Heimes over at Canonical! And the one for Hirsute Hippo is no exception: The ‘Hippo’ is out in the wild – Ubuntu 21.04 got released!

Several updates to the kernel! A great, continued focus on virtualization and containers! I can already see that the next LTS, coming out in the spring of 2022, is going to be a really impressive one for Ubuntu on IBM Z and LinuxONE.

]]>
Ubuntu 20.04 LTS… on Big Iron! https://princessleia.com/journal/2020/04/ubuntu-20-04-lts-on-big-iron/ Fri, 24 Apr 2020 01:25:20 +0000 http://princessleia.com/journal/?p=15364 Today we saw the release of Ubuntu 20.04 LTS!

Alongside the fanfare of a new server and desktop release for AMD64, and my own beloved Xubuntu, this new version walks in the path of 16.04 and 18.04 to be the third LTS to support the s390x mainframe architecture for IBM Z.

If you have been following my adventures over the past year, you’ll know that I’m just shy of my one-year anniversary at IBM, where I’ve been working on the IBM Z team to spread the word among open source communities about the mainframe. The epic hardware on these machines was definitely one of the hooks for me, but the big one was the amount of open source tooling that was being developed for them. The ability to run Linux on them sealed the deal. I wrote last week about some new hardware, and mentioned then that Ubuntu 20.04 supports the new Secure Execution technology for virtual machines.

So, what else is new for Ubuntu 20.04? At the top of my list would be improved support for the new IBM z15 hardware, released back in September. A number of changes made it into the 19.10 release, but 20.04 builds further upon this, especially around support for the compression and encryption features of the z15. Additionally, Subiquity is now the default installer for Ubuntu Server for s390x, which you can read more about here: A first glimpse at subiquity, the new server installer, now also on s390x.

This is just a taste of what is in store for users of Ubuntu on the mainframe. The list of major changes, along with the Launchpad bug/feature report numbers that tracked development throughout this cycle can be found over on the Ubuntu on the Big Iron blog in a post by Frank Heimes: A new Ubuntu LTS is available: Focal Fossa aka 20.04.

Finally, that fossa stuffed toy is mighty cute, right? You can have one too! With a donation to the World Wildlife Fund to “Adopt a Fossa.” Just keep it away from your lemur toys.

]]>
Ubuntu and the new IBM LinuxONE III LT2 https://princessleia.com/journal/2020/04/ubuntu-and-the-new-linuxone-iii-lt2/ Tue, 14 Apr 2020 13:32:39 +0000 http://princessleia.com/journal/?p=15289 Back in September I wrote about Ubuntu on the new LinuxONE III. For the release of this new mainframe, there were balloons, and cake, and we had a great time celebrating. With Shelter in Place orders spreading throughout the US, we don’t have cake this time, but we do have a new hardware release!

The IBM LinuxONE III LT2 follows in the footsteps of the initial release, with support for the great PCIe cards that the LT1 has, but aimed at the mid-range market. Most notably, that means it only comes in a single frame version (versus the option of up to four frames for the LT1), the processor cores run at 4.5 GHz instead of 5.2 GHz, and they are all air-cooled.

I wrote more about the hardware here: Inside the new IBM z15 T02 and LinuxONE III LT2.

What’s particularly notable here is that there’s an Ubuntu LTS release coming out next week. So, in addition to all the features that Ubuntu 19.10 has for the LinuxONE III, this new release will also have support for a new Trusted Execution Environment (TEE) for IBM Z, Secure Execution. If you’re interested in Secure Execution specifically, I wrote about that, too: Technical Overview of Secure Execution for Linux on IBM Z. For those who are curious, adaptations in the kernel, qemu, and s390-tools were made for Ubuntu 20.04 to support Secure Execution on both LinuxONE III models, as well as on Linux running on the IBM z15 and the new z15 T02.

I’m looking forward to the Ubuntu 20.04 LTS release next week and all the latest goodies that brings to the s390x mainframe platform. I’ll be doing an overview blog post next week, but keep an eye on the Ubuntu on Big Iron blog for an in-depth update of all the work that has gone into this LTS release.

]]>
Our upcoming Webinar on Security with Ubuntu and IBM Z https://princessleia.com/journal/2020/01/webinar-security-with-ubuntu-and-ibm-z/ Tue, 28 Jan 2020 18:26:22 +0000 http://princessleia.com/journal/?p=15174 My first interaction with the Ubuntu community was in March of 2005 when I put Ubuntu on an old Dell laptop and signed up for the Ubuntu Forums. This was just a few years into my tech career and I was mostly a Linux hobbyist, with a handful of junior systems administrator jobs on the side to do things like racking servers and installing Debian (with CDs!). Many of you with me on this journey have seen my role grow in the Ubuntu community with Debian packaging, local involvement with events and non-profits, participation in the Ubuntu Developer Summits, membership in the Ubuntu Community Council, and work on several Ubuntu books, from technical consultation to becoming an author on The Official Ubuntu Book.

These days I’ve taken my 15+ years of Linux Systems Administration and open source experience down a slightly different path: Working on Linux on the mainframe (IBM Z). The mainframe wasn’t on my radar a year ago, but as I got familiar with the technical aspects, the modernization efforts to incorporate DevOps principles, and the burgeoning open source efforts, I became fascinated with the platform.

As a result, I joined IBM last year to share my discoveries with the broader systems administration and developer communities. Ubuntu itself got on board with this mainframe journey with official support for the architecture (s390x) in Ubuntu 16.04, and today there’s a whole blog that gets into the technical details of features specific to Ubuntu on the mainframe: Ubuntu on Big Iron

I’m excited to share that I’ll be joining the author of the Ubuntu on Big Iron blog, Frank Heimes, live on February 6th for a webinar titled How to protect your data, applications, cryptography and OS – 100% of the time. I’ll be doing an introduction to the IBM Z architecture (including cool hardware pictures!) and general security topics around Linux on Z and LinuxONE.

I’ll then hand the reins over to Frank to get into the details of the work Canonical has done to take advantage of hardware cryptography functions and secure everything from network ports to the software itself with automatic security updates.

What I find most interesting about all of this work is how much open source is woven in. You’re not using proprietary tooling at the Linux level for things like encryption. As you’ll see from the webinar, at a low level Linux on Z uses dm-crypt and in-kernel crypto algorithms. At the user level, TLS/SSL is all implemented with OpenSSL and libcrypto. Even the libica crypto library is open source.

You can sign up for the webinar here, and you’ll have the option to watch it live or via on-demand replay: How to protect your data, applications, cryptography and OS – 100% of the time. You can also read the accompanying post on the Ubuntu blog. We’re aiming to make this technical and fun, so I hope you’ll join us!

]]>
Ubuntu on the new LinuxONE III https://princessleia.com/journal/2019/09/ubuntu-on-the-new-linuxone-iii/ Thu, 19 Sep 2019 09:00:30 +0000 http://princessleia.com/journal/?p=14989 A few months ago I visited the IBM offices in Poughkeepsie to sync up with colleagues, record an episode of Terminal Talk, and let’s be honest, visit some mainframes. A lot of assembly still happens in Poughkeepsie, and they have a big client center with mainframes on display, including several inside a datacenter that they give tours of. I was able to see a z14 in operation, as well as several IBM LinuxONE machines. Getting to tour datacenters is a lot of fun, and even though I wouldn’t have meaningful technical interactions with them, there’s something about seeing these massive machines that I work with every day in person that brings me a lot of joy.

Now I have to go back! On September 12th, the newest mainframe was announced, the IBM z15 and accompanying Linux version, the IBM LinuxONE III. To celebrate, I joined my colleagues in the IBM Silicon Valley lab for a launch event watch party and, of course, cake.

I wrote a more in-depth article about the hardware of this machine for work here: Inside the LinuxONE III. The key thing about it is that we’ve gone from two versions of the LinuxONE (Rockhopper II and Emperor II), to just one, but one that fits inside a 19” rack space like the Rockhopper II did and is expandable to up to four frames.

The processors run at 5.2 GHz each, and in a fully decked-out configuration one of these 4-frame systems can have up to 190 processors and 40 TB of RAM. It’s a massively powerful machine. Add in the on-chip crypto that we’ve come to know and love on the mainframe, and you have a really impressive data processing powerhouse.

Now, I was brought on to the Z Ecosystem team because of my background with Linux, both in the Ubuntu community and broader experience with distributed systems, including OpenStack and Apache Mesos. That’s because these mainframes don’t just run z/OS. The LinuxONE series of machines, the first of which was released in 2015, are exclusively Linux. Last week I wrote an article over on OpenSource.com about How Linux came to the mainframe, where I talk about how this came to be. This morning the second part of that article was published, Linux on the mainframe: Then and now, where I explore the formal entrance of major distributions into supporting the mainframe architecture. Ubuntu joined that fold with an announcement in 2016 that Ubuntu 16.04 had support for the mainframe (s390x architecture). Today, Ubuntu boasts the most s390x packages of all the officially supported distributions.

All recent releases of Ubuntu have supported s390x, so while they recommend the LTS releases, you can happily use Ubuntu 19.04 today to get the latest packages, and there are even more improvements in store for Ubuntu 19.10 coming out next month. When I chatted with Frank Heimes, who runs the Ubuntu on Big Iron blog (which you should totally check out!), he highlighted the following for me with regard to Ubuntu support:

  • Special emphasis is put on the kernel, KVM, hardware counters, and security, allowing one to make use of the z15 and LinuxONE III’s faster and more numerous processors, with new CPU capabilities and facilities, larger caches, and increased memory and I/O throughput
  • Support for hardware cryptography, which he talks about in this blog post and the associated whitepaper: Hardware cryptography with Ubuntu Server on IBM Z and LinuxONE
  • Support for deployments on LPAR, z/VM, KVM, LXD, Docker, and Kubernetes (CDK), with installation media available as ISO, cloud, or container images.

It was also interesting for me to learn that their MAAS KVM product has been built for s390x; for that I’ll point you to the Ubuntu on Big Iron blog once again, specifically one of Frank’s posts from this month: MAAS KVM on s390x: Cross-LPAR walk-through. There have also been collaborations in the works to create proofs of concept around security, including Digital Asset Custody Services (DACS), which you can explore in more detail in this article from August: Digital Asset Custody Services (DACS) aims to disrupt the digital assets market with a secured custody platform.

For Ubuntu, s390x isn’t just another checkbox architecture that’s being supported. Just like the other officially supported distributions, there are whole teams within Canonical who are spending time making thoughtful and innovative solutions that specifically target the power of the mainframe. The following is their Design Philosophy for Ubuntu Server on IBM Z and LinuxONE, via Frank’s Ubuntu Server for IBM Z and LinuxONE slide deck (4.2M PDF):

  • Expand Ubuntu’s ease of use to the s390x architecture (IBM Z and LinuxONE)
  • Unlock new workloads, especially in the Open Source, Cloud and Container space
  • Consequentially tap into new client bases
  • Exploit new features and components faster – in two ways:
    • hardware: zEC12/zBC12 and newer
    • software: latest kernels, compilers and optimized libraries
  • Provide parity with other architectures:
    • Release parity
    • Feature parity
  • Uniform user experience
  • Close potential gaps
  • Open source – is collective power in action
  • Upstream work and code only – no forks
  • Offer a radically new pricing approach (drawer-based pricing) but also an entry-level pricing based on the number of IFLs (up to 4 IFLs)
Of course we don’t have mainframes in our garages (even as an IBM employee, I’ve asked!). So as developers, our access is somewhat limited. However, that doesn’t mean you can’t build your Ubuntu .deb or snap for s390x! As I wrote about back in June, you can build your PPA for s390x with the click of a simple checkbox in the Launchpad UI for PPAs.

Similarly, you can also build snaps for the s390x architecture. These build systems reside on a mainframe that Canonical hosts in their datacenter, so you don’t even need access to a mainframe yourself to build for it.

But if you want to be extra sure your application runs on s390x, IBM has made a LinuxONE Community Cloud which gives users a VM running on a mainframe in New York for 120 days! You can try out your application on one of those, and then be confident it works when you submit it to the PPA or snap build system. Unfortunately the only OS options right now are SLES and RHEL, but Ubuntu support is in the works. Beyond this cloud, we’re also working to get an open source developer cloud launched, but in the meantime you can reach out to me directly (lyz@ibm.com) if you’re interested in some longer-lived VMs for your open source project, or generally want to talk about how you can get more VMs for testing, CI systems, and more.

If you had asked me a year ago to talk about mainframes, I would not have had much to say, but I’m really excited to be part of this story now. The machines themselves are impressive, the efforts that distributions like Ubuntu are putting into them are quite exceptional, and it’s really fun learning about a new architecture. And speaking of other architectures, s390x isn’t the only architecture Canonical works with IBM to provide support for. If you visit the Ubuntu on IBM partner page (which is worth checking out anyway), you’ll see there’s a lot of work being put in around POWER too.

]]>
Building a PPA for s390x https://princessleia.com/journal/2019/06/building-a-ppa-for-s390x/ Tue, 18 Jun 2019 14:59:47 +0000 http://princessleia.com/journal/?p=14817 About 20 years ago a few clever, nerdy folks got together and ported Linux to the mainframe (s390x architecture). Reasons included because it’s there, and other ones you’d expect from technology enthusiasts, but if you read far enough, you’ll learn that they also saw a business case, which has been realized today. You can read more about that history over on Linas Vepstas’ Linux on the IBM ESA/390 Mainframe Architecture.

Today the s390x architecture is not only officially supported by Ubuntu, Red Hat Enterprise Linux (RHEL), and SUSE Linux Enterprise Server (SLES), but there’s an entire series of IBM Z mainframes devoted to only running Linux: LinuxONE. At the end of April I joined IBM to lend my Linux expertise to working on these machines and spreading the word about them to my fellow infrastructure architects and developers.

Since s390x is its own architecture (not the x86 we’re accustomed to), compiled code needs to be recompiled in order to run on it. In the case of Ubuntu, the work has already been done to port a large chunk of the Ubuntu repository, so you can now run thousands of Linux applications on a LinuxONE machine. To make this possible, there’s a team at Canonical responsible for this port, and they have access to an IBM Z server to do the compiling.

But the most interesting thing to you and me? They also lend the power of this machine to support community members, by allowing them to build PPAs as well!

By default, Launchpad builds PPAs for i386 and amd64, but if you select “Change details” for your PPA, you’re presented with a list of other architectures you can target.

Last week I decided to give this a spin with a super simple package: A “Hello World” program written in Go. To be honest, the hardest part of this whole process is creating the Debian package, but you have to do that regardless of what kind of PPA you’re creating, and there are copious amounts of documentation on how to do that. Thankfully there’s dh-make-golang to help the process along for Go packages, and in no time I had a source package to upload to Launchpad.

From there it was as easy as clicking the “IBM System z (s390x)” box under “Change details” and the builds were underway, along with build logs. Within a few minutes all three packages were built for my PPA!

Now, mine was the simplest Go application possible, so coupled with the build success, I was pretty confident that it would work. Still, I hopped on my s390x Ubuntu VM and tested it.

It worked! But aren’t I lucky: as an IBM employee, I have access to s390x Linux VMs.

I’ll let you in on a little secret: IBM has a series of mainframe-driven security products in the cloud: IBM Cloud Hyper Protect Services. One of these services is Hyper Protect Virtual Servers, which is currently Experimental and you can apply for access. Once granted access, you can launch an Ubuntu 18.04 VM for free to test your application, or do whatever other development or isolation testing you’d like on a VM for a limited time.

If this isn’t available to you, there’s also the LinuxONE Community Cloud. It’s also a free VM that can be used for development, but as of today the only distributions you can automatically provision are RHEL and SLES. You won’t be able to test your deb package on these, but you can test your application directly on one of these platforms to be sure the code itself works on Linux on s390x before creating the PPA.

And if you’re involved with an open source project that’s more serious about a long-term, Ubuntu-based development platform on s390x, drop me an email at lyz@ibm.com so we can have a chat!

]]>