
Upstream Contribution – Give Up or Double Down?

Ever since I’ve been involved with OpenStack, people have been complaining that upstream is hard. The number one complaint is that it takes forever to get patches merged. I thought I’d take a look at some data and attempt to visualize it. I wrote some code that accepts an OpenStack project and a list of contributors and spits out a bunch of graphs. For example:

  • How long does it take to merge patches in a given project over time? Looking back, did governance changes affect anything?
  • Is there a correlation between the size of a patch and the length of time it takes to merge it? (Spoiler: The answer is… Kind of)
  • Looking at an author: Does the time to merge patches trend down over time?
  • Looking at the average length of time it takes to merge patches per author, what does it look like when we graph it as a function of the author’s number of patches? Reviews? Emails? Bug reports? Blueprints implemented?

The data suggests answers to many of those questions and more.

Here’s the code for you to play with:

https://github.com/assafmuller/gerrit_time_to_merge
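
If you’re curious how it works without diving into the repo, here’s the gist as a minimal sketch (this is not the actual script, and the project and owner below are placeholders): query Gerrit’s REST API for a contributor’s merged changes and subtract each change’s created timestamp from its submitted timestamp.

    # Minimal sketch: time-to-merge per change via Gerrit's REST API.
    # The endpoint and the "created"/"submitted" fields are standard Gerrit;
    # the project and owner are placeholders.
    import datetime
    import json

    import requests

    GERRIT = 'https://review.openstack.org'
    TIME_FORMAT = '%Y-%m-%d %H:%M:%S.%f'


    def days_to_merge(project, owner, limit=100):
        query = 'status:merged project:{} owner:{}'.format(project, owner)
        resp = requests.get(GERRIT + '/changes/',
                            params={'q': query, 'n': limit})
        # Gerrit prepends ")]}'" to its JSON responses to defeat XSSI.
        changes = json.loads(resp.text[4:])
        deltas = []
        for change in changes:
            # Timestamps look like '2016-07-25 12:34:56.000000000';
            # truncate to microseconds so strptime can parse them.
            created = datetime.datetime.strptime(change['created'][:26],
                                                 TIME_FORMAT)
            merged = datetime.datetime.strptime(change['submitted'][:26],
                                                TIME_FORMAT)
            deltas.append((merged - created).days)
        return deltas


    days = days_to_merge('openstack/neutron', 'someone@example.com')
    print('Average days to merge: %.1f' % (sum(days) / float(len(days))))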

And some conclusions in the slides embedded below:

https://docs.google.com/presentation/d/17l720kXrAHJC9_gU81nuIGzJCosralsbtbHGUuXgaxk/edit?usp=sharing

Here are a few resources about effective upstream contribution. It’s all content written by and for the Neutron community, but it’s applicable to any OpenStack project.


Actionable CI

I’ve observed a persistent theme across valuable and successful CI systems: their results are actionable.

A CI system for a project as complicated as OpenStack requires a staggering amount of energy to maintain and improve. Oftentimes the responsible parties are focused on keeping it green and are buried under a mountain of continuous failures, legitimate or otherwise. So much so that they don’t have time to focus on the following questions:

  1. How do you determine that a job failed?
  2. How are the results presented to the relevant developers?
  3. Can developers do anything about a failure?

To put it in concrete terms, let’s take a look at how Rally is used in upstream jobs. This is not a criticism of the Rally project itself, which I’m a big fan of, but rather of how it’s used upstream. It uses the standard upstream CI infrastructure, which is a miracle of engineering when it comes to correctness tests. The infrastructure spins up VMs from a node pool comprised of many clouds. It then uses devstack-gate and devstack to install OpenStack and runs several Rally scenarios. When the result of a CI run is a simple pass or fail, the variance in hardware and congestion levels is irrelevant. However, when you’re trying to measure performance, variance matters. You can try setting a maximum and declaring any result over it a failure, but with a sufficiently large variance, setting up SLAs is an exercise in futility.
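
Here’s a toy simulation of that argument (this is not Rally’s actual SLA mechanism, just an illustration with made-up numbers): put a hard threshold on top of noisy measurements and the exact same code randomly passes or fails.

    # Toy illustration: a fixed "SLA" threshold over noisy measurements.
    # The numbers are hypothetical; the code under test never changes,
    # yet a large chunk of runs "fail" purely because of jitter.
    import random

    random.seed(42)

    SLA_MAX_SECONDS = 5.0   # hypothetical per-iteration limit
    TRUE_AVERAGE = 4.9      # the code's real performance, held constant
    NOISE = 0.5             # jitter from shared, virtualized infrastructure

    failures = 0
    for run in range(100):
        measured = random.gauss(TRUE_AVERAGE, NOISE)
        if measured > SLA_MAX_SECONDS:
            failures += 1

    print('%d out of 100 runs "failed" with zero code changes' % failures)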

Let’s look at a recent Linux Bridge change [1] that cannot impact Rally results (the Rally job is set up to run against Open vSwitch). Consecutive runs would ideally show the same results. However, looking at the results of patchsets 10, 11, 13 and 14, we can see that the total length of the job ranges from 60 to 83 minutes. The full duration of the create_and_list_ports flow ranges from 1517 seconds to 1887 seconds. The average for a single create_and_list_ports execution ranges from 4.58s to 5.29s. What am I supposed to do with the results of the next run? What can I learn from them? I’d argue: nothing. The results of the job are not actionable. The consequence is that the job has been non-voting ever since its introduction and, worse yet, none of the engineers I work with look at its results.
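
To put that spread in perspective, here is the relative variance computed from the numbers above:

    # Relative spread of the quoted measurements, as a fraction of the best run.
    measurements = [('job duration (minutes)', 60, 83),
                    ('create_and_list_ports total (seconds)', 1517, 1887),
                    ('create_and_list_ports average (seconds)', 4.58, 5.29)]
    for name, low, high in measurements:
        print('%s: %.0f%% spread' % (name, 100.0 * (high - low) / low))

That’s a 38% spread on the job duration and a 24% spread on the scenario itself, for a change that doesn’t touch the code being measured.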

The next step would be to give up on the idea of gating or blocking performance regressions and instead detect them after the fact. We can do that by persisting historical results, graphing them and spotting trends. It’s clear that with a variance this large, those results would not be actionable either. To demonstrate this, let’s turn to the fantastic openstack-health project. Looking at the Neutron API test with the longest average run time [2], we can see that at the time of writing, the test ran 249 times in the past month, so we get a great sample size. However, the run time graph looks like a Jackson Pollock painting, with a minimum of just under 5s and a maximum of just over 9s. Looking at the graph, it’s clear we can’t clean up the data via statistical jiu-jitsu either. When consistency matters, I don’t think you can get around a dedicated bare metal setup.

[Figure: run time graph for the Neutron API test, from openstack-health]
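
For the sake of argument, here’s a naive after-the-fact detector (my own sketch, not something openstack-health provides): flag a regression when the recent average run time exceeds the historical baseline by some margin. With noise like the graph above, the margin has to be so wide that a genuine 25% slowdown goes unnoticed.

    # Sketch of after-the-fact regression detection over persisted run times.
    # The margin needed to avoid false alarms with 5s-9s noise also hides
    # real regressions.
    def looks_like_regression(history, recent, margin=1.5):
        baseline = sum(history) / float(len(history))
        current = sum(recent) / float(len(recent))
        return current > baseline * margin

    # Run times in the same ballpark as the openstack-health graph above.
    history = [5.1, 8.9, 6.2, 5.0, 7.8, 8.5, 5.3, 6.9, 7.1, 5.6]
    recent = [8.8, 8.2, 7.9, 8.6]   # roughly 26% above the baseline average
    print(looks_like_regression(history, recent))   # prints False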

The Gerrit interface does a great job of presenting CI results, and a failing voting job forces developers to look at its results. However, I don’t know many engineers who look at CI results as a form of amusement. Post-merge and periodic CI runs into these issues: they burn your favorite form of fossil fuels and drain the life force of the fine folks who maintain them, but the results are often not presented in a consumable manner. Running the tests reliably is as important as making sure the intended audience is aware of the results. One solution could be to make sure the relevant developers subscribe to a mailing list, and to trigger a mail on failures after filtering out distracting infrastructure issues. Periodic CI can only be valuable if it’s actionable, and if developers are held accountable and respond to failures with persistent urgency.
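
As a sketch of what that could look like (the result fields, failure signatures and addresses below are made up for illustration):

    # Sketch: notify the team about a periodic job failure only when it's
    # actionable. The infrastructure signatures and addresses are made up.
    import smtplib
    from email.mime.text import MIMEText

    INFRA_SIGNATURES = ('Timed out waiting for a node',
                        'Could not resolve host',
                        'No space left on device')


    def notify_if_actionable(job_name, succeeded, console_log,
                             to_addr='neutron-periodic-ci@example.org'):
        if succeeded:
            return
        if any(sig in console_log for sig in INFRA_SIGNATURES):
            return  # infrastructure noise, nothing a developer can act on
        msg = MIMEText('Periodic job %s failed. Log tail:\n%s'
                       % (job_name, console_log[-2000:]))
        msg['Subject'] = '[periodic-ci] %s failed' % job_name
        msg['To'] = to_addr
        smtplib.SMTP('localhost').sendmail('ci@example.org', [to_addr],
                                           msg.as_string())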

[1] https://review.openstack.org/#/c/346377/
[2] http://status.openstack.org/openstack-health/#/test/neutron.tests.tempest.api.test_auto_allocated_topology.TestAutoAllocatedTopology.test_get_allocated_net_topology_as_tenant?resolutionKey=day&duration=P1M



I’m looking for someone to join my team and work on OpenStack networking and service function chaining

EDIT: The position has been filled, thank you everyone.

I lead a globally distributed engineering team at Red Hat, working on OpenStack’s networking projects. I’m looking for someone to be a part of the team with a focus on the SFC project. The candidate will:

  • Become the subject matter expert on all matters SFC
  • Review code
  • Participate in upstream development
  • Resolve bugs
  • Draft design documents
  • Implement features
  • Lead and participate in design discussions
  • Attend conferences
  • Improve the project’s testing infrastructure
  • Own the project’s RPM packaging
  • Resolve customer issues

If you want to do open source, Red Hat is objectively where it’s at. We have an institutional culture of open source at all levels, and this has a ripple effect on your day-to-day work and your career at the company. You will work with a talented, autonomous, empowered and passionate team of people with a healthy work/life balance.

The ideal candidate is familiar with cloud, networking, Linux, Python, open source, or some combination of the above. You may work from home or from one of our offices listed here: redhat.com/en/jobs/locations.

Please email me CVs at assaf@redhat.com.


“But I’m not a networking person!”

I hear that a lot, as if networking were this insurmountable mountain you could not possibly climb. Here’s how you become a networking person: You go to bed after a full day of work, overworked and frustrated with networking lingo. You have the craziest dream! It’s filled with streams of binary numbers, but you’re somehow able to convert them to ASCII instantly. You suddenly not only know that the first half of a MAC address designates the vendor of the NIC, you somehow also know that Qumranet‘s is 00:1A:4A. The seven-layer model appears before you, every layer stacked upon the one before it like some perfectly formed Jenga tower from Cisco’s version of hell. Just as you’re breaking a sweat (this is getting too weird, you think), you wake up. Suddenly, Richard M. Stallman comes back from wherever it is he’s hanging around these days, networking cable in hand. He lays it on one shoulder, and then the other, like some sort of perverted knighthood ceremony. You wake up from this dream within a dream and whisper: “I know networking.”

That’s how it usually works, anyway. Other people are not so lucky and have to learn the basics like they learn anything else: Read a book, then another one. Meet with some people, ask some questions. Practice it, learn it on the job, wing it. It’ll be alright.


New Neutron testing guidelines!

Yesterday we merged https://review.openstack.org/#/c/245984/ which adds content to the Neutron testing guidelines:

http://docs.openstack.org/developer/neutron/devref/development.environment.html#testing-neutron

The document details Neutron’s different testing infrastructures:

  • Unit
  • Functional
  • Fullstack (Integration testing with services deployed by the testing infra itself)
  • In-tree Tempest

The new documentation provides:

  • Advantages and use cases for each testing framework
  • Examples
  • Do’s and don’ts
  • Good and bad usage of mock
  • The anatomy of a good unit test

It’s short – I encourage developers to go through it. Reviewers may save time by linking to it when testing anti-patterns pop up.
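
To give a flavor of the mock anti-pattern the document calls out, here’s a toy example of my own (not lifted from the devref): mocking the code under test proves nothing, while mocking only the external dependency lets you assert on actual behavior.

    # Toy example: bad vs. good usage of mock. Not taken from the devref.
    import unittest

    import mock


    def get_hostnames(client):
        return [server['hostname'] for server in client.list_servers()]


    class TestBadMockUsage(unittest.TestCase):
        def test_get_hostnames(self):
            # Anti-pattern: the function under test is mocked out, so the
            # test passes no matter what the real code does.
            with mock.patch(__name__ + '.get_hostnames', return_value=['a']):
                self.assertEqual(['a'], get_hostnames(None))


    class TestGoodMockUsage(unittest.TestCase):
        def test_get_hostnames(self):
            # Mock only the external dependency and assert on real behavior.
            client = mock.Mock()
            client.list_servers.return_value = [{'hostname': 'a'},
                                                {'hostname': 'b'}]
            self.assertEqual(['a', 'b'], get_hostnames(client))


    if __name__ == '__main__':
        unittest.main()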

Enjoy, I hope you’ll find it useful.


Neutron in-tree integration tests

It’s time for OpenStack projects to take ownership of their quality. Introducing in-tree, whitebox, multinode, simulated integration testing. A lot of people put in a lot of work over the last few months to make it happen.

http://docs.openstack.org/developer/neutron/devref/fullstack_testing.html

We plan on adding integration tests for many of the more evolved Neutron features over the coming months.
