
July 2023

API Transition Timeline


Once upon a time, on a Thursday afternoon somewhere in Italy, a KernelCI backend API was created:

commit 08c9b0879ebe81463e124308192670c0e7447e0b
Author: Milo Casagrande <milo@ubuntu.com>
Date:   Thu Feb 20 16:10:41 2014 +0100

    First commit.

As you can see, it was nearly 10 years ago. How much does that represent in the modern software world? Of course, it depends. The Linux kernel is much older, still written in C and still going strong. But in most cases, including this particular one, it means a whole new world. The old API was written in Python 2.7, which stopped being maintained as a language on 1st January 2020. We could have just rewritten it in Python 3, which was the initial thought. But in the meantime, KernelCI was also growing as a project. It wasn’t just about building ARM kernels and doing boot testing on embedded dev boards any more. It had become a Linux Foundation project aiming to test the whole upstream kernel.

What is this new API?

Following this move, an increasing number of people became interested as the project came under the spotlight. That is when we started to realise that the architecture needed to fundamentally evolve in order to match the scale of the new mission it had been assigned. The Linux kernel is a vast and complex open-source project with a unique ecosystem. As such, it requires some unique tooling too. We all know that Git was initially created out of a need to manage all the kernel patches. Now KernelCI needs an automated testing tool tailor-made to its unique requirements – and that’s why we’re finally launching the new API & Pipeline.

It comes with lots of improvements and it’s still a work in progress. We’ll keep publishing blog posts and updating the documentation as things evolve over the next few months. Right now we have a pipeline that can monitor Git repositories for new revisions, take a snapshot of the kernel source tree as a tarball, run KUnit against it, build an x86 kernel and smoke-test it in QEMU. It can then also send a summary email and detect regressions. That’s basically enough to prove we have a workable system. Nothing too groundbreaking there, you might think. So, what’s all the fuss about?

In a nutshell: a Pub/Sub interface to orchestrate distributed client-side services that can be run anywhere. You could have your own too at home. Also: user accounts so you can keep your own personal test data there; an abstraction for runtime environments so jobs can be run seamlessly in Docker, Kubernetes, a local shell, LAVA, [insert your own system here]…; a new kci command-line tool to rule them all; and a unified Node schema to contain all the test data (revision, build, runtime test, regression…) in a tree. But again, we’ll go through all that later in more detail. It’s all based on requirements gathered from the community over the past few years.
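To make this a bit more concrete, here is a minimal sketch of what one of those distributed client-side services could look like in Python. The endpoint paths, node kinds and payload fields below are illustrative assumptions rather than the actual API definition:

    # Hypothetical sketch of a client-side KernelCI service.
    # The endpoint paths, the "checkout"/"kbuild" node kinds and the
    # payload fields are assumptions made for this example only.
    import requests

    API_URL = "https://example.kernelci.org/api"    # placeholder instance URL
    TOKEN = "my-api-token"                          # per-user account token
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}

    def listen_for_events(channel="node"):
        """Poll the Pub/Sub interface for events on a channel (long-polling sketch)."""
        while True:
            resp = requests.get(f"{API_URL}/listen/{channel}",
                                headers=HEADERS, timeout=60)
            resp.raise_for_status()
            yield resp.json()

    def handle_checkout(node):
        """React to a new kernel source tree snapshot by scheduling a build job."""
        job = {
            "kind": "kbuild",
            "parent": node["id"],          # keeps results linked in the Node tree
            "data": {"arch": "x86_64", "config": "defconfig"},
        }
        requests.post(f"{API_URL}/node", json=job, headers=HEADERS).raise_for_status()

    for event in listen_for_events():
        if event.get("kind") == "checkout" and event.get("state") == "available":
            handle_checkout(event)

The point is simply that a service like this only needs an API token and network access, so it can run in Docker, Kubernetes, a local shell or anywhere else.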

Timeline

The main message in this blog post is the timeline for retiring the old system and getting the new API in production. Here’s the proposal:

  • Early Access: Monday 4th September 2023
  • Production Deployment: Monday 4th December 2023
  • Legacy System Deprecation: Monday 4th March 2024
  • Legacy System Sunset: Monday 4th November 2024

It only takes four Monday-the-4th milestones to get through all of this. Here’s what they mean:

Early Access

This is when a new API & Pipeline instance becomes available for the public to experiment with. It can be seen as some form of beta testing. It will be deployed in the Cloud to evaluate how a real production instance would behave, but it will only be kept online on a best-effort basis. There should be frequent updates as the code evolves, probably at least weekly and at most daily. Only changes that have made it through early testing on the staging instance should be deployed, so it’s meant to be reasonably stable.

Production Deployment

The plan is to build upon the experience gained from the Early Access deployment to prepare a persistent instance that would eventually become the production one. Data should be carefully kept and backed up, changes in the database schema should go through managed migrations, and the API code should be deployed from tagged releases. As soon as this has become reliable enough, we might shut down the Early Access instance since it should have become redundant by then.
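As an illustration of what a managed schema migration could look like with the MongoDB engine behind the new API, here is a small sketch using pymongo. The collection names, the renamed field and the version bookkeeping are hypothetical; only the general approach matters:

    # Minimal sketch of an idempotent, versioned schema migration.
    # The collection names, the renamed field and the version tracking
    # are hypothetical; they are not the actual KernelCI schema.
    from pymongo import MongoClient

    MIGRATION_VERSION = 2

    client = MongoClient("mongodb://localhost:27017")   # placeholder URI
    db = client["kernelci"]

    def migrate():
        """Rename a field on all node documents and record the schema version."""
        current = db.meta.find_one({"_id": "schema"}) or {"version": 1}
        if current["version"] >= MIGRATION_VERSION:
            return  # already applied: migrations must be safe to re-run
        db.nodes.update_many({}, {"$rename": {"status": "result"}})
        db.meta.update_one({"_id": "schema"},
                           {"$set": {"version": MIGRATION_VERSION}},
                           upsert=True)

    migrate()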

Legacy System Deprecation

In other words, this is when the new API & Pipeline production instance becomes the official main KernelCI instance. We’ll first go through a transition phase to ramp up the build and test coverage on the new API while correspondingly reducing the load on the legacy system, to avoid overloading the shared infrastructure. Ideally, coverage should have reached 80% on the new API and 20% on the old one by this date.

Legacy System Sunset

After being deprecated, the legacy system will keep running with a bare minimum of coverage, just to ease the transition for the users who depended on it the most. It will be shut down for good and its data archived when the Sunset milestone is finally reached.

Stay tuned

These dates have been identified as realistic targets for rolling out the new API and retiring the old one, with a transition period in between. We’ll be aiming to have the new API in place by these dates and, conversely, to retire the legacy system no sooner than announced here.

In the meantime, we’ll be posting updates as these milestones are reached or if any changes to the plan need to be made. We’ll also clarify how to use the API and exactly which features become available, alongside the main documentation. Please share any feedback you may have, ask for clarifications or raise any concerns. The best way to do this is via the mailing list, as always. Stay tuned!

Request for Proposals: UX Analysis 2023 – Q&A


Following the UX Analysis RFP, we’ve received a number of questions which seem worth sharing publicly in order to equally benefit all the proposals we receive.

Big Picture

What are your organization’s most important broader goals with this new dashboard?

We identified the requirement for a web dashboard in a community survey we did a couple of years ago. It’s mostly about providing Linux kernel developers with the information they need to facilitate their daily workflows, and also serving other types of users, for example those who base their products on the upstream kernel and need to monitor its quality.

What are the biggest issues or problems you’re having with your current system that prompted this UX Analysis RFP?

We currently have a very old web dashboard with an associated backend that can’t be maintained any more. On top of that, the project has been growing, and we’re now redesigning the whole approach so it can scale better, with a new API which doesn’t have any actual web dashboard yet.

What factors made your team decide to release an RFP for this project?

None of the KernelCI LF project members had enough in-house expertise. Also, looking for an independent external supplier appeared to be an appropriate choice in this case.

Is there an incumbent bidder on this project?

No, this is the first RFP we have issued for UX Analysis and web development in general.

How will vendors be evaluated and scored?

We will come up with some criteria as a basic comparison method, then each member of the advisory board will look at all the proposals; we’ll discuss them and eventually hold a vote. We may also inquire further with some vendors if needed.

How many rounds of revisions and how many UX flows are you expecting as part of this project?

This is very hard to predict as we’re still in the early design stages. There should probably be a small number of both revisions and flows (e.g. 2 or 3); later on we may be dealing with incremental changes resulting in more revisions as part of the full implementation effort.

Do you have a preference regarding the vendor’s location?

No, there is no preference over the country where the vendor is located. The KernelCI project’s team is remotely distributed around the world.

Content

Would you need any copywriting or content migration services?

None that we’re currently aware of.

Would you need any original or stock videography or photography?

Not with the UX Analysis phase. We might need some original content for a final website in production.

How much content do you currently have on your website?

Our static websites have tens of pages. Our current dynamic web dashboard has millions of entries, and this is what we’re expecting to see covered by the UX Analysis. Some static content may be part of the interactive web dashboard but it’s not the primary goal.

Is there a CMS that you prefer over the others?

No, however we do have an API for storing our data. How this is turned into a UX and web dashboard is up to the vendor to decide as part of the proposal.

What CMS platform do you use currently?

For some static websites we use WordPress and Hugo, but this UX Analysis work is for an interactive web dashboard. We don’t have any particular CMS requirements for it.

Requirements

Would you require hosting, DNS or SSL services?

No, the KernelCI project can take care of this.

How much initial research has been done as part of this RFP?

A lot of research has been done in the past few years to try and understand what the public and users need. What’s missing is how this may translate into an actual UX. We now have some ideas about “what” we need but not “how” users can have it.

Are there any factors driving the timeline for the completion of the work?

We’re developing a new API which will be used hand-in-hand with the web dashboard. The timeline isn’t set in stone but having a prototype dashboard or basic demo around the end of September would be great. We’re thinking of having the new API in production in the first half of 2024 so it would be good to see the web dashboard getting finalised around that time too.

Can you give us a high-level overview of the demographics of each persona from the user stories?

  • “Someone who cares about the kernel” can be literally anyone, from a student to a high-profile maintainer or developer. The only real criterion is that they need to know about the upstream kernel code quality. There may be a million people in this category.
  • “Kernel / subsystem maintainers” are a relatively small set of people in charge of accepting changes into the kernel. They form some sort of pyramid of trust, with several maintainers sending their collected changes to a common maintainer and so on. Like any kernel contributors, they are located around the world and have various levels of experience. There are maybe about 100 subsystem maintainers and 1000 maintainers responsible for smaller areas of the code.

Do you have examples of the email reports that are sent with summaries of test results?

Are the test results currently stored in a database that the new web dashboard will visualize?

Yes, the new API has auto-generated documentation with an OpenAPI description. This is still a staging instance for experiments; we’re planning to roll out a production-like instance in the coming weeks and start refining the schema. Basically, all the test data is contained in a tree of Node objects. The underlying engine is MongoDB, and we’re looking into using Atlas for this. The API also features a Pub/Sub interface for events that trigger different stages of the testing pipeline on the client side.
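To give a rough idea of how a dashboard could consume that data, here is a minimal sketch that fetches Node objects from such an API and rebuilds the tree for display. The /nodes endpoint, query parameters and field names are assumptions made for this illustration; the actual schema is described by the API’s OpenAPI document:

    # Illustrative sketch of rebuilding the Node tree from API data.
    # The /nodes endpoint, query parameters and field names are assumptions
    # for this example, not the official schema.
    from collections import defaultdict
    import requests

    API_URL = "https://example.kernelci.org/api"    # placeholder instance URL

    def fetch_nodes(revision):
        """Fetch all nodes attached to a given kernel revision."""
        resp = requests.get(f"{API_URL}/nodes",
                            params={"revision": revision}, timeout=30)
        resp.raise_for_status()
        return resp.json()

    def build_tree(nodes):
        """Group nodes by parent id so a dashboard can render the hierarchy."""
        children = defaultdict(list)
        for node in nodes:
            children[node.get("parent")].append(node)
        return children

    def print_tree(children, parent=None, depth=0):
        for node in children.get(parent, []):
            print("  " * depth +
                  f"{node['kind']}: {node['name']} [{node.get('result', '-')}]")
            print_tree(children, node["id"], depth + 1)

    nodes = fetch_nodes("v6.5-rc1")
    print_tree(build_tree(nodes))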

The KCIDB database has a different schema, but the web dashboard wouldn’t necessarily need to read data from both sources. That’s something we still need to define; there are several ways to solve it. It’s also something which might depend on the outcome of the UX Analysis. There’s already an interim web dashboard for KCIDB based on Grafana.