Mar 29, 2023, 2:00 PM – Mar 30, 2023, 7:00 PM (UTC)
Virtual event
Join us for the third online summit of KubeVirt contributors and users, March 29-30
About this event
**Update**
This event page is now working, so we will use it for Day 2. We will also start at 13:00 with the two sessions we missed yesterday. The schedule below has been updated to reflect this.
The KubeVirt Project is holding its third online Summit on March 29-30, here on Community.CNCF.io.
Join us to discuss KubeVirt development, share user experiences, learn about new features, and level up your project contributions. Check out the preliminary agenda for both days for more details on topics covered.
This is a CNCF Community event.
When
March 29 – 30, 2023 2:00 PM – 7:00 PM (UTC)
Agenda
Applying Parallel CI testing on Arm64, by Haolin Zhang
We have recently enabled parallel CI testing on an Arm64 server. Because the current Arm64 server does not support nested virtualization, we use the kind platform to run the tests. In this session, I will show how we run the CI tests in a kind environment and what issues we encountered when enabling parallel testing.
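To make the kind-based setup more concrete, here is a minimal sketch of running an e2e suite against a kind cluster in parallel. The cluster name, config file, and test command are illustrative placeholders, not KubeVirt's actual CI scripts.

```python
# Minimal sketch: create a kind cluster (no nested virtualization required,
# since kind nodes are containers), run tests in parallel, then clean up.
# Cluster name, config path, and test command are hypothetical placeholders.
import subprocess

CLUSTER = "kubevirt-arm64-ci"       # hypothetical cluster name
KIND_CONFIG = "kind-config.yaml"    # hypothetical kind config (node image, mounts, ...)

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main():
    run(["kind", "create", "cluster", "--name", CLUSTER, "--config", KIND_CONFIG])
    try:
        # Run the suite with Ginkgo's parallel workers ("-procs" in Ginkgo v2).
        run(["ginkgo", "-procs=4", "./tests/..."])
    finally:
        # Always tear the cluster down so CI workers stay clean.
        run(["kind", "delete", "cluster", "--name", CLUSTER])

if __name__ == "__main__":
    main()
```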
Squash the flakes! - how does the flake process work? What tools do we have? How do we minimize the impact? by Daniel Hiller
Flakes, i.e. tests that do not behave deterministically and sometimes fail and sometimes pass, are an ever-recurring problem in software development. This is especially the sad reality when running e2e tests, where a lot of components are involved. There are various reasons why a test can be flaky, but the impact can be as severe as CI being loaded beyond capacity, causing overly long feedback cycles or even users losing trust in CI itself. We want to remove flakes as fast as possible to minimize the number of retests required. This should shorten time to merge, reduce CI user frustration, and improve trust in CI, while at the same time decreasing the overall load on the CI system. We start by generating a report of tests that failed at least once inside a merged PR; since those PRs did merge, every test ultimately succeeded, which means the recorded failures came from flaky runs inside CI. We then go over the report to separate flakes from real issues and forward the flakes to the dev teams. As a result, retest numbers have gone down significantly over the last year. After attending the session you will have an idea of what our flake process is, how we exercise it, and what the actual outcomes are.
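As a hedged illustration of the report step described above: given per-test results from CI runs of PRs that eventually merged, any test with at least one failure is a flake candidate. The input format and field names below are invented for illustration and are not KubeVirt's actual tooling.

```python
# Illustrative-only flake report; the data format is hypothetical.
from collections import defaultdict

# Test results from CI runs of PRs that eventually merged. Because the PRs
# merged, every test passed in the end, so recorded failures are flake candidates.
merged_pr_runs = [
    {"pr": 9001, "test": "vmi_lifecycle", "failures": 2, "runs": 5},
    {"pr": 9001, "test": "storage_hotplug", "failures": 0, "runs": 5},
    {"pr": 9002, "test": "vmi_lifecycle", "failures": 1, "runs": 3},
]

def flake_report(runs):
    """Aggregate failure counts per test across merged PRs, flakiest first."""
    totals = defaultdict(lambda: {"failures": 0, "runs": 0, "prs": set()})
    for r in runs:
        entry = totals[r["test"]]
        entry["failures"] += r["failures"]
        entry["runs"] += r["runs"]
        if r["failures"]:
            entry["prs"].add(r["pr"])
    return sorted(
        ((test, e["failures"] / e["runs"], sorted(e["prs"]))
         for test, e in totals.items() if e["failures"]),
        key=lambda item: item[1],
        reverse=True,
    )

for test, rate, prs in flake_report(merged_pr_runs):
    print(f"{test}: failure rate {rate:.0%} across PRs {prs}")
```

The real process then hands this triaged list to the owning dev teams, keeping retest counts (and CI load) down.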
Scaling KubeVirt reach to legacy virtualization administrators and users by means of KubeVirt-Manager, by Marcelo Feitoza Parisi
KubeVirt-Manager is an open source initiative that aims to democratize KubeVirt usage and scale KubeVirt's reach to legacy virtualization administrators and users by delivering a simple, effective, and friendly web user interface for KubeVirt, built with technologies like AngularJS, Bootstrap, and embedded noVNC. By providing a simple web user interface, KubeVirt-Manager effectively eliminates the need to write and manage complex Kubernetes YAML files. KubeVirt-Manager also uses Containerized Data Importer as a backend for general DataVolume management tasks such as provisioning, creation, and scaling.
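For a sense of the kind of manifest a UI like this hides, here is a hedged sketch that builds a CDI DataVolume and hands it to kubectl. The name, image URL, and size are placeholders, and the fields follow the cdi.kubevirt.io/v1beta1 API as generally documented, not anything generated by KubeVirt-Manager itself.

```python
# Sketch of a CDI DataVolume that imports a disk image over HTTP.
# Name, URL, and size are placeholders; requires a cluster with CDI installed.
import json
import subprocess

def datavolume_manifest(name, image_url, size):
    return {
        "apiVersion": "cdi.kubevirt.io/v1beta1",
        "kind": "DataVolume",
        "metadata": {"name": name},
        "spec": {
            "source": {"http": {"url": image_url}},
            "pvc": {
                "accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": size}},
            },
        },
    }

dv = datavolume_manifest(
    "fedora-root-disk",                          # placeholder name
    "https://example.com/images/fedora.qcow2",   # placeholder image URL
    "10Gi",
)

# kubectl accepts JSON as well as YAML on stdin.
subprocess.run(["kubectl", "apply", "-f", "-"],
               input=json.dumps(dv).encode(), check=True)
```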
How Killercoda works with KubeVirt, by Meha Bhalodiya & Adam Gardner
By using KubeVirt in conjunction with Killercoda, users can take advantage of virtualization while still benefiting from Kubernetes. This provides a powerful and flexible platform for running VMs, helps simplify VM management, and improves the performance and security of the platform. Integrating virtualization technology with Kubernetes allows customers to easily manage and monitor their VMs while taking advantage of the scalability and self-healing capabilities of Kubernetes. With Killercoda, users can create custom virtual networks, use firewalls and load balancers, and even establish VPN connections between VMs and other resources.
DPU Accelerated Networking for KubeVirt Pods, by Girish Moodalbail
NVIDIA's BlueField-2 data processing unit (DPU) delivers a broad set of hardware accelerators for software-defined networking, storage, and security. In this talk we will focus on SDN and discuss:
1. How we have implemented network virtualization to provide network isolation between KubeVirt pods
2. How we have pushed the network virtualization control plane from the Kubernetes node to the DPU, in a "bump-in-the-wire" model
3. How we have implemented multi-homed networks for KubeVirt pods (see the sketch after this list)
4. How we have leveraged the OVN/OVS SDN managed by the OVN-Kubernetes CNI to achieve the features above
5. How we have accelerated the datapath with the DPU's ASAP2 (Accelerated Switching and Packet Processing) technology, which lets us achieve high-throughput, low-latency traffic flows while providing wire-speed support for firewalling, NAT (SNAT/DNAT), forwarding, QoS, and more
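As a rough illustration of multi-homing (item 3), a KubeVirt VM can get a secondary interface backed by a Multus NetworkAttachmentDefinition in addition to the default pod network. The fragment below uses standard KubeVirt interface/network fields; the network name "dpu-ovn-net" is a placeholder and this is not NVIDIA's actual configuration.

```python
# Illustration only: VMI spec fragment with a default pod interface plus a
# secondary interface attached via Multus. "dpu-ovn-net" is a placeholder
# NetworkAttachmentDefinition that, in a DPU setup, would be backed by the
# OVN/OVS SDN offloaded to the BlueField-2.
import json

vmi_network_fragment = {
    "domain": {
        "devices": {
            "interfaces": [
                {"name": "default", "masquerade": {}},   # primary pod network
                {"name": "secondary", "bridge": {}},     # extra NIC for the VM
            ]
        }
    },
    "networks": [
        {"name": "default", "pod": {}},
        {"name": "secondary", "multus": {"networkName": "dpu-ovn-net"}},
    ],
}

print(json.dumps(vmi_network_fragment, indent=2))
```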
Case Study: Upgrading KubeVirt in production, by Alay Patel
NVIDIA recently upgraded KubeVirt in production from 0.35 to 0.50. This talk will discuss the challenges we faced and the lessons learned, and then cover ongoing work in the community (changes to the release cadence, discussions about API stability, etc.) aimed at making upgrades better.
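For context on the mechanics involved (not necessarily NVIDIA's procedure), an operator-managed KubeVirt deployment is typically rolled forward by pointing the KubeVirt custom resource at a newer release. The sketch below patches spec.imageTag with kubectl; the namespace, resource name, and target version are placeholders.

```python
# Hedged sketch of one way to upgrade a KubeVirt control plane: patch the
# KubeVirt CR's spec.imageTag and let virt-operator reconcile the components.
# Namespace, resource name, and target version are placeholders.
import json
import subprocess

NAMESPACE = "kubevirt"
TARGET_VERSION = "v0.50.0"   # placeholder target release

patch = json.dumps({"spec": {"imageTag": TARGET_VERSION}})
subprocess.run(
    ["kubectl", "patch", "kubevirt", "kubevirt", "-n", NAMESPACE,
     "--type=merge", "--patch", patch],
    check=True,
)

# Check what version the operator has rolled out (field name per the
# KubeVirt CR status as commonly documented).
subprocess.run(
    ["kubectl", "-n", NAMESPACE, "get", "kubevirt", "kubevirt",
     "-o", "jsonpath={.status.observedKubeVirtVersion}"],
    check=True,
)
```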
(Tutorial) Cloud Native Virtual Dev Environments, by Hippie Hacker & Jay Tihema
Want to develop in the cloud with your friends? We'll walk you through a demo of using Coder with templates that use KubeVirt and CAPI to create on-demand shared development environments, each hosted within its own cluster. Something you can host at home or in the cloud!