Join us for the second online summit of KubeVirt contributors and users, Feb 16-17
About this event
KubeVirt Summit 2022 Day 2
The KubeVirt Project is holding its second Online Summit February 16-17, here on Community.CNCF.io. Join us to discuss KubeVirt development, share user experiences, learn about new features, and level up your project contributions.
Join us for this Summit! Please register for both DAY 1 and DAY 2
This is a CNCF Community event.
Thursday, February 17, 2022 2:00 PM – 7:00 PM UTC
KubeVirt Scale and Performance with SIG-Scale
Ryan Hallisey leads a discussion on:
In this session, we’ll talk about the work that's been going on in SIG-Scale: issues we’ve identified, improvements we’ve made, new tooling, new tests, new metrics, new features like VirtualMachinePools, and the work we’re planning for the future.
Benchmarking the performance of CPU pinning using different virtual CPU topologies: a KVM vs. KubeVirt analysis
Guoqing Li leads a session on: CPU pinning is well known to improve Virtual Machine (VM) performance. However, little is known about the performance of CPU pinning using different virtual CPU topologies in VMs, for example disabling or enabling virtual hyper-threads. Previous work listed some issues related to KubeVirt CPU pinning, showing a mismatch between the virtual and physical topology when using virtual hyper-threads. Recent fixes in KubeVirt have added new logic to perform CPU pinning based on the underlying CPU topology. In this session, we'll describe how to leverage this new logic to create KubeVirt VMs matching both virtual and physical CPU topology. As case studies, we will provide an experiment-focused view comparing simple KVM and KubeVirt VMs with different virtual CPU topologies and elaborate on their performance implications.
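The virtual topology the session refers to is expressed in the VirtualMachineInstance CPU spec. As a minimal illustrative sketch (the VM name, sizes, and disk image are placeholders, not taken from the talk), a VMI requesting dedicated CPUs with an explicit socket/core/thread layout might look like:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: pinned-vmi          # placeholder name
spec:
  domain:
    cpu:
      sockets: 1
      cores: 2
      threads: 2                  # virtual hyper-threads; set to 1 to disable them
      dedicatedCpuPlacement: true # requests CPU pinning via the node's CPU Manager
    resources:
      requests:
        memory: 2Gi
    devices:
      disks:
        - name: containerdisk
          disk:
            bus: virtio
  volumes:
    - name: containerdisk
      containerDisk:
        image: quay.io/kubevirt/cirros-container-disk-demo
```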
Delivering High-Performance VNF Workloads in KubeVirt: Navigating Network Acceleration for Low Latency Requirements
Pooja Ghumre will discuss: "VNF workloads running in virtual machines rely on the capability to leverage SRIOV and/or DPDK technologies to achieve the necessary performance. KubeVirt has support for the Multus CNI to attach secondary network interfaces to virtual machines in addition to the default pod network. It can work with either SRIOV CNI or UserSpace CNI (for OVS-DPDK). DPDK support in KubeVirt is still not available upstream, but we were able to verify it using the prior work done by Saravanan KR to add a vhostuser implementation to KubeVirt. Setting up cluster nodes to run such VNF workloads can be a little overwhelming, as it includes a number of configuration steps for the GRUB config, CPU manager, huge pages, driver binding for physical functions, etc. Once this is done, one needs to add specific attributes to the virtual machine YAML for creating a VM with SRIOV/DPDK network interfaces. The goal of this talk is to present all these details around how to run a KubeVirt VM for such VNFs."
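As a hedged illustration of the VM attributes the abstract mentions (the VM name, memory sizes, and the NetworkAttachmentDefinition name are placeholders, not from the talk), a VirtualMachineInstance with a secondary SR-IOV interface, huge pages, and dedicated CPUs might look like:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vnf-vmi             # placeholder name
spec:
  domain:
    cpu:
      dedicatedCpuPlacement: true  # relies on the node's CPU Manager static policy
    memory:
      hugepages:
        pageSize: 1Gi              # requires huge pages pre-allocated on the node
    devices:
      interfaces:
        - name: default
          masquerade: {}           # default pod network
        - name: sriov-net
          sriov: {}                # secondary interface backed by an SR-IOV VF
    resources:
      requests:
        memory: 4Gi
  networks:
    - name: default
      pod: {}
    - name: sriov-net
      multus:
        networkName: sriov-network # placeholder NetworkAttachmentDefinition
```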
Extending Kube-Burner to Support KubeVirt CRDs: An Open Source Benchmark Suite for Kubernetes Control Plane Analysis
Marcelo Amaral presents: "The popularity of VMs in Kubernetes is growing, sparking great interest in improving performance and scalability in the KubeVirt community. While it is well known that plain Kubernetes can safely scale to clusters of up to 5k nodes, no such guarantees exist for third-party add-ons such as custom resource definitions (CRDs). Therefore, little is known about benchmarking CRDs, and KubeVirt is built on CRDs. Hence, this talk presents the process we went through to introduce support for KubeVirt CRDs in kube-burner: wait for the ready condition, collect detailed latency information, and collect a set of well-defined Prometheus metrics for in-depth performance analysis."
Automatic configuration of mediated devices / vGPUs in KubeVirt
Vladik Romanovsky will share: "This session will present the recently added ability of KubeVirt to automatically configure and consume mediated devices / vGPUs. So far, KubeVirt has been able to discover and allocate mediated devices (vGPUs); however, it was a cluster administrator's task to pre-create these devices on each node. Recently added functionality simplifies this work: administrators can now provide a list of desired device types for KubeVirt to automatically create the relevant devices on nodes that can support them. We will also discuss some of the possible configuration options."
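The administrator-facing configuration described above lives in the KubeVirt custom resource. As an illustrative sketch (the mdev type names, selector, and resource name are examples, not taken from the session):

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    mediatedDevicesConfiguration:
      mediatedDevicesTypes:        # desired mdev types; KubeVirt creates them on capable nodes
        - nvidia-222               # example type
    permittedHostDevices:
      mediatedDevices:
        - mdevNameSelector: "GRID T4-1B"        # example selector
          resourceName: "nvidia.com/GRID_T4-1B" # resource name exposed to VMs
```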
Volume Populator Support
Michael Henriksen will lead a discussion of: Volume Populators are set for GA in 1.24. Let's discuss the current plan to support them and any potential changes to the existing DataVolume API. Feedback will be appreciated!
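For context, volume populators hook into Kubernetes through the PVC `dataSourceRef` field, which points at a populator-owned custom resource. A minimal generic sketch, with a purely hypothetical populator API group and kind (this is not the DataVolume API change under discussion):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: populated-pvc
spec:
  dataSourceRef:                     # a populator watches PVCs referencing its kind
    apiGroup: example.populator.io   # hypothetical populator API group
    kind: ImageSource                # hypothetical populator kind
    name: my-image
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```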
KubeVirt Performance Visualization at Nvidia
Qian Xiao will share how the GeForce Now workload applies pressure to the KubeVirt control plane and how it’s reflected on our dashboards:
1. Important indicators (average/95th percentile) of phase transition time displayed on the top dashboard for easy reference
2. VM creation breakdown: Pending/Running/Scheduling/Scheduled
3. VM cleanup breakdown: Time to reach succeeded/failed phase
4. Heatmap to visualize the dynamics of VMs in each phase
5. Scenario 1: GeForce Now load test
6. Scenario 2: NGN production environment
Fabian Deutsch will close the Summit by discussing the future of KubeVirt