KubeVirt Live Migration and SR-IOV | In virtualized environments, live migration is a tool you want in your toolbox, especially in production. It enables you to improve your services' availability and drastically reduce recovery time.
KubeVirt live migration now supports VMs connected to SR-IOV NICs. In this session we will discuss why and how to use this feature for VMs with SR-IOV NICs. |
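As background, here is a minimal sketch of a migratable VirtualMachineInstance with an SR-IOV interface; the NetworkAttachmentDefinition name sriov-network and the other names are illustrative assumptions:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vmi-sriov
spec:
  evictionStrategy: LiveMigrate     # allow this VMI to be live-migrated
  domain:
    devices:
      interfaces:
        - name: default
          masquerade: {}            # pod network
        - name: sriov-net
          sriov: {}                 # SR-IOV VF passed through to the guest
    resources:
      requests:
        memory: 1Gi
  networks:
    - name: default
      pod: {}
    - name: sriov-net
      multus:
        networkName: sriov-network  # illustrative NetworkAttachmentDefinition
```

A migration itself is then requested by creating a VirtualMachineInstanceMigration object that names the VMI.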
Moving oVirt and VMware VMs to KubeVirt with VM Import Operator and Forklift | VM Import Operator (VMIO) allows Kubernetes administrators to easily import their oVirt- and VMware-managed virtual machines into KubeVirt.
Konveyor's Forklift is a project that leverages VMIO to provide a user interface for large-scale migrations, introducing the concept of a migration plan and implementing inventory and validation services.
In this talk, the speakers will explain the design of the VM Import Operator and how it can be used to import virtual machines into KubeVirt. Afterwards, the speakers will show how Forklift uses VMIO to deliver a better user experience while importing virtual machines into KubeVirt. |
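As a rough illustration, an import is driven by a VirtualMachineImport custom resource. The sketch below follows the shape of the VMIO examples for an oVirt source; the secret name, target name, and VM UUID are placeholders, and fields may vary between VMIO versions:

```yaml
apiVersion: v2v.kubevirt.io/v1beta1
kind: VirtualMachineImport
metadata:
  name: example-ovirt-import
spec:
  providerCredentialsSecret:
    name: ovirt-credentials      # Secret with the oVirt API URL and credentials
  targetVmName: imported-vm      # name of the resulting KubeVirt VirtualMachine
  source:
    ovirt:
      vm:
        id: 11111111-2222-3333-4444-555555555555   # placeholder oVirt VM UUID
```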
KubeVirt opinionated deployment via Hyperconverged Cluster Operator | How to deploy KubeVirt and several adjacent operators with ease.
The HyperConverged Cluster Operator (HCO) is a unified operator deploying and controlling KubeVirt and several adjacent operators (a minimal deployment sketch follows the list):
Containerized Data Importer
Scheduling, Scale and Performance
Cluster Network Addons
Node Maintenance
|
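As a minimal sketch, once HCO is installed the entire stack is driven by a single HyperConverged custom resource; the name and namespace below are the conventional defaults and may differ in your installation:

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: kubevirt-hyperconverged
spec: {}   # an empty spec deploys KubeVirt and the adjacent operators with defaults
```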
Privilege dropping, one capability at a time | KubeVirt's architecture is composed of two main components: virt-handler, a trusted DaemonSet running on each node that operates as the virtualization agent, and virt-launcher, an untrusted Kubernetes pod encapsulating a single libvirt + qemu process.
To reduce the attack surface of the overall solution, the untrusted virt-launcher component should run with as few Linux capabilities as possible.
The goal of this talk is to explain the journey to get there, and the steps taken to drop CAP_NET_ADMIN and CAP_NET_RAW from the untrusted component.
This talk will encompass changes in KubeVirt and libvirt, and requires some general prior knowledge of networking (DHCP / L2 networking). |
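For background, Linux capabilities are dropped through a container's security context. The snippet below is a generic illustration of the pattern, not KubeVirt's actual virt-launcher manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: reduced-privilege-example
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
      securityContext:
        capabilities:
          drop:
            - NET_RAW     # e.g. once DHCP no longer needs raw sockets
            - NET_ADMIN   # e.g. once network setup happens in the trusted agent
```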
Introducing a new KubeVirt driver for Ansible Molecule | Molecule is a well-known test framework for Ansible, but when you run your Molecule tests in Kubernetes, no really good solution exists. I'm working on a new Molecule driver for KubeVirt to find a better approach and get a 100% pure Kubernetes solution.
In this session I will quickly explain why it may be better than the existing drivers, show how it works, and give a demo. |
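To give a flavour of how Molecule drivers are selected: each scenario declares its driver in molecule.yml. Since this driver is still under development, the driver name and platform fields below are hypothetical assumptions, not a published interface:

```yaml
# molecule/default/molecule.yml — hypothetical kubevirt driver configuration
driver:
  name: kubevirt                                  # assumed driver name
platforms:
  - name: instance
    image: quay.io/containerdisks/fedora:latest   # assumed containerDisk field
    memory: 1Gi                                   # assumed resource field
provisioner:
  name: ansible
verifier:
  name: ansible
```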
Virtual Machine Batch API | KubeVirt extends the Kubernetes ReplicaSet API to provide Virtual Machines with similar functionality, and the same can be done with Kubernetes Jobs. In order to bulk-schedule VirtualMachines, an admin could use a Virtual Machine Batch API, a VirtualMachineJob, to launch many VirtualMachines from a single API call. In this session, we'd like to share ideas, discuss use cases, and consider possible solutions to bulk Virtual Machine scheduling. |
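Because this API only exists as an idea to discuss, the following is purely a hypothetical sketch of what such a resource could look like; the group, kind, and every field are assumptions, not an existing KubeVirt API:

```yaml
apiVersion: batch.kubevirt.io/v1alpha1   # hypothetical API group/version
kind: VirtualMachineJob                  # hypothetical kind under discussion
metadata:
  name: bulk-vms
spec:
  replicas: 100                          # launch many VMs from a single API call
  template:                              # an ordinary VirtualMachineInstance template
    spec:
      domain:
        resources:
          requests:
            memory: 512Mi
```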
CPU Pinning with custom policies | KubeVirt supports CPU pinning via the Kubernetes CPU Manager. However, there are a few gaps in achieving CPU pinning via the CPU Manager alone:
It supports only the static policy and does not allow custom pinning.
It supports only the Guaranteed QoS class.
Its insistence on keeping a shared pool makes it impossible to overcommit in a way that allows all CPUs to be bound to guest CPUs; that is, Kubernetes keeps us from deploying VMs as densely as we could without Kubernetes.
It provides only a best-effort allocation of CPUs belonging to the same socket and physical core, so it is susceptible to corner cases and can lead to fragmentation.
An important requirement for us is to do away with the shared pool and let the kubelet, and containers that do not require dedicated placement, use any CPU, just as system processes do. Moreover, system services such as the container runtime and the kubelet itself can continue to run on these exclusive CPUs: the exclusivity offered by the CPU Manager only extends to other pods. In this session we'd like to discuss the workarounds we use to support custom CPU pinning via a dedicated CPU device plugin, its integration with KubeVirt, and related use cases. |
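For reference, the in-tree mechanism whose limitations are discussed above is requested per VMI; a minimal sketch, assuming a cluster where the CPU Manager static policy is enabled (the custom device-plugin approach from the talk replaces this mechanism):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vmi-pinned
spec:
  domain:
    cpu:
      cores: 4
      dedicatedCpuPlacement: true   # pin guest vCPUs via the Kubernetes CPU Manager
    resources:
      requests:
        memory: 2Gi
```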
The Road To Version 1.0 | KubeVirt Project maintainers will discuss what's planned for version 1.0, and what's still needed to graduate to it. |
Moving a Visual Effects Studio to the cloud with Kubernetes and KubeVirt | With the rapid transition to remote work, VFX studios and designers accustomed to beefy workstations, on-site storage clusters, and high-performance networking had to scramble to make those resources available to people at home.
This presentation details how a VFX studio with 60 designers transitioned from a fully on-prem environment to a complete cloud workflow. Combining KubeVirt-powered virtual workstations with render nodes and storage running natively in Kubernetes provided a solution that beat expectations. Being able to manage all components via the same Kubernetes API allowed for quick integration into existing systems.
We will be discussing our experience integrating KubeVirt under a strict deadline while leveraging bleeding-edge features such as Virtio-FS. |
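For the curious, KubeVirt exposes virtio-fs as a per-VMI filesystem device; the sketch below shares a PersistentVolumeClaim into a guest, with the volume and claim names as illustrative assumptions (the feature was experimental at the time):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vmi-workstation
spec:
  domain:
    devices:
      filesystems:
        - name: project-share
          virtiofs: {}              # export the volume to the guest via virtio-fs
    resources:
      requests:
        memory: 8Gi
  volumes:
    - name: project-share
      persistentVolumeClaim:
        claimName: vfx-projects     # illustrative PVC holding shared assets
```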
Office Hours with KubeVirt Team | Our final session is an opportunity for you to ask all your KubeVirt questions, whether they're about the project or about using KubeVirt in production. Maintainers and experts will be on hand. |