
KVM - GPU Passthrough From Ubuntu host to Ubuntu Guest (headless)

Hi all,

I have been a long-time lurker and just registered because, for once, I can't find anything that fits what I am trying to do. I have verified that my motherboard and processor support IOMMU, VT-d and VT-x.
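For reference, this is roughly how I checked that on the host (the exact dmesg wording seems to vary by kernel version):

Code:

# intel_iommu=on has to be on the kernel command line (set in /etc/default/grub, then run update-grub)
cat /proc/cmdline

# Look for DMAR / IOMMU initialisation messages
dmesg | grep -e DMAR -e IOMMU

# If IOMMU groups show up here, remapping is actually active
find /sys/kernel/iommu_groups/ -type l | head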

My physical machine specs are:
- Gigabyte GA-X99P-SLI motherboard
- Intel i7-5820K
- 3x NVIDIA GTX 1080

Essentially, my setup is an Ubuntu 14.04 host (not necessarily tied to this version) and an Ubuntu 16.04 guest VM. The host is headless (though I can connect over VNC), and of the three GPUs attached to the physical machine I would like to pass just one through to this VM. All of the guides and forum posts I have read seem to cover passing a GPU through to a Windows VM for gaming or some other display need. In my case, all I need on the guest is access to a shell and for the GPU to be available for computation: no monitor, just the GPU visible in the system. I have followed various guides and forum posts with no tangible success.

I started by going through the steps outlined here:
https://www.pugetsystems.com/labs/articles/Multiheaded-NVIDIA-Gaming-using-Ubuntu-14-04-KVM-585/

My GPUs are blacklisted and have been properly claimed by pci-stub, but I am reading that this is an outdated method (as mentioned here: https://ubuntuforums.org/showthread.php?t=2332943), so I'm not sure whether I should try a different method, what the differences actually are, or whether it really matters.
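From what I gather, the newer approach is to bind the cards to vfio-pci instead of pci-stub; something like the sketch below (the IDs are just examples for a GTX 1080 — use whatever lspci -nn reports on your host — and I believe the ids= module option needs a newer kernel than stock 14.04's 3.13):

Code:

# Find the vendor:device IDs for the GPU and its HDMI audio function
lspci -nn | grep -i nvidia

# Ask vfio-pci to claim them at boot instead of pci-stub (example IDs)
echo "options vfio-pci ids=10de:1b80,10de:10f0" | sudo tee /etc/modprobe.d/vfio.conf

# Make sure vfio-pci is loaded early, then rebuild the initramfs and reboot
echo "vfio-pci" | sudo tee -a /etc/initramfs-tools/modules
sudo update-initramfs -u

# Afterwards, "Kernel driver in use" should say vfio-pci
lspci -nnk -s 04:00.0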

The progress I have made so far seems to pass the GPU through to some extent, as I can see the GPU when I run
Code:

info pci
in the QEMU monitor. This happens when I start the VM (using a disk.img that already has 16.04 installed on it) with this command (modified from the Puget Systems tutorial):


Code:

sudo qemu-system-x86_64 -enable-kvm -M q35 -m 4096 -cpu host \
-smp 4,sockets=1,cores=4,threads=1 \
-bios /usr/share/qemu/bios.bin -vga none \
-device qxl \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=04:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=off \
-device vfio-pci,host=04:00.1,bus=root.1,addr=00.1 \
-drive file=/home/username/ubuntu-test.img,id=disk,format=raw -device ide-hd,bus=ide.0,drive=disk



The notable difference from the tutorial is that I am not specifying an ISO. I should also note that I can only see the QEMU monitor console through my VNC session (obviously), since running this command opens a QEMU window. When I do specify an .iso file I get to the Ubuntu installation screen, but I am unable to install or move around with the keyboard (though that might be an issue with my VNC). Ideally, though, I wouldn't be installing via the GUI or installing the OS at all; the .img disk would already be ready to boot.
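Since the guest only needs a shell and the GPU, my guess is that I could drop the emulated graphics and the QEMU window entirely and run it headless, something like the sketch below (untested on my setup; it assumes the guest image already has SSH installed, and the serial console only shows output if the guest boots with console=ttyS0):

Code:

# Headless variant of the command above (sketch, not verified):
# no qxl/VGA device, no QEMU window, monitor + serial multiplexed on stdio,
# user-mode networking with guest SSH forwarded to host port 2222.
sudo qemu-system-x86_64 -enable-kvm -M q35 -m 4096 -cpu host \
-smp 4,sockets=1,cores=4,threads=1 \
-display none -vga none \
-serial mon:stdio \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=04:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=off \
-device vfio-pci,host=04:00.1,bus=root.1,addr=00.1 \
-drive file=/home/username/ubuntu-test.img,id=disk,format=raw -device ide-hd,bus=ide.0,drive=disk \
-netdev user,id=net0,hostfwd=tcp::2222-:22 -device e1000,netdev=net0

The idea would be to just ssh -p 2222 into the guest from the host. Does that sound like a sane direction, or is an emulated display device actually needed for the passthrough to work?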

Ideally, my process would be this:

  1. virt-install the Ubuntu 16.04 OS via a script and a preseed file into a .img file (which I have done and verified works)
  2. Shut down the VM
  3. Modify the domain XML to add passthrough for the GPU / PCI device, or attach the GPU / PCI device by some other means after the VM is created (see the sketch after this list)
  4. Restart the VM
  5. SSH into the VM and be able to run lspci | grep NVIDIA and see the passed-through GPU
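For step 3, the approach I keep seeing is a <hostdev> entry in the domain XML pointing at the host PCI address, attached with virsh; a sketch of what I had in mind (the VM name "ubuntu-test" and the 04:00.x address are just my setup):

Code:

# Describe the host GPU (function 0) as a PCI hostdev
cat > gpu.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF

# Attach persistently (takes effect on the next guest boot);
# repeat with function='0x1' for the card's HDMI audio device
virsh attach-device ubuntu-test gpu.xml --config

# Or paste the <hostdev> block inside <devices> by hand
virsh edit ubuntu-test

# Then, inside the guest after a restart:
lspci -nn | grep -i nvidia

After installing the NVIDIA driver inside the guest, nvidia-smi should then list the card, which is all I actually need.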


As I said, most of the guides and forum posts cover passing through to a Windows VM rather than a Linux one, though I don't think that should change anything. Any tips or links pointing me in the right direction would be greatly appreciated! Thanks!
