I've successfully set up a Windows 10 VM with 3 x Nvidia 1080 Ti cards passed through, using KVM, libvirt, virt-manager and vfio-pci on Ubuntu 16.04. All working fine. If I attach more than 3 GPUs, the VM freezes before booting: I can't even reach the Windows boot loader, and there is only a brief spike on the CPU graph in virt-manager before it returns to idle.
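For reference, the cards are bound to vfio-pci roughly like this (a sketch; 10de:1b06 / 10de:10ef are the usual GTX 1080 Ti vendor:device IDs, and paths/IDs should be verified against your own hardware):

```bash
# List the Nvidia functions to confirm PCI addresses and vendor:device IDs:
lspci -nn | grep -i nvidia

# Bind the GPUs (and their HDMI audio functions) to vfio-pci at boot.
# 10de:1b06 / 10de:10ef are the usual GTX 1080 Ti IDs -- check them
# against the lspci output above before using them:
echo "options vfio-pci ids=10de:1b06,10de:10ef" | sudo tee /etc/modprobe.d/vfio.conf
sudo update-initramfs -u
```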
So far:
- The virt-manager VM console spits out unreadable data if I boot with fewer than 4 GPUs; with 4 or more GPUs the console stays empty.
- WinDbg (src) does not detect a client when booting with 4 or more GPUs, so the guest never even seems to start executing (see the host-side check sketched below).
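One way to confirm from the host side that the vCPUs really never get going would be the QEMU monitor (a sketch; "win10" stands in for the actual domain name):

```bash
# Is QEMU itself running or paused? ("win10" is a placeholder domain name)
virsh qemu-monitor-command win10 --hmp "info status"

# vCPU register dump: if RIP never advances past the reset vector,
# the guest hangs before any firmware or boot-loader code executes:
virsh qemu-monitor-command win10 --hmp "info registers"
```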
This makes me think that the issue is not related to the graphics driver, but rather to the PCIe passthrough itself. However, I have an Ubuntu VM working with more than 4 GPUs passed through, so I assume the problem is related to the virtualisation of Windows.
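One thing worth diffing is the PCI hostdev definitions of the two guests (a sketch; "ubuntu-vm" and "win10" are placeholder domain names):

```bash
# Dump the PCI <hostdev> definitions of both guests and compare them
# ("ubuntu-vm" and "win10" are placeholder domain names):
virsh dumpxml ubuntu-vm | grep -A 6 "<hostdev"
virsh dumpxml win10 | grep -A 6 "<hostdev"
```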
I've tried:
- Experimenting with different CPU and RAM settings, thinking that it might be related to PCIe lanes as on a physical machine; this caused some different behavior (longer boot times), but no success.
- Looking at the libvirt log files (they are close to identical when booting with 3 or 4 GPUs); no warnings as far as I can see (log locations sketched after this list).
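For completeness, these are the logs I've been checking (a sketch; the domain name "win10" is a placeholder):

```bash
# Per-domain QEMU log, including the full QEMU command line libvirt built:
tail -n 50 /var/log/libvirt/qemu/win10.log

# Host kernel messages from VFIO/IOMMU around the failed boot attempt:
dmesg | grep -iE "vfio|iommu|dmar"
```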
Now I have no clue how to go about troubleshooting this, so I thought I'd ask for hints. What puzzles me is that it only works up to a certain number of GPUs, yet I don't know what factor imposes this limit.