
Windows Gaming VM - KVM / UEFI Version - HowTo

For the last year or so I've been running Windows under Linux using KVM. I started off with true VGA passthrough using instructions from here



Then a UEFI mechanism became available, which meant no need to deal with legacy VGA any more, and no need for custom kernels or arcane Qemu commands passed to Libvirt. I use a standard version of Ubuntu Trusty, ie. the long term support release - as you would expect for a server

So here's a relatively easy way to create a Windows VM with real passthrough .... using the GUI to create, manage and start your VM. It's been very stable for me and very easy to manage.

There are a few tricks along the way, nothing too arcane.

NOTE that you do NOT need the host to be booted using a UEFI BIOS, so you need not change your motherboard BIOS mode for this. The only BIOS change is to ensure VT-d or AMD-Vi is turned on

First off you must have the right hardware. You will need
  1. A CPU which supports an IOMMU ie. VT-d for Intel or AMD-Vi for AMD
  2. A motherboard with BIOS / UEFI which supports the IOMMU as above. Note that this can be the most problematic to ensure. Broadly speaking recent Asrock boards are good, Gigabyte are probably good and others are hit and miss. Many people are very frustrated with Asus (including me)
  3. A Graphics card to be passed through. Note that you cannot pass an IGP through at present so if your cpu has integrated graphics use it for the host.
  4. A plan for host interaction. You can use ssh or vnc or better (for most people) use your IGP for the host
  5. Sufficient RAM and disk


In my case
  • CPU Intel 4670
  • RAM 16 GB
  • Motherboard Asrock Z87 Extreme 6
  • GPU AMD HD6950
  • Disk Sandisk Extreme II 480 GB (host boot drive and Windows C drive)
  • WD Black 2 Tb


This spreadsheet lists hardware success stories https://docs.google.com/spreadsheet/...rive_web#gid=0

For these instructions, you'll also need a UEFI capable graphics card. Mine is an older AMD card for which there is no official UEFI bios .... but I was able to construct one using the instructions here
http://www.insanelymac.com/forum/top...any-ati-cards/
http://www.overclock.net/t/1474306/r...fi-bios-thread
I used the tool from Insanely Mac (Windows version - installed in a temporary non-UEFI, simple VM I created for the purpose), link here http://www.overclock.net/t/1474306/r...#post_23400460

I also bought a cheap PCIe USB card (based on the Renesas-NEC chipset) to be passed to the VM. I tried to pass USB devices directly with mixed success, so the add-in card made life much easier at a cost of < AUD$20

Next you need to enable the IOMMU in the BIOS. Usually there's a BIOS setting on Intel boards for VT-d - it will need to be set on. The following command verifies that the CPU virtualisation extensions (VT-x / AMD-V) are present - a prerequisite, though not in itself proof of a working IOMMU
Code:

egrep -q '^flags.*(svm|vmx)' /proc/cpuinfo && echo virtualization extensions available
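Once you've rebooted with the IOMMU enabled (via the grub changes below), you can also confirm the IOMMU itself is active - a rough check, as the exact messages vary by kernel and platform:
Code:

dmesg | grep -i -e DMAR -e IOMMU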
Ensure you have all the latest versions of packages etc.
Code:

sudo apt-get update
sudo apt-get upgrade

Install KVM
Code:

sudo apt-get install qemu-kvm seabios spice-client hugepages
or use this tutorial https://help.ubuntu.com/community/KVM/Installation

Create a new directory for hugepages; we'll use this later (to improve VM performance)
Code:

sudo mkdir /dev/hugepages
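On some systems hugetlbfs is mounted there automatically; if yours does not do this (check with mount | grep huge), a minimal sketch of an /etc/fstab entry to mount it - my assumption, adjust to taste:
Code:

hugetlbfs /dev/hugepages hugetlbfs defaults 0 0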
Find your PCI addresses using the following command
Code:

lspci -nn
or lspci -nnk for additional information or lspci -vnn for even more information

Choose the PCI devices you want to pass through and work out which IOMMU groups they belong to. I suggest you start simple and just pass through the graphics card itself (don't pass through the motherboard's built-in audio)
Use this script to display the IOMMU groupings (thanks to Alex Williamson)
Code:

#!/bin/sh

# List the devices in each IOMMU group, from AW at
# https://bbs.archlinux.org/viewtopic.php?id=162768&p=29

BASE="/sys/kernel/iommu_groups"

for i in $(find $BASE -maxdepth 1 -mindepth 1 -type d); do
        GROUP=$(basename $i)
        echo "### Group $GROUP ###"
        for j in $(find $i/devices -type l); do
                DEV=$(basename $j)
                echo -n "    "
                lspci -s $DEV
        done
done
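Save the script under any name you like (ls-iommu.sh is just my placeholder), make it executable and run it:
Code:

chmod +x ls-iommu.sh
sudo ./ls-iommu.sh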

Find the groups containing the devices you wish to pass through. All the devices in a single group need to be attached to pci-stub together (except bridges and hubs) – this ensures that there is no cross-talk between VMs, ie. a security feature which IOMMUs are designed to support.
If the grouping is too inconvenient you can apply the ACS patch to your kernel (refer to the Arch discussion linked at the beginning of this post).
If you find that you have 2 devices in a single IOMMU group which you want to pass to different VMs, you're going to need the ACS patch and an additional grub command line parameter (sketched below). I encountered this on my Asrock motherboard, and chose not to run 2 passthrough VMs simultaneously rather than patch the kernel - it would be a maintenance irritation
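For reference, the commonly circulated ACS override patch is enabled with an extra entry in GRUB_CMDLINE_LINUX_DEFAULT along these lines - an assumption based on the patch posted in the Arch thread, so check the version you actually apply:
Code:

pcie_acs_override=downstream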

You're now ready to change the Grub entries in /etc/default/grub
in order to enable the IOMMU facilities and attach PCI devices to pci-stub so they can subsequently be used by vfio. Mine looks like this (at the top) after changes
Code:

GRUB_DEFAULT="saved"
GRUB_SAVEDEFAULT=true
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on pci-stub.ids=1002:6719,1002:aa80,8086:1539,1912:0014,1412:1724,1849:1539"
GRUB_CMDLINE_LINUX=""
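Each pci-stub.ids entry is just the [vendor:device] pair that lspci -nn prints in square brackets at the end of each line. For example, my graphics card and its HDMI audio function (see the full listing near the end of this post) give 1002:6719 and 1002:aa80:
Code:

lspci -nn | grep '^01:'
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Cayman PRO [Radeon HD 6950] [1002:6719]
01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Cayman/Antilles HDMI Audio [Radeon HD 6900 Series] [1002:aa80]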

Update with
Code:

sudo update-grub
NOTE: if you have installed Xen, you may find it has created another default file at /etc/default/grub.d/xen.conf which overrides the selection of the grub default. In my case (when experimenting) I changed it like this
Code:

#
# Uncomment the following variable and set to 0 or 1 to avoid warning.
#
#XEN_OVERRIDE_GRUB_DEFAULT=0
XEN_OVERRIDE_GRUB_DEFAULT=0

You probably need to blacklist the drivers for the graphics card being passed through (sometimes they grab the card before it's allocated to pci-stub). Edit /etc/modprobe.d/blacklist.conf and add the relevant entry. In my case (for AMD graphics) I added the following to the end of the file
Code:

# To support VGA Passthrough
blacklist radeon

Those using an NVidia card will need to blacklist nouveau instead

Whilst in that directory you may also wish to modify /etc/modprobe.d/kvm.conf to set appropriate options. In my case I have not actually enabled any of these options, just “documented” their existence
Code:

# if vfio-pci was built as a module ( default on arch & ubuntu )
#options vfio_iommu_type1 allow_unsafe_interrupts=1
# Some applications like Passmark Performance Test and SiSoftware Sandra crash the VM without this:
# options kvm ignore_msrs=1

Create the following script at /usr/bin/vfio-bind (this is from NBHS at https://bbs.archlinux.org/viewtopic.php?id=162768&p=1) and make it executable (sudo chmod ug+x /usr/bin/vfio-bind)
Code:

#!/bin/bash

modprobe vfio-pci

for dev in "$@"; do
        vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
        device=$(cat /sys/bus/pci/devices/$dev/device)
        if [ -e /sys/bus/pci/devices/$dev/driver ]; then
                echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
        fi
        echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
done
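You can test the binding by hand before wiring it into an init script, passing the full PCI addresses of your devices:
Code:

sudo vfio-bind 0000:01:00.0 0000:01:00.1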

Create the init script which calls vfio-bind at boot time (again from NBHS at https://bbs.archlinux.org/viewtopic.php?id=162768&p=1), save it as /etc/init.d/vfio-bind-init.sh and make it executable (sudo chmod ug+x /etc/init.d/vfio-bind-init.sh)
Code:

#!/bin/sh

### BEGIN INIT INFO
# Provides:          vfio-bind
# Required-Start:   
# Required-Stop:
# Default-Start:    S
# Default-Stop:
# Short-Description: vfio-bindings
# Description:      bind selected PCI devices to VFIO for use by KVM
### END INIT INFO

# Script to perform VFIO-BIND function as described at https://bbs.archlinux.org/viewtopic.php?id=162768
#
#
# /usr/bin/vfio-bind /etc/vfio-pci.cfg
/usr/bin/vfio-bind 0000:01:00.0 0000:01:00.1 0000:02:00.0
exit 0

To make this run automatically on startup in Ubuntu (there are no dependencies)
Code:

sudo update-rc.d vfio-bind-init.sh defaults

Create the config file for vfio-bind at /etc/vfio-pci.cfg, again from NBHS on the Arch forums. This is my example – I list multiple entries, though I only actually use 4 at most (currently 2)
Code:

# List of all devices to be held by VFIO = taken from pci_stub .....
# IMPORTANT – no blank lines (DEVICES line is the last line in the file)
DEVICES="0000:01:00.0 0000:01:00.1 0000:02:00.0 0000:04:00.0 0000:05:00.0 0000:06:00.0"
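The init script above hard-codes the device list; if you'd rather drive it from this config file (which appears to be how the commented-out line in the init script was intended - my assumption), replace the vfio-bind line with:
Code:

. /etc/vfio-pci.cfg
/usr/bin/vfio-bind $DEVICES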

If using hugepages (recommended for better performance), update sysctl, ie. add the following lines to /etc/sysctl.conf
Code:

# Set hugepages for a KVM guest with 6GB RAM
vm.nr_hugepages = 3200

Later on we'll refine the use of hugepages. The above figure is set for my system: hugepages are 2MB each, and the Windows VM which needs this facility the most is allocated 6GB of RAM, so we need 6144 MB => 6144 / 2 = 3072 pages, plus some extra for overhead (about 2% ie. 61 additional pages) - so at 3200 I have overachieved :)
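To apply the sysctl change without rebooting and verify the pool was allocated:
Code:

sudo sysctl -p
grep HugePages /proc/meminfo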

Also increase the memlock “ulimit” max by adding the following to /etc/security/limits.conf
Code:

<your user-id>          hard    memlock        8388608  # value based on required memory
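After logging out and back in, you can confirm the new memlock limit (reported in KB):
Code:

ulimit -l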
If pci-stub is built as a module in your kernel (rather than built in) then you may need to add the module name to your initramfs; update /etc/initramfs-tools/modules to include the following line
Code:

pci-stub
and “update” your initramfs (use “-c” option to build a new one)
Code:

sudo update-initramfs -u
Note that I usually update initramfs as a matter of course when I update grub – to ensure the two are always synchronised

Now you're about ready to reboot and start creating the VM

After rebooting, check that the cards to be passed through are assigned to pci-stub using
Code:

dmesg | grep pci-stub
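You should see each passed-through device being claimed, something like this (abbreviated - exact wording varies by kernel version):
Code:

pci-stub 0000:01:00.0: claimed by stub
pci-stub 0000:01:00.1: claimed by stub
pci-stub 0000:02:00.0: claimed by stub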
Download the virtio drivers from Red Hat (Windows will need these to access the virtio devices)
http://alt.fedoraproject.org/pub/alt...latest/images/ eg. obtain virtio-win-0.1-94.iso which can be used later as a cd-rom image for the Windows guest
Download the spice drivers for an enhanced spice experience on Windows from
http://www.spice-space.org/download.html

I installed the Ubuntu-supplied OVMF so that all the necessary links etc are created, but be aware that the Trusty version itself will NOT work.
Code:

sudo apt-get install ovmf
You must find the latest OVMF build – preferably download from Gerd Hoffmann's site https://www.kraxel.org/repos/jenkins/edk2/ – then extract OVMF-pure-efi.fd from the rpm, copy it to /usr/share/ovmf and create a "reference" copy as OVMF.fd (take a copy of the Ubuntu version first – just in case).
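A minimal sketch of the extraction on Ubuntu, assuming rpm2cpio and cpio are installed and using a placeholder rpm name (the path inside the rpm is my assumption from the kraxel builds - list it first with cpio -t to be sure):
Code:

sudo cp /usr/share/ovmf/OVMF.fd /usr/share/ovmf/OVMF.fd.ubuntu   # keep the Ubuntu copy, just in case
rpm2cpio edk2.git-ovmf-x64-<version>.noarch.rpm | cpio -idmv
sudo cp ./usr/share/edk2.git/ovmf-x64/OVMF-pure-efi.fd /usr/share/ovmf/
sudo cp /usr/share/ovmf/OVMF-pure-efi.fd /usr/share/ovmf/OVMF.fd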

Install libvirt and virt-manager - these provide the GUI VM management service
Code:

sudo apt-get install libvirt-bin virt-manager
Update the libvirt qemu configuration at /etc/libvirt/qemu.conf -
  • add this if you want to use host audio (not recommended)
    Code:

    nographics_allow_host_audio = 1
  • set this to maintain security
    Code:

    security_require_confined = 1
  • set these to enable qemu to access hardware. You'll need to work out which VFIO items (/dev/vfio/<group>) you're going to provide access to - see the listing command after this section - and you may or may not want to provide access to "pulse"
    Code:

    cgroup_device_acl = [
        "/dev/null", "/dev/full", "/dev/zero",
        "/dev/random", "/dev/urandom",
        "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
        "/dev/rtc","/dev/hpet", "/dev/vfio/vfio",
        "/dev/vfio/1", "/dev/vfio/14", "/dev/vfio/15", "/dev/vfio/16", "/dev/vfio/17",
        "/dev/shm", "/root/.config/pulse", "/dev/snd",
    ]

  • add this to enable access to the hugepages directory we created earlier
    Code:

    hugetlbfs_mount = "/dev/hugepages"
  • maintain the security constraints on VMs - we're running "unprivileged" and then providing specific accesses with the changes above, plus the Apparmor changes below
    Code:

    clear_emulator_capabilities = 1
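Once vfio-bind has run, you can see which /dev/vfio/<group> files actually exist on your system (the numbers are the IOMMU group numbers of the bound devices) and adjust the cgroup_device_acl list above accordingly:
Code:

ls -l /dev/vfio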



Update apparmor to allow libvirt to allocate hugepages, use VFIO and sound. Add the following to the apparmor abstraction at /etc/apparmor.d/abstractions/libvirt-qemu
Code:

  # WARNING: this gives the guest direct access to host hardware and specific
  # portions of shared memory. This is required for sound using ALSA with kvm,
  # but may constitute a security risk. If your environment does not require
  # the use of sound in your VMs, feel free to comment out or prepend 'deny' to
  # the rules for files in /dev.
  /{dev,run}/shm r,
 # ================ START Changes ================ #
  /{dev,run}/shm/pulse-shm* rw,
  @{HOME}/.config/puls** rwk,
  @{HOME}/** r,
  # Only necessary if running as root, which we no longer are
  #/root/.config/puls** rwk,
  #/root/.asoundrc r,
  /dev/vfio/* rw,
  /dev/hugepages/libvirt** rw,
  # ================ END Changes ================ #
  /{dev,run}/shm/pulse-shm* r,
  /{dev,run}/shm/pulse-shm* rwk,
  /dev/snd/* rw,
  capability ipc_lock,
  # spice

Then reload the apparmor definitions so the changes take effect
Code:

sudo invoke-rc.d apparmor reload
Then restart libvirt (just to be sure)
Code:

sudo service libvirt-bin stop
sudo service libvirt-bin start

Be sure to back these changes up, as updates to Apparmor may overwrite them .....

Start virt-manager (should appear in the menu as "Virtual Machine Manager") and add a connection to QEMU on local host (File/Add Connection), this will give you the ability to create and manage KVM machines

Now we can start creating a new VM (right click on the "localhost (QEMU)" line in the main screen area and select "New")

You'll need a copy of Windows 8 or 8.1. It's apparently possible to install Windows 7 but I found it was more trouble than it's worth. DO NOT try to install the Ultimate version - install Professional or Home

Your graphics card will need to be UEFI capable. Mine is an older AMD card for which there is no official UEFI bios .... but I was able to construct one (see the beginning of this post)

Create a disk image for the VM - I recommend an LVM partition. In my case I allocate LVM partitions on the mechanical drive until I'm happy everything's running ok and then copy them across to the SSD using dd.
You'll also need a small (4GB) FAT32 partition on a "GPT" drive which we'll copy the installation files to. The install iso should be copied to this install partition in a way that's accessible to a VM, ie. (see the LVM sketch after this list) -
  • Create a new partition for this purpose, can be LVM or 'flat' but must reside on a GPT disk not MBR
  • Allocate the new partition to a VM – so it can be formatted. DO NOT format this using host format tools, it must be done from a VM
  • Allocate the new partition to a windows VM
  • Format as FAT32; consider adding the Windows bootloader as well - not necessary but seems cleaner - and make the partition bootable (set the “active” or bootable flag in the partition table)
  • Copy the install iso to the newly formatted partition. This can be accomplished by passing the iso to the VM used to format the new install partition as a CD-ROM (use virt-manager)
  • Check that the \efi\boot directory exists and contains the same files as \efi\microsoft\boot. If necessary create \efi\boot and copy the files into it. It must also contain a copy of bootmgfw.efi renamed to bootx64.efi
  • Check the contents of \sources\ei.cfg to ensure it nominates the correct OS product (best to use “Professional”).
  • It can be beneficial to use Win Toolkit to include the qxl driver (spice display driver) in the Windows build, although I'm not convinced this is necessary.
  • Exit the VM used to format the install partition
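For the install partition itself, a minimal LVM sketch - vg0 and the volume name are placeholders; substitute your own volume group:
Code:

sudo lvcreate -L 4G -n lv_win_install vg0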


Define the new VM in virt-manager. Remember to -
  • Select a “dummy” iso to install from … we're going to replace this later
  • Select UEFI as the bios (Advanced Options on last screen)
  • use appropriate names etc


STOP the install triggered by virt-manager BEFORE it installs anything. You should easily have time to stop the install while the UEFI boot process is initiating.
Stopping is easy if you're working with virt-manager (the gui) - click on the red "shutdown" icon at the top of the screen (use the “force off” option). You'll probably have to stop it twice – on my system it automatically restarts itself even after a forced stop.

Create a new copy of the OVMF-pure-efi.fd file you downloaded from Gerd's site and rename it for your new VM eg.
Code:

sudo cp /usr/share/ovmf/OVMF-pure-efi.fd /usr/share/ovmf/OVMF-win8-pro.fd
sudo ln -s /usr/share/ovmf/OVMF-win8-pro.fd /usr/share/qemu/OVMF-win8-pro.fd

Ensure all the following parameters are set BEFORE attempting an install using
Code:

EDITOR=nano virsh edit <your-vm-name>
ie.

  • Initial domain definition – to support direct qemu parameters, right at the top of the file
    Code:

    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  • Memory backing
    Code:

      <memoryBacking>
        <hugepages/>
        <nosharepages/>
      </memoryBacking>

  • Change the loader in the <os> section ie. The OVMF file name (custom for each VM – create in /usr/share/ovmf and create link in /usr/share/qemu as described above)
    Code:

    <loader>OVMF-win8-pro.fd</loader>
  • CPU. I chose to define my real CPU type; you can set the mode to "host" instead, but defining the model explicitly seemed to give the best overall result for me
    Code:

      <cpu mode='custom' match='exact'>
        <model fallback='allow'>Haswell</model>
        <topology sockets='1' cores='2' threads='1'/>
      </cpu>

  • Features (hv_relaxed etc). NVidia drivers don't seem to like these HyperV optimisations and will probably fail if they're encountered
    Code:

      <features>
        <acpi/>
        <hyperv>
          <relaxed state='on'/>
          <vapic state='on'/>
          <spinlocks state='on' retries='4096'/>
        </hyperv>
      </features>

  • Clock (hv_time), more performance parameters the NVidia drivers won't like
    Code:

      <clock offset='localtime'>
        <timer name='hypervclock' present='yes'/>
        <timer name='hpet' present='no'/>  <!-- not sure about this one -->
      </clock>

  • Change emulator in the <devices> section
    Code:

    <emulator>/usr/bin/qemu-system-x86_64</emulator>
  • Spice access – add to the end, just before </domain>. You don't absolutely need this, but I find it useful to be able to control the keyboard during UEFI initialisation before any USB drivers have been loaded, and it doesn't seem to do any harm
    Code:

      <qemu:commandline>
        <qemu:arg value='-spice'/>
        <qemu:arg value='port=5905,disable-ticketing'/>
        <qemu:env name='DISPLAY' value=':0'/>
      </qemu:commandline>



Save the changes. Note that virsh may respond with an error ("Failed. Try again? [y,n,f,?]:") – use the try again option (y) to return to the editor and fix the problem

For those of you using NVidia cards: you will also need to (a) use the kvm=off parameter and (b) choose the version of the Windows driver carefully, because NVidia seems intent on not supporting passthrough. See the VFIO blog for more information and options

For what it's worth, do NOT set USB = USB2 in the VM setup; leave it at the default. USB processing seems to cause a lot of grief, so it's best left alone. Also, a Logitech G27 wheel will not work correctly via the Renesas USB 3 add-in card but works fine if passed directly from a USB 2 port on the host via host USB passthrough. Other USB devices may be similarly afflicted one way or the other (some only work when connected to a passed-through card, some only work when passed through from the host controller - ymmv)

The install iso should already be on the FAT32 install partition prepared earlier (see the list above).


Now you can use the virt-manager gui to further modify the VM definition (the manual edits should not be impacted by this - you can easily check at any time with "virsh edit")
  • Add the install partition created earlier to your new VM definition and remove the previously added “dummy” install cd iso. The easiest way to do this is to use the virt-manager “information view” - add hardware. Be sure to add it as an “IDE disk” and not virtio

also add the following -
  • virtio drivers iso as cd-rom.
  • qxl drivers iso as cd-rom (can be part of the virtio iso if easier). Note that these probably cannot be used during the Windows install since they are unsigned; you'll need to add them later
  • any other files needed, eg. drivers, as a cd-rom. You can easily create an iso from any files using the Ubuntu disk burning tools eg. k3b
  • Ensure that the “boot menu” option is enabled in the boot order section of virt-manager
  • Ensure the main partition to be installed to is accessed via virtio (disk bus under the advanced options for the device)
  • Ensure the network adapter is defined as virtio (device model for the NIC)
  • ensure the default graphics and display are spice based. Windows doesn't seem to need additional drivers for these (which is why you should NOT need to build the drivers into the install image).


Run the install. If necessary press the space key as the UEFI boot progresses and select the disk to boot from - sometimes UEFI doesn't find the correct boot disk. You will need to connect via Spice to do this (spicec -h 127.0.0.1 -p 5905)

You'll need to install "additional drivers" and select the virtio drivers for disk and network access. This makes a significant difference to VM performance AND the install will fail if you've set the VM definition to provide a virtio disk but Windows cannot find a driver

Add the required PCI devices - but only add PCI passthrough devices AFTER the main install is complete

Windows tuning hints
  • Turn off all paging. The host will handle this
  • I tell Windows to optimise for performance. This makes screens look pretty ordinary, but since I only use the VM for gaming and games take direct control of the screen, it doesn't really matter
  • Consider turning on update protection ie. Snapshots you can fall back to if an update fails. Then take the first snapshot directly after the install so you have a restore point


Shut the vm down using the Windows shut-down procedure ie. Normal termination

Add the PCI passthrough cards. In my case I pass
  • Graphics card – primary address (01:00.0)
  • Graphics card - sound address (01:00.1)
  • USB add-in card (Renesas based) to which the following are attached via downstream hubs (on display panels) (02:00.0)
  • Host USB device (Logitech wheel)



Add any USB devices to be passed from the host. In my case there seems to be a problem with USB 3 drivers on the guest (and possibly on the host) so I had to detach the wheel from the add-in card and attach it to a USB 2 port on the host, then pass it through via host usb passthrough – which works well.

Reboot and verify all is working

When the graphics card is working, shut down the VM and remove the following from the VM definition
  • Spice display
  • qxl graphics
  • console definition
  • serial port definition
  • channel definition


Reboot the VM to verify everything continues to work

In my case I then set up Eyefinity and gaming software. The AMD control centre seems a bit flaky and sometimes caused a lock-up while trying to establish an Eyefinity group; 1 or 2 reboots later (forced shutdown / power-off from the virt-manager interface) it was all working

No more need to customise the kernel or worry about loss of function on the host graphics (due to the VGA-Arbiter patch)!
No real performance difference (for me) between UEFI and BIOS … but more stable and easier to manage using libvirt / virt-manager (everything exposed to libvirt & managed there).
You can connect to the VM using “spicec -h 127.0.0.1 -p 5905” and use the host keyboard during bootup should the need arise – before the guest VM loads any drivers, ie. before the guest keyboard and mouse are active

here's what my lspci looks like
Code:

lspci -nn
00:00.0 Host bridge [0600]: Intel Corporation 4th Gen Core Processor DRAM Controller [8086:0c00] (rev 06)
00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller [8086:0c01] (rev 06)
00:01.2 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x4 Controller [8086:0c09] (rev 06)
00:02.0 VGA compatible controller [0300]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller [8086:0412] (rev 06)
00:14.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI [8086:8c31] (rev 05)
00:16.0 Communication controller [0780]: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1 [8086:8c3a] (rev 04)
00:19.0 Ethernet controller [0200]: Intel Corporation Ethernet Connection I217-V [8086:153b] (rev 05)
00:1a.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 [8086:8c2d] (rev 05)
00:1b.0 Audio device [0403]: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller [8086:8c20] (rev 05)
00:1c.0 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1 [8086:8c10] (rev d5)
00:1c.2 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #3 [8086:8c14] (rev d5)
00:1c.3 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #4 [8086:8c16] (rev d5)
00:1c.4 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #5 [8086:8c18] (rev d5)
00:1d.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 [8086:8c26] (rev 05)
00:1f.0 ISA bridge [0601]: Intel Corporation Z87 Express LPC Controller [8086:8c44] (rev 05)
00:1f.2 SATA controller [0106]: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] [8086:8c02] (rev 05)
00:1f.3 SMBus [0c05]: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller [8086:8c22] (rev 05)
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Cayman PRO [Radeon HD 6950] [1002:6719]
01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Cayman/Antilles HDMI Audio [Radeon HD 6900 Series] [1002:aa80]
02:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller [1912:0014] (rev 03)
04:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612] (rev 01)
05:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
06:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612] (rev 01)

I chose to pass the AMD card and its audio controller through along with the Renesas USB controller ie. 01:00.0 to 02:00.0

here's my final libvirt definition
Code:

cat  /etc/libvirt/qemu/ssd-win8-uefi.xml
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh edit ssd-win8-uefi
or other application using the libvirt API.
-->

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>ssd-win8-uefi</name>
  <uuid>redacted</uuid>
  <memory unit='KiB'>6291456</memory>
  <currentMemory unit='KiB'>6291456</currentMemory>
  <memoryBacking>
    <hugepages/>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-trusty'>hvm</type>
    <loader>OVMF_ssd_win8_pro.fd</loader>
    <boot dev='hd'/>
    <bootmenu enable='yes'/>
  </os>
  <features>
    <acpi/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='4096'/>
    </hyperv>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Haswell</model>
    <topology sockets='1' cores='2' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/ssd-virt/lvwin8_uefi_kvm'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/wd2t-lvm-data/lvwin7_data'/>
      <target dev='vdb' bus='virtio'/>
      <shareable/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/wd2t-lvm-data/lvwin7_kvm'/>
      <target dev='vdc' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/programming_data/isos/winfiles_3.iso'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <shareable/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='<redacted>'/>
      <source bridge='rebr0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <input type='tablet' bus='usb'/>
    <sound model='ac97'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </sound>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc29b'/>
      </source>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-spice'/>
    <qemu:arg value='port=5905,disable-ticketing'/>
    <qemu:env name='DISPLAY' value=':0'/>
  </qemu:commandline>
</domain>

Some people may need to force pci-stub and vfio modules to load at boot time. Update /etc/modules
Code:

# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.

lp
rtc
pci_stub
vfio
vfio_iommu_type1
vfio_pci
kvm
# use kvm_intel on Intel systems (kvm_amd on AMD)
kvm_intel

and then update your initramfs at /etc/initramfs-tools/modules
Code:

# List of modules that you want to include in your initramfs.
# They will be loaded at boot time in the order below.
#
# Syntax:  module_name [args ...]
#
# You must run update-initramfs(8) to effect this change.
#
# Examples:
#
# raid1
# sd_mod
pci_stub ids=1002:6719,1002:aa80,8086:1539,1912:0014,1412:1724,1849:1539

I created these notes as I installed the VM so they may not be complete or may contain inaccuracies (though they should be close). For reference see the Arch thread and VFIO links at the start of this post.
I'm happy to update the post to improve accuracy if anyone has constructive comments
