VirtualBox PCI passthrough

Thinking about upgrading my computer and PCI passthrough is one of the features I really want to add, but I’m finding some of the info out there confusing and contradictory. Has anyone had success setting it up who might be willing to help me pick out the right hardware?

I’m planning on picking up a Sabertooth 990FX mobo and want to be able to entertain the idea of some occasional gaming in a Windows guest. I’d prefer an Nvidia graphics card, but details are sketchy on whether they work and which models.

Make sure the processor you’re using has an IOMMU. Intel doesn’t like to put IOMMU (VT-d) functionality on their unlocked chips. Most current generation AMD processors have an IOMMU.

BIOS support for the IOMMU is often buggy. In particular, I would recommend against using Asus motherboards for this purpose, as IOMMU functionality on their boards is especially problematic.
Personally I would use a workstation or server board for a project like this because they are better designed.

Nvidia drivers require some kind of hack to get them to work (otherwise you get a “Code 43” error.)
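For what it’s worth, the hack people usually mean (at least on KVM/libvirt, which comes up later in this thread; I can’t vouch for Virtualbox) is hiding the hypervisor from the Nvidia driver in the guest, e.g. in the libvirt domain XML:

```xml
<features>
  <!-- Hide the KVM hypervisor signature so the GeForce driver
       doesn't refuse to initialize with Code 43 -->
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```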

Be prepared to compile your own kernel, etc., because certain patches may be required that haven’t been accepted into the Linux trunk.

I have run into a few bizarre issues implementing a similar setup, but with KVM instead of Virtualbox. One board that I have cannot boot Linux with the IOMMU enabled unless the IOMMU is placed into passthrough mode using a kernel parameter. Some of the motherboard’s peripherals malfunction while the IOMMU is enabled; I was told by someone else that this is due to some kind of flaw in the BIOS.

I also encountered an issue with IOMMU groups in VFIO. Any two devices that can talk to each other directly must be in the same group, and the Linux kernel likes to play it safe. Some devices, like video cards, aren’t very specific about their behavior, so the kernel puts them in a group with practically everything else on the same bus, because they might try to talk to each other (even if they never actually do). Unfortunately this can prevent passthrough, because all the devices in a group must be passed through at the same time. There is a patch (the ACS override patch) that adds kernel parameters to manually split these groups, but upstream, the Linux kernel devs don’t want to include it.
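To see how the kernel has grouped your devices, you can walk sysfs (the path below is the standard location; the directory only shows up once the IOMMU is actually enabled). A small sketch:

```shell
# list_iommu_groups: print each IOMMU group and the PCI devices in it.
# Takes the groups directory as $1, defaulting to the standard sysfs path.
list_iommu_groups() {
    root="${1:-/sys/kernel/iommu_groups}"
    for group in "$root"/*; do
        [ -d "$group" ] || continue
        echo "IOMMU group ${group##*/}:"
        for dev in "$group"/devices/*; do
            # Each entry is named after the device's PCI bus address
            [ -e "$dev" ] && echo "  ${dev##*/}"
        done
    done
}

list_iommu_groups
```

If your video card shares a group with half the board, you’ve hit exactly the problem described above.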

Also, video card performance is limited, because every PCI-E transfer has to go through a bunch of translations in order to actually make it to the other side. Different patches might be able to improve performance.

Despite all of the issues, I was able to get it to work. Just be prepared to get your hands dirty.

We use pci passthrough on a product we ship at work. It works but the latency is awful.

I’ve never done it with Virtualbox, but I have done it with VMware, KVM, and Xen.

You need either an AMD GPU or a Quadro GPU (or a newer GeForce card with a BIOS modded into its Quadro counterpart, [MOVED] Hacking NVidia Cards into their Professional Counterparts - Page 1, or a kernel that’s got the new vfio-vga module [haven’t tried]).

Your CPU and mobo need to support IOMMU.
Intel only shoves the IOMMU on their Xeons and their i7s, I believe (most i5s don’t have it :frowning: ).
AMD shoves the IOMMU on the FX-6XXX and FX-8XXX CPUs, as far as I’m aware.
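One quick sanity check after enabling it in the BIOS: the kernel logs the IOMMU at boot. The lines below are illustrative samples of what an AMD box prints; on real hardware you’d pipe `dmesg` itself through the same grep:

```shell
# Two illustrative boot-log lines as they appear on an AMD system with the
# IOMMU enabled (Intel systems log "DMAR" lines instead):
log='AMD-Vi: IOMMU performance counters supported
AMD-Vi: Found IOMMU at 0000:00:00.2'
# Count matching lines; on real hardware: dmesg | grep -c 'AMD-Vi'
printf '%s\n' "$log" | grep -c 'AMD-Vi'
```

If the count is zero on a real box, either the CPU lacks it or the BIOS option is off.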

I’ve got pci passthrough working on the ASUS Sabertooth 990FX. Note: it will fry your onboard sound card (both me and my buddy fried our sound cards doing this, me with KVM, him with XEN).
I’ve also got pci passthrough working on the ASUS M5A97 R2.0 (even though they don’t list IOMMU support, the 970 chipset supports it).
I’ve also got pci passthrough working on the Gigabyte GA-990FXA-UD3.

Here are my aging and really rough steps/guide that i made a while back. Doing it with VMware was SUPER easy BTW.


UBUNTU with kvm
Rough Steps

Hardware Specs:

  • fx-8320
  • Gigabyte GA-990FXA-UD3
  • 4x8gb of ram
  • amd 6950 2GB
  • 4x64gb ssds raid0

Install Ubuntu and then install KVM.

sudo apt-get install -y ubuntu-virt-server

Add "iommu=pt iommu=1" to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then regenerate the grub config:

sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="/GRUB_CMDLINE_LINUX_DEFAULT="iommu=pt iommu=1 /' /etc/default/grub
sudo update-grub

Figure out what your PCI device’s vendor ID and product ID are, and which PCI bus it’s on, by running this command:

lspci -nn
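For reference, here’s roughly what an `lspci -nn` line for the Radeon 6950 above looks like (illustrative; your bus address will differ), and how to pick out the IDs. The trailing bracketed pair is vendor:device, and the 01:00.0 at the front is the bus address:

```shell
# A sample lspci -nn line (illustrative, not copied from a real box):
line='01:00.0 VGA compatible controller [0300]: ATI Cayman PRO [Radeon HD 6950] [1002:6719]'

# Pull out the last [vvvv:dddd] pair, i.e. the vendor and device IDs:
ids=$(printf '%s\n' "$line" | sed -n 's/.*\[\([0-9a-f]\{4\}:[0-9a-f]\{4\}\)\].*/\1/p')
echo "$ids"

# pci-stub's new_id file (used further down) wants a space, not a colon:
echo "$ids" | tr ':' ' '
```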

Edit your /etc/rc.local file and add the following:

#Disable selinux, make sure allow_unsafe_assigned_interrupts is enabled at boot.
echo 0 > /selinux/enforce
echo 1 > /sys/module/kvm/parameters/allow_unsafe_assigned_interrupts

#NOTE: The following steps might be handled by virt-manager/virsh now, and might not be needed…

#Make sure pci stub module is loaded
modprobe pci-stub

#Use virsh to detach your PCI devices (this is for my amd 6950 + its onboard sound card)
virsh nodedev-dettach pci_0000_01_00_0
virsh nodedev-dettach pci_0000_01_00_1

#Add pci devices IDs to pci-stub module
echo "1002 6719" > /sys/bus/pci/drivers/pci-stub/new_id
echo "1002 aa80" > /sys/bus/pci/drivers/pci-stub/new_id

#Unbind existing modules for the PCI devices
echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo 0000:01:00.1 > /sys/bus/pci/devices/0000:01:00.1/driver/unbind

#Bind pcistub module to those pci devices
echo 0000:01:00.0 > /sys/bus/pci/drivers/pci-stub/bind
echo 0000:01:00.1 > /sys/bus/pci/drivers/pci-stub/bind

exit 0

Restart your computer… then start your virtual machine [create/manage your VMs using virt-manager (from another computer if you don’t have a second GPU)].
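If virt-manager gives you trouble, the equivalent can also be added to the domain XML by hand. A sketch for the 6950 at 01:00.0 above (`managed='yes'` lets libvirt handle the detach/reattach for you):

```xml
<!-- Pass the PCI device at 0000:01:00.0 through to the guest -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```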

Voila.


Headless VMware ESXI5.5 BOX with a real GPU:

Hardware Specs:

  • fx-8320
  • Asus m5a97 R 2.0
  • 4x4gb of ecc ram
  • quadro 4000
  • lsi 9260-4i with 4x64gb ssds raid0
  • USB 3.0 PCI-e card
  1. Installed ESXi 5.5, then installed the nvidia .vib driver and the LSI .vib driver.
  2. Uploaded the vcenter appliance ova, which is SUSE based (no longer need a windows domain + windows server running vcenter!), via a windows machine, but you should be able to do it via the command line (http://www.virtuallyghetto.com/2012/05/how-to-deploy-ovfova-in-esxi-shell.html).
  3. From there I was able to set up the vcenter appliance from the web management console, which is on a different port than the vcenter port.
  4. Then in vcenter I just had to enable pci passthrough for the devices I wanted to pass through (quadro 4000) and restart.
  5. Then I uploaded my windows 8 iso and created a VM configured with the GPU attached, and installed windows.
  6. Unfortunately I had issues with the usb3.0 card (can’t recall what, I think it wasn’t showing up).
  7. After windows and the quadro drivers were installed, I disabled the vmware gpu in device manager.
  8. To pass through the keyboard and mouse I had to add a line manually to the vmx file defining the vendor id and product id, and unplug+replug the keyboard/mouse. (Your alternative is to use Synergy, Download Synergy 3.)

But there you have it, a headless ESXI box with windows VM with a real GPU.
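For reference, the vmx lines I mean look something like this (the vendor/product IDs here are placeholders; use the ones for your own keyboard/mouse, and note I’m going from memory on the exact option names):

```
// Allow HID devices (keyboard/mouse) to be passed to the guest
usb.generic.allowHID = "TRUE"
// Auto-connect the USB device with this vendor:product ID (placeholder IDs)
usb.autoConnect.device0 = "0xvvvv:0xpppp"
```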

Notes: The AMD 970 chipset has IOMMU support, but they don’t list it as such; only 990FX boards are advertised as having IOMMU support, which I suppose is for marketing reasons. I was also surprised to find that the ASUS M5A97 R2.0 supports ECC memory, even though they don’t list support for it.

Thanks for the help, folks. Right now I have an Asus M5A97 R2.0 (which it sounds like is supported) with a Phenom II x4 965 and a GTX 760 as my primary card…
It sounds like I just have to find a suitable secondary card?

Yeah, having two cards makes it much easier when you are figuring it out.
Need to borrow a working card while you figure it out? Let me know if you do.

Shit, your Phenom II x4 965 doesn’t support IOMMU either :confused:

I wouldn’t object to letting you borrow my m5a97 R2.0, fx8120, 4x4gb of ram, and quadro 4000 for a while, assuming you plan on figuring out PCI passthrough first, then buying hardware later and giving me back my stuff :slight_smile:

Nah, I’ll hold off. I want an 8-core before I do this anyway. FX-8370
perhaps… We’ll see. I appreciate the offer though.

I typically buy the cheapest CPU with the most cores, then overclock…
My fx-8120 3.1ghz chip has been overclocked to 4.2ghz for most of its life (water cooling).
My fx-8320 3.5ghz chip has been overclocked to 4.5ghz for most of its life (water cooling).
But to each their own!

shrug I buy the most fastest one I can afford and overclock it
fastester! As you said, to each their own. ALL THE PETAFLOPS!