IOMMU / VM / PCI Passthrough Thread

How's your experience with a Linux distro as the host OS and Windows as a guest OS? Is it a good alternative if you want to play games or run applications that are Windows-exclusive or have no Linux equivalent, while not having to worry about privacy and security issues? Any limitations worth mentioning?


Assuming you didn't get jewed out by Intel, make sure you have a monitor plugged in on the integrated graphics or an additional GPU.

I don't have a VT-d capable PC yet, but I'm planning on building one (Haswell with an ASRock motherboard).
Is it possible to have the iGPU dedicated to Linux, and a graphics card to Windows at the same time? I've heard that 1) On Haswell, it needs patches to work; 2) It works natively on Skylake; 3) A discrete GPU for the host plus a high-end AMD GPU with UEFI support, preferably different models, is guaranteed to work too.
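
While shopping for parts, the CPU side is easy to verify from any running Linux box. A hedged sketch: vmx is Intel VT-x and svm is AMD-V; note that VT-d/AMD-Vi (the IOMMU) is a separate chipset/firmware feature this does NOT prove — check `dmesg | grep -i iommu` for that.

```shell
# Check a cpuinfo dump for hardware virtualization flags.
# vmx = Intel VT-x, svm = AMD-V. IOMMU (VT-d/AMD-Vi) support must still be
# enabled in the firmware and confirmed separately.
has_virt_flags() {
  grep -qE '^flags.*[[:space:]](vmx|svm)([[:space:]]|$)' "$1"
}

if has_virt_flags /proc/cpuinfo; then
  echo "CPU advertises hardware virtualization"
else
  echo "no vmx/svm flag found"
fi
```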

Thanks for replying by the way, there's not much organized and updated info about this topic, but it seems to be getting progressively better with time.

By the way, do USB and peripherals in general work without much fiddling?

I'm currently using my iGPU for my Arch Linux machine, and an R9 290 for my Windows VM. It took a bit of tweaking but it works pretty well.

You can passthrough peripherals that you want like keyboard and mouse, or headphones and microphones. However I use a program like Synergy to use a keyboard and mouse on both machines at the same time.
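
To sketch what that Synergy setup looks like: a minimal config, assuming the Linux host sits to the left of the VM's screen. The screen names are placeholders for the two machines' hostnames.

```shell
# Write a minimal synergy.conf: host keyboard/mouse cross to the VM's
# screen when the pointer leaves the right edge. Screen names are
# placeholders -- they must match each machine's hostname.
cat > /tmp/synergy.conf <<'EOF'
section: screens
    linux-host:
    windows-vm:
end
section: links
    linux-host:
        right = windows-vm
    windows-vm:
        left = linux-host
end
EOF
# Start the server on the host with: synergys -c /tmp/synergy.conf
# then point the Windows client at the host's IP.
```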

That sounds pretty sweet. Is there a noticeable disadvantage in contrast to running native Windows? I've got a TV connected through HDMI in clone mode and my main monitor with DVI both handled by the graphics card, so i was wondering if 2 monitors plus the TV would cause some kind of conflict whenever i want to watch movies or something.

As far as I can tell, my hardware supports virtualization (vmx for Intel VT-x, svm for AMD-V) but my BIOS doesn't. Is there a way around this?

Just update your BIOS. Hopefully your board's support wasn't dropped early on.

If it's passthrough then the performance should be identical to running native. And I'm sure it could work with your monitor setup.

I've heard that Vulkan will eliminate the need for a discrete graphics unit assigned to each VM for real passthrough.

Anyone else hear this?

Xenfag here, been running this setup for about 2 years now. Everything works, Windows is quite fast and can run almost any game at high/ultra. AMD, 8350, R9 380 + 6870 for loonix, SSDs for the OSes.

Is this normally a BIOS update, or did Linux and industry find a good standard in their updated distros?

I remember it being in early stages a year or so ago.

I am not a constant Holla Forumsie

Industry standard, needs to be supported by CPU and Motherboard.

Are there any special requirements or can I just place random low-end GPU into free slot and start passing my usual GPU to VMs?

is there any way i can automatically hibernate linux and reboot into windows? I have a pentium d which doesn't allow 64-bit vms and muh adobe after effects cc. Fucking 502

Can somebody list some compatible parts?

As long as your CPU and motherboard are compatible it should work.
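
As a quick sanity check of that compatibility, here's a sketch (assuming a Linux host booted with `intel_iommu=on` or `amd_iommu=on` on the kernel command line) that lists which devices landed in which IOMMU group — the GPU you want to pass through ideally sits in a group of its own.

```shell
# List IOMMU groups; an empty listing usually means the IOMMU is disabled on
# the kernel cmdline or in the firmware. The directory argument is
# parameterized so the logic can be exercised against a fake tree.
list_iommu_groups() {
  for dev in "$1"/*/devices/*; do
    [ -e "$dev" ] || continue
    group_dir=$(dirname "$(dirname "$dev")")
    printf 'group %s: %s\n' "$(basename "$group_dir")" "$(basename "$dev")"
  done
}

list_iommu_groups /sys/kernel/iommu_groups
```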

Why don't you just dual boot?

Holy shit teach me your way senpai,
this is exactly what i want to do!!!!!!!!!!!!!!!!!!!!

I made some shitty guide a while ago on how to do that shit; it may or may not be outdated. Just go to and save that page along with the pics/vid, because imma host some other shit on it in a week or two. Also take that guide as just a guideline or info source, because I think something is wrong with how i've done things back then: basically HDD read/write speeds are fuckslow.

It's honestly the best setup if you don't want to abandon gaming. As for my experience with it, it was a very informative and hellish learning experience for a week, since i'd never touched Linux before at the time, but i loved every second of going through success after success of fixing shit. Ultimately i went back to Windows as my main OS for one reason: i couldn't fix the HDD speed problem in the VM. I have no idea what was causing it, and nothing i found online fixed it. It might be the install/start script i used from a youtube guide, or not. In any case i didn't have the skills to fix it, and it fucked with the performance.

Crunchwheels lemme post

It's not too terribly difficult to setup, providing your hardware supports it.
I'll monitor this thread; ask anything you'd like.

I thought about running a setup like this one, but when I switched to an SSD I kinda stopped caring; it's easier to dualboot. Not to mention that now I have a multimonitor setup.

i'm not even sure how to install virtualbox

i did the "sudo dpkg -i packagename" but then i don't know how to run it, or where it is at all


Thinking of getting a thinkpad T450 to run Xen and Fedora + whonix together then another space for gaming windows box (don't like Qubes and already have a computer for qubes and they don't support windows at all)

lmao, dreamin, it will work sure but not without $$$ expensive support and other technical shit

If you have NVIDIA you better learn to script qemu or fuck off because NVIDIA doesn't let you run their drivers in a VM.

I'm running a Xeon E3-1265L v3, amazing deal. I had an i5-2500K; the single-threaded performance jump is about 25% (the same as Sandy Bridge to Skylake), and I also netted a 45W TDP instead of 95W and 8 threads instead of 4. The cheapest ASRock mobo I could get that was ITX and had integrated wifi, and my old GTX 760, though I want to upgrade to at least an R9 285 so I can passthrough. My mobo also has integrated graphics that supports 3 monitors at the same time, so I can run all 3 screens on Windows or Linux and have no limitations.

I had it working with QEMU/KVM in Gentoo with my older Radeon card. I couldn't get it working with my GTX 760. Worse, I can't control the primary display adapter in the BIOS, so I'd need to put my good card in the bad PCIe slot if I wanted the VM to use it. The next graphics card I get will be a Radeon. I'll never get Intel or Nvidia again. I'll also make sure the mobo has a good BIOS menu.

I have a gtx760 too. Did you manage to get yours working without soldering?

No. I'm too lazy to learn to use qemu with scripts. I'd rather switch to AMD than waste time.

You can either use drivers before a specific version (didn't work for me), use commandline (too lazy) or like you said solder it to the quadro equivalent (don't want to solder my only gpu)

I thought soldering was the only way. Do you still have links for the commands or scripting as you say? I'm pretty comfortable with it myself.

no but I would start here.

I believe the specific flag is kvm=off
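
For reference, a hedged sketch of how that flag fits into a QEMU invocation — everything except `-cpu host,kvm=off` (memory size, disk path, PCI address 01:00.0) is a placeholder for your own setup:

```shell
# kvm=off masks the hypervisor CPUID bit so the GeForce driver doesn't bail
# out with error 43 when it detects a VM. All paths/addresses below are
# placeholders.
QEMU_CMD="qemu-system-x86_64 \
 -enable-kvm \
 -cpu host,kvm=off \
 -m 8192 \
 -device vfio-pci,host=01:00.0,x-vga=on \
 -drive file=/path/to/windows.img,format=raw"

echo "$QEMU_CMD"     # inspect first; run with: eval "$QEMU_CMD"
```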

Saved this from a previous thread. Never done this myself, so I don't know if it works. (I plan to get an AMD card anyway.)

Protip: you can softmod any NVIDIA card by patching QEMU. Apply the patch below, then build QEMU and append ,x-vid=0x10DE,x-did=0x11BA,x-ss-vid=0x10DE,x-ss-did=0x0965,romfile=/path/to/BIOS.rom to the arguments for the passed-through gfx card device, replacing the IDs with the appropriate IDs for the card you want to mod to. The IDs in the example are for a Quadro K5000. You can find a BIOS and the IDs for some Quadro cards at techpowerup. Then just run the VM, and the GeForce drivers should recognize your card as a Quadro and work without having to disable anything. The patch:
--- a/hw/vfio/pci.c	2015-04-27 07:08:25.000000000 -0700
+++ b/hw/vfio/pci.c	2015-08-01 21:08:41.158189382 -0700
@@ -160,6 +160,10 @@
 #define VFIO_FEATURE_ENABLE_REQ_BIT 1
 #define VFIO_FEATURE_ENABLE_REQ (1 << VFIO_FEATURE_ENABLE_REQ_BIT)
+    uint16_t virtual_vendor_id;
+    uint16_t virtual_device_id;
+    uint16_t virtual_ss_vendor_id;
+    uint16_t virtual_ss_device_id;
@@ ... @@
     memset(&vdev->pdev.config[PCI_BASE_ADDRESS_0], 0, 24);
     memset(&vdev->pdev.config[PCI_ROM_ADDRESS], 0, 4);
+    if (vdev->virtual_vendor_id != 0xFFFF) {
+        vfio_add_emulated_word(vdev, PCI_VENDOR_ID,
+                               cpu_to_le16(vdev->virtual_vendor_id), 0xFFFF);
+    }
+    if (vdev->virtual_device_id != 0xFFFF) {
+        vfio_add_emulated_word(vdev, PCI_DEVICE_ID,
+                               cpu_to_le16(vdev->virtual_device_id), 0xFFFF);
+    }
+    if (vdev->virtual_ss_vendor_id != 0xFFFF) {
+        vfio_add_emulated_word(vdev, PCI_SUBSYSTEM_VENDOR_ID,
+                               cpu_to_le16(vdev->virtual_ss_vendor_id), 0xFFFF);
+    }
+    if (vdev->virtual_ss_device_id != 0xFFFF) {
+        vfio_add_emulated_word(vdev, PCI_SUBSYSTEM_ID,
+                               cpu_to_le16(vdev->virtual_ss_device_id), 0xFFFF);
+    }
     vfio_pci_size_rom(vdev);
     ret = vfio_early_setup_msix(vdev);
@@ -3553,6 +3574,10 @@
             intx.mmap_timeout, 1100),
     DEFINE_PROP_BIT("x-vga", VFIOPCIDevice, features,
                     VFIO_FEATURE_ENABLE_VGA_BIT, false),
+    DEFINE_PROP_UINT16("x-vid", VFIOPCIDevice, virtual_vendor_id, 0xFFFF),
+    DEFINE_PROP_UINT16("x-did", VFIOPCIDevice, virtual_device_id, 0xFFFF),
+    DEFINE_PROP_UINT16("x-ss-vid", VFIOPCIDevice, virtual_ss_vendor_id, 0xFFFF),
+    DEFINE_PROP_UINT16("x-ss-did", VFIOPCIDevice, virtual_ss_device_id, 0xFFFF),
     DEFINE_PROP_BIT("x-req", VFIOPCIDevice, features,
                     VFIO_FEATURE_ENABLE_REQ_BIT, true),
     DEFINE_PROP_INT32("bootindex", VFIOPCIDevice, bootindex, -1),

Thanks a lot.

Unlikely. KVM is for OSes that aren't made to be guests. It's for full virtualization, I think, as opposed to something like Linux, which can have a kernel made to be virtualized (paravirtualization). That won't work for Windows.

Is there a point to doing a passthrough if you don't play vidya? I can't think of any reason. I don't need CAD or anything windows-specific, and I hate windows too much to warrant installing it just for sony vegas.

PCI passthrough has very niche uses. If you can't think of why you would want or need it, you probably have no use for it.

Well, it's time I set things straight in my pc.

Previously I had installed windows 8.1 on an ssd and used a 3tb drive for storage. Later I got lazy and unplugged the ssd and threw in an old hdd I boot arch linux from.

Now I need a distro that is stable enough to not break KVM and is still fairly modern. I will install it on the ssd and move the windows install into a virtual environment.

Can I move the windows install to a virtual machine somehow without breaking the license? Can I use the license I have at all?


I have AMD and it just fookin works m8. I only needed to buy a KVM switch, THAT'S IT. Can't wait till Zen and Polaris come out at the end of this year.
I feel sad for those of you who fell for the 'le intel lga socket mustard race' meme.

Once Summit Ridge is out, why would you even think about touching Intel? The only thing Intel is good for is super power-efficient i3 CPUs for laptops. Those Broadwell and Skylake i3s sip power at a measly 10-15W while performing better than an i7-640LM, which gulps 25-30W.

Point is buy AMD summit ridge for desktops and buy intel for low end laptops.

NO. Just get Summit Ridge when it comes out. It will perform better than Haswell, have DDR4, be on the AM4 socket, die-shrink down to 14nm, and support IOMMU and virtualization while being overclockable. Can't overclock and virtualize with Intel. AMD supports KVM much better than Intel.

Kvm+qemu > xen

It's me senpai, the Thinkpad guy; that was my post.

Is the thinkpad good enough hardware-wise?
thinking Xen + fedora dom0, domU fedora + whonix then domU windows gayming

I am using qubes with a super-old laptop atm and it works fine without the gaming/windows... i imagine with the T450 Xen will be plenty when i Kill the other domu

I also want to make a .deb/source git repo of Xen and its dependencies so I can set up offline if I ever need to. Where do the yum install downloads end up when I do the whole Xen setup/virt-manager install from the CLI?

I always just call M$ and tell the automated voice sweet nothings until it gives me a new key. A real human never answers, and the robot always believes everything I say.

Social engineering with robots.

I'll see if I can extract the keys somehow...

This would probably be easier. - from the wiki.

would a thinkpad x220t with an eGPU work with this? The processor seems to support vt-d, I'm just not sure how having an eGPU would complicate the process.
Pic semi-related but not mine.

Very good, got it going with an 860K, R9 290. Host GPU is a 9800 GT.

PSU got raped last week and I'm still waiting for the replacement. In the meantime, no way I can power that R9 so I'm stuck without winshit games for now.

I wish you luck. Windows doesn't like it when your storage controller changes. Even Windows 10 is finicky with controller drivers.

Not particularly. I've tested both extensively - performance wise, virtio and xenpv are nearly identical. For me, KVM used to not play well with my mobo's iommu and grsecurity *cough*, and Xen has a far superior interface for setup and management than running qemu instances. The choice was easy.

Side note, if you use libvirt, you should consider suicide.

It can probably run it just fine, but there's the matter of graphics. You need another fully separate card from the one driving dom0, not just extra outputs. If you can get a T430 with integrated + dedicated graphics, then you're golden.

I have heard rumors of people getting this working on laptops with nvidia optimus, etc. using the discrete card and framebuffer, but I've yet to see this. Fuck, that would be cool.

I would assume so. Would be neat to try out.

My condolences.

Yeah I guess maybe I can look at the double GPU thinkpads as well

have dom0/U fedora config with intel card and nividia to domU windows

Got a Bash history of a short write up of how to do it from start?

I wanted to do this, but then Asus pulled their bullshit so my specific MB isn't capable of it.

Should I buy an FX-8350 or some Opteron(s) for a setup like this?

Used Opteron 6282s are pretty cheap to come by, but the only ATX-sized G34 board I can find only has 1 PCIe x16 slot, which means I'd have to get a new case to fit a dual/quad socket board into, which I don't need.

I am dualbooting but a script could be helpful.


I've got mine running pretty well: two Windows seats running at once so two people can game on one PC. Helped my family save money by buying one PC instead of two. Only thing I can't figure out is how to get my X-Fi Titanium HD working. Something is wrong with the way VFIO handles PCI interrupts, so the sound just won't work when I install the official drivers. I've already tried with "nointxmask" turned on. No one seems to know what's going on, as I got no replies on the mailing list.

I need a cheap gpu literally just to support my monitors on the linux side, no integrated on my mobo unfortunately
any recommendations?

I don't, sadly. It's really not hard setting up Xen, though. Just follow your distro's instructions.

That could be a problem.

VFIO is a broken mess that only works when you hit it. No surprise there.
What kinda hardware is in there?

Whats your setup? Debian Kernel?

my advice is that you grow up and get out of video games, use a real os


lads, if you are running newish intel graphics, you almost don't need pci passthrough.


this is the new age with gpu virtualization.


I wasn't really expecting help from anyone, just sharing about potential issues, but I appreciate it.

Here's my hardware:
Kernel: Gentoo 4.1.6-ck x86_64
MoBo: Asus X99-PRO/USB 3.1
CPU: i7-5820K
GPU: 1) GTX 970 - Passthrough #1
     2) GT 740 - Linux
     3) GTX 970 - Passthrough #2
Audio: 1) Creative X-Fi Titanium HD (EMU20k2) - Passthrough #1 (DOESN'T WORK)
       2) Intel HD Audio - Passthrough #2

It's likely that none of this info is relevant to the issue, but maybe someone will spot something.

While my system pretty much support it, I don't want to use my integrated shit GPU for my linux gaming and stuff. There is like 4 games that I play on wangblows, it would be a total waste to sacrifice my R9280X on it. There is really no way to use my R9280X for both (my linux and vm wangblows)? Even if not at the same time? Like disable it for linux and then starting wangblows?

Yes, I'm a faggot and I don't know shit about how this works.

Unfortunately proprietary video drivers don't allow dynamic binding of video cards. If you use the open source ones, I believe that it's possible, but I couldn't tell you how. You definitely can't use Linux and Windows at once; in the future maybe.

-t. office drone
Someone could probably use this.

I'd just like to interject for a moment. What you're referring to as Linux,
is in fact, GNU/Linux, or as I've recently taken to calling it, GNU plus Linux.
Linux is not an operating system unto itself, but rather another free component
of a fully functioning GNU system made useful by the GNU corelibs, shell
utilities and vital system components comprising a full OS as defined by POSIX.

Many computer users run a modified version of the GNU system every day,
without realizing it. Through a peculiar turn of events, the version of GNU
which is widely used today is often called "Linux", and many of its users are
not aware that it is basically the GNU system, developed by the GNU Project.

There really is a Linux, and these people are using it, but it is just a
part of the system they use. Linux is the kernel: the program in the system
that allocates the machine's resources to the other programs that you run.
The kernel is an essential part of an operating system, but useless by itself;
it can only function in the context of a complete operating system. Linux is
normally used in combination with the GNU operating system: the whole system
is basically GNU with Linux added, or GNU/Linux. All the so-called "Linux"
distributions are really distributions of GNU/Linux.

You can make two boot entries in grub, one blacklisting the R9280X so it can be used by VM. Of course this way you still have to reboot when switching the graphics card to/from the VM.
Another option would be to create a GNU/Linux gaming VM with PCI passthrough, but I have no idea how the card reacts to driver change without shutting down.
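
A sketch of what such a second boot entry could look like, assuming the vfio-pci-at-boot approach on the kernel command line. The vendor:device ID 1002:6798 (Tahiti, i.e. R9 280X), the root device, and the kernel paths are all placeholders for your system.

```shell
# Generate a hypothetical second GRUB entry that reserves the R9 280X for
# the VM by binding it to vfio-pci at boot and blacklisting the host driver.
# IDs and paths below are placeholders.
cat > /tmp/40_custom_vm <<'EOF'
menuentry "Linux (R9 280X reserved for VM)" {
    linux /vmlinuz-linux root=/dev/sda2 rw vfio-pci.ids=1002:6798 modprobe.blacklist=radeon
    initrd /initramfs-linux.img
}
EOF
# Review it, then append to /etc/grub.d/40_custom and re-run grub-mkconfig.
```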

I'm looking into trying this myself, but I'd rather have the GPU be used by Linux on boot for Linux/Wine software, then disable it when I need to start Windows so the VM can use the GPU. Then obviously re-enable it when the VM is closed so Linux can use it again. (I haven't bought a GPU yet, so I can't test anything. Waiting for my next paycheck. The R9 380 looks fine, but is there any reason to wait for new GPUs?)
If I understand the answer correctly, this just blacklists the module meaning it won't be used. Will this do what I want, or is there something else I should be looking at?

Also, I have a second monitor. Is it possible in Linux to have each screen use a different GPU but share the desktop space? (I thought Windows didn't work well with trying to use 2 GPUs at once, which is why I ask.)

Lastly, Xen or QEMU? I've seen guides usually recommend QEMU, but it seems some of you prefer Xen. What are the pros and cons of each? Any other VM software I should be looking at?

When will we be able to pass Optimus-enabled GPUs on laptops? Currently it's not possible because Optimus works by writing the final image to the integrated GPU's framebuffer rather than the discrete GPU's, but apparently it would be simple enough with a kernel-mode driver to hook the GPU so that a VM uses the GPU framebuffer before compositing with the real framebuffer.

I only do it for Internet Explorer because my job has webapps that don't work in any other browser. IE works perfectly if your hardware isn't shit. I haven't tried vidya or anything really resource-heavy since all I have is this old X200, but if you get a good GPU I don't see why it shouldn't work.

I remember doing this back with XP when it was not a robot but some subhuman. But yeah, you can get away with calling them and asking for another activation so many times. I think my excuse was that "I test VMs for my job and blah blah"

what OS are you using?

If you use tab-completion it should show up. Alternatively it should show up in the GUI. If neither works, try using apt-get.

I would assume they are using Debian or one of its bastard children.

there was something that allows you to use intel's iGPU in VMs while sharing it with the host, but I don't know how viable it actually is
also, it only works on intel, no AMD or nvidia yet

12 gb - host- SSD
12 gb - guest-1tb ssd+HDD
12 gb - guest-1tb ssd+HDD

16 gb - servers

08 gb - XBMC-HDD

+4gb of ram left over.

So is this RAM allocated to your VMs?

Multi-gpu screen extending is totally possible with randr.
YMMV, Xen is a more managed solution that (for me) has been stable. Qemu seems to have more bleeding edge features.
So this came up as the only free alternative to Synergy for Linux on (the license part is probably redundant as there are really no other alternatives)
Is it any good, or are there any alternatives not listed on the site that might work for sharing keyboard/mouse input? Anything built into Xen/QEMU that might work?

I've heard that Xen doesn't work with some Nvidia cards, while QEMU should. (Not sure if the latter still requires some tweaking) Is that true?

Can I do this with a macbook

Only kinda works with open source drivers (meaning not great for gaming) right now. AMD's and nVidia's proprietary drivers aren't well-behaved when it comes to unbinding devices, so you have to reload the driver which itself causes other issues. Not to mention X doesn't really have support for GPU hotswapping, so you'd have to do some ghetto setup involving restarting X or using multiple X servers.

I have no problems on a xeon 1265l v3 (haswell). My problem is the gtx 760 driver tosses error code 43

sell your macbook and build a gaming pc and buy a mediocre laptop

So looking into this, I found that wayland added gpu swap support back in 2010. For X, I found 1 person who showed adding a GPU using his own patch, but posted no progress since then or even the patch itself.
Why is X so widely used again?

If you blacklist the driver then it won't load and you won't get host graphics. What you want is to just proceed as normal using the open source driver module (proprietary shit tends to not unbind well and you get fucked when you try to forward into the VM). Then, when you want to start the VM, you need to close the X server running on that card, unbind the card from your driver and bind it to vfio-pci. Basically this involves some low level fuckery where you write your device's location string (such as 0000:02:00.0) to /sys/bus/pci/drivers/<driver>/unbind and then echo it again to /sys/bus/pci/drivers/vfio-pci/bind. Note you will need to manually load the vfio-pci module first since it won't be autodetected as a necessary driver. If you don't, the vfio-pci directory won't even exist.

Getting the card back is the inverse; unbind from vfio-pci, bind to your gfx driver ("radeon" in my case).

Important resource for /sys fuckery:

You can see the entire location string of your PCI devices using lspci -D
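
The dance above, wrapped in a small script. The sysfs root is parameterized purely so the logic can be dry-run against a fake tree; on a real system leave it as /sys, run as root, and note the device address and driver names are placeholders for your own card.

```shell
# Rebind a PCI device between drivers via sysfs. Run `modprobe vfio-pci`
# first, or the vfio-pci driver directory won't exist.
SYSFS_ROOT=${SYSFS_ROOT:-/sys}

rebind() {  # rebind <pci-address> <from-driver> <to-driver>
  addr=$1 from=$2 to=$3
  echo "$addr" > "$SYSFS_ROOT/bus/pci/drivers/$from/unbind"
  echo "$addr" > "$SYSFS_ROOT/bus/pci/drivers/$to/bind"
}

# Example usage (as root):
#   rebind 0000:02:00.0 radeon vfio-pci    # hand the card to the VM
#   rebind 0000:02:00.0 vfio-pci radeon    # take it back for the host
```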

Okay, this actually gives me an idea.
Can't test this until I get the GPU, but is there any reason this wouldn't work?

Startpage results show wayland was going to add GPU swapping back in 2010, but I can't find anything recent that confirms it was added or how to do it.

Sounds like that could work, albeit a bit hacky. Is it possible to drag windows between X servers? That might be something to keep in mind when running programs in a particular X server.


Want to try this. Do I need a relatively up-to-date distro, or should even Debian-based distros be okay now with software and drivers?

Your kernel needs to support virtio or at least PCI-stub.
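
A quick way to check is grepping the kernel config; a sketch, assuming the common distro location of the config file (it varies, and some kernels expose it at /proc/config.gz instead):

```shell
# Check whether a kernel config enables the VFIO/KVM/virtio pieces.
check_cfg() {   # check_cfg <config-file> <CONFIG_OPTION>; true if =y or =m
  grep -qE "^$2=(y|m)" "$1" 2>/dev/null
}

CFG=/boot/config-$(uname -r)
for opt in CONFIG_VFIO CONFIG_VFIO_PCI CONFIG_KVM CONFIG_VIRTIO_PCI; do
  if check_cfg "$CFG" "$opt"; then echo "$opt: ok"; else echo "$opt: missing?"; fi
done
```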

I recently wiped everything and went full-on Arch Linux, no partitions. Using Win7 in a VM for certain things and have had no problems, except when it comes to using a DAW and audio interface.

ASIO doesn't work, it can't open the device. But even then, I still noticed a problem. To record with the box, the guest OS needs to have the box, but to monitor, the host OS needs the box. It seems like a conundrum. But again, ASIO doesn't even work in the first place.

I'm basically sequencing in the VM and rendering it down, then importing the mixdown into Ardour and doing recording there. It works and I'm pretty happy with it.

Is there a capture card that will give me a 0-latency 1080 window of my console in my linux desktop?

So why QEMU or Xen? Why not VirtualBox?

Short version: because of the different audience being targeted. VirtualBox is for people who just want their VM to work without having to really mess with settings. Not that that's a bad thing.

I'm planning on making vidya work with a new Xen setup, Arch or gentoo or something as dom0, Windows 8.1 or 10 as a VM. AMD APU, and NVIDIA or AMD GPU.

Would this be able to work well? Could I exit or close out of the VM and go back to Arch/gentoo whenever I want, no dual/rebooting? Would I get native or near native graphics performance on the VM?

I'm great with server virtualization, but this home/gaming stuff I'm really inexperienced at......

fucking kikewheels fix your shit


It would work fine. You can use a program like synergy to switch between your host os and guest without having to reboot or anything.

I'm running on Gentoo with an Intel CPU and AMD GPU, no issues hardware wise. Graphics performance seems to be the same as native; I haven't actually benchmarked it, but I have no issues running most games at 60fps+ with my HD 7970 and i5-6600.

It can be a bit of a pain to set up if you don't know what you're doing. Since you're experienced with server virtualization you probably won't have too much trouble.

What's the best setup for running a Linux VM in the background if you're a shitlord on Windows? In the past, I used VirtualBox, but I have no idea what the best solution is.

I'm not interested in using it as a desktop environment. Been there, done that. I just want to test some server software before I shell out for a VPS or a dedicated server.

Just install Microsoft's Ubuntu emulation garbage

Would this work with the integrated graphics from a10-5800k and a gtx 750 ti?
Also dumb question but where do I plug the HDMI cords? And should the setting in mobo disable integrated graphics or keep default?

I thought about that.

But since I simply do not care anymore about video games I see no reason.

Literally all I need is available in Linux even the two games I really enjoy plaing are running native.

I can't live without Linux but I can live without Windows.

I have PCI passthrough working, but from what I can tell my CPU is bottlenecking the performance and increasing the thread count on QEMU doesn't seem to do anything for the benchmarks. Is there some trick to optimizing it or does the FX8350 just not have enough power?

What resolution are you trying to run? GPU? Is your CPU overclocked?

VMWare if you don't care about FOSS or the ubuntu emulation shit as mentioned.

The CPU is stock 4GHz and I'm using a Radeon R9 290 at 1920x1080. Right now I'm passing "-cpu host -smp 6" to QEMU, but I haven't noticed much difference from changing it.
Unigine Valley gives me about 14 fps on the lowest settings and 12 on the highest settings.

Have you tried changing the display driver in use?

I think the problem was that "-smp 6" may have been using 6 cores but the guest was only seeing one of them. I changed it to "-smp cpus=6,cores=1,threads=6,maxcpus=6" and that immediately pushed me up to 40 fps.
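
For anyone hitting the same wall: the commonly recommended shape is to expose one socket with real cores instead of QEMU's default of N single-core sockets. A sketch; the core count assumes a 6-thread share and is a placeholder for your CPU.

```shell
# Give the guest a CPU topology its scheduler understands:
# 1 socket x 6 cores x 1 thread, instead of 6 single-core sockets.
SMP_ARGS="-cpu host -smp 6,sockets=1,cores=6,threads=1"
echo "qemu-system-x86_64 $SMP_ARGS ..."
```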

why bother with the license? either pirate Windows 7 (very easy to pirate), or put Windows 10 in the VM. Since it's just for vidya and it's sandboxed in the VM, it doesn't matter if the Telemetryshit is running in it.

I'd suggest something from these search results. Just do some googling first to make sure it plays nice with the drivers, no screen tearing, etc.

holy shit. I hope AMD doesn't sleep on this.

Hey Holla Forums, here. I think I've managed to get this working with an eGPU on my x220t using this guide (I haven't benchmarked anything, so I'm not quite sure). It boots up and everything, but at an awful resolution (which I think is supposed to happen). I think I just need to set the right resolution for the monitor connected to the GPU.

But I've run into 2 problems when I get to the part where you use xrandr to set the proper resolution for the VM. Firstly, none of the output devices xrandr lists makes anything show up on the monitor (normally, Windows appears on the monitor but at a shitty res, like I said before). Secondly, when I exit QEMU and the shell script stops, it says "xrandr: cannot find mode 1920x1080". What am I doing wrong here?

Also I'm sort of a novice at Linux, I'm just competent enough at it to follow the guide. so pls no bully if the answer is something obvious.

what's the best way to share folders/usb devices between Linux and the VM? I tried smb and passing through the usb device to the vm, but neither worked.

With QEMU/KVM you need Samba. Only VirtualBox and VMware have working shared folders for a Windows VM.
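
For the record, QEMU's user-mode networking can export a host directory over a built-in SMB share, which is what the Samba requirement boils down to. A sketch — the share path is a placeholder, smbd must be installed on the host, and the guest sees the share at \\10.0.2.4\qemu:

```shell
# User-mode networking with a built-in SMB export of a host directory.
# Traffic stays on the local virtual NIC, so it never touches your real
# internet connection (relevant for capped plans).
NET_ARGS="-netdev user,id=net0,smb=/home/user/shared -device virtio-net-pci,netdev=net0"
echo "qemu-system-x86_64 $NET_ARGS ..."
```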


It's because you fell for the meme and did some retarded shit. VMs will NEVER have the same performance as a regular system.

nevermind, I got the samba folder working.

okay, thanks. Also, possibly dumb question: does the samba folder use up data/bandwidth? I noticed it uses the network to send data over from Linux to Windows. I'm on a fairly limited capped internet plan, so I can't use that much.

I run vmware esxi and do pci pass through for the gpu to my windows box. Easy as pie but a big limitation is the Linux vm has no native display accessible to hardware outputs so I have to connect it remotely. I could fix this with a second gpu but I'm too cheap and lazy for that.

Don't sweat it. Yes, this breaks your licence, but no one will care. Micro$oft only allows you to use Windows in VMs under Microsoft volume licencing, if you follow their rules, which is a huge pain in your asshole to get set up. Your software will register and work just fine, and I suspect that if they actually tried prosecuting people with legitimate licences for using them in VMs, they would have a good chance of getting those licence terms deemed invalid by the courts.

No, it's physically impossible to send data faster than the speed of light.
If you want really low latency, your only option is a CRT connected to the analog video output on your console; anything more complex will increase the latency.


Just use WinSCP.

98% is good enough. In gaming, you can occasionally exceed native performance.

Do I have to dedicate the external GPU everytime I boot? I want to use it with Linux emulators.

You should use your package manager. dpkg is a low-ish level package management tool. Use "sudo apt-get install <packagename>" instead.

Also, if you use apt-get instead of dpkg, you will be able to use that nice graphical tool called Synaptic, which I really like because it makes browsing your packages way easier and I'm too lazy to learn Aptitude's key bindings.

How much money I need for a PC with PCI pass through ? I'm stuck with very old computer and I'm slowly saving money for a new one, but I might delay it to save up a bit more for the pc with PCI passthrough

I'm in UK btw.

Let's see

Depends, $80-600

The requirements are basically a UEFI-compatible PC with an integrated GPU, which is every PC in the past 3 years or so.

only if you have intel or an amd APU. otherwise you'll also need at least a cheapo GPU like this for the host.

You don't have to dedicate the GPU on boot every time, I believe, but you will need to restart or kill the X server if you want to boot with the GPU or pass it on to the VM. See >>570562, I think that user wants to do the same thing you're trying with GPU passover.

vmware esxi

esxi would require two passthrough GPUs to work like this, and Hyper-V is going to have a remote-desktop-like display if you don't have two GPUs used for passthrough.

But Hyper-V will allow something called RemoteFX, which allows partitioning/sharing of one GPU into several v-GPUs and accelerated remote desktop clients. It's really spiffy.

Is there any security/privacy risk to doing this?

It is hypothetically possible that the guest could break out of the hypervisor and gain access to the host. Along with the usual risks, e.g. leaking data through mistakes.

that's a risk with any VM. it would be extremely rare though.

If you want stable support for 3D graphics, forget about VirtualBox. Its 3D acceleration has been marked Experimental for years.

My two cents.

USB is easy as fuck to pass through. Most hypervisors come with a USB bridge driver that runs on the host OS and relays data between the VM and the device. This mode usually only works with USB 1.1 and 2.0, though.
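On qemu/kvm the bridge in question is the usb-host device. A minimal sketch — the vendor:product ID below is a made-up example, find yours with lsusb:

```shell
# List host USB devices and note the ID of the one to forward,
# e.g. "ID 046d:c52b" (hypothetical example).
lsusb

# Then attach it on the qemu command line. USB 3.0 devices need an
# emulated xHCI controller (-device qemu-xhci) instead of plain -usb.
qemu-system-x86_64 \
    ... \
    -usb \
    -device usb-host,vendorid=0x046d,productid=0xc52b
```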

I plan on building a computer that supports PCI passthrough. I intend to use an AMD FX 8320 CPU, a Gigabyte GA-970 motherboard which has multiple PCIE slots, and the upcoming AMD Radeon RX480 GPU for my virtual OS. However, the CPU's official information page is kinda confusing as to whether this CPU has an embedded graphics chip or not. Will I have to get a second graphics card for my physical OS? And if so, will an el cheapo AMD Radeon HD5450 do the trick?

why not just wait for Zen? if you can't wait, you can sell the CPU and mobo when Zen comes out I guess.

AMD specifically markets the ones with integrated GPUs as "APUs" and the mainline AMD CPUs are not APUs.

might as well get a 6450. the one I got is fanless and it's perfect for a tear-free graphical desktop environment. the open source radeon driver is rock-solid in KDE4, AND it doesn't crash in KDE5-- a problem I've had with open and closed-source Nvidia drivers.

also, you have to think of your PSU. If you have a dual GPU setup anyway, you may be tempted to get a second 480 in the long run. you may also want extra hard drives for VMs, dual boot, storage, etc. don't use newegg's kike wattage calculator, use this one:

4-5 months is too long for me. I lose money very fast and I need to spend it ASAP.

Also, on second thought, is the Radeon HD6450 better than a 2013 AMD APU? Since I'm going to spend money on a second GPU I might as well get something that delivers slightly better graphics while we're at it.

I do intend to get a pretty big PSU since I'm probably going to be running some pretty big hardware on my new machine. Right now all I have is something to power an AMD APU, one single RAM stick, an SSD, my USB ports and that's it.

So far I've been kinda comfy with my 240 GB SSD. It has enough space for LOL, WOWS, a bunch of other games, and 20 GB for Gentoo Linux. I stash all my downloaded shit on external hard drives that amount to 10 TB.

idk. I wouldn't trust integrated graphics to do as well with open source drivers, but maybe you can check some forums for people's experiences.

well I think a 650W PSU would probably be fine, and a 750W would make you pretty invincible

The radeon drivers work fine with my APU, but I've never bothered with 3D acceleration since I just use a basic window manager (Windowmaker).

the main concern is how well they perform with stuff like gnome3, kde5, compton, etc. the last thing anyone should have to suffer is screen tearing out the ass.

Nevermind, the HD6450 seems to deliver pretty much the same performance as my current AMD A4-5300 (I'm on my home computer now). Radeon HD 7480D/review

I think I'm just going to get an AMD Radeon R5 230 instead; it sells for like 40 dollars, which I happen to have around, and it runs twice as fast as my current APU, so I'll have something usable until the RX 480 is released.

until bumblebee can do vga passthrough on switchable graphics IOMMU is useless to me.

It can be done, but it requires some heavy modifications of bumblebee, i915, and nouveau.

Right off the top of my head I'm thinking about making a Digium phone card available to a server running Asterisk to implement a VOIP gateway. They're PCI/PCIE, so you need to use IOMMU to make them available to a VM.

Well, I just ordered my Radeon R5 230. My computer shop will stock it between Thursday and next Monday. In the meantime, I'll just twiddle my thumbs while I figure out how I'm going to do that PCI passthrough.

so far the closest thing i've seen is kvmgt/xengt, but that doesn't support discrete cards

Been experimenting with passing through the entire IGP, but I'm fairly certain memory corruption is occurring.

read everything.

If one was planning a new build for around 2017, what would be the best hardware for supporting this?

What's the best virtualization software?

Wow KVMGT looks like a really good solution. All I need is windows support so I can do rendering and gayming, looks like that's going to be the next feature implemented.

AMD chipset, AMD Graphics Cards, Windows 7 Client.

KVM Hypervisor.

couldn't you just wait for RX 480?

basically this:
Because from what I've read, you have to do a bunch of extra shit to pass Nvidia GPUs through. And for the host, I just think the radeon drivers are the most reliable open source drivers (tear-free, no crashing on Plasma 5, etc). That said, Intel CPUs wouldn't have a problem, and you can avoid getting a host GPU by using the integrated graphics.

just keep in mind you need two gpus and a good power supply.
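The "extra shit" for Nvidia is mostly working around the GeForce driver refusing to load inside a VM (the infamous Code 43 — Quadro cards are officially supported and don't need this). The usual trick is hiding the hypervisor from the guest:

```shell
# qemu: hide KVM from the guest so the GeForce driver doesn't
# refuse to load with Code 43 (not needed for Quadro cards)
-cpu host,kvm=off

# libvirt equivalent, inside <features> in the domain XML:
#   <kvm><hidden state='on'/></kvm>
```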

Any particular reason for KVM over Xen?

KVM doesn't require rebooting into a custom kernel

Actually, I'm pretty sure it has Windows support now; you can install drivers to get it running.
It can be a pain in the arse to set up, but all it is is patches to qemu/kvm and the Intel IGD drivers.

qemu/kvm runs on almost any distributed kernel and doesn't need a management daemon. Setting up qemu is easy; configuring the VM, not so much.
Xen needs kernel support and uses xl to interact with it. Setup is more complicated than qemu/kvm, but configuration is a breeze.

YMMV. you should be able to achieve the same results with either one, even performance wise. use what works, or what appeals to you.

Is there any reason for preferring this over PoL?
Just for this use case, that is.

Radeon R5230 user reporting. My graphics card arrived today and I just set it up on my computer, so now I'm trying to enable IOMMU on Gentoo Hardened with kernel 4.3.3-r4. I followed the advice on this guide here:

Problem is, "Support for DMA Remapping Devices" and "Enable DMA Remapping Devices" are missing from my kernel configuration, which means I now have no fucking clue on how to compile IOMMU support on my kernel.

And nobody on the internet seems to know jack shit on how to compile IOMMU on Gentoo, because all I find is advice for Debian, Ubuntu and Fedora, where IOMMU apparently Just Werks™ out of the box.

I tried fumbling through my kernel configuration and enabled as many options as seemed related to IOMMU. I ended up enabling the following options, but it still won't work.


When I check my kernel logs to see if AMD-Vi came up (dmesg | grep AMD-Vi), nothing shows up.
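For anyone else stuck at the same spot: on a self-configured kernel, roughly these options need to be enabled. This is a hedged list — option names as of 4.x kernels, AMD board assumed, so double-check against your own menuconfig:

```shell
# .config fragment -- IOMMU/VFIO bits for an AMD board (4.x kernels)
CONFIG_IOMMU_SUPPORT=y
CONFIG_AMD_IOMMU=y
CONFIG_AMD_IOMMU_V2=y
CONFIG_VFIO=y
CONFIG_VFIO_PCI=y
CONFIG_VFIO_IOMMU_TYPE1=y
CONFIG_KVM=y
CONFIG_KVM_AMD=y
# Intel boxes want CONFIG_INTEL_IOMMU=y / CONFIG_KVM_INTEL=y instead
```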

For the time being I'm taking a break, and will try following this guide here that suggests using something called OVMF on Arch Linux. I have no fucking idea how I'm going to do it on Gentoo, but given that a lot of Arch stuff works on Gentoo I might as well give it a shot.

If any of you niggas have any idea of what should I do, please post it.

Whelp, just reviewed the Arch Linux forum I saw and still nothing about how to get that stupid fucking IOMMU working on the kernel.

Dear fuck, why did I have to choose the one distro that has a kernel where everything Just Never Werks?

Silly question but have you got AMD-vi and IOMMU stuff enabled in the BIOS?

Yep, I double-checked. It won't hurt to check once more when my kernel finishes building, which will probably be within 15 minutes.

Another silly question: have you edited GRUB with the line suggested in ?

I use a UEFI stub kernel, so instead of editing GRUB command lines I have to rebuild my kernel with a hard-coded command line defined in .config
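For reference, hard-coding the command line on an EFI stub kernel looks like this in .config — the root device and iommu flags here are examples, not the poster's actual values:

```shell
# .config -- baked-in command line for an EFI stub kernel
CONFIG_CMDLINE_BOOL=y
CONFIG_CMDLINE="root=/dev/sda2 ro amd_iommu=on iommu=pt"
```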

Lol check out reddit, not kidding. They have better Arch community support.

Have you set your bootloader cmdline to enable IOMMU, i.e. "amd_iommu=on iommu=pt" ("intel_iommu=on" on Intel)?

Some motherboards also shipped shitty BIOSes that fuck up IOMMU. Looking at you, ASUS.

What bootloader are you using?

I just found out that my BIOS can take screenshots, so here you have it: proof that I do have IOMMU enabled. Still no luck with it.

Come to think of it, maybe I should set up Debian on a USB drive and see if it picks up IOMMU with the command line parameters?

None. UEFI stub kernels don't need any bootloader. The firmware just looks by default for a file at EFI/Boot/bootx64.efi containing a UEFI stub, which in my case doubles as the kernel. That's also why the command line is hardcoded: there is no bootloader to pass one in.

I got it working by following Sakaki's EFI Install Guide for Gentoo on UEFI.

Doing that now. I'll get back in about 5 minutes.

Right, sounds like you're just making things more complicated for yourself.

Why not just use efibootmgr and Grub? I've had to tinker a fair bit with this set up with getting things perfect, also I can still dual boot between my VMs as well.

Ironically enough I run this setup because I thought efibootmgr+GRUB was going to be too complicated.

I reckon for a modern laptop/tablet or a simple desktop that's only running one OS, EFISTUB would be the way to go; probably not the best idea for a VM host.

You could use EFISTUB for Linux guests on top of a host, though. As said, I do a bit of tinkering on the host machine.

Any reason why you have native IDE rather than AHCI?
Also, I have a similar motherboard. Fuck Gigabyte for rounding numerical inputs rather than taking them raw. It also loved to randomly not POST, or just give a black screen with a frozen mouse (8320 OC'd to 4.6). In addition, Ethernet networking worked only once with a specific x64 kernel, and twice the kernel panicked after a minor update, the second time forcing me to try reinstalling my OS, which set off a chain of fun pain. This will be my last Gigabyte product.

Well, that iommu cmdline trick didn't work either.

I'm off to sleep. Enough tinkering with Gentoo, the next course of action is going to be setting up something that Just Werks like Debian or Fedora to rule out the possibility of being my system's fault and see if my hardware actually does support it.

If it doesn't, well, I'm getting a new computer soon anyway with hopefully better IOMMU support. This is more of a test drive to see how IOMMU works.

Sucks to see you had so much trouble. I've never had any issues with mine. Maybe I'm just lucky.

My CPU is an i7 950, which does not support VT-d; however, the i7 920 does support it for some fucking reason, but that's beside the point. Should I get a 920 and downgrade from my 950 for playing muh windoze games? A 920 is only like 40€ on eBay.

What do you think Holla Forums?

Where does it say the i7 920 supports VT-D?

Maybe go for a second hand 6core xeon?

It says so on the Intel site. Are Xeons actually any good for playing vidya? Because if that's an option, I imagine they'd be considerably better at virtualization than the more consumer-grade hardware.

Ah. Well, it doesn't matter anyway, as my motherboard does not support Xeons according to ASUS.

Radeon R5230 user reporting again.

Debian did pick up IOMMU as shown here, so it's a matter of me being a fucking noob and not knowing jack shit about how to set up IOMMU on Gentoo.

Now that I have ruled out my CPU and motherboard being unable to IOMMU, I'm going to tinker a little bit with a KVM machine to see if my entire system can support it.




And I didn't have to use the pci-stub driver to grab my second GPU for that. Apparently just binding my card with the vfio-bind script from the Arch Linux forum link was enough. I guess it's because I selected my CPU's built-in graphics as the primary output device in my BIOS, so Linux never grabs the R5 230?
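For anyone curious, the vfio-bind script in question boils down to something like this sketch — the PCI address is a placeholder, use what lspci -D reports for your card (and repeat for its HDMI audio function):

```shell
#!/bin/sh
# Essence of the vfio-bind script: unbind a device from its current
# driver and register its IDs with vfio-pci. Address is an example.
dev=0000:01:00.0

# detach from whatever driver currently owns it (radeon, amdgpu, ...)
if [ -e /sys/bus/pci/devices/$dev/driver ]; then
    echo "$dev" > /sys/bus/pci/devices/$dev/driver/unbind
fi

# tell vfio-pci to accept this vendor:device pair, which binds it
vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
device=$(cat /sys/bus/pci/devices/$dev/device)
echo "$vendor $device" > /sys/bus/pci/drivers/vfio-pci/new_id
```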

The next step is to actually piece together a functional VM. I'm going to have to leave virt-manager behind for that and use a direct QEMU command line because apparently I'm going to need a couple options that virt-manager does not support. Dear fuck, I feel naked without my good ol' virt-manager...
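The resulting command line ends up looking roughly like this — every path, PCI address, and size here is a placeholder, and the OVMF path varies by distro:

```shell
# Sketch of a VGA-passthrough qemu invocation (all values are examples)
qemu-system-x86_64 \
    -enable-kvm \
    -machine q35,accel=kvm \
    -cpu host \
    -smp 4 -m 8192 \
    -drive if=pflash,format=raw,readonly,file=/usr/share/ovmf/OVMF.fd \
    -device vfio-pci,host=01:00.0,x-vga=on \
    -device vfio-pci,host=01:00.1 \
    -drive file=win.qcow2,format=qcow2,if=virtio \
    -vga none -nographic
```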

I need some more RAM and some way to run my non vidya stuff on linux without remoting into my host from the guest, but otherwise it runs very well. (4690K/GA-Z97X-SLI/R9 280x for Windows and iGPU for Arch).

Aaaaaaaand everything went A-OK until I got to the part where I restart my computer with the AMD Catalyst drivers installed. When Windows attempts to initialize the card I get a blue screen saying the GPU timed out during initialization.

Incidentally, I got a popup right before rebooting that said there was an update available, which means I still have a lead in rebooting to safe mode and updating the display driver from there.

If that don't work, I don't fucking know how else I'm going to get that card rolling.

Also, I found the hard way that Windows 7 doesn't like SCSI drive controllers and very much would rather use an old-ass IDE controller.

Does Vt-d PCI passthrough work with virtualbox's seamless mode?
Hardware is Intel Core i7 6500U dual core 2.5
and AMD Radeon R5 M330 2GB DDR3L

First see if PCI passthrough will even work with VirtualBox. All I get is a Guru Meditation, and then if I start my VM again my entire system hangs.

Now, after a couple of failed driver installations, I came up with the idea of seeing if passing my APU to the VM instead of my discrete card will work. I'm running headless anyway at this point (controlling my computer over SSH).

Now that we're at it... maybe another lead could be switching from KVM to Xen?

Aaaaaaaaaaand guess which was the GPU I could never properly pass to my VM.

I managed to dig up a mailing list entry that says it could be VFIO's fault, maybe the Debian kernel doesn't really support it as much as Fedora. Which means I now have two possible courses of action: I can try using Fedora (I used Debian since it's the easy modo Linux I'm most familiar with), or I can try using Xen instead of KVM.

Well, at least I'm not stumped without knowing what the fuck I'm going to do...

On second thought... maybe it's because Debian uses an old-ass QEMU? The latest version is 2.6 and Debian Jessie is still stuck on version 2.1.2. Maybe if I switch my distro to Stretch I'll be able to get it running?

Yea I use Arch for this reason

holy shit dude. yeah, debian and ubuntu are not going to be great for this. switch to a rolling release distro.

I just did. Sort of, I upgraded my Debian to Stretch. Now I'm giving KVM another shot.

Windows went on 640x480x16 colors when I installed it. This could be good because that means the VM is working differently. My display driver is downloading.

AAAAAAAAAAAND IT FINALLY WORKED. All I had to do was upgrade my distro to one with up-to-date packages. I gave LOL a try and it seems to run with the exact same FPS performance I have on plain ordinary Windows.

To anyone who wishes to try IOMMU: do yourself a favor and use an unstable-ish distro like Debian Testing or Arch. You'll save many hours of agony, teeth gnashing and wondering why your virtualized OS blue screens while initializing the display driver.

Now comes the extended testing part where I actually start phasing out my physical Windows in favor of virtualized Windows with IOMMU. That means playing one game of League over IOMMU a day. I also still have to figure out how to set that shit up on Gentoo now that I got it running on Debian.

congrats dude. I am definitely going to switch from Mint to Manjaro when I order my new GPUs.

Definitely do it. IOMMU is your big fuck you to Macrohard and Applel who want to force us into their botnet bullshit. Yes, you will run Botnet 10, but it will be safely sealed into a VM with little chance of compromising the host.

it also means you can have a full, non-backdoored, encrypted windows partition just by virtue of storing it on your LUKS linux install.

I managed to get this working on a Thinkpad x220 and an egpu 960. Seems pretty nice, and the only performance drop is cpu-wise(but I expected that, what with how many gens behind this i5 is).
I managed to play through Dark Souls 3 on it on ultra at 30 fps.

I think I might get a 420/430/430s with one of the better cpus and see if that performance improves.

I can see how an encrypted partition might help prevent a seizure of your physical machine, but unless your firewall rules are very strict, letting the VM have internet access means it is going to talk to M$ servers.

Wouldn't using TinyWall help quite a bit, along with blocking at the router?

How would you go about putting up a firewall just for the VM, anyway? And could you block out everything but something specific (like everything but Steam?)

I didn't have much time to advertise it, but I wrote the PCI passthrough guide on the Installgentoo Wiki a couple of days ago. My intention is to provide a mostly distro-agnostic guide (or one with distro-specific instructions), because it's honestly stupid that all the guides out there are for Ubuntu and Fedora and not a single guide in the world exists for Gentoo.

I've already been playing League for a good while only on the VM, so it's a pretty safe bet to declare that PCI passthrough works for me. If everything works fine this week, I'll start getting it working on Gentoo.

Good work. I don't use Windows but I can see that this could help move people away from that platform, that are reluctant to give it up or have a great desire to run icky proprietary programs.

It is as simple as installing TinyWall via the usual methods. By default everything but a couple of selected programs is blocked (mostly Windows-related ones, which can easily be found in the simple settings menu) and it works as a whitelist. To add a program, right-click the tray icon, select your method, and you're done (you can restrict it further if needed). I can make a picture guide if you want?


The Management Engine on the CPU, if it's an Intel processor, can still access the network interface. Software firewalls are effective against some malware, and for things like blocking software updates in programs that don't let you disable them.

A standalone firewall is the only real answer if you want more privacy. I recommend pfSense.

I tried and failed to get a pfSense VM running on Citrix Xenserver along with a windows server VM. Wish I had the free time to try again.

I've been doing just that on a Gentoo install. Works brilliantly.

no experience with iommu and passthrough, but I have working dma
don't know if this helps

Radeon R5230 user here. I found you have CONFIG_DW_DMAC and I don't. I'm rebuilding my kernel with this and CONFIG_DW_DMAC_PCI set.

Aaaaaaaaaaaaand guess what?


It turns out that IOMMU was working for me all along. But I never knew, because some fucking hipsterly obscure kernel option I didn't have enabled prevented the kernel from logging anything about IOMMU.

BUT THAT IOMMU WAS THERE ALL ALONG. I found out accidentally when I mistakenly typed this:

rockshooter ~ # ps -ef | grep -i iommu
root        66     2  0 04:19 ?        00:00:00 [amd_iommu_v2]

And as before, I tried it with a game of League, and everything worked exactly the same as in Debian. With the added bonus that Gentoo does display video output without the video driver being loaded.

I'm updating the wiki right now.

Great work, gentoo anon. Your progress will be useful to others.

Think things would be significantly different on something like Void Linux?
Or maybe I should man up and use Gentoo?

I'm not familiar with Void. Does it come with a precompiled kernel and recent-ish packages? If so, it would probably work.

Yes to both questions.

I heard IOMMU can't be done with a single GPU because the kernel panics the moment you try to disable the GPU to give control of it to the guest OS. Is that true? If so, it seems like some really ugly bug that should be fixed.

We could have a script that disables the GPU, launches the VM, then re-enables it the moment you shut the VM down. Imagine how fucking cheap it would be to play on GNU/Linux if that worked.
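The panic is mostly about the host console/X still holding the card; if nothing on the host touches the GPU, the switcheroo is already scriptable. A hedged sketch — the PCI address, display-manager unit, and libvirt domain name are all hypothetical:

```shell
#!/bin/sh
# Sketch: hand the only GPU to the VM, take it back after shutdown.
GPU=0000:01:00.0
systemctl stop display-manager              # nothing may hold the GPU

# detach from the host driver and steer the device to vfio-pci
# (driver_override needs kernel >= 3.16)
echo "$GPU" > /sys/bus/pci/devices/$GPU/driver/unbind
echo vfio-pci > /sys/bus/pci/devices/$GPU/driver_override
echo "$GPU" > /sys/bus/pci/drivers_probe

virsh start win7-vm                         # hypothetical domain name
while virsh list --name | grep -q win7-vm; do sleep 5; done

# give it back to the host driver and restart the desktop
echo "$GPU" > /sys/bus/pci/drivers/vfio-pci/unbind
echo > /sys/bus/pci/devices/$GPU/driver_override
echo "$GPU" > /sys/bus/pci/drivers_probe
systemctl start display-manager
```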

Did you put that eGPU together yourself user?

What we need is at very least a more convenient way to switch GPUs, blacklist them, and turn them on and off.

What do you guys think about

cpu is 1090T x6
16gb ram

or should i just dualboot and get an ssd?

Also, if I'm not running my VM, is it possible to tell how much power the GPU is drawing?
I've got 650W, and by every calculation I've done my system SHOULD be able to handle the full load of two 250W GPUs, and since only one or the other would ever be under load, I'm almost positive it will work fine.

The simplest solution is plugging your system into a cheap power meter and running a couple of benchmarks.
Actually tracking GPU power usage is more hassle than it's worth; a non-shit 650W PSU should be able to handle that setup without any issues.

If you're planning on using OVMF, you need to make sure your vbios supports UEFI; if it doesn't, you can mod the BIOS to add UEFI support.
Read this for more info:
It's surprisingly easy, and you can just specify the BIOS location to qemu/libvirt; no need to actually flash it to the card.
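Pointing qemu/libvirt at the patched (or just dumped) ROM looks like this — the file path is an example:

```shell
# qemu: feed the passed-through card a ROM image from disk
-device vfio-pci,host=01:00.0,romfile=/vms/gpu-uefi.rom

# libvirt equivalent, inside the <hostdev> element of the domain XML:
#   <rom file='/vms/gpu-uefi.rom'/>
```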

The X6 1090T is a great CPU; I've been running one for years and feel no need to upgrade.

Intel has igvt-g

Thank you for the reply. I'm not 100% sure my BIOS supports UEFI, but it MIGHT.
I do know it's currently set up as legacy BIOS, and it would be one of the last modern mobos to be BIOS-only if it doesn't have UEFI support.

So after reading the thread, I assuming I should upgrade from Mint to a more up-to-date distro like Void if I want to do this?

Heh, I tried doing this last night
Ended up almost breaking my install after isolating my GPU
I dun diddle goofed

Yes, definitely.

Did you have a second graphics adapter, and did you double/triple-check before executing any commands?

Are there any basic guides to setting this up?

I have a 4690K with a Z97 ASRock. I wanna boot my Windows 10 HDD in a VM, pass my GPU through to it, and use integrated gfx on my main OS.

What software is recommended here?

I have the exact same specs as you so try looking here

also this user has some very valuable info

If you want, but you can do this with relative ease even with Ubuntu and VMM.

Has anyone used a RX 480?

Consider X58 era Xeon and related motherboards

Damn, you might want to check for people's experiences with Xeons and your motherboard. It may not officially support it but if it works that would be the thing to go for.

How easy would it be to migrate a Windows 7 dual boot to a QEMU PCIe passthrough setup while ensuring that the drive letters and such in the dual boot stay the same?

How easy would it be to migrate a Windows 7 dual boot to a QEMU PCIe passthrough setup while ensuring that the drive letters and such in the dual boot stay the same?
I have, though I'm somewhat doubtful that I will get a response I can work off of.

VGA passthrough has been getting easier and easier with each iteration.

You can use physical drives for storage; just ensure they keep the same port order on your SATA controller.

This way you can boot your physical win7 in kvm.
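With qemu/kvm the physical-disk part is one -drive line — the device node is an example, and make sure nothing on the host mounts its partitions while the VM runs:

```shell
# Hand the whole disk holding the win7 install to the guest.
# Start with if=ide -- win7 won't boot from virtio until the
# virtio drivers are installed inside the guest.
-drive file=/dev/sdb,format=raw,cache=none,if=ide
```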

Prerequisite knowledge for this to count as "easy":

grub config knowledge
lspci and grep
qemu config, best source is the gentoo wiki
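The lspci-and-grep part amounts to this — the output is an example, yours will differ:

```shell
# Locate the card (and its HDMI audio function)
lspci -nn | grep -iE 'vga|audio'

# Everything sharing the GPU's IOMMU group must be passed through
# together, so check how the groups split up:
find /sys/kernel/iommu_groups/ -type l
```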

Thanks for the almost instantaneous reply.
If I do that, can I remove the Windows bootloader, seeing as I'll be loading it directly? Also, is there any way to resize the OS partition after the built-in partition manager stops me? I used GParted, but it caused issues when I tried booting afterwards.

Windows will still expect a bootloader.

You want to resize what where? I'm not sure what your goals are now.

Do you mean convert you current machine into a vm only? or retain dual booting?

The main goal is to use the current Windows 7 dual boot as a VM. If I can resize the main Windows partition without Windows freaking out, great, as I can then put the freed space onto the SSD gaming partition or /. If I can't, oh well.
Convert it to a VM only, so I wouldn't have to reboot to play games.

You can use a microsoft utility called disk2vhd

This can create vhdx images which only take up the used space on the partition; qemu-img can convert that to qcow2, which is probably more stable in the long run.

Not a perfect solution, but otherwise you'll probably have to do a defrag then a shrink.
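The conversion step itself is a one-liner — filenames are examples:

```shell
# disk2vhd output -> qcow2 (-p shows a progress bar)
qemu-img convert -p -O qcow2 win7.vhdx win7.qcow2
```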

I assume I would only have to use this on the Windows 7 OS partition, leaving the Windows boot loader intact?

Why is audio so shit in VMs?

I even tried USB passthrough of an external audio interface ffs.

You can pass through GeForce cards in ESXi and have working drivers by adding the following to the VMX file.

hypervisor.cpuid.v0 = "FALSE"

Occasionally the driver installation fucks up and the driver will constantly crash. If that happens, just shut down the VM, remove the passthrough, uninstall the driver, and start over. Beforehand, you should add the following registry keys so your VM doesn't blue screen itself if the driver crashes too frequently:
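The keys themselves got lost from the post; presumably they're the well-known TDR (Timeout Detection and Recovery) ones, so treat this as an educated guess rather than the poster's exact list:

```bat
:: Run in an elevated command prompt inside the guest.
:: TdrLevel 0 disables timeout detection entirely; alternatively
:: leave it on and just raise TdrDelay.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers" /v TdrLevel /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers" /v TdrDelay /t REG_DWORD /d 10 /f
```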


And be sure to change the VM's latency sensitivity to High in vCenter, otherwise you'll notice some minor stuttering while playing video.

And I forgot, if you need SLI support:

As proofs, courtesy of bedserver