Proxmox VE 8.1: Windows 11 vGPU (VT-d) Passthrough with Intel Alder Lake

Interested in sharing your Intel Alder Lake GPU between multiple Proxmox 8.x VMs? This post covers configuring Intel vGPU virtual functions on Proxmox 8.1/8.0 using an Intel 12th generation (Alder Lake) CPU with Windows 11 Pro. I will walk you through the Proxmox 8.x kernel configuration, modifying GRUB, installing Windows 11 Pro, and then setting up the Intel graphics driver inside Windows. 

I’ve completely updated the content from my Proxmox 8.0 vGPU post which appeared on my blog back in June. While the process is generally the same as before, I’ve added new sections and refreshed all the prior content. I’ve redirected the old blog post URL to this post. In case you need the old post you can view it here: Proxmox VE 8: Windows 11 vGPU (VT-d) Passthrough with Intel Alder Lake. It has a lot of user comments, which you might find helpful. However, use this post for the installation process on both Proxmox 8.0 and 8.1. 

Update Dec 7, 2023: Proxmox has released a minor kernel upgrade, kernel 6.5.11-7-pve. If you update from the Proxmox UI, it looks like DKMS rebuilds the module and you should be good to go. However, in my experience your 7 VFs will vanish if you do nothing. As my article states, just re-run all the GRUB steps, which rebuild DKMS from scratch, then reboot. Now my 7 VFs are back. This is why it is wise to pin your kernel version and only upgrade as needed. I also added a bit to the troubleshooting section.

Update December 2, 2023: Added a screenshot for the secure boot MOK setup. 

Update November 27, 2023: Added an additional step for Proxmox 8.1 or later installs that use secure boot. Updated Plex information, as their stance has now changed. They will work on SR-IOV (vGPU) support for a future release. 

Update November 26, 2023: I’ve modified both the DKMS and GRUB sections so they are now responsive to the kernel version your Proxmox host is running. It also performs more system cleanup, and is pretty much just a copy/paste endeavor now. The changes were inspired by a Github comment, which I modified for even more automation.

Update November 25, 2023: The DKMS repo has been updated to address the minor tweak needed for Kernel 6.5. So I removed the step to modify one of the DKMS files. I also added a LXC vGPU VF section to address other services like Plex and their vGPU support.

GPU Passthrough vs Virtualization

Typical GPU passthrough passes the entire PCIe graphics device to the VM. This means only one VM can use the GPU resources. This has an advantage of being able to output the video display via the computer’s HDMI or DisplayPort ports to an external monitor. But, you are limited to one VM using the GPU. This can be useful if you want Proxmox on your desktop computer, but also want to fully use the GPU resources for your primary desktop OS with an external monitor. 

However, there is a way to share the GPU among VMs. This technology, called Intel VT-d (Intel Virtualization Technology for Directed I/O), enables virtualization of the GPU resources and presents a VF (virtual function) to up to 7 VMs. This lets up to 7 VMs concurrently use the GPU, but you lose the ability to use a physically connected monitor. Thus you are limited to Remote Desktop access for these VMs.  

Full GPU passthrough and vGPU setups both have their place. It totally depends on what you want your setup to do. This post will focus on vGPU configuration and sharing your GPU with up to 7 Proxmox VMs. I’ve only tested this with Windows 11 VMs. Linux VMs, in the past, have had Intel driver problems that caused issues with vGPUs. So test thoroughly if you want to try vGPU with Linux VMs. 

What GPU does Windows 11 see?

When using a vGPU VF the OS does not know you have virtualized the GPU. The screenshots below are from my Beelink i5-1240P Proxmox VE 8.1 host. The “root” GPU is at PCIe 02.0, and you can see 7 GPU VFs (virtual functions).

I’ve assigned one GPU VF to a Windows 11 Pro VM (00:02.1). From the Windows perspective the stock WHQL Intel driver works as normal. The GPU even shows up as an Intel Iris Xe. You can also look at the Intel Arc control application and view the various hardware details of the GPU. 

My Working Configuration

Not all Intel CPUs support VT-d with their GPU. Intel only supports it on 11th Generation and later CPUs. It considers prior generations “legacy”. I’ve seen forum posts that users have issues with Intel 11th Gen, so that may or may not work for you. My latest configuration is as follows:

  • Beelink SEi12 Pro (Intel 12th Gen Core i5-1240P)
  • Proxmox VE 8.1
  • Linux Kernel 6.5.11-4-pve
  • Windows 11 Pro (23H2)
  • Intel GPU Driver

The setup I wrote about in June 2023 used Proxmox 8.0 and Linux Kernel 6.2.x. Both Proxmox 8.0 and 8.1 have been solid performers. Note that your Proxmox host must have Intel VT-d (or whatever they call it) enabled in your BIOS, and your motherboard must properly implement it. Not all motherboards have working VT-d for GPUs. 

Note: Several readers left comments on my 8.0 post about the Parsec app not working. Unfortunately, I can confirm even on Proxmox 8.1 and the latest Intel drivers that Parsec does not work. 

What about LXC vGPU VF Compatibility?

If you are nerdy enough to run a Windows 11 VM with a vGPU, you might also have some Linux LXCs that can use GPU resources as well (such as Plex). Linux LXC compatibility with vGPU VFs might be problematic. 

As of the publication date of this post, a Plex LXC with Ubuntu 22.04 has problems with HDR tone mapping. Basically the Linux Intel Media Driver (IMD) doesn’t like using a vGPU VF. If you enable HDR tone mapping on the Plex server and you are viewing HDR content on a device that needs server side HDR tone mapping, the video stream will likely be corrupted. 

However, Plex hardware transcoding will still work. Hardware transcoding uses the CPU's dedicated media engine, whereas HDR tone mapping runs on the GPU's compute units. These two hardware offloads use different APIs, and the GPU offload for HDR tone mapping is broken when using a vGPU VF. 

Chuck from Plex posted on the forums that he consulted with his engineers, and they will work on making Plex compatible with SR-IOV. However, the needed kernel mods or GRUB updates will be up to the user to make. Intel has mainline SR-IOV for the Linux 6.4 kernel, and is working on 6.5. You can follow the progress in SR-IOV Mainlining.  

Bottom line, if you have LXCs on a Proxmox host that need GPU resources, they may not work with a vGPU VF. When Intel releases their official 6.5 package, it should offer wider compatibility. Configuring a LXC to use a vGPU VF requires modifying the LXC config file. Since this post is about Windows 11 vGPU, that procedure is out of scope for this post. I’ll cover the required LXC config changes in a separate post.

Proxmox Kernel Configuration

Note: The commands below automatically detect the kernel version your Proxmox host is running and adjust accordingly. If you are running Proxmox 8.0 you should be on kernel 6.2.x. Proxmox 8.1 comes with kernel 6.5.x. Both should work. 

  1. On your Proxmox host open a shell and run the following commands. First we need to install Git, the kernel headers, and mokutil, do a bit of cleanup, then set the kernel variable with the right version. 
apt update && apt install git pve-headers mokutil -y
rm -rf /var/lib/dkms/i915-sriov-dkms*
rm -rf /usr/src/i915-sriov-dkms*
rm -rf ~/i915-sriov-dkms
KERNEL=$(uname -r); KERNEL=${KERNEL%-pve}
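To illustrate what the last line does, here is a minimal sketch of the value the KERNEL variable ends up holding. The version string 6.5.11-4-pve is a sample; on your host it comes from `uname -r`.

```shell
# ${KERNEL%-pve} strips a trailing "-pve" so the value matches the DKMS
# package version. 6.5.11-4-pve is an example value, not a requirement.
KERNEL="6.5.11-4-pve"
KERNEL=${KERNEL%-pve}
echo "$KERNEL"   # prints 6.5.11-4
```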

2. Now we need to clone the DKMS repo and modify the configuration file to set the kernel version. Check that the package name is i915-sriov-dkms and the package version matches your kernel version.

cd ~
# Note: the repo URL was a hyperlink in the original post; the widely used
# i915-sriov-dkms module repo is below -- verify before cloning.
git clone https://github.com/strongtz/i915-sriov-dkms.git
cd ~/i915-sriov-dkms
cp -a ~/i915-sriov-dkms/dkms.conf{,.bak}
sed -i 's/"@_PKGBASE@"/"i915-sriov-dkms"/g' ~/i915-sriov-dkms/dkms.conf
sed -i 's/"@PKGVER@"/"'"$KERNEL"'"/g' ~/i915-sriov-dkms/dkms.conf
sed -i 's/ -j$(nproc)//g' ~/i915-sriov-dkms/dkms.conf
cat ~/i915-sriov-dkms/dkms.conf
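If you are curious what those sed commands actually do, here is a self-contained miniature of the substitution run against a scratch file. The two placeholder lines are a simplified stand-in for the real dkms.conf, and the version string is a sample value.

```shell
# Demo of the placeholder substitution above, on a throwaway file.
KERNEL="6.5.11-4"   # sample; the guide derives this from uname -r
printf 'PACKAGE_NAME="@_PKGBASE@"\nPACKAGE_VERSION="@PKGVER@"\n' > /tmp/dkms.conf.demo
sed -i 's/"@_PKGBASE@"/"i915-sriov-dkms"/g' /tmp/dkms.conf.demo
sed -i 's/"@PKGVER@"/"'"$KERNEL"'"/g' /tmp/dkms.conf.demo
cat /tmp/dkms.conf.demo
# PACKAGE_NAME="i915-sriov-dkms"
# PACKAGE_VERSION="6.5.11-4"
```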

3. Here we install DKMS, link the kernel source, and check the status. Verify that the kernel shows as added.

apt install --reinstall dkms -y
dkms add .
cd /usr/src/i915-sriov-dkms-$KERNEL
dkms status

4. Let’s now build the module against the new kernel and check the status. Validate that it shows installed.

dkms install -m i915-sriov-dkms -v $KERNEL -k $(uname -r) --force -j 1
dkms status
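A successful build reports the module as "installed" in the dkms status output. As a quick sanity-check sketch, the canned status line below mimics what a successful build prints; on the real host you would pipe `dkms status` through the grep instead.

```shell
# Sanity check: look for ": installed" in the (sample) dkms status line.
status='i915-sriov-dkms/6.5.11-4, 6.5.11-4-pve, x86_64: installed'
echo "$status" | grep -q ': installed' && echo "module installed"
```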

5. For fresh Proxmox 8.1 and later installs, secure boot may be enabled. Just in case it is, we need to load the DKMS key so the kernel will load the module. Run the following command, then enter a password. This password is only for MOK setup, and will be used again when you reboot the host. After that, the password is not needed. It does NOT need to be the same password as you used for the root account.

mokutil --import /var/lib/dkms/

Proxmox GRUB Configuration

Note: Default installations of Proxmox use the GRUB boot loader. If that’s your situation, follow the steps in this section. If you are using ZFS or another config that uses systemd bootloader, skip down to the systemd section below. 

1. Back in the Proxmox shell run the following commands if you DO NOT have a Google Coral PCIe TPU in your Proxmox host. You would know if you did, so if you aren’t sure, run the first block of commands. If your Google Coral is USB, use the first block of commands as well. Run the second block of commands if your Google Coral is a PCIe module. 

cp -a /etc/default/grub{,.bak}
sed -i '/^GRUB_CMDLINE_LINUX_DEFAULT/c\GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt i915.enable_guc=3 i915.max_vfs=7"' /etc/default/grub
update-grub
update-initramfs -u -k all
apt install sysfsutils -y

If your Proxmox host DOES have a Google Coral PCIe TPU and you are using PCIe passthrough to a LXC or VM, use this command instead. This will blacklist the Coral device at the Proxmox host level so that your LXC/VM can get exclusive access.

cp -a /etc/default/grub{,.bak}
sed -i '/^GRUB_CMDLINE_LINUX_DEFAULT/c\GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt i915.enable_guc=3 i915.max_vfs=7 initcall_blacklist=sysfb_init pcie_aspm=off"' /etc/default/grub
update-grub
update-initramfs -u -k all
apt install sysfsutils -y

2. Now we need to find which PCIe bus the VGA card is on. It’s typically 00:02.0.

lspci | grep VGA

3. Run the following command, modifying the PCIe bus number if needed. In this case I’m using 00:02.0. Then cat the file to verify it was modified.

echo "devices/pci0000:00/0000:00:02.0/sriov_numvfs = 7" > /etc/sysfs.conf
cat /etc/sysfs.conf
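For context, sysfsutils simply replays each `path = value` line of /etc/sysfs.conf into /sys at boot. The following is a rough approximation of that behavior (a simplified assumption, not the actual init script), run against a scratch file so nothing is written to /sys:

```shell
# Simplified sketch of what sysfsutils does with /etc/sysfs.conf:
# parse "path = value" and write value into /sys/<path>.
# This demo only echoes what it would write.
echo "devices/pci0000:00/0000:00:02.0/sriov_numvfs = 7" > /tmp/sysfs.conf.demo
while IFS='=' read -r path value; do
  path=$(echo "$path" | xargs)    # trim surrounding whitespace
  value=$(echo "$value" | xargs)
  echo "would write $value to /sys/$path"
done < /tmp/sysfs.conf.demo
```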

Proxmox SystemD Bootloader

Note: If your Proxmox host doesn’t use GRUB to boot (default), but rather uses systemd, then follow these steps. This is likely the case if you are using ZFS. Skip this section if you are using GRUB. 

  1. Let’s modify the kernel loader command line:
nano /etc/kernel/cmdline

2. Add the following text to the END of the current line. Do NOT add a second line. 

intel_iommu=on iommu=pt i915.enable_guc=3 i915.max_vfs=7

3. Run the following commands to update the boot loader and install sysfsutils. Without sysfsutils, the sysfs.conf setting in the next section won’t be applied at boot.

proxmox-boot-tool refresh
apt install sysfsutils -y
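If you prefer a non-interactive edit over nano, the cmdline change in step 2 can be scripted. This is a sketch demonstrated on a scratch copy; the root= parameters are sample values, and on a real host you would point it at /etc/kernel/cmdline after backing it up.

```shell
# Append the vGPU options to the single-line cmdline file (demo copy).
echo "root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet" > /tmp/cmdline.demo
sed -i '1 s/$/ intel_iommu=on iommu=pt i915.enable_guc=3 i915.max_vfs=7/' /tmp/cmdline.demo
cat /tmp/cmdline.demo
```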

Finish PCI Configuration

1. Now we need to find which PCIe bus the VGA card is on. It’s typically 00:02.0.

lspci | grep VGA

2. Run the following command, modifying the PCIe bus number if needed. In this case I’m using 00:02.0. Then cat the file to verify it was modified.

echo "devices/pci0000:00/0000:00:02.0/sriov_numvfs = 7" > /etc/sysfs.conf
cat /etc/sysfs.conf

3. Reboot the Proxmox host. If using Proxmox 8.1 or later with secure boot, you MUST set up MOK. As the Proxmox host reboots, monitor the boot process and wait for the Perform MOK management window (screenshot below). If you miss it on the first reboot, you will need to re-run the mokutil command and reboot again. The DKMS module will NOT load until you step through this setup. 

Secure Boot MOK Configuration (Proxmox 8.1+)

4. Select Enroll MOK, Continue, Yes, <password>, Reboot. 

5. Login to the Proxmox host, open a Shell, then run the commands below. The first should return eight lines of PCIe devices. The second command should return a lot of log data. If everything was successful, at the end you should see minor PCIe IDs 1-7 and finally Enabled 7 VFs. If you are using secure boot and do NOT see the 7 VFs, then the DKMS module is probably not loaded. Troubleshoot as needed. 

lspci | grep VGA
dmesg | grep i915
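On a working host the lspci output includes eight VGA entries: the root GPU plus seven VFs. Here is a quick count sketch shown against canned sample output so the logic is clear; on the real host you would just run `lspci | grep -c VGA`.

```shell
# Count VGA functions; expect 8 on a working vGPU host (1 root + 7 VFs).
# $sample is canned example output standing in for real lspci output.
sample='00:02.0 VGA compatible controller: Intel Corporation Alder Lake-P
00:02.1 VGA compatible controller: Intel Corporation Alder Lake-P
00:02.2 VGA compatible controller: Intel Corporation Alder Lake-P
00:02.3 VGA compatible controller: Intel Corporation Alder Lake-P
00:02.4 VGA compatible controller: Intel Corporation Alder Lake-P
00:02.5 VGA compatible controller: Intel Corporation Alder Lake-P
00:02.6 VGA compatible controller: Intel Corporation Alder Lake-P
00:02.7 VGA compatible controller: Intel Corporation Alder Lake-P'
echo "$sample" | grep -c VGA   # prints 8
```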

6. Now that the Proxmox host is ready, we can install and configure Windows 11. If you do NOT see 7 VFs enabled, stop. Troubleshoot as needed. Do not pass go, do not collect $100 without 7 VFs. If you are using secure boot and you aren’t seeing the 7 VFs, double check the MOK configuration. 

Windows 11 VM Creation

  1. Download the latest Fedora Windows VirtIO driver ISO from here
  2. Download the Windows 11 ISO from here. Use the Download Windows 11 Disk Image (ISO) for x64 devices option. 
  3. Upload both the VirtIO and Windows 11 ISOs to the Proxmox server. You can use any Proxmox storage container that you wish. I uploaded them to my Synology. If you don’t have any NAS storage mapped, you probably have “local“, which works. 

4. Start the VM creation process. On the General tab enter the name of your VM. Click Next.
5. On the OS tab select the Windows 11 ISO.  Change the Guest OS to Microsoft Windows, 11/2022. Tick the box for the VirtIO drivers, then select your Windows VirtIO ISO. Click Next. Note: The VirtIO drivers option is new to Proxmox 8.1. I added a Proxmox 8.0 step at the end to manually add a new CD drive and mount the VirtIO ISO.

6. On the System page modify the settings to match EXACTLY those shown below. If your local VM storage is named differently (i.e. not local-lvm), use that instead.  

7. On the Disks tab, modify the size as needed. I suggest a minimum of 64GB. Modify the Cache and Discard settings as shown. Only enable Discard if using SSD/NVMe storage (not a spinning disk).

8. On the CPU tab, change the Type to host. Allocate however many cores you want. I chose 2.

  9. On the Memory tab allocate as much memory as you want. I suggest 8GB or more. 
10. On the Network tab change the model to Intel E1000. Note: We will change this to VirtIO later, after Windows is configured.

11. Review your VM configuration. Click Finish. Note: If you are on Proxmox 8.0, modify the hardware configuration again and add a CD/DVD drive and select the VirtIO ISO image. Do not start the VM. 

Windows 11 Installation

  1. In Proxmox click on the Windows 11 VM, then open a console. Start the VM, then press Enter to boot from the CD.
  2. Select your language, time, currency, and keyboard. Click Next. Click Install now.
  3. Click I don’t have a product key
  4. Select Windows 11 Pro. Click Next.
  5. Tick the box to accept the license agreement. Click Next.
  6. Click on Custom install.
  7. Click Load driver.

8. Click OK.
9. Select the w11 driver. Click Next.

10. On Where do you want to install Windows click Next.
11. Sit back and wait for Windows 11 to install.

Windows 11 Initial Configuration

Note: I strongly suggest using a Windows local account during setup, and not your Microsoft cloud account. This will make remote desktop setup easier, as you can’t RDP to Windows 11 using your Microsoft cloud account. The procedure below “tricks” Windows into allowing you to create a local account by attempting to use a locked out cloud account. Also, do NOT use the same username for the local account as your Microsoft cloud account. This might cause complications if you later add your Microsoft cloud account.

  1. Once Windows boots you should see a screen confirming your country or region. Make an appropriate selection and click Yes.
  2. Confirm the right keyboard layout. Click Yes. Add a second keyboard layout if needed. 
  3. Wait for Windows to check for updates. Windows may reboot. 
  4. Enter the name of your PC. Click Next. Wait for Windows to reboot.
  5. Click Set up for personal use. Click Next. Click Sign in.
  6. To bypass using your Microsoft cloud account, enter no@thankyou.com as the email address, enter a random password, then click Next on the Oops, something went wrong screen.
  7. On the Who’s going to use this device? screen enter a username. Click Next.
  8. Enter a password. Click Next.
  9. Select your security questions and enter answers.
  10. Select the Privacy settings you desire and click Accept.
  11. In Windows open the mounted ISO in Explorer. Run virtio-win-gt-x64 and virtio-win-guest-tools. Use all default options. 
  12. Shutdown (NOT reboot) Windows.

13. In Proxmox modify the Windows 11 VM settings and change the NIC to VirtIO.

14. Start the Windows 11 VM. Verify at least one IP is showing in the Proxmox console.

15. You can now unmount the Windows 11 and VirtIO ISOs. 

16. You will probably also want to change the Windows power plan so that the VM doesn’t hibernate (unless you want it to). 

17. You may want to disable local account password expiration, as RDP will fail when your password expires, with no way to reset it over RDP. You’d need to re-enable the Proxmox console to reset your password (see later in this post for a how-to).

wmic UserAccount set PasswordExpires=False

Windows 11 vGPU Configuration

1. Open a Proxmox console to the VM and login to Windows 11. In the search bar type remote desktop, then click on remote desktop settings.

2. Enable Remote Desktop. Click Confirm.

3. Open your favorite RDP client and login using the user name and credentials you setup. You should now see your Windows desktop and the Proxmox console window should show the lock screen.
4. Inside the Windows VM open your favorite browser and download the latest Intel “Recommended” graphics driver from here. In my case I’m grabbing
5. Shutdown the Windows VM. 
6. In the Proxmox console click on the Windows 11 VM in the left pane. Then click on Hardware. Click on the Display item in the right pane. Click Edit, then change it to none.

Note: If in the next couple of steps the 7 GPU VFs aren’t listed, try rebooting your Proxmox host and see if they come back. Then try adding one to your Windows VM again.

7. In the top of the right pane click on Add, then select PCI Device.
8. Select Raw Device, then review all of the PCI devices available. Select one of the sub-function (.1, .2, etc.) graphics controllers (i.e. ANY entry except 00:02.0). Do NOT use the root “0” device for ANYTHING. I chose 02.1. Do NOT tick the All Functions box. Tick the box next to Primary GPU. Click Add.

9. Start the Windows 11 VM and wait a couple of minutes for it to boot and RDP to become active. Note, the Proxmox Windows console will NOT connect since we removed the virtual VGA device. You will see a Failed to connect to server message. You can now ONLY access Windows via RDP. 
10. RDP into the Windows 11 VM. Locate the Intel Graphics driver installer and run it. If all goes well, you will be presented with an Installation complete! screen. Reboot. If you run into issues with the Intel installer, skip down to my troubleshooting section below to see if any of those tips help. 
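For reference, the GUI changes in steps 6-8 correspond roughly to the VM config entries below. This is a hedged sketch: the VM ID (101) and VF address are examples, and the exact option flags may differ on your system, so check your own file at /etc/pve/qemu-server/&lt;vmid&gt;.conf rather than copying this blindly. (The Primary GPU checkbox corresponds to the x-vga option, and Display "none" to the vga line.)

```
hostpci0: 0000:00:02.1,pcie=1,x-vga=1
vga: none
```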

Windows 11 vGPU Validation

1. RDP into Windows and launch Device Manager
2. Expand Display adapters and verify there’s an Intel adapter in a healthy state (e.g. no error 43).

3. Launch Intel Arc Control. Click on the gear icon, System Info, Hardware. Verify it shows Intel Iris Xe.

4. Launch Task Manager, then watch a YouTube video. Verify the GPU is being used.

Troubleshooting Intel Driver Installation

The first time I did this on my N100 Proxmox server the Intel drivers had issues installing. For some reason the RDP session would freeze midway through the install, or would get disconnected and then fail to reconnect. I had to reboot the VM from the Proxmox UI and restart the Intel installer using their “clean” option. After a couple of re-installs, it ran just fine. It ran flawlessly the first time on my i5-1240P server. If RDP can’t connect within a few minutes of a VM reboot, reboot the VM and try again. 

On rare occasions if you reboot the Proxmox host and the Windows 11 VM gets a GPU device error, try rebooting the Proxmox host again and see if it clears. Re-installing the Intel drivers might help too. 

Also, if you see the following message in the dmesg logs, it likely means you have secure boot enabled and did not properly configure MOK or enter the MOK password after your first host reboot. If this is the case, re-run the mokutil command, connect a physical monitor/keyboard to your Proxmox host, reboot, and run through the MOK setup.

i915: module verification failed: signature and/or required key missing - tainting kernel

How to Use Intel GPU VFs

You can configure up to 7 VMs to use vGPU resources on the Proxmox host. Each VM MUST be assigned a unique PCIe VF. In addition, no VMs can use the “root” PCIe GPU device. If a running VM is assigned the root GPU, the VFs will not function properly. 

Some readers asked if they could connect an HDMI cable to their mini PC and access the Windows 11 desktop. As far as I know this is not possible, as the HDMI output is tied to the primary PCIe GPU device, which we are not using; you would need full GPU PCIe passthrough for that. You will be limited to using RDP to access your desktop(s). 

Future Proxmox Kernel Upgrades

As Proxmox gets updated over time with newer Linux Kernel versions, you WILL need to reconfigure DKMS to patch the new kernel. Thankfully this is a pretty simple process. Just follow the section Proxmox Kernel Configuration. This will rebuild the new kernel with the latest DKMS module. 

There are dependencies between the DKMS module and the Linux kernel. Sometimes the DKMS module breaks with newer kernels, or manual tweaks to the DKMS files are needed. DO NOT assume the very latest Proxmox kernel will be successfully patched by DKMS. You can check out the dkms GitHub Issues page and see if there are known issues, or report an issue if you are having problems. If you want to play it safe, after a new kernel comes out I would wait a few days or weeks to see if any issues pop up on the DKMS repo. 

There is a neat solution to prevent Proxmox updates from installing a new kernel. You can use the pin command, as shown below, to allow Proxmox to update everything EXCEPT your kernel. Of course you eventually do need to update the kernel, but this way you can update it on your schedule and after you’ve reviewed forums to see if anyone ran into an issue with the latest Proxmox kernel.

To use the pin command, run the command below to pin your current kernel version. 

proxmox-boot-tool kernel pin $(uname -r)

When you are ready to upgrade your kernel, run the unpin command. Update your Proxmox host then re-pin the new kernel. 

proxmox-boot-tool kernel unpin

Unable to connect via RDP

If for some reason you can’t connect via RDP to your VM, there is a way to regain a local Proxmox console. This can happen if your password expires, for example. To enable the Proxmox console (and disable vGPU):

  1. Shutdown the Windows VM.
  2. Remove the PCIe VF device attached to your GPU.
  3. Modify the Display hardware property and change it to Default.
  4. Start the VM and wait for the Proxmox console to connect.

Do whatever you need to do to troubleshoot the issue. To re-enable vGPU:

  1. Shutdown the VM.
  2. Change Display to None.
  3. Re-assign the PCIe GPU VF device.
  4. Start the VM.

Kernel Cleanup (Optional)

Over time as you run Proxmox and do routine upgrades, your system can get littered with old kernels. This isn’t a problem per se, but it does waste storage space. If you want to clean up unused kernels, you can use the awesome script by tteck to remove old versions. No need to reboot after the cleanup. This is entirely optional and is only mentioned for good housekeeping. 

bash -c "$(wget -qLO -"



The process for configuring Windows 11 for vGPU using VT-d on Proxmox VE 8.1 is a bit tedious, but it has worked very well for me. By using the virtual functions (VFs), you can share the Intel GPU with up to seven VMs at the same time. Remember that each VM on the Proxmox host that you want to use a vGPU must use a unique VF. Using GPU VFs means you can’t use a physically connected monitor to your Proxmox host and access your VM’s desktop. You have to use remote desktop to access your Windows 11’s desktop. 

Configuring your Proxmox host for vGPU with VFs may have a negative impact on Linux based VMs or LXCs that need GPU resources, such as Plex. If you are using other VM/LXCs that need GPU resources, be sure to test them thoroughly including HDR tone mapping. Also remember that when you do Proxmox host updates that install a newer kernel, you will have to re-patch the kernel with dkms or your vGPU VFs won’t function. Using the kernel pin command you upgrade kernels on your schedule, while allowing other Proxmox updates to install. 

November 25, 2023 2:30 pm

Thanks for the update, it helped me update my cluster again.

November 25, 2023 4:08 pm

Hello, Nice rework of your previous post (June), that I had followed to get rid of the error 43 in the Intel GPU driver in my Win11 Pro install. In reading this post, I understand why I couldn’t get my connected screen to show something. It’s just not possible when using vGPU. Ok. But for my NUC, I didn’t manage to get the iGPU passthrough working nor having a display on the connected screen… I own a NUC, Geekom Mini-IT13, with an Intel i9-13900H with an Iris XE for iGPU. Do you have a working guide like yours to follow for…

December 10, 2023 1:48 am
Reply to  Derek Seaman

sorry for the delay of my answer, was busy at work and at home with my toddler.

My NUC’s bios doesn’t really come with options… no cpu options…
but under windows (the one pre installed) HWInfo64 says that it’s ok for VT-D.

I’d be glad if you wrote a guide for full iGPU passthrough like this one.

February 7, 2024 3:53 pm
Reply to  Derek Seaman

I would be very glad to read about FUll GPU passthough.

November 27, 2023 12:20 pm

For others having an issue with a clean install of Proxmox v8.1 and SR-IOV setup, you’ll need to sign the module built with DKMS. To do this, you’ll need to enrol the key with mokutil --import /var/lib/dkms/ before you build/install it. You’ll also need to enter the password on 1st reboot, or it will not accept the driver. Hope this helps.

November 28, 2023 1:46 am

Amazing guide!
Can it be use for Amd or Nvidia cards?

November 28, 2023 2:19 am

Hello Derek, Thank you for this updated write up and your others. I find them all easy to follow for what is a very technical concept. Based on your review of the Beelink products I decided it was time I upgraded my whole setup. My aim is to go low power. I usually use Windows for most work but am happy with Linux. I also run a Home Assistant node. I went for the Beelink EQ12 Pro as it will more than cope with the load I need. Added Proxmox 8.0 and installed the 2 VM’s. Initially I thought…

December 2, 2023 4:27 pm

Can confirm this works on 14th gen chips (14600k). It’s a bit dodgy (as you mention in the article) where sometimes a VM won’t really come online or the drivers don’t install the first time. But other than that it’s great. Hopefully Parsec gets patched (or the driver, whatever is broken) to work since that’s mostly what I want it for. My fresh install of Proxmox 8.1 uses the systemd UEFI bootloader instead of GRUB. To update it run: nano /etc/kernel/cmdline Add “intel_iommu=on iommu=pt i915.enable_guc=3 i915.max_vfs=7” (without quotes) to the end of the first and only line. Then run: proxmox-boot-tool…

December 2, 2023 7:17 pm

Thanks for the writeup! There’s some really interesting tech here. Would this work for intel arc gpu’s? If so, are there features that the (non-intel) cpu and motherboard would need to support in order for this to work?

December 4, 2023 9:30 am

Hello! I have tested on an Intel NUC 13 (i-1340P), Proxmox 8.1.3 (kernel 6.5.11-4), so thought I’d share for people struggling with this / googling: ZFS, secure boot turned off in BIOS -> leads to systemd-boot loader installed by Proxmox -> doesn’t work, I went through the instructions multiple times, but in the end can’t see the 7 VF functions with dmesg | grep i915. Though based on other dmesg output looks like the module gets loaded successfully otherwise, ZFS, secure boot turned on in BIOS -> GRUB loader -> works ext4, secure boot off -> GRUB loader -> works…

Don Joe
January 10, 2024 7:49 pm
Reply to  sam

Hi Sam, I struggled with this as well. But then I read this:

and this:

For EFI Systems installed with ZFS as the root filesystem systemd-boot is used, unless Secure Boot is enabled. All other deployments use the standard grub bootloader 

So it seems that ZFS + Secure Boot enabled in BIOS = GRUB Loader (NOT systemd-boot)

December 4, 2023 7:10 pm

If anyone wants to use parsec and sunshine, one option is to use an older version of Intel drive.

January 17, 2024 10:23 am
Reply to  adm

This doesn’t work, unfortunately

February 10, 2024 10:44 am
Reply to  adm

Worked for me! Thank you!

Stan S
December 6, 2023 8:44 pm

I am getting an error

Building module:
Cleaning build area…
make -j1 KERNELRELEASE=6.2.16-5-pve -C /lib/modules/6.2.16-5-pve/build M=/var/lib/dkms/i915-sriov-dkms/6.2.16-5/build KVER=6.2.16-5-pve……………………………(bad exit status: 2)
Error! Bad return status for module build on kernel: 6.2.16-5-pve (x86_64)
Consult /var/lib/dkms/i915-sriov-dkms/6.2.16-5/build/make.log for more information.

Stan S
December 7, 2023 6:22 pm
Reply to  Stan S

I was able to pass this point after i upgraded Proxmox to 8.1 with 6.5 kernel. Now after I finish all the configuration I get this root@proxmox:~# lspci | grep VGA dmesg | grep i915 00:02.0 VGA compatible controller: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] (rev 04) 0a:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 52) [  0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-6.5.11-7-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on iommu=pt i915.enable_guc=3 i915.max_vfs=7 [  0.067792] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-6.5.11-7-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on iommu=pt i915.enable_guc=3 i915.max_vfs=7 [  4.618943] i915: module verification failed: signature and/or required key missing - tainting kernel [  66.410260] snd_hda_codec_hdmi hdaudioC0D2:…

Stan S
December 7, 2023 8:48 pm
Reply to  Derek Seaman

Thank you for your reply. The secure boot is not enabled on my proxmox. In the summary, under boot mode I have EFI. How can i change it to secure boot?

December 11, 2023 12:29 am

Go to the operation
dkms install -m i915-sriov-dkms -v $KERNEL -k $(uname -r) --force -j 1
dkms status
Error! Kernel headers for kernel 6.5.11-4-pve could not be found.
Please install the linux-headers-6.5.11-4-pve package,
or use the --kernelsourcedir option to tell DKMS where it is located.

I don’t know what to do about this error

December 18, 2023 12:03 pm
Reply to  hunter

You have to reboot pve after updating the kernel.

February 5, 2024 2:19 pm
Reply to  hunter
apt-get install linux-headers-6.5.11-4-pve
December 11, 2023 2:50 pm

Out of curiosity; is there a specific reason why the number of virtual functions was chosen to be 7? Is there any specific detriment to setting it to the specific number of vm’s you wish to use or putting it into a gpu pool, and then just assigning the pool to multiple vms?

December 16, 2023 2:53 am
Reply to  Derek Seaman

I think my question might actually be better posed to the i915 dkms driver creator, I can see in the sriov parameter, it just returns the value specified(possibly limiting to u16?). I think the direction where I wanted to head with the question is what/where in the hardware determines the optimal number of vfs. I recognize that since the driver creator ran it with 7, that the number is probably already optimized.

December 12, 2023 9:02 am

God bless you.
Thank you so much for this guide. Please keep it updated.

December 13, 2023 1:38 pm

Thanks. Worked like a charm.

Can you please tell me how to use this vGPU with an Ubuntu Server VM?

February 27, 2024 12:13 am
Reply to  Shailesh

I’d like to know this as well. Thanks!

December 14, 2023 8:29 pm
December 21, 2023 5:53 am

Thank you. Following your method, I succeeded in getting the 7 VFs enabled, but the VFs cannot be used on a Synology NAS. Is there any solution?

December 25, 2023 3:32 am

Hello and thank you for the great instructions.
However, I have a strange error.
After setting up and restarting everything seems to work.
I can see the 7 cards, but after a while they are deactivated.

root@pve:~# dmesg | grep i915
[   0.000000] Command line: BOOT_IMAGE=/vmlinuz-6.5.11-7-pve root=ZFS=/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on iommu=pt initcall_blacklist=sysfb_init pcie_aspm=off pcie_port_pm=off i915.enable_guc=3 i915.max_vfs=7
[   0.186420] Kernel command line: BOOT_IMAGE=/vmlinuz-6.5.11-7-pve root=ZFS=/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on iommu=pt initcall_blacklist=sysfb_init pcie_aspm=off pcie_port_pm=off i915.enable_guc=3 i915.max_vfs=7
[  15.622358] i915 0000:00:02.5: [drm] PMU not supported for this GPU.
[  15.629766] i915 0000:00:02.0: Enabled 7 VFs
[  31.432536] i915 0000:00:02.0: Disabled 7 VFs

February 17, 2024 6:18 am
Reply to  DOM

Were you ever able to solve this? I'm also working with a 12th gen (and an 11th, and a 10th…) but I'm having this exact issue with "Disabled 7 VFs", and then nothing is available anymore.

December 25, 2023 11:15 am

Thanks for the guide. Just a note that if you go down the systemd path, the sysfsutils install command is missing. In my case, without it, lspci only displayed 1 VGA device.
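A minimal sketch of the sysfs route the commenter refers to, assuming the iGPU sits at 00:02.0 as it does throughout this guide (adjust the PCI path for your host):

```shell
# Install sysfsutils so the VF count is re-applied at every boot,
# then register the setting in /etc/sysfs.conf (PCI path assumed).
apt install -y sysfsutils
echo "devices/pci0000:00/0000:00:02.0/sriov_numvfs = 7" >> /etc/sysfs.conf
# After a reboot, `lspci | grep VGA` should list 00:02.0 through 00:02.7.
```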

January 5, 2024 2:22 am

Do we need Secure Boot on for this process? The guide implied that IF Secure Boot was enabled, this is what needs to be done. I'm on 8.1 without Secure Boot enabled; all steps pass other than the reboot at the end. No MOK enrollment screen appears, and after the reboot lspci | grep VGA reports just one listing of the GPU. I'm on an i5-12400, VT enabled and Secure Boot disabled.

January 6, 2024 6:40 am

Would this guide work for passing a 12th gen chip into a Linux VM? I've followed this guide right up until adding the hardware to the Linux VM; however, the device is not detected or shown within /dev/dri.

February 7, 2024 8:15 pm
Reply to  greenie

Did you get this working?

January 11, 2024 9:07 pm

Thx for the post!

It goes without saying that VT-d and SR-IOV should be enabled in the BIOS. I spent some time troubleshooting before I realized SR-IOV was not enabled. After I made sure both were enabled, it all worked great!

January 24, 2024 12:32 am
Reply to  Tuan

What was the symptom of SR-IOV not being enabled?

January 17, 2024 10:26 am

This is a great article! I followed it for my Proxmox 8.1 (kernel 6.5.11-7-pve) with a Windows 10 VM setup. At first I couldn't get the VFs to work, but then I realized I had started the procedure on a fresh Proxmox install, so as soon as I ran "apt update && apt upgrade" the kernel version changed after the next reboot, which takes place at a later point! I think such a warning/remark could be added to this instruction. Eventually I reinstalled Proxmox, updated packages and kernel, REBOOTED the server, and only then started the whole procedure. Everything works fine for me! PS Indeed… Read more »

January 24, 2024 5:58 pm

The instructions were superb. The only deviation I made was having to install the Iris drivers before removing the display from the VM hardware config; the VM would not boot otherwise. Now with the GPU, I get a "Windows has stopped this device because of an error". If I remove it in device mangler, Windows finds it and shows it installing fine, but I get no GPU tab in Task Manager. After a reboot, same thing: little x in device mangler. I have shut down and restarted the VM several times, restarted the node. Clean device driver install. Uninstall… Read more »

January 25, 2024 8:06 am

Hello, I followed your guide and rebuilt my kernel. However, I was not able to get the driver to work. After a few days of troubleshooting, I realized that my CPU (Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz) does not support vGPU.

My question now is: how do I roll back these changes to get back to the stock Proxmox kernel?

> pveversion
pve-manager/8.1.4/ec5affc9e41f1d79 (running kernel: 6.5.11-7-pve)
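A rough rollback sketch, assuming the module name and kernel args used in this guide; check `dkms status` for the exact version string before removing, and keep a console available in case the boot entry needs fixing:

```shell
# Remove the SR-IOV DKMS module, strip the i915 args added to GRUB,
# regenerate the boot config, and reboot into the stock kernel.
dkms status                                   # note the i915-sriov-dkms version
dkms remove i915-sriov-dkms/<version> --all   # <version> from dkms status
sed -i 's/ i915.enable_guc=3 i915.max_vfs=7//' /etc/default/grub
update-grub
reboot
```

If you also pinned a kernel earlier, unpin it with `proxmox-boot-tool kernel unpin` so the host returns to the default kernel selection.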

David LaPorte
January 28, 2024 9:18 am

Great tutorial, thank you! One issue I encountered is that the i915 driver crashes if you’re also using a discrete Intel GPU. It was necessary to add “softdep drm pre: vfio-pci” to /etc/modprobe.d/drm.conf and “options vfio-pci ids=8086:56a0” to /etc/modprobe.d/vfio-pci.conf for my Arc A770. Once the i915 didn’t have to deal with it, everything went as expected. A couple questions: I have an AlderLake-S (desktop) CPU – based on the table here, should I set “i915.enable_guc=2” instead? With or without SR-IOV enabled, I’m getting the dreaded “Code 43” in a Windows 11 guest. Does anyone have any idea what BIOS magic,… Read more »

David LaPorte
January 28, 2024 9:25 am

Just after writing my host message, I realized that I had not set the CPU type to “host” as you described. I did that, and it works!

The comment about the discrete intel card still stands, probably worth calling out in your tutorial.

Thanks again for putting this together.

January 28, 2024 11:38 am

I have Proxmox 8.1 and I'm using a Nipogi AD08 device.

I've configured the passthrough and added the video card "Alder Lake-P GT1 [UHD Graphics]" as a PCI device (without the PCI-Express flag, otherwise Windows returns error 43), but in the end HDMI is not working.

Any suggestions / paths to follow? (I executed the guide as written here and no issue was detected.)

February 6, 2024 6:25 am

I followed all your steps, but it doesn't work.
My CPU is a 12100, Intel VT-d is enabled, Secure Boot is turned off, kernel: 6.5.11-8.

root@www:~# uname -r

root@www:~# dmesg | grep i915
[  18.836824] i915 0000:00:02.0: [drm] fb0: i915drmfb frame buffer device
[  19.498743] i915 0000:00:02.0: not enough MMIO resources for SR-IOV
[  19.580350] i915 0000:00:02.0: [drm] *ERROR* Failed to enable 7 VFs (-ENOMEM)
[  19.663001] i915 0000:00:02.0: not enough MMIO resources for SR-IOV
[  19.743905] i915 0000:00:02.0: [drm] *ERROR* Failed to enable 7 VFs (-ENOMEM)

February 23, 2024 10:08 am
Reply to  hubert

Hi, I got the same error and fixed it by enabling "Transparent Hugepages" in the BIOS. I have a CWWK 13th gen i5-1335U and found it somewhere in the CPU section of the BIOS.

February 9, 2024 9:08 pm

When I run lspci | grep VGA I get the expected output:

00:02.0 VGA compatible controller: Intel Corporation Raptor Lake-P [Iris Xe Graphics] (rev 04)
00:02.1 VGA compatible controller: Intel Corporation Raptor Lake-P [Iris Xe Graphics] (rev 04)
00:02.2 VGA compatible controller: Intel Corporation Raptor Lake-P [Iris Xe Graphics] (rev 04)
00:02.3 VGA compatible controller: Intel Corporation Raptor Lake-P [Iris Xe Graphics] (rev 04)
00:02.4 VGA compatible controller: Intel Corporation Raptor Lake-P [Iris Xe Graphics] (rev 04)
00:02.5 VGA compatible controller: Intel Corporation Raptor Lake-P [Iris Xe Graphics] (rev 04)
00:02.6 VGA compatible controller: Intel Corporation Raptor Lake-P [Iris Xe… Read more »

February 16, 2024 3:34 pm
Reply to  Hygaard


February 19, 2024 11:22 am
Reply to  Hygaard

Hygaard and Derek: many thanks for providing this tutorial. I am using this on a new Minisforum MS-01 with a 13th Gen Raptor Lake iGPU. Like Hygaard, I have no output from "dmesg | grep i915". Also like Hygaard, I can proceed and everything appears to work. In my case, however, I am seeing unstable reboots in both the VM and the node hosting it. I am a hobbyist user of Proxmox so my understanding of issues can be limited. In the syslog of the node I can see this 12x when starting the VM: Feb 19 12:56:36 pve01 QEMU[9707]: kvm:… Read more »

February 13, 2024 4:43 am

Hi all, thank you for the detailed and updated article. I have an "8 x 12th Gen Intel(R) Core(TM) i3-1215U (1 Socket)". I installed and configured it according to the article and everything is working excellently, meaning I have two Windows VMs (10 and 11), both configured with a VF and running concurrently. My issue starts with the macOS VM I have, which is configured (per the article on installing macOS on Proxmox) with "Display: VMware compatible". When I start this VM, I see in dmesg that Proxmox "Disabled 7 VFs", and from there on, there is no more… Read more »

February 14, 2024 3:43 am

Kudos! Great guide thanks.
I was able to enable GPU passthrough on my brand new Dell Optiplex Micro 7010 (released in 2023) with an Intel 13th gen i5 processor and UHD integrated graphics. Everything seems to be working fine, but sometimes my RDP session gets disconnected and I find this trace in the syslog:

Feb 14 10:52:07 pve kernel: e1000e 0000:00:1f.6 enp0s31f6: Reset adapter unexpectedly
Feb 14 10:52:07 pve kernel: vmbr0: port 1(enp0s31f6) entered disabled state

No errors before enabling GPU passthrough.
Any clue?

February 16, 2024 12:37 am
Reply to  Ric

Adding other logs as a separate message due to limits on length:

Feb 14 10:52:03 pve kernel: e1000e 0000:00:1f.6 enp0s31f6: Detected Hardware Unit Hang:
  TDH                  <2e>
  TDT                  <69>
  next_to_use          <69>
  next_to_clean        <2d>
  time_stamp           <100023e7c>
  next_to_watch        <2e>
  jiffies              <100024100>
  next_to_watch.status <0>
MAC Status             <80483>
PHY Status             <796d>
PHY 1000BASE-T Status  <3800>
PHY Extended Status    <3000>
PCI Status             <10>

After days of intensive use I can confirm that setting Intel E1000 instead of VirtIO solves this issue.
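The NIC-model swap the commenter describes can be done from the Proxmox CLI; a hedged sketch, where the VM ID (100) and bridge (vmbr0) are placeholders for your own values:

```shell
# Replace the VM's net0 device with an emulated Intel E1000 NIC instead of
# VirtIO. Note this regenerates the MAC address unless you copy the existing
# one from the VM's current net0 line (qm config 100).
qm set 100 --net0 e1000,bridge=vmbr0
```

The same change is available in the GUI under the VM's Hardware > Network Device > Model dropdown.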

February 14, 2024 12:03 pm

Hi Derek,
(For some reason, my earlier post was deleted.)
I installed and configured everything correctly and am able to run two Windows VMs, but when I start a macOS VM, I see "Disabled 7 VFs" in dmesg. The macOS VM is configured with "Display: VMware compatible". How can I configure the macOS VM correctly?

February 19, 2024 5:52 pm
Reply to  Daniel

I had the same issue. I found that I had an auto-starting VM trying to use PCIe ID 0.

As said above, only use PCIe IDs 1-7.

Using PCIe ID 0 seems to disable the 7 VFs.
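A quick way to audit for this, assuming the iGPU is at 00:02.0 and the standard Proxmox config location; function .0 is the physical function, and passing it through tears down the VFs:

```shell
# List any VM config that passes through function .0 of the iGPU.
# Only 00:02.1 through 00:02.7 (the VFs) should appear in hostpci lines.
grep -H hostpci /etc/pve/qemu-server/*.conf | grep '00:02.0'
```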

February 23, 2024 12:23 pm

Greetings. GPU passthrough on an Optiplex 7010 (2023) i5 13th gen makes my VM unstable and unresponsive; it often shows a black screen or graphic glitches, so I need to kill/stop it. When starting it up I see this in the syslog (not sure it's linked to this issue):

Feb 23 20:00:36 pve QEMU[782076]: kvm: VFIO_MAP_DMA failed: Invalid argument
Feb 23 20:00:36 pve QEMU[782076]: kvm: vfio_dma_map(0x557568db8460, 0x383800000000, 0x20000000, 0x7fcba0000000) = -22 (Invalid argument)
Feb 23 20:00:36 pve QEMU[782076]: kvm: VFIO_MAP_DMA failed: Invalid argument
Feb 23 20:00:36 pve QEMU[782076]: kvm: vfio_dma_map(0x557568db8460, 0x383800000000, 0x20000000, 0x7fcba0000000) = -22 (Invalid argument)

The only way I've found to mitigate is… Read more »

February 26, 2024 5:12 am

Will this work with a 10th gen Intel processor? Or, if the only VM that needs the GPU is Windows, is it possible to just pass it through to that one?

February 26, 2024 3:11 pm

Bloody legend you are! Works great on 6.5.13-1-pve. I now have GPU passthrough for both Windows 11 and ttek Plex LXC