Proxmox VE 8.1: Windows 11 vGPU (VT-d) Passthrough with Intel Alder Lake

Interested in sharing your Intel Alder Lake GPU between multiple Proxmox 8.x VMs? This post covers configuring Intel vGPU virtual functions on Proxmox 8.0/8.1 using an Intel 12th Generation (Alder Lake) CPU with Windows 11 Pro. I will walk you through the Proxmox 8.x kernel configuration, modifying GRUB, installing Windows 11 Pro, and then setting up the Intel graphics inside Windows.

I’ve completely updated the content from my Proxmox 8.0 vGPU post which appeared on my blog back in June. While the process is generally the same as before, I’ve added new sections and refreshed all the prior content. I’ve redirected the old blog post URL to this post. In case you need the old post you can view it here: Proxmox VE 8: Windows 11 vGPU (VT-d) Passthrough with Intel Alder Lake. It has a lot of user comments, which you might find helpful. However, use this post for the installation process on both Proxmox 8.0 and 8.1. DO NOT use this process on Proxmox 8.2, as DKMS is broken on kernel 6.8, which ships with Proxmox 8.2.

Update April 25, 2024: Proxmox 8.2 is now out, but it ships with kernel 6.8.x. DKMS is broken on this kernel. DO NOT use this procedure on a vanilla Proxmox 8.2 install with kernel 6.8. Either wait to upgrade to Proxmox 8.2, or pin a prior 6.5 kernel that still works with DKMS and then upgrade.

Update Dec 7, 2023: Proxmox has released a minor kernel upgrade, kernel 6.5.11-7-pve. If you update from the Proxmox UI, it looks like DKMS rebuilds the module and you should be good to go. However, in my experience, if you do nothing else your 7 VFs will vanish. As my article states, just re-run all the GRUB steps, which rebuild DKMS from scratch, then reboot. Now my 7 VFs are back. This is why it is wise to pin your kernel version and only upgrade as needed. I also added a bit to the troubleshooting section.

Update December 2, 2023: Added a screenshot for the secure boot MOK setup. 

Update November 27, 2023: Added an additional step for Proxmox 8.1 or later installs that use secure boot. Updated Plex information, as their stance has now changed. They will work on SR-IOV (vGPU) support for a future release. 

Update November 26, 2023: I’ve modified both the DKMS and GRUB sections so they are now responsive to the kernel version your Proxmox host is running. It also performs more system cleanup, and is pretty much just a copy/paste endeavor now. The changes were inspired by a Github comment, which I modified for even more automation.

Update November 25, 2023: The DKMS repo has been updated to address the minor tweak needed for Kernel 6.5. So I removed the step to modify one of the DKMS files. I also added a LXC vGPU VF section to address other services like Plex and their vGPU support.

GPU Passthrough vs Virtualization

Typical GPU passthrough passes the entire PCIe graphics device to the VM. This means only one VM can use the GPU resources. This has an advantage of being able to output the video display via the computer’s HDMI or DisplayPort ports to an external monitor. But, you are limited to one VM using the GPU. This can be useful if you want Proxmox on your desktop computer, but also want to fully use the GPU resources for your primary desktop OS with an external monitor. 

However, there is a way to share the GPU among VMs. This technology, Intel VT-d (Intel Virtualization Technology for Directed I/O), enables virtualization of the GPU resources and presents a VF (virtual function) to up to 7 VMs. This allows up to 7 VMs to use the GPU concurrently, but you lose the ability to use a physically connected monitor. Thus you are limited to Remote Desktop access for these VMs.

Full GPU passthrough and vGPU setups both have their place. It totally depends on what you want your setup to do. This post will focus on vGPU configuration and sharing your GPU with up to 7 Proxmox VMs. I’ve only tested this with Windows 11 VMs. Linux VMs have, in the past, had Intel driver problems with vGPUs. So test thoroughly if you want to try vGPU with Linux VMs.

What GPU does Windows 11 see?

When using a vGPU VF the OS does not know you have virtualized the GPU. The screenshots below are from my Beelink i5-1240P Proxmox VE 8.1 host. The “root” GPU is at PCIe 02.0, and you can see 7 GPU VFs (virtual functions).

I’ve assigned one GPU VF to a Windows 11 Pro VM (00:02.1). From the Windows perspective the stock WHQL Intel driver works as normal. The GPU even shows up as an Intel Iris Xe. You can also look at the Intel Arc control application and view the various hardware details of the GPU. 
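
If you want to double check what the guest sees from the command line, a quick PowerShell query inside the VM works as well. This is just a sanity check; the adapter name, driver version, and status it returns will depend on your hardware and installed driver:

Get-CimInstance Win32_VideoController | Select-Object Name, DriverVersion, Status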

My Working Configuration

Not all Intel CPUs support VT-d with their GPU. Intel only supports it on 11th Generation and later CPUs, and considers prior generations “legacy”. I’ve seen forum posts where users have issues with Intel 11th Gen, so that may or may not work for you. My latest configuration is as follows:

  • Beelink SEi12 Pro (Intel 12th Gen Core i5-1240P)
  • Proxmox VE 8.1
  • Linux Kernel 6.5.11-4-pve
  • Windows 11 Pro (23H2)
  • Intel GPU Driver 31.0.101.4972

The setup I wrote about in June 2023 used Proxmox 8.0 and Linux Kernel 6.2.x. Both Proxmox 8.0 and 8.1 have been solid performers. Note that your Proxmox host must have Intel VT-d (or whatever they call it) enabled in your BIOS, and your motherboard must properly implement it. Not all motherboards have working VT-d for GPUs. 
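
If you aren’t sure whether VT-d is actually exposed by your BIOS, a quick sanity check from the Proxmox shell is to look for DMAR/IOMMU messages in the kernel log. If the BIOS exposes VT-d you should at least see ACPI DMAR table entries, and after the GRUB/cmdline changes later in this post you should also see lines like “DMAR: IOMMU enabled” and “Intel(R) Virtualization Technology for Directed I/O”:

dmesg | grep -e DMAR -e IOMMU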

Note: Several readers left comments on my 8.0 post about the Parsec app not working. Unfortunately, I can confirm even on Proxmox 8.1 and the latest Intel drivers that Parsec does not work. 

What about LXC vGPU VF Compatibility?

If you are nerdy enough to run a Windows 11 VM with a vGPU, you might also have some Linux LXCs that can use GPU resources as well (such as Plex). Linux LXC compatibility with vGPU VFs might be problematic. 

As of the publication date of this post, a Plex LXC with Ubuntu 22.04 has problems with HDR tone mapping. Basically the Linux Intel Media Driver (IMD) doesn’t like using a vGPU VF. If you enable HDR tone mapping on the Plex server and you are viewing HDR content on a device that needs server side HDR tone mapping, the video stream will likely be corrupted. 

However, Plex hardware transcoding will still work. Hardware transcoding uses the Xe graphics module in the CPU (not the GPU), whereas HDR tone mapping uses the GPU itself. In other words, these two hardware offloads use different APIs, and the GPU offload for HDR tone mapping is broken when using a vGPU VF.

Chuck from Plex posted on the forums that he consulted with his engineers, and they will work on making Plex compatible with SR-IOV. However, the needed kernel mods or GRUB updates will be up to the user to make. Intel has mainline SR-IOV support for the Linux 6.4 kernel, and is working on 6.5. You can follow the progress in SR-IOV Mainlining.

Bottom line, if you have LXCs on a Proxmox host that need GPU resources, they may not work with a vGPU VF. When Intel releases their official 6.5 package, it should offer wider compatibility. Configuring a LXC to use a vGPU VF requires modifying the LXC config file. Since this post is about Windows 11 vGPU, that procedure is out of scope for this post. I’ll cover the required LXC config changes in a separate post.
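
For reference only (the full walkthrough is deferred to a separate post as noted above), giving an LXC access to a GPU device generally comes down to adding device cgroup and bind-mount entries to the container’s config under /etc/pve/lxc/. The minimal sketch below assumes a container ID of 200 and that the device you want to expose is /dev/dri/renderD128; both are placeholders, and the exact render node and permissions depend on your setup:

# /etc/pve/lxc/200.conf (illustrative snippet only)
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file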

Proxmox Kernel Configuration

Note: The commands below automatically detect the kernel version you are running and adjust accordingly. If you are running Proxmox 8.0 you should be on kernel 6.2.x. Proxmox 8.1 comes with kernel 6.5.x. Both should work.

  1. On your Proxmox host open a shell and run the following commands. First we need to install Git, kernel headers, and mokutil, do a bit of cleanup, then set the kernel variable with the right version.

apt update && apt install git pve-headers mokutil -y
rm -rf /var/lib/dkms/i915-sriov-dkms*
rm -rf /usr/src/i915-sriov-dkms*
rm -rf ~/i915-sriov-dkms
KERNEL=$(uname -r); KERNEL=${KERNEL%-pve}

2. Now we need to clone the DKMS repo and modify the configuration file to set the kernel version. Check that the package name is i915-sriov-dkms and the package version matches your kernel version.

cd ~
git clone https://github.com/strongtz/i915-sriov-dkms.git
cd ~/i915-sriov-dkms
cp -a ~/i915-sriov-dkms/dkms.conf{,.bak}
sed -i 's/"@_PKGBASE@"/"i915-sriov-dkms"/g' ~/i915-sriov-dkms/dkms.conf
sed -i 's/"@PKGVER@"/"'"$KERNEL"'"/g' ~/i915-sriov-dkms/dkms.conf
sed -i 's/ -j$(nproc)//g' ~/i915-sriov-dkms/dkms.conf
cat ~/i915-sriov-dkms/dkms.conf
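
After the sed commands run, the placeholders in dkms.conf should be replaced with literal values. The cat output should contain lines roughly like the following near the top (the version string shown is just an example; yours should match the kernel version captured in the KERNEL variable):

PACKAGE_NAME="i915-sriov-dkms"
PACKAGE_VERSION="6.5.11-4"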

3. Here we install DKMS, link the kernel source, and check the status. Verify that the module shows as added.

apt install --reinstall dkms -y
dkms add .
cd /usr/src/i915-sriov-dkms-$KERNEL
dkms status
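
If the add succeeded, dkms status should report the module as added, along these lines (version shown is an example):

i915-sriov-dkms/6.5.11-4: added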

4. Let’s now build and install the module for the running kernel and check the status. Validate that it shows installed.

dkms install -m i915-sriov-dkms -v $KERNEL -k $(uname -r) --force -j 1
dkms status
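
A successful build ends with dkms status reporting the module as installed for your running kernel, roughly like this (versions are examples):

i915-sriov-dkms/6.5.11-4, 6.5.11-4-pve, x86_64: installed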

5. For fresh Proxmox 8.1 and later installs, secure boot may be enabled. Just in case it is, we need to enroll the DKMS key so the kernel will load the module. Run the following command, then enter a password. This password is only used for the MOK setup and will be needed again when you reboot the host. After that, the password is not needed. It does NOT need to be the same password as you used for the root account.

mokutil --import /var/lib/dkms/mok.pub
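
If you aren’t sure whether your host is even using secure boot, you can check first; if it reports SecureBoot disabled, importing the key is harmless but the MOK enrollment prompt will not appear on reboot:

mokutil --sb-state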

Proxmox GRUB Configuration

Note: Default installations of Proxmox use the GRUB boot loader. If that’s your situation, follow the steps in this section. If you are using ZFS or another config that uses systemd bootloader, skip down to the systemd section below. 

1. Back in the Proxmox shell run the following commands if you DO NOT have a Google Coral PCIe TPU in your Proxmox host. You would know if you did, so if you aren’t sure, run the first block of commands. If your Google Coral is USB, use the first block of commands as well. Run the second block of commands if your Google Coral is a PCIe module.

cp -a /etc/default/grub{,.bak}
sed -i '/^GRUB_CMDLINE_LINUX_DEFAULT/c\GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt i915.enable_guc=3 i915.max_vfs=7"' /etc/default/grub
update-grub
update-initramfs -u -k all
apt install sysfsutils -y

If your Proxmox host DOES have a Google Coral PCIe TPU and you are using PCIe passthrough to a LXC or VM, use this command instead. This will blacklist the Coral device at the Proxmox host level so that your LXC/VM can get exclusive access.

cp -a /etc/default/grub{,.bak}
sed -i '/^GRUB_CMDLINE_LINUX_DEFAULT/c\GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt i915.enable_guc=3 i915.max_vfs=7 initcall_blacklist=sysfb_init pcie_aspm=off"' /etc/default/grub
update-grub
update-initramfs -u -k all
apt install sysfsutils -y

2. Now we need to find which PCIe bus the VGA card is on. It’s typically 00:02.0.

lspci | grep VGA
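
Before the reboot later in this post you will typically see just the single physical GPU. The output looks something like the line below on an Alder Lake system (the exact device name, revision, and bus address depend on your CPU):

00:02.0 VGA compatible controller: Intel Corporation Alder Lake-P GT2 [Iris Xe Graphics]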

3. Run the following command, modifying the PCIe bus number if needed. In this case I’m using 00:02.0. Then cat the file to verify it was modified.

echo "devices/pci0000:00/0000:00:02.0/sriov_numvfs = 7" > /etc/sysfs.conf
cat /etc/sysfs.conf

Proxmox SystemD Bootloader

Note: If your Proxmox host doesn’t use GRUB to boot (default), but rather uses systemd, then follow these steps. This is likely the case if you are using ZFS. Skip this section if you are using GRUB. 

  1. Let’s modify the kernel loader command line:

nano /etc/kernel/cmdline

2. Add the following text to the END of the current line. Do NOT add a second line.

intel_iommu=on iommu=pt i915.enable_guc=3 i915.max_vfs=7
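
For reference, on a ZFS-root install the complete /etc/kernel/cmdline ends up as one single line that looks something like the example below; the root= portion comes from your existing file and must be left exactly as it was:

root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on iommu=pt i915.enable_guc=3 i915.max_vfs=7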

3. Run the following commands to update the boot loader and install sysfsutils (the GRUB path above installs sysfsutils too; it is needed so the sysfs.conf setting in the next section is applied at boot).

proxmox-boot-tool refresh
apt install sysfsutils -y

Finish PCI Configuration

1. Now we need to find which PCIe bus the VGA card is on (lspci | grep VGA, as in the GRUB section above). It’s typically 00:02.0.

2. Run the following command, modifying the PCIe bus number if needed. In this case I’m using 00:02.0. Then cat the file to verify it was modified.

echo "devices/pci0000:00/0000:00:02.0/sriov_numvfs = 7" > /etc/sysfs.conf
cat /etc/sysfs.conf

3. Reboot the Proxmox host. If using Proxmox 8.1 or later with secure boot you MUST setup MOK. As the Proxmox host reboots, monitor the boot process and wait for the Perform MOK management window (screenshot below). If you miss the first reboot you will need to re-run the mokutil command and reboot again. The DKMS module will NOT load until you step through this setup. 

Secure Boot MOK Configuration (Proxmox 8.1+)

4. Select Enroll MOK, Continue, Yes, <password>, Reboot. 

5. Login to the Proxmox host, open a Shell, then run the commands below. The first should return eight lines of PCIe devices. The second command should return a lot of log data. If everything was successful, at the end you should see minor PCIe IDs 1-7 and finally Enabled 7 VFs. If you are using secure boot and do NOT see the 7 VFs, then the DKMS module is probably not loaded. Troubleshoot as needed.

lspci | grep VGA
dmesg | grep i915
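
For reference, on a working host the interesting dmesg lines look something like the two below (timestamps and PCI addresses will differ); the final "Enabled 7 VFs" line is the one that matters:

i915 0000:00:02.0: 7 VFs could be associated with this PF
i915 0000:00:02.0: Enabled 7 VFs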

6. Now that the Proxmox host is ready, we can install and configure Windows 11. If you do NOT see 7 VFs enabled, stop. Troubleshoot as needed. Do not pass go, do not collect $100 without 7 VFs. If you are using secure boot and you aren’t seeing the 7 VFs, double check the MOK configuration. 

Windows 11 VM Creation

  1. Download the latest Fedora Windows VirtIO driver ISO from here
  2. Download the Windows 11 ISO from here. Use the Download Windows 11 Disk Image (ISO) for x64 devices option. 
  3. Upload both the VirtIO and Windows 11 ISOs to the Proxmox server. You can use any Proxmox storage container that you wish. I uploaded them to my Synology. If you don’t have any NAS storage mapped, you probably have “local“, which works. 

4. Start the VM creation process. On the General tab enter the name of your VM. Click Next.
5. On the OS tab select the Windows 11 ISO.  Change the Guest OS to Microsoft Windows, 11/2022. Tick the box for the VirtIO drivers, then select your Windows VirtIO ISO. Click Next. Note: The VirtIO drivers option is new to Proxmox 8.1. I added a Proxmox 8.0 step at the end to manually add a new CD drive and mount the VirtIO ISO.

6. On the System page modify the settings to match EXACTLY as those shown below. If your local VM storage is named differently (e.g. NOT local-lvm), use that instead.

7. On the Disks tab, modify the size as needed. I suggest a minimum of 64GB. Modify the Cache and Discard settings as shown. Only enable Discard if using SSD/NVMe storage (not a spinning disk).

8. On the CPU tab, change the Type to host. Allocate however many cores you want. I chose 2.

  9. On the Memory tab allocate as much memory as you want. I suggest 8GB or more.
10. On the Network tab change the model to Intel E1000. Note: We will change this to VirtIO later, after Windows is configured.

11. Review your VM configuration. Click Finish. Note: If you are on Proxmox 8.0, modify the hardware configuration again and add a CD/DVD drive and select the VirtIO ISO image. Do not start the VM. 

Windows 11 Installation

  1. In Proxmox click on the Windows 11 VM, then open a console. Start the VM, then press Enter to boot from the CD.
  2. Select your language, time, currency, and keyboard. Click Next. Click Install now.
  3. Click I don’t have a product key
  4. Select Windows 11 Pro. Click Next.
  5. Tick the box to accept the license agreement. Click Next.
  6. Click on Custom install.
  7. Click Load driver.

8. Click OK.
9. Select the w11 driver. Click Next.

10. On Where do you want to install Windows click Next.
11. Sit back and wait for Windows 11 to install.

Windows 11 Initial Configuration

Note: I strongly suggest using a Windows local account during setup, and not your Microsoft cloud account. This will make remote desktop setup easier, as you can’t RDP to Windows 11 using your Microsoft cloud account. The procedure below “tricks” Windows into allowing you to create a local account by attempting to use a locked out cloud account. Also, do NOT use the same username for the local account as your Microsoft cloud account. This might cause complications if you later add your Microsoft cloud account.

  1. Once Windows boots you should see a screen confirming your country or region. Make an appropriate selection and click Yes.
  2. Confirm the right keyboard layout. Click Yes. Add a second keyboard layout if needed. 
  3. Wait for Windows to check for updates. Windows may reboot. 
  4. Enter the name of your PC. Click Next. Wait for Windows to reboot.
  5. Click Set up for personal use. Click Next. Click Sign in.
  6. To bypass using your Microsoft cloud account, enter no @ thankyou .com (no spaces), enter a random password, click Next on Oops, something went wrong
  7. On the Who’s going to use this device? screen enter a username. Click Next.
  8. Enter a password. Click Next.
  9. Select your security questions and enter answers.
  10. Select the Privacy settings you desire and click Accept.
  11. In Windows open the mounted ISO in Explorer. Run virtio-win-gt-x64 and virtio-win-guest-tools. Use all default options. 
  12. Shutdown (NOT reboot) Windows.

13. In Proxmox modify the Windows 11 VM settings and change the NIC to VirtIO.
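
If you prefer the CLI over the UI for this step, the equivalent qm command is below. It assumes your VM ID is 100 and your bridge is vmbr0; adjust both, and include your existing MAC address (virtio=XX:XX:XX:XX:XX:XX,bridge=...) if you want the VM to keep the same address:

qm set 100 --net0 virtio,bridge=vmbr0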

14. Start the Windows 11 VM. Verify at least one IP is showing in the Proxmox console.

15. You can now unmount the Windows 11 and VirtIO ISOs. 

16. You will probably also want to change the Windows power plan so that the VM doesn’t hibernate (unless you want it to). 

17. You may want to disable local account password expiration, as RDP will fail when your password expires with no way to reset it. You’d need to re-enable the Proxmox console to reset your password (see later in this post for a how-to).

wmic UserAccount set PasswordExpires=False
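
Note that wmic is deprecated and may be missing on newer Windows 11 builds. If that command isn’t available, the same thing can be done per-account from an elevated PowerShell prompt (replace YourUser with your local account name):

Set-LocalUser -Name "YourUser" -PasswordNeverExpires $true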

Windows 11 vGPU Configuration

1. Open a Proxmox console to the VM and login to Windows 11. In the search bar type remote desktop, then click on remote desktop settings.

2. Enable Remote Desktop. Click Confirm.

3. Open your favorite RDP client and login using the user name and credentials you setup. You should now see your Windows desktop and the Proxmox console window should show the lock screen.
4. Inside the Windows VM open your favorite browser and download the latest Intel “Recommended” graphics driver from here. In my case I’m grabbing 31.0.101.4972.
5. Shutdown the Windows VM. 
6. In the Proxmox console click on the Windows 11 VM in the left pane. Then click on Hardware. Click on the Display item in the right pane. Click Edit, then change it to none.

Note: If in the next couple of steps the 7 GPU VFs aren’t listed, try rebooting your Proxmox host and see if they come back. Then try adding one to your Windows VM again.

7. In the top of the right pane click on Add, then select PCI Device.
8. Select Raw Device. Then review all of the PCI devices available. Select one of the sub-function (.1, .2, etc.) graphics controllers (i.e. ANY entry except 00:02.0). Do NOT use the root “0” device for ANYTHING. I chose 02.1. Do NOT tick the “All Functions” box. Tick the box next to Primary GPU. Click Add.
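
For reference, the CLI equivalent of steps 6 through 8 looks roughly like this, assuming VM ID 100, the q35 machine type selected earlier, and the 00:02.1 VF (the x-vga flag corresponds to the Primary GPU checkbox; adjust the VM ID and function number for your setup):

qm set 100 --vga none
qm set 100 --hostpci0 0000:00:02.1,pcie=1,x-vga=1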

9. Start the Windows 11 VM and wait a couple of minutes for it to boot and RDP to become active. Note, the Proxmox Windows console will NOT connect since we removed the virtual VGA device. You will see a Failed to connect to server message. You can now ONLY access Windows via RDP. 
10. RDP into the Windows 11 VM. Locate the Intel Graphics driver installer and run it. If all goes well, you will be presented with an Installation complete! screen. Reboot. If you run into issues with the Intel installer, skip down to my troubleshooting section below to see if any of those tips help. 

Windows 11 vGPU Validation

1. RDP into Windows and launch Device Manager
2. Expand Display adapters and verify there’s an Intel adapter in a healthy state (e.g. no error 43).

3. Launch Intel Arc Control. Click on the gear icon, System Info, Hardware. Verify it shows Intel Iris Xe.

4. Launch Task Manager, then watch a YouTube video. Verify the GPU is being used.

Troubleshooting Intel Driver Installation

The first time I did this on my N100 Proxmox server the Intel drivers had issues installing. For some reason the RDP session would freeze mid way through the install, or would get disconnected and then fail to connect. I had to reboot the VM from the Proxmox UI and then re-start the Intel installer using their “clean” option. After a couple of re-installs, it ran just fine. It ran flawlessly the first time on my i5-1240P server. If after a VM reboot RDP can’t connect after a few minutes, reboot the VM and try again. 

On rare occasions if you reboot the Proxmox host and the Windows 11 VM gets a GPU device error, try rebooting the Proxmox host again and see if it clears. Re-installing the Intel drivers might help too. 

Also, if you see the following message in the dmesg logs, this likely means you have secure boot enabled and did not properly configure MOK or enter the MOK password after your first host reboot. If this is the case, re-run the mokutil import command, connect a physical monitor/keyboard to your Proxmox host, reboot, and run through the MOK setup.

i915: module verification failed: signature and/or required key missing - tainting kernel
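
Two quick checks help narrow this down: confirm the secure boot state, and confirm whether the DKMS MOK key is actually enrolled. If the second command reports the key is not enrolled, re-run the import and go through the enrollment screen on the next reboot:

mokutil --sb-state
mokutil --test-key /var/lib/dkms/mok.pub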

How to Use Intel GPU VFs

You can configure up to 7 VMs to use vGPU resources on the Proxmox host. Each VM MUST be assigned a unique PCIe VF. In addition, no VMs can use the “root” PCIe GPU device. If a running VM is assigned the root GPU, the VFs will not function properly. 

Some readers asked if they could connect an HDMI cable to their mini PC and access the Windows 11 desktop. As far as I know this is not possible, as the HDMI output is tied to the primary PCIe GPU device, which we are not using. You will be limited to using RDP to access your desktop(s). You would need to use full GPU PCIe passthrough for that.

Future Proxmox Kernel Upgrades

As Proxmox gets updated over time with newer Linux Kernel versions, you WILL need to reconfigure DKMS to patch the new kernel. Thankfully this is a pretty simple process. Just follow the section Proxmox Kernel Configuration. This will rebuild the new kernel with the latest DKMS module. 

There are dependencies between the DKMS module and the Linux kernel. Sometimes the DKMS module breaks with newer kernels, or manual tweaks to the DKMS files are needed. DO NOT assume the very latest Proxmox kernel will be successfully patched by DKMS. You can check out the dkms GitHub Issues page and see if there are known issues, or report an issue if you are having problems. If you want to play it safe, after a new kernel comes out I would wait a few days or weeks to see if any issues pop up on the DKMS repo. 

There is a neat solution to prevent Proxmox updates from installing a new kernel. You can use the pin command, as shown below, to allow Proxmox to update everything EXCEPT your kernel. Of course you eventually do need to update the kernel, but this way you can update it on your schedule and after you’ve reviewed forums to see if anyone ran into an issue with the latest Proxmox kernel.

To use the pin command, run the command below to pin your current kernel version.

proxmox-boot-tool kernel pin $(uname -r)

When you are ready to upgrade your kernel, run the unpin command. Update your Proxmox host then re-pin the new kernel.

proxmox-boot-tool kernel unpin
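
To see which kernels are installed and which one (if any) is currently pinned, you can list them; recent Proxmox versions flag the pinned kernel in the output:

proxmox-boot-tool kernel list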

Unable to connect via RDP

If for some reason you can’t connect via RDP to your VM, there is a way to regain a local Proxmox console. This can happen if your password expires, for example. To enable the Proxmox console (and disable vGPU):

  1. Shutdown the Windows VM.
  2. Remove the PCIe VF device attached to your GPU.
  3. Modify the Display hardware property and change it to Default.
  4. Start the VM and wait for the Proxmox console to connect.

Do whatever you need to do to troubleshoot the issue. To re-enable vGPU:

  1. Shutdown the VM.
  2. Change Display to None.
  3. Re-assign the PCIe GPU VF device.
  4. Start the VM.

Kernel Cleanup (Optional)

Over time as you run Proxmox and do routine upgrades, your system can get littered with old kernels. This isn’t a problem per se, but it does waste storage space. If you want to clean up unused kernels, you can use the awesome script by tteck to remove old versions. No need to reboot after the cleanup. This is entirely optional and is only mentioned for good housekeeping.

bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/misc/kernel-clean.sh)"

Summary

The process for configuring Windows 11 for vGPU using VT-d on Proxmox VE 8.1 is a bit tedious, but it has worked very well for me. By using the virtual functions (VFs), you can share the Intel GPU with up to seven VMs at the same time. Remember that each VM on the Proxmox host that you want to use a vGPU must be assigned a unique VF. Using GPU VFs also means you can’t use a monitor physically connected to your Proxmox host to access your VM’s desktop; you have to use Remote Desktop to access your Windows 11 desktop.

Configuring your Proxmox host for vGPU with VFs may have a negative impact on Linux based VMs or LXCs that need GPU resources, such as Plex. If you are using other VMs/LXCs that need GPU resources, be sure to test them thoroughly, including HDR tone mapping. Also remember that when a Proxmox host update installs a newer kernel, you will have to re-patch the kernel with DKMS or your vGPU VFs won’t function. Using the kernel pin command, you can upgrade kernels on your schedule while still allowing other Proxmox updates to install.

Comments

Devedse
November 25, 2023 2:30 pm

Thanks for the update, it helped me update my cluster again.

Miles
November 25, 2023 4:08 pm

Hello, nice rework of your previous post (June), which I had followed to get rid of the error 43 in the Intel GPU driver in my Win11 Pro install. In reading this post, I understand why I couldn’t get my connected screen to show something. It’s just not possible when using vGPU. Ok. But for my NUC, I didn’t manage to get the iGPU passthrough working nor having a display on the connected screen… I own a NUC, a Geekom Mini-IT13, with an Intel i9-13900H with an Iris Xe iGPU. Do you have a working guide like yours to follow for… Read more »

Miles
December 10, 2023 1:48 am
Reply to  Derek Seaman

Hello,
sorry for the delay of my answer, was busy at work and at home with my toddler.

My NUC’s bios doesn’t really come with options… no cpu options…
but under windows (the one pre installed) HWInfo64 says that it’s ok for VT-D.

I’d be glad if you wrote a guide for full iGPU passthrough like this one.

Yiannis
February 7, 2024 3:53 pm
Reply to  Derek Seaman

I would be very glad to read about FUll GPU passthough.

Josep
March 6, 2024 10:22 am
Reply to  Derek Seaman

Derek, I have Nipogi mini PC with an Intel N95 Alder Lake-N with an Intel UHD Graphics with Proxmox 8.1.4 with Home Assistant and Windows 11 Pro and a container that I will use for testing. I also would like to use a full passthrough to be able to use the HD screen connected to the PC through Windows 11 VM. It would be amazing if you could write a post about how to do it. Many thanks.

Richard
November 27, 2023 12:20 pm

For others having an issue with a clean install of Proxmox v8.1 and SR-IOV setup, you’ll need to sign the module built with DKMS. To do this, you’ll need to enrol the key with mokutil --import /var/lib/dkms/mok.pub before you build/install it. You’ll also need to enter the password on 1st reboot, or it will not accept the driver. Hope this helps.

Cloud
November 28, 2023 1:46 am

Amazing guide!
Can it be use for Amd or Nvidia cards?

Eric
November 28, 2023 2:19 am

Hello Derek, Thank you for this updated write up and you others. I find them all easy to follow for what is a very technical concept. Based on your review of the Beelink products I decided it was time I upgraded my whole setup. My aim is to go low power. I usually use Windows for most work but am happy with Linux. I also run a Home Assistant node. I went for the Beelink EQ12 Pro as it will more than cope with the load I need. Added Proxmox 8.0 and installed the 2 VM’s . Initially I thought… Read more »

December 2, 2023 4:27 pm

Can confirm this works on 14th gen chips (14600k). It’s a bit dodgy (as you mention in the article) where sometimes a VM won’t really come online or the drivers don’t install the first time. But other than that it’s great. Hopefully Parsec gets patched (or the driver, whatever is broken) to work since that’s mostly what I want it for. My fresh install of Proxmox 8.1 uses the systemd UEFI bootloader instead of GRUB. To update it run: nano /etc/kernel/cmdline Add “intel_iommu=on iommu=pt i915.enable_guc=3 i915.max_vfs=7” (without quotes) to the end of the first and only line. Then run: proxmox-boot-tool… Read more »

cyvro
December 2, 2023 7:17 pm

Thanks for the writeup! There’s some really interesting tech here. Would this work for intel arc gpu’s? If so, are there features that the (non-intel) cpu and motherboard would need to support in order for this to work?

sam
December 4, 2023 9:30 am

Hello! I have tested on an Intel NUC 13 (i-1340P), Proxmox 8.1.3 (kernel 6.5.11-4), so thought I’d share for people struggling with this / googling: ZFS, secure boot turned off in BIOS -> leads to systemd-boot loader installed by Proxmox -> doesn’t work, I went through the instructions multiple times, but in the end can’t see the 7 VF functions with dmesg | grep i915. Though based on other dmesg output looks like the module gets loaded successfully otherwise, ZFS, secure boot turned on in BIOS -> GRUB loader -> works ext4, secure boot off -> GRUB loader -> works… Read more »

Don Joe
January 10, 2024 7:49 pm
Reply to  sam

Hi Sam, I struggled with this as well. But then I read this:

https://forum.proxmox.com/threads/newly-mirrored-zfs-drive-not-bootable-on-secured-boot.138731/#post-619376

and this:

https://pve.proxmox.com/wiki/Host_Bootloader

For EFI Systems installed with ZFS as the root filesystem systemd-boot is used, unless Secure Boot is enabled. All other deployments use the standard grub bootloader 

So it seems that ZFS + Secure Boot enabled in BIOS = GRUB Loader (NOT systemd-boot)

Vinny
February 27, 2024 1:17 pm
Reply to  sam

Came here to bump up this solution. thanx for this tip. It solved my issue. Had to reinstall proxmox with ext4 without secure boot to have my VF’s up and running.

adm
December 4, 2023 7:10 pm

If anyone wants to use parsec and sunshine, one option is to use an older version of Intel drive.
https://www.intel.com/content/www/us/en/download/741626/780560/intel-arc-pro-graphics-windows.html

Tommzi
January 17, 2024 10:23 am
Reply to  adm

This doesn’t work unfortunately

Tanner
February 10, 2024 10:44 am
Reply to  adm

Worked for me! Thank you!

Stan S
December 6, 2023 8:44 pm

I am getting an error

Building module:
Cleaning build area…
make -j1 KERNELRELEASE=6.2.16-5-pve -C /lib/modules/6.2.16-5-pve/build M=/var/lib/dkms/i915-sriov-dkms/6.2.16-5/build KVER=6.2.16-5-pve……………………………(bad exit status: 2)
Error! Bad return status for module build on kernel: 6.2.16-5-pve (x86_64)
Consult /var/lib/dkms/i915-sriov-dkms/6.2.16-5/build/make.log for more information.

Stan S
December 7, 2023 6:22 pm
Reply to  Stan S

I was able to pass this point after i upgraded Proxmox to 8.1 with 6.5 kernel. Now after I finish all the configuration I get this root@proxmox:~# lspci | grep VGA dmesg | grep i915 00:02.0 VGA compatible controller: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] (rev 04) 0a:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 52) [  0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-6.5.11-7-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on iommu=pt i915.enable_guc=3 i915.max_vfs=7 [  0.067792] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-6.5.11-7-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on iommu=pt i915.enable_guc=3 i915.max_vfs=7 [  4.618943] i915: module verification failed: signature and/or required key missing – tainting kernel [  66.410260] snd_hda_codec_hdmi hdaudioC0D2:… Read more »

Stan S
December 7, 2023 8:48 pm
Reply to  Derek Seaman

Thank you for your reply. The secure boot is not enabled on my proxmox. In the summary, under boot mode I have EFI. How can i change it to secure boot?

youfly
March 5, 2024 10:26 am
Reply to  Derek Seaman

me too
—————————–
xxx@tiao:~# mokutil --sb-state
SecureBoot disabled
Platform is in Setup Mode

—————————————-

[  0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-6.5.13-1-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on iommu=pt i915.enable_guc=3 i915.max_vfs=7
[  0.061695] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-6.5.13-1-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on iommu=pt i915.enable_guc=3 i915.max_vfs=7
[  9.124334] i915: module verification failed: signature and/or required key missing – tainting kernel
[  9.711208] i915 0000:00:02.0: [drm] VT-d active for gfx access

John T Davis
April 25, 2024 3:10 pm
Reply to  Derek Seaman

I’m seeing this same behavior: Secure boot is disabled on my system, Proxmox is NOT booting in secure mode, and it’s still tainting the kernel because it can’t pass module verification. Turning on Secure Boot, even with a fresh install, is not an option on my system. It’s an HP mini PC, and the BIOS makes correctly enabling secure boot an absolute nightmare. I’m going to see if there’s a github issue on this, or open one if there’s not. I suspect the build script is hard coded to generate modules requiring signing (it always generates the MOK keys), and… Read more »

hunter
December 11, 2023 12:29 am

Go to the operation
dkms install -m i915-sriov-dkms -v $KERNEL -k $(uname -r) --force -j 1
dkms status
Hints
Error! Kernel headers for kernel 6.5.11-4-pve could not be found.
Please install the linux-headers-6.5.11-4-pve package,
or use the --kernelsourcedir option to tell DKMS where it is located.

I don’t know what to do about this error

Pierre
December 18, 2023 12:03 pm
Reply to  hunter

You have to reboot pve after updating the kernel.

Modi
February 5, 2024 2:19 pm
Reply to  hunter
apt-get install linux-headers-6.5.11-4-pve
alice
December 11, 2023 2:50 pm

Out of curiosity; is there a specific reason why the number of virtual functions was chosen to be 7? Is there any specific detriment to setting it to the specific number of vm’s you wish to use or putting it into a gpu pool, and then just assigning the pool to multiple vms?

alice
December 16, 2023 2:53 am
Reply to  Derek Seaman

I think my question might actually be better posed to the i915 dkms driver creator, I can see in the sriov parameter, it just returns the value specified(possibly limiting to u16?). I think the direction where I wanted to head with the question is what/where in the hardware determines the optimal number of vfs. I recognize that since the driver creator ran it with 7, that the number is probably already optimized.

shailesh
December 12, 2023 9:02 am

God bless you.
Thank you so much for this guide. Please keep it updated.

Shailesh
December 13, 2023 1:38 pm

Thanks. Worked like a charm.

Can you please tell how to use this vGPU with ubuntu server VM ?

Gianpaolo
February 27, 2024 12:13 am
Reply to  Shailesh

I’d like to know this as well. Thanks!

lantern
December 21, 2023 5:53 am

Thank you. Following your method, I succeeded in seeing 7 VFs enabled, but these VFs cannot be used on a Synology NAS. Is there any solution?

DOM
December 25, 2023 3:32 am

Hello and thank you for the great instructions.
However, I have a strange error.
After setting up and restarting everything seems to work.
I can see the 7 cards, but after a while they are deactivated.

root@pve:~# dmesg | grep i915
[   0.000000] Command line: BOOT_IMAGE=/vmlinuz-6.5.11-7-pve root=ZFS=/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on iommu=pt initcall_blacklist=sysfb_init pcie_aspm=off pcie_port_pm=off i915.enable_guc=3 i915.max_vfs=7
[   0.186420] Kernel command line: BOOT_IMAGE=/vmlinuz-6.5.11-7-pve root=ZFS=/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on iommu=pt initcall_blacklist=sysfb_init pcie_aspm=off pcie_port_pm=off i915.enable_guc=3 i915.max_vfs=7
[  15.622358] i915 0000:00:02.5: [drm] PMU not supported for this GPU.
[  15.629766] i915 0000:00:02.0: Enabled 7 VFs
[  31.432536] i915 0000:00:02.0: Disabled 7 VFs

Bill
February 17, 2024 6:18 am
Reply to  DOM

Were you ever able to solve this? I’m also working with a 12th gen (and 11th, and 10th…) but I’m having this exact issue with Disabled 7 VFs and then nothing’s available anymore.

TeRraNoX
December 25, 2023 11:15 am

Thanks for the guide; just pointing out that going through the systemd path, the sysfsutils install command is missing. In my case, without it, lspci only displayed 1 VGA.

greenie
January 5, 2024 2:22 am

do we need to have secure boot on for this process? the guide implied IF secure boot was enabled this is what needs to be done. im on 8.1 without secure boot enabled – all process pass other than the reboot at the end. no MOK enrollment screen appears and after the reboot lspci | grep VGA reports just one listing of the gpu. im on i5-12400, vt enabled and secure boot disabled.

greenie
January 6, 2024 6:40 am

Would this guide work with passing a 12th gen chip into a linux VM? Ive followed this guide right up until adding the hardware to the linux VM however the device is not detected or shown within /dev/dri

Ben
February 7, 2024 8:15 pm
Reply to  greenie

Did you get this working?

Tuan
January 11, 2024 9:07 pm

Thx for the post!

It goes without saying, VT-d and SR-IOV should be enabled in BIOS. I spent some time troubleshooting before I realized SR-IOV was not enabled. After I made sure both were enabled, it all worked great!

Paul
January 24, 2024 12:32 am
Reply to  Tuan

What was the symptom of SR-IOV not being enabled?

Thomas
March 13, 2024 5:00 am
Reply to  Derek Seaman

Exactly. In my case, SR-IOV is not an available option in my old ThinkServer TS440. With VT-d enabled, I am able to passthrough GPU but VFs does not show up.

Tommzi
January 17, 2024 10:26 am

This is great article! I followed it for my Proxmox 8.1 kernel 6.5.11-7-pve with Windows 10 VM setup. At first I couldn’t get the VFs work but I realized that I started the procedure on fresh Proxmox install so as soon as I ran “apt update && apt upgrade” the kernel version changed after the next reboot which takes place at later point! I think such a warning/remark could be added to this intruction. Eventually I reinstalled Proxmox, updated packages and kernel, REBOOTED the server and only then I started this whole procedure. Everything works fine for me! PS Indeed… Read more »

Paul
January 24, 2024 5:58 pm

The instructions were superb. The only deviation i made was having to install the iris drivers before removing the display from the vm hardware config. The vm would not boot otherwise. Now with the gpu, i get get a “Windows has stopped this device because of an error”. If i remove it in device mangler, windows finds it and shows it installs fine, but i get no gpu tab in task manager. After a reboot, same thing. Little x in device mangler. I have shut down and restarted the vm several times, restarted the node. Clean device driver install. Uninstall… Read more »

Rob
January 25, 2024 8:06 am

Hello, I followed your guide and rebuilt my kernel. However, I was not able to get my driver to work. After a few days of troubleshooting, I realized that my CPU (Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz) does not have a vGPU.

My question now is, how do I roll back these changes to get to my stock Proxmox kernel?

> pveversion
pve-manager/8.1.4/ec5affc9e41f1d79 (running kernel: 6.5.11-7-pve)

David LaPorte
January 28, 2024 9:18 am

Great tutorial, thank you! One issue I encountered is that the i915 driver crashes if you’re also using a discrete Intel GPU. It was necessary to add “softdep drm pre: vfio-pci” to /etc/modprobe.d/drm.conf and “options vfio-pci ids=8086:56a0” to /etc/modprobe.d/vfio-pci.conf for my Arc A770. Once the i915 didn’t have to deal with it, everything went as expected. A couple questions: I have an AlderLake-S (desktop) CPU – based on the table here, should I set “i915.enable_guc=2” instead? With or without SR-IOV enabled, I’m getting the dreaded “Code 43” in a Windows 11 guest. Does anyone have any idea what BIOS magic,… Read more »

David LaPorte
January 28, 2024 9:25 am

Just after writing my host message, I realized that I had not set the CPU type to “host” as you described. I did that, and it works!

The comment about the discrete intel card still stands, probably worth calling out in your tutorial.

Thanks again for putting this together.

Luca
January 28, 2024 11:38 am

Hi,
I’ve proxmox 8.1 and I’m using this device from Nipogi AD08.

I’ve configured the passthrough, added the video card “Alder Lake-P GT1 [UHD Graphics]” as PCI (without the flag PCI otherwise Windows is returning error 43) but at the end HDMI is not working.

Any suggestion / path to follow (i executed the guide here reported and no issue detected)

hubert
February 6, 2024 6:25 am

Hi,
I follow all your steps, but it doesnt work.
my CPU is 12100, Intel VT-d is enabled, secure boot turned off , kernel:6.5.11-8

root@www:~# uname -r
6.5.11-8-pve

root@www:~# dmesg | grep i915
[  18.836824] i915 0000:00:02.0: [drm] fb0: i915drmfb frame buffer device
[  19.498743] i915 0000:00:02.0: not enough MMIO resources for SR-IOV
[  19.580350] i915 0000:00:02.0: [drm] *ERROR* Failed to enable 7 VFs (-ENOMEM)
[  19.663001] i915 0000:00:02.0: not enough MMIO resources for SR-IOV
[  19.743905] i915 0000:00:02.0: [drm] *ERROR* Failed to enable 7 VFs (-ENOMEM)

Costello
February 23, 2024 10:08 am
Reply to  hubert

Hi, I have got the same error and fixed it by enabling “Transparent Hugepages” in BIOS. Got a CWWK 13th gen i5 1335U and found it somewhere near in the CPU section in BIOS.

JRizzo
April 3, 2024 9:51 pm
Reply to  Costello

Hello, for AMI Bios you must enable *Above 4GB MMIO* under chipset settings.

Hygaard
February 9, 2024 9:08 pm

When I run lspci | grep VGA I get the expected output: 00:02.0 VGA compatible controller: Intel Corporation Raptor Lake-P [Iris Xe Graphics] (rev 04) 00:02.1 VGA compatible controller: Intel Corporation Raptor Lake-P [Iris Xe Graphics] (rev 04) 00:02.2 VGA compatible controller: Intel Corporation Raptor Lake-P [Iris Xe Graphics] (rev 04) 00:02.3 VGA compatible controller: Intel Corporation Raptor Lake-P [Iris Xe Graphics] (rev 04) 00:02.4 VGA compatible controller: Intel Corporation Raptor Lake-P [Iris Xe Graphics] (rev 04) 00:02.5 VGA compatible controller: Intel Corporation Raptor Lake-P [Iris Xe Graphics] (rev 04) 00:02.6 VGA compatible controller: Intel Corporation Raptor Lake-P [Iris Xe… Read more »

James
February 16, 2024 3:34 pm
Reply to  Hygaard

Ignore

James
February 19, 2024 11:22 am
Reply to  Hygaard

Hygaard and Derek- Many thanks for providing this tutorial. I am using this on a new Minisforum MS-01 with 13th Gen Raptor Lake iGPU. Like Hygaard, I have no output from “dmesg | grep i915”. Also like Hygaard, I can proceed and everything appears to work. In my case however, I am seeing unstable reboots in both the VM and the node hosing it. I am a hobbyist user of Proxmox so my understanding of issues can be limited. In the Syslog of the node I can see this 12x when starting the VM: Feb 19 12:56:36 pve01 QEMU[9707]: kvm:… Read more »

Fabio
March 18, 2024 10:26 am
Reply to  James

How did you configure i915 on MS-01? I can’t get the VFs in lspci

[    4.817548] i915 0000:00:02.0: 7 VFs could be associated with this PF
[    4.854429] fbcon: i915drmfb (fb0) is primary device
[    4.943160] i915 0000:00:02.0: [drm] fb0: i915drmfb frame buffer device
Hygard
March 18, 2024 10:33 pm
Reply to  Fabio

You may need to do the MOK setup part again, or if you’ve updated the Kernel, you’ll have to redo the build steps again. When I had that happen it was because I had updated my kernel and just had to restart from the beginning of the guide.

Hygard
March 18, 2024 10:30 pm
Reply to  James

For what it’s worth, I am also using an MS-01 with 13900H.
I have not run in to the same issues, so I can’t help much.

If it’s any help I am using Proxmox with Kernel Version:

Linux 6.5.11-8-pve (2024-01-30T12:27Z)

Daniel
February 13, 2024 4:43 am

Hi All, Thank you for the detailed and updated article. I have a “8 x 12th Gen Intel(R) Core(TM) i3-1215U (1 Socket)” Installed and configured according to the article and everything is working excellent, meaning that i have two Windows VMs (10 and 11) both configured with vFR and running concurrently. My issue starts with the macOS VM i have which is configured (according to the article on installing mac on proxmox) with “Display: VMware compatible). when i start this VM, i see in the dmesg that proxmox “Disabled 7 VFs” , and from there on, there is no more… Read more »

Ric
February 14, 2024 3:43 am

Kudos! Great guide thanks.
I was able to enable GPU passthrough in my brand new Dell Optiplex Micro 7010 (Released in 2023) w/ Intel 13th i5 processor and UHD integrated video card. Everything seems working fine but sometimes my RDP gets disconnected and I find this trace in syslogs:

Feb 14 10:52:07 pve kernel: e1000e 0000:00:1f.6 enp0s31f6: Reset adapter unexpectedly
Feb 14 10:52:07 pve kernel: vmbr0: port 1(enp0s31f6) entered disabled state

No errors before enabling GPU passthrough.
Any clue?

Ric
February 16, 2024 12:37 am
Reply to  Ric

Adding other logs as separate msg due to limits on length:

Feb 14 10:52:03 pve kernel: e1000e 0000:00:1f.6 enp0s31f6: Detected Hardware Unit Hang:
  TDH                  <2e>
  TDT                  <69>
  next_to_use          <69>
  next_to_clean        <2d>
buffer_info[next_to_clean]:
  time_stamp           <100023e7c>
  next_to_watch        <2e>
  jiffies              <100024100>
  next_to_watch.status <0>
MAC Status             <80483>
PHY Status             <796d>
PHY 1000BASE-T Status  <3800>
PHY Extended Status    <3000>
PCI Status             <10>

After days of intensive use I can confirm that setting Intel E1000 instead of VirtIO solves this issue.

Daniel
February 14, 2024 12:03 pm

Hi Derek,
(for some reason, my post was deleted),
I installed and configured everything correctly, and able to run two windows VM, but when i start a macOS VM, i see in dmesg “Disabled 7 VFs”. the macOS is configured as “Display: VMware compatible” . how can i configure the macOS VM correctly ?

Chris
February 19, 2024 5:52 pm
Reply to  Daniel

I had the same issue. I found that I had a VM auto starting trying to use PCIe IDs 0.

As said above.. only use PCIe IDs 1-7

Using PCIe IDs 0 seems to disable the 7 VFs.

Ric
February 23, 2024 12:23 pm

Greetings, GPU passthrough in Optiplex 7010 (2023) i5 13th gen makes my VM unstable and unresponsive, shows often black screen or graphic glitches, so I need to kill/stop it. When starting it up I see in Syslogs (not sure it’s linked to this issue): Feb 23 20:00:36 pve QEMU[782076]: kvm: VFIO_MAP_DMA failed: Invalid argument Feb 23 20:00:36 pve QEMU[782076]: kvm: vfio_dma_map(0x557568db8460, 0x383800000000, 0x20000000, 0x7fcba0000000) = -22 (Invalid argument) Feb 23 20:00:36 pve QEMU[782076]: kvm: VFIO_MAP_DMA failed: Invalid argument Feb 23 20:00:36 pve QEMU[782076]: kvm: vfio_dma_map(0x557568db8460, 0x383800000000, 0x20000000, 0x7fcba0000000) = -22 (Invalid argument) The only way I’ve found to mitigate is… Read more »

Peter
February 26, 2024 5:12 am

Will this work with a 10th gen intel processor? or if the only vm that needs the gpu is windows is it possible to just pass it to that?

Ryan
February 26, 2024 3:11 pm

Bloody legend you are! Works great on 6.5.13-1-pve. I now have GPU passthrough for both Windows 11 and ttek Plex LXC

Subduplicate
February 27, 2024 5:20 pm

NUC 12 with a 12700H So I’m getting stuck just before the Secure Boot MOK Configuration step. When I reboot the system hangs at: i915 0000:00:02.0: 7 VFs could be associated with this PF i915 0000:03:00.0: [drm] VT-d active for gfx access And I don’t even get the option to complete the MOK step. It’s happened to me on two kernels, 6.5.11-8-pve and 6.5.13-1-pve. I end up having to hold the power button to turn off, then when it boots gain it does so normally, but I don’t get video out past EFI stub: Loaded initrd from LINUX_EFI_INITRD_MEDIA_GUID device path… Read more »

Zabbli
March 1, 2024 4:18 am

Thanks for the great guide! Small note for others. The Display Adapter shows up as Intel(R) UHD Graphics instead of Intel(R) Iris(R) Xe Graphics if you’re not running a dual-memory channel configuration.

Ethan Scott
March 16, 2024 2:23 pm

Hey Derek,

Have you seen this github
https://github.com/intel/Display-Virtualization-for-Windows-OS
With this following Driver from Intel.
https://www.intel.com/content/www/us/en/download/737311/iotg-display-virtualization-drivers.html

This white paper coauthored by intel, lead me down this path.
https://us.dfi.com/Uploads/DownloadCenter/5631e304-28b2-4256-975a-5689750b5636/Intel%20iGPU%20(Integrated%20Graphics)%20SR-IOV%20-%20The%20Catalyst%20for%20IoT%20Virtualization%20in%20Factory%20Automation.pdf?timestamp=1676441838.9072

The Rundown on the first link is how to pass a physical display port (no more than 4 virtually) upon vms. This can be useful for people wishing to have physical display access their vms. more importantly i believe this will fix the parsec issues other are reporting along with sunshine issues I am having, when deployed with a display dummy plug.

Thank You for your existing work. Its helped me soo much.

Steven
March 18, 2024 7:25 am

Hi, I am using Proxmox 8 with Grub, all is good, except after reboot I get:

unknown parameter 'max_vfs' ignored

Any help in trouble shooting? I actually only need 1 for a Windows, dont think I need the 7

bbdf df
March 19, 2024 12:11 pm

Just saying thanks broke my install with the new kernel and used your guide and it fixed it perfectly. Previously used the guide from last summer.

FL92
March 31, 2024 5:01 pm
Reply to  bbdf df

Hi guys, I ran into a bad problem. After trying the guide without success (Intel N305), when I try to format Proxmox with a live USB, or simply to install another distro, I get a strange message that prevents me from booting another system. Any advice on how to fix it?

The message Is: mokmanager not found
Something has gone seriously wrong: import_mok_state() failed

FL92
March 31, 2024 5:03 pm

Hi guys, I ran into a bad problem. After trying the guide without success (Intel N305), when I try to format Proxmox with a live USB, or simply to install another distro, I get a strange message that prevents me from booting another system. Any advice on how to fix it? The… Read more »

FL92
April 1, 2024 7:03 pm
Reply to  FL92

Basically I solved it by deleting some entries from the EFI shell.

Fe Sch
April 3, 2024 3:18 pm

Is there a documented procedure or best practice for reversing these steps?

dva411
April 6, 2024 8:56 pm

Just like several others, I was unable to bifurcate into 7 VFs using zfs without secure boot enabled. However, after trying, my existing Jellyfin LXC was pegging CPU with multiple transcodes (usually doesn’t break a sweat). I tried to reverse out the key changes, but was unable to solve the new issue. Therefore, I went ahead and enabled secure boot in my bios and reinstalled proxmox. I’m tempted to give it a whirl with my new grub configuation, but before I do, I have a couple of questions: Derek mentioned that this could negatively impact LXC’s and may not work… Read more »

Scyto
April 8, 2024 10:56 am
Reply to  dva411

secure boot is not required, also it is possible to convert an existing install to secure boot if you need it (especially if you already use grub/efi)

Scyto
April 8, 2024 10:55 am

If you are getting code 43 errors on the GPU after following this guide (or worse automatic repair loops) note that if you enable WSL or anything that configured windows hello (using a microsoft account at install time, using AAD join) then this will cause issues after a few reboots of the VM.

Make sure you use only a local account install and never AAD join / install WSL / bind to a microsoft account.

Edemir
April 12, 2024 7:16 am

Thanks for the great article. I first had a lot of problems because of the newer kernel, but once I discovered I had to use the older kernel, everything went as expected.

I have one doubt, MY gpu is discovered by windows is working normal, but when I run dmesg | grep i915 I see this error in each virtual GPU:

[  5.386761] i915 0000:00:02.3: [drm] *ERROR* GT0: IOV: Unable to confirm version 1.9 (0000000000000000)
[  5.386931] i915 0000:00:02.3: [drm] *ERROR* GT0: IOV: Found interface version 0.1.9.0

Is this something I have to worry about or its normal ?

THanks in advance

John T Davis
April 21, 2024 3:21 pm

Thank you so much for this guide.

Unfortunately, I didn’t make it past step 4. I Got the “bad return status” error trying to run the “install” command.

What do I need to do to make sure I’ve got the system back in the state where I started?
I’ve run the cleanup commands at the beginning of the article, but I’m not sure if I need to do anything else.

Thanks!

visual_delight
April 23, 2024 9:02 pm

Excellent guide. I was able to get the VFs setup and use it on a windows VM. However, I want to connect local monitor to the HDMI output, and seems like there is no way to do that with a vGPU. How do I go back to using GPU passthrough? Is fresh proxmox install the only way?

Jelger Haanstra
April 24, 2024 10:43 am

Tried following this guide but it does not seem to work with kernel version 6.8.4-2. As mentioned above the dkms install command exits with the following error.

make -j1 KERNELRELEASE=6.8.4-2-pve -C /lib/modules/6.8.4-2-pve/build M=/var/lib/dkms/i915-sriov-dkms/6.8.4-2/build KVER=6.8.4-2-pve.....(bad exit status: 2)
Error! Bad return status for module build on kernel: 6.8.4-2-pve (x86_64)
Consult /var/lib/dkms/i915-sriov-dkms/6.8.4-2/build/make.log for more information.
i915-sriov-dkms/6.8.4-2: added

The mentioned log contains errors the following. I can paste it all here but here is a part of it. Anyone know how to proceed?

/var/lib/dkms/i915-sriov-dkms/6.8.4-2/build/drivers/gpu/drm/i915/intel_device_info.c:220:9: error: implicit declaration of function ‘INTEL_MTL_M_IDS’; did you mean ‘INTEL_MTL_IDS’? [-Werror=implicit-function-declaration]
  220
Jelger Haanstra
April 25, 2024 2:06 pm
Reply to  Derek Seaman

Thanx for the heads up. Already switched back to kernel 6.5 and got everything set up. Thanx for this guide, much appreciated.

Miles
April 25, 2024 11:33 am

Hello Derek,
Do you think the new version 8.2 proxmox VE will be able to get this tutorial through ?

Miles
April 25, 2024 3:11 pm
Reply to  Derek Seaman

Thanks for the answer. That comforts me in not doing vGPU for my Plex server, as I read everywhere that vGPU broke the driver implementation in the Plex server, and broke hardware transcoding including the HDR tone mapping that I need! So I think I’ll do a full iGPU passthrough to one VM and use inside it the Docker PMS and Nextcloud+Memories, which both need access to /dev/dri. I hope I’ll manage to do this. You didn’t write something for full Intel iGPU passthrough by chance? I read the article on the LXC but I want to… Read more »

Miles
April 25, 2024 11:45 am

Hello again ,
I was thinking about to have the screen working for the proxmox host in case of something get wrong with the vGPU?
Is it possible to have a backup kernel version to boot too in such case?
I don’t know if it’s even possible…
but I hope there is a way to get the proxmox console to be accessible directly with a screen connected to the host.

Thanks for the help 😊

John T Davis
April 25, 2024 3:05 pm
Reply to  Miles

The easiest thing to do, if you have an available PCIe slot, is to install a serial port card and enable it for TTY access in the kernel command line. That way, you can use a serial cable to get in, and see the entire boot-up process. With a USB Serial adapter, you won’t see any of the startup until the login prompt appears, so if it crashes before it gets to the login prompt, you won’t be able to see anything. LIkewise, SSH will only work if the system makes it far enough to activate the SSH server. If… Read more »