LazerDickMcCheese@sh.itjust.works (OP) to Selfhosted@lemmy.world • [Proxmox] Jellyfin w/ NAS mount + iGPU passthrough (English)
1 · 6 days ago
So this looks good then?
1 · 7 days ago
Yes, just using the iGPU. Thought about an Nvidia card, but setting it up sounded like torture, so just whatever is on the i5-13500 for now.
2 · 7 days ago
In case you want to keep following, I did make that post in c/jellyfin.
1 · 7 days ago
So I started this post with many intertwining issues, but most of them have been resolved thanks to extensive help. At this point, most of my issues are Jellyfin-specific, so I made a new post in c/jellyfin. But thank you, I’ll be trying your method if mine continues to fail me.
2 · 7 days ago
Yeah, it seems like transplanting LXCs, VMs, and docker is fairly pain-free…where I really shot myself in the foot was starting on an underpowered NAS, and network transfers are clearly not my friend.
I’m not familiar with the backup stuff, but I remember hearing about it being added recently. I’ll look into it, thanks for the recommendation.
You taught me a lot of stuff in just a couple days. The overwhelming/anxious part of dealing with Proxmox for me is still the pass-through of data from outside devices. VMs aren’t bad at all, but everything else seems like a roll of the dice to see if the machine will allow the connection or not
1 · 7 days ago
I tried taking a screenshot of the full page to show you, but yes, it’s set to QSV and /dev/dri/renderD128. I’ve tried QSV and VAAPI with similar results; I’m sticking with QSV for now as it’s Jellyfin’s official recommendation. I’ve enabled decoding for H264, HEVC, VP9, and AV1. I’ve enabled hardware encoding for H264 and HEVC. If I disable transcoding completely it works fine, but some of the streaming devices need 720p functionality (ideally to transcode down to 4:3 480i).
1 · 7 days ago
Great point actually, time for c/jellyfin I think. Would you mind helping me with the transfer of config and user data? Is “NFS mount NAS docker data to host” > “pass NFS to jelly LXC” > “copy data from NAS folder to LXC folder” the right idea?
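That three-step idea can be sketched roughly like this. The NAS IP and export path below are hypothetical placeholders; CT 101 comes from this thread, and the destination paths assume the Debian-package layout inside the LXC (the docker image keeps everything under one /config, so the exact source and destination folders depend on the setup):

```shell
# On the Proxmox host: mount the NAS docker share (IP/export are placeholders).
mkdir -p /mnt/nas-docker
mount -t nfs 192.168.1.50:/volume2/docker /mnt/nas-docker

# Bind-mount it into the Jellyfin container (CT 101).
pct set 101 -mp1 /mnt/nas-docker,mp=/mnt/docker-data

# Inside the container: stop Jellyfin, copy the old docker config over,
# fix ownership, and start it back up.
pct exec 101 -- systemctl stop jellyfin
pct exec 101 -- cp -a /mnt/docker-data/jellyfin/config/. /etc/jellyfin/
pct exec 101 -- cp -a /mnt/docker-data/jellyfin/data/. /var/lib/jellyfin/
pct exec 101 -- chown -R jellyfin:jellyfin /etc/jellyfin /var/lib/jellyfin
pct exec 101 -- systemctl start jellyfin
```

The bind mount only lets the container see the files; the copy is what actually moves the install, so Jellyfin keeps working even after the mount is removed.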
1 · 7 days ago
So should I be disabling some hardware decoding options, then?
1 · 7 days ago
QSV and ‘/dev/dri/renderD128’. I’ll switch to VAAPI and see…
Edit: no luck, same error.
1 · 7 days ago
Ok, consider it done! My concern is this section of the admin settings:

I followed Intel’s decode/encode specs for my CPU, but there’s no feedback on my selection. I’m still getting “Playback failed due to a fatal player error.”
1 · 7 days ago
LXC is fine with me; the “new Jellyfin” instance is mostly working anyway. It just has a few issues:
- Config and user data from “old Jellyfin” isn’t there and doesn’t want to connect. I tried connecting my NAS’s docker data to Prox host like the previous mount, but it doesn’t like it.
- Aforementioned HWA errors (I’m guessing I checked an incorrect box)
- Most data from the NAS isn’t showing up. I added all libraries and did a full rescan and reboot, but most of the media still isn’t there. I’m hoping passing config data will fix that
And yes, I see card0 and renderD128 entries. ‘vainfo’ shows VA-API version: 1.20 and Driver version: Intel iHD driver…24.1.0
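Since card0/renderD128 and a working vainfo are already visible, the passthrough itself looks healthy; for reference, iGPU passthrough to an LXC usually comes down to a couple of lines in the container config. A typical sketch for a privileged container, assuming CT 101 (the gid in the commented alternative is an example, not a known value):

```
# /etc/pve/lxc/101.conf -- typical iGPU passthrough for a privileged CT
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir

# Proxmox 8+ alternative: pass the render node as a device entry instead,
# mapped to the container's render group (check /etc/group inside the CT):
# dev0: /dev/dri/renderD128,gid=104
```

If the devices show up and vainfo reports the iHD driver, a remaining “fatal player error” is more likely a Jellyfin codec-selection or ffmpeg issue than a Proxmox one.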
2 · 8 days ago
I used the community script’s LXC for jelly. With that said, the docker compose I’ve been using is great, and I wouldn’t mind just transferring that over 1:1 either…whichever has the best transcoding and streaming performance. Either way, I’m unfortunately going to need a bit more hand-holding.
1 · 8 days ago
I solved the LXC boot error; there was a typo in the mount (my keyboard sometimes double-presses letters, which makes command lines rough).
So just to recap where I am: main NAS data share is looking good, jelly’s LXC seems fine (minus transcoding, “fatal player error”), my “docker” VM seems good as well. Truly, you’re saving the day here, and I can’t thank you enough.
What I can’t make sense of is that I made 2 NAS shares: “A” (main, which has been fixed) and “B” (currently used docker configs). “B” is correctly connected to the docker VM now, but “B” is refusing to connect to the Proxmox host, which I think I need in order to move the Jellyfin user data and config. Before I go down the process of trying to force the NFS or SMB connection, is there any easier way?
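One easier route that avoids mounting “B” on the host at all: pull the files straight off the NAS from inside the Jellyfin LXC over SSH. A sketch, assuming the NAS has SSH enabled and the LXC uses the Debian-package paths (user, IP, and paths are placeholders):

```shell
# Inside the Jellyfin container (pct exec 101 -- bash, or its console):
apt install -y rsync openssh-client
systemctl stop jellyfin

# Pull the old docker config directly from the NAS over SSH -- no host
# mount required, so the refusing SMB/NFS connection never comes into play.
rsync -a admin@192.168.1.50:/volume2/docker/jellyfin/config/ /etc/jellyfin/
rsync -a admin@192.168.1.50:/volume2/docker/jellyfin/data/ /var/lib/jellyfin/

chown -R jellyfin:jellyfin /etc/jellyfin /var/lib/jellyfin
systemctl start jellyfin
```

Since this is a one-time migration rather than an ongoing mount, a single rsync is usually simpler and leaves nothing to maintain afterward.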
2 · 8 days ago
Thanks! It’s not easy, but I don’t give up. Took me 3 years to get the *arrs running, but I eventually got it.
1 · 8 days ago
Well, now the jelly LXC is failing to boot with:
run_buffer: 571 Script exited with status 2
lxc_init: 845 Failed to run lxc.hook.pre-start for container "101"
But the mount seems stable now. And the VM is Debian 12
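When a pre-start hook fails like that, the two-line summary rarely shows the real cause; running the container in the foreground with debug logging usually does. A sketch, using CT 101 from this thread:

```shell
# Run the container in the foreground with full debug logging.
lxc-start -n 101 -F -l DEBUG -o /tmp/lxc-101.log

# Scan the log for the actual failure.
grep -iE 'error|fail' /tmp/lxc-101.log

# A typo'd mp0/mp1 host path is a common culprit for pre-start failures,
# so the container config is worth a look too.
cat /etc/pve/lxc/101.conf
```

This matches what was found here: a typo in the mount line, which the debug log would have pointed at directly.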
1 · 8 days ago
Ah, that distinction makes sense…I should’ve thought of that.
So for the record, my Jellyfin-lxc is 101 (SMB mount, problematic) and my catch-all Docker VM is 102 (haven’t really connected anything, and I don’t care how it’s done as long as performance is fine)
1 · 8 days ago
Yes! I do.
1 · 8 days ago
Are there different rules for a VM with that command? I made a 2nd NAS share point as NFS (SMB has been failing, I’m desperate, and I don’t know the practical differences between the protocols), and Proxmox accepted the NFS, but the share is saying “unknown.” Regardless, I wanted to see if I could make it work anyway, so I tried ‘pct set 102 -mp1 /mnt/pve/NAS2/volume2/docker,mp=/docker’.
102 being a VM I set up for docker functions, specifically transferring docker data currently in use to avoid a lapse in service or user data.
Am I doing this in a stupid way? It kinda feels like it
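For what it’s worth, `pct` only manages containers, and mount points (`-mp1`) are an LXC feature, so that command can’t attach anything to a VM. For a VM, the usual approach is to mount the NFS share inside the guest itself. A sketch for the Debian 12 VM (the NAS IP and export path are placeholders):

```shell
# Inside the Debian 12 VM (102), not on the Proxmox host:
sudo apt install -y nfs-common
sudo mkdir -p /mnt/docker-data
sudo mount -t nfs 192.168.1.50:/volume2/docker /mnt/docker-data

# Make it survive reboots (_netdev delays the mount until networking is up).
echo '192.168.1.50:/volume2/docker /mnt/docker-data nfs defaults,_netdev 0 0' \
  | sudo tee -a /etc/fstab
```

This keeps the docker data path identical before and after the move, which helps avoid a lapse in service for the compose stack.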
1 · 8 days ago
I restarted everything like you suggested; same ‘showmount’ result, unfortunately…I double-checked the SMB mount in the datacenter, and the settings look correct to me. The NAS’s storage icon shows that it’s connected, but it seems like that doesn’t actually mean it’s *firmly* connected.
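A couple of quick checks from the Proxmox host can separate “the NAS isn’t serving the share” from “the mount settings are wrong” (the IP and username are placeholders):

```shell
# NFS side: is the NAS actually exporting anything to this host?
showmount -e 192.168.1.50
rpcinfo -p 192.168.1.50    # mountd and nfs should both be registered

# SMB side: can these credentials list shares at all?
smbclient -L //192.168.1.50 -U admin
```

If `showmount -e` comes back empty, the fix lives on the NAS (export list or allowed-hosts settings), not in the Proxmox datacenter config.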
Yeah, I’m about to start the process of trashing the system and starting anew with Ubuntu Server. Even if I had 24/7 community support, I think I’d still dread dealing with Proxmox. The whole reason I hopped on the Prox train was that videos make it seem like an alternative to deep-diving into the CLI…but everything I’ve been doing is CLI, so screw it.