Y'all are trans, and I know how much you trans Zoomers code (no idea if that's a Linux thing or not tho), so I thought I might try Hexbear for some tech help. I'm fried, I've watched so damn many tutorials and read so many threads on this but I can't seem to work it out. I barely know a few basic commands in the console, and the alphabet soup of different directories, programs, and the very language that's used to discuss Linux is too much for me to process. I've learned a lot, but I need some help.

I've built a pretty nice server that's not doing much right now besides NAS storage. I'm running a TrueNAS Scale VM on Proxmox, I'm filling it up with all my pirate booty, and I want to watch it through Jellyfin, which I have installed in an LXC container (unprivileged for security; I've tried it both privileged and unprivileged and can't get it working). The problem is, how do I get jelly to see the NAS drive? I don't know how to map it one way or another. I'm running the storage through an HBA in a ZFS mirror with an SMB dataset that I can see and access just fine in Windows, but jelly seems to be stuck in its own little world.

I've seen things about creating users within jelly, which I tried, and it just tells me that the user I supposedly created with SMB credentials doesn't exist. Tried using the GUI to find the NAS via IP, no dice. I'm fucking tired, I've been at it for a week or so now, I just want to watch a movie this weekend.

  • tactical_trans_karen [she/her, comrade/them] · 1 year ago

    Alright, I took a few days' break to start fresh on it again. There's a lot of info here and I don't understand a lot of it, so I hope you don't mind, but I need to kind of go line by line.

    consider running ZFS directly on the host rather than from the VM.

    I'm doing that too. I had two 500 GB 2.5" drives from old laptops that I use. The VM has passthrough access via the HBA to two 16 TB drives, one 2 TB, and one 1 TB, and I have an 8 TB and a 500 GB that I'll plug in later. As for running RAIDZ2, it'll be a while before I have the funds. The data on these things is pirated and replaceable, plus I don't need to maintain uptime. The two 16 TB drives are in a mirror right now nevertheless. I should note too, I'm running an i5-4570 with 32 GB of DDR3-1600.

    The downside here would be that you'd be using ZFS on Linux rather than the slightly more feature-rich upstream ZFS running on BSD, but imo that tradeoff is worth it.

    What is BSD?

    Paragraph 3

    What is qemu? What is cloud-init and what does being upstream mean?

    Sorry, lots of jesse-wtf

    Paragraph 4

    Got it, I understand virtualization pretty well.

    Valid answers

    Okay, so I've already set up TrueNAS Scale with an SMB share; I can access it all over my network from Windows and load and play things from it without issue. Loads pretty quick with the 2.5 Gb Ethernet too! 😀 Setting up permissions in TrueNAS is a little weird to me, but I've been figuring it out. I followed a tutorial that had me do a lot of what seemed to be unnecessary stuff to the ACL at first. Also, this is my second go-around: I tried TrueNAS Core first and couldn't get things quite right, and I also tried Jellyfin in a TrueNAS jail and it wasn't working right. Anyway, I digress.

    If your videos are on a single zfs volume/dataset, then using zfs' built-in support for NFS is my recommendation. If you only want to share a subdirectory, then I'd still use NFS but I'd set up /etc/exports (or equivalent) myself. Ideally you'd put a password on the share.
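
    A minimal sketch of both approaches, assuming a pool named tank with a media dataset (names made up; on TrueNAS you'd normally do this from the Shares screen in the web UI rather than the shell):

    ```
    # Option 1: let ZFS manage the NFS export itself (run as root on the NAS)
    zfs set sharenfs="ro=@192.168.0.0/24" tank/media

    # Option 2: export just a subdirectory by hand in /etc/exports,
    # then apply it with: exportfs -ra
    /tank/media/movies  192.168.0.0/24(ro,all_squash)
    ```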

    Wut? I don't really know what NFS is; that's a file-sharing protocol like SMB, right? I'm pretty sure I want to stick with SMB for ease of use from my Windows-based clients. Is there a more layman's term for volume/dataset?

    Once shared, you can set up your lxc container to read-only mount the NFS share. Do this by pointing it at the (hopefully static!) IP address of your NAS using proxmox tools + docs. If both are running via proxmox, then the bridge interface should already be set up for you on the host, so you just need to make sure both virtualized environment IPs are on your LAN CIDR, so like 192.168.0.[1-255] or whatever (there are other options but this is simplest). This is under the network tab for the VM or lxc container. You may need to restart the lxc container for the NFS share to be mounted.
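
    An unprivileged lxc container usually can't mount NFS itself, so the common pattern is to mount on the Proxmox host and bind-mount into the container. A sketch, with made-up IP, paths, and container ID:

    ```
    # On the Proxmox host: mount the NAS export
    mkdir -p /mnt/nas-media
    mount -t nfs 192.168.0.10:/mnt/tank/media /mnt/nas-media

    # Bind-mount it read-only into the Jellyfin container (ID 101) at /mnt/media
    pct set 101 -mp0 /mnt/nas-media,mp=/mnt/media,ro=1

    # Restart the container so the mount point shows up
    pct reboot 101
    ```

    If you stick with SMB, the host-side mount would instead be something like `mount -t cifs //192.168.0.10/media /mnt/nas-media -o username=youruser,ro` (needs the cifs-utils package). Either way, add the mount to the host's /etc/fstab so it comes back after a reboot.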

    I don't know how to actually change subnet settings or assign static IPs. I'm running DD-WRT on my router, which is set as my gateway, and I have DHCP on. I don't know where to click and what to put into these different platforms to make those changes. Initially I had to look on my router to see what IP it assigned my Jellyfin container and then change the IP in Proxmox in order to access the web interface. And I don't know how to mount that NFS share, let alone the SMB share that I have set up. Like, you say proxmox tools + docs and I don't know what that is. I also don't know how to check if they're on the same LAN CIDR (what is that btw?).

    Then just point jellyfin at the local path (on the lxc container) that the NFS share is mounted on.

    How, where?

    One thing to look out for: don't put your jellyfin database (in its config dir) on a rw NFS share. It's an SQLite database and due to how NFS works there is a risk of database corruption. By default the database will still be in your lxc container and not on any NFS share, just wanted to mention this potential future pitfall.

    Speaking Greek to me here, comrade. I think you're saying don't install Jellyfin on the share drive? In which case I haven't; it's on the mirrored 500 GB drives that I run Proxmox from.

    Edit: I personally use iSCSI or much weirder things for media volume sharing and forgot that NFS (and Samba) doesn't send inotify events by default. So you may want to look into iSCSI or doing specific workarounds that simulate inotify. inotify is helpful because it tells jellyfin when there are new files available.
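
    A common low-effort workaround, sketched here with a made-up container IP and a placeholder API key (create one under Dashboard -> API Keys in Jellyfin), is just scheduling a periodic library scan through Jellyfin's HTTP API:

    ```
    # /etc/cron.d/jellyfin-scan: trigger a library scan every 15 minutes
    */15 * * * * root curl -s -X POST "http://192.168.0.20:8096/Library/Refresh?api_key=YOUR_API_KEY"
    ```

    Jellyfin also has a built-in periodic scan under Dashboard -> Scheduled Tasks, which gets you most of the way there with no extra setup.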

    Sounds like a nice feature... would I scrap SMB for iSCSI? If it's a lot more technical stuff to learn, I'm not confident it'll be worth it for me. I thought iSCSI was a virtualized passthrough on Proxmox?

    At any rate, thanks for your input; I mostly just know enough to mess things up a lot of the time. I don't even completely understand Linux file systems.

    Edit: I used this helper script to install the Jellyfin LXC: https://tteck.github.io/Proxmox/ Feel free to jump in on the other comment thread too.

    • Maoo [none/use name] · 1 year ago

      Hi! Hopefully I can make some things make more sense.

      What is BSD?

      Ah, well, I thought you were using TrueNAS Core, which is based on FreeBSD rather than Linux. TrueNAS Scale is Linux though, so never mind!

      What is qemu? What is cloud-init and what does being upstream mean?

      These are just some lower-level tools that Proxmox uses to run VMs. qemu manages the virtual machines themselves, and cloud-init is a standardized configuration system for setting up VMs when they first boot. When you look at a VM's settings in Proxmox, under the hood some of them are qemu settings and some are cloud-init settings: the number of cores is a qemu thing, while the virtual network card's IP settings are a cloud-init thing. I'm only mentioning this because if the Proxmox settings aren't making sense, you might want to play around with these tools more directly until they do. Proxmox doesn't really make them easier to understand, just easier to discover and set in one interface.
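
      To make that concrete, you can poke at the same settings from the Proxmox host's shell, which shows which layer owns what (VM ID 100 is a placeholder, and the ipconfig option assumes the VM has a cloud-init drive attached):

      ```
      # qemu-level setting: how many CPU cores the VM gets
      qm set 100 --cores 4

      # cloud-init-level setting: the IP configuration handed to the guest
      qm set 100 --ipconfig0 ip=192.168.0.26/24,gw=192.168.0.1

      # Dump the whole config to see everything in one place
      qm config 100
      ```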

      Wut? I don't really know what NFS is; that's a file-sharing protocol like SMB, right? I'm pretty sure I want to stick with SMB for ease of use from my Windows-based clients. Is there a more layman's term for volume/dataset?

      SMB could work just fine! I just default to NFS when all I want is to share a directory between Linux systems. And when I say volume or dataset in terms of these two tools, I really just mean "share".

      I don't know how to actually change subnet settings or assign static IPs. I'm running DD-WRT on my router, which is set as my gateway, and I have DHCP on.

      It'll be way easier to work with the lxc containers and VMs if they have static IPs, so definitely prioritize this! There are two perfectly valid ways to do it:

      1. Tell DD-WRT's DHCP server to only assign IPs from a limited range, leaving the rest for you to statically assign at the level of each lxc container or VM. For example, if you tell the DHCP server to only hand out addresses from 192.168.0.50 and up, you could have a container configure itself statically at 192.168.0.10 and a VM configure itself at, say, 192.168.0.26. I prefer this method because it means I need to do less network fiddling at the router level.
      2. Configure the DHCP server on the router to always hand your lxc containers and VMs the same IPs (static DHCP leases). They'll automatically get the right IPs. They're recognized by MAC address, so you'd want to verify that your lxc containers and VMs have unique ones for their virtual network cards.

      I use different software than DD-WRT (secondhand low-power enterprise gear), so I don't know the exact way to do these in its settings, but it should be able to do either.

      I also don't know how to check if they're on the same LAN CIDR (what is that btw?).

      I really just mean subnet. CIDR is a compact way of describing IP ranges, like 192.168.0.0/24 for everything from 192.168.0.0 through 192.168.0.255. It doesn't really matter here, except that I think Proxmox wants you to use CIDR notation sometimes when doing network configuration.

      Basically your lxc containers / VMs have network settings somewhere on the proxmox web interface. Maybe under hardware? If you click to configure one of them, it'll pop up a dialog that will let you manually specify a static IP, gateway, and netmask. The static IP is an IP on your LAN. The gateway is the address of your router (like 192.168.0.1). The netmask is almost always 255.255.255.0. You might have to use CIDR notation for the static IP - I forget whether it's at this step or when installing proxmox itself. You'll know because it will raise a little input error warning that says your static IP setting is invalid. Slap a /24 on the end and it'll go away. Like 192.168.0.10/24.
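
      For example, the shell equivalent for an lxc container (container ID and addresses made up) looks like this, with the /24 CIDR suffix standing in for netmask 255.255.255.0:

      ```
      # Give container 101 a static IP, gateway, and bridge in one go
      pct set 101 -net0 name=eth0,bridge=vmbr0,ip=192.168.0.10/24,gw=192.168.0.1
      ```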

      How, where?

      It depends on how you installed and configured Jellyfin. You may be able to do it from the admin dashboard built into Jellyfin, by adding the mounted path as a media library.

      I think you're saying don't install Jellyfin on the share drive?

      Jellyfin stores a lot of settings and data in an SQLite database, which is just a file in the jellyfin config directory. You'll be able to avoid an entire category of ways that database can get corrupted if you don't put it on a network share.
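
      If you ever want to double-check, df will tell you what filesystem the config directory actually lives on (the path assumes a standard Debian-style Jellyfin install; adjust for your setup):

      ```
      # A local device (/dev/...) under "Filesystem" means local storage;
      # an address:/path source would mean it's on a network share
      df -h /var/lib/jellyfin
      ```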

      Sounds like a nice feature... would I scrap SMB for iSCSI?

      iSCSI is a bit more challenging to use than SMB. It's very good at what it does, but if you can make SMB work then I'd stick with that. But if you ever get an itch to try something different, including providing other kinds of shared storage for VMs, look into iSCSI.
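
      If you ever do get that itch, the client side looks roughly like this with the open-iscsi tools (the NAS address and target name are made up; TrueNAS generates names like iqn.2005-10.org.freenas.ctl:yourtarget):

      ```
      # Ask the NAS what iSCSI targets it offers
      iscsiadm -m discovery -t sendtargets -p 192.168.0.10

      # Log in to one; it then shows up as a local block device (e.g. /dev/sdb)
      iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:media -p 192.168.0.10 --login
      ```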