Y'all are trans, and I know how much you trans Zoomers code (no idea if that's a Linux thing or not tho), so I thought I might try Hexbear for some tech help. I'm fried, I've watched so damn many tutorials and read so many threads on this but I can't seem to work it out. I barely know a few basic commands in the console, and the alphabet soup of different directories, programs, and the very language that's used to discuss Linux is too much for me to process. I've learned a lot, but I need some help.

I've built a pretty nice server that's not doing much right now besides NAS storage. I'm running a TrueNAS Scale VM on Proxmox, I'm filling it up with all my pirate booty, and I want to watch it through Jellyfin, which I have installed in an LXC container (unprivileged for security; I tried it both ways and couldn't get it working). The problem is, how do I get jelly to see the NAS drive? I don't know how to map it one way or another. I'm running the storage through an HBA in a ZFS mirror with an SMB dataset that I can see and access just fine from Windows, but jelly seems to be stuck in its own little world.

I've seen things about creating users within jelly, which I tried, and it just tells me that the user I supposedly created with SMB credentials doesn't exist. Tried using the GUI to find the NAS via IP, no dice. I'm fucking tired, I've been at it for a week or so now, I just want to watch a movie this weekend.

  • YearOfTheCommieDesktop [they/them]
    ·
    edit-2
    1 year ago

    I'm assuming this is (one of) the threads you were looking at? https://forum.proxmox.com/threads/tutorial-unprivileged-lxcs-mount-cifs-shares.101795/

    I know little about proxmox/LXC but lots about linux and containers in general so I might be able to help

    Seems like you should be able to go into the LXC container and run (as root user, or with sudo):

    groupadd -g 10000 lxc_shares

    usermod -aG lxc_shares jellyfin

    then shut down the LXC

    And then on the proxmox host run (also all with root):

    mkdir -p /mnt/lxc_shares/nas_rwx

    echo '//NAS/nas/ /mnt/lxc_shares/nas_rwx cifs _netdev,x-systemd.automount,noatime,uid=100000,gid=110000,dir_mode=0770,file_mode=0770,user=smb_username,pass=smb_password 0 0' | tee -a /etc/fstab

    Where //NAS/nas is the IP or hostname of the NAS (possibly localhost?) followed by the share folder name, and the username and password are set appropriately

    mount /mnt/lxc_shares/nas_rwx

    and if that succeeds,

    echo 'mp0: /mnt/lxc_shares/nas_rwx/,mp=/mnt/nas' | tee -a /etc/pve/lxc/LXC_ID.conf

    replacing LXC_ID with the VM ID of the LXC container (not sure how you get this, but if you only have the one, you could probably ls /etc/pve/lxc and see what options are there)
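
    (If the pct tool is available on the proxmox host, I think pct list will also show the containers with their numeric IDs — the VMID column is the number you want. The output below is just an example of what it might look like:)

    pct list

    VMID       Status     Lock         Name
    100        running                 jellyfin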

    Then you should be able to start the LXC back up, and when you get to the point in jellyfin of setting the media folder, just type /mnt/nas (or a subdir of it ig) and press enter, and it should find it

    if you did all that and it didn't work, then I'd need more info to help troubleshoot

    • tactical_trans_karen [she/her, comrade/them]
      hexagon
      ·
      1 year ago

      Yes, that was one of the ones I looked at. Got to the point of adding a user (second command) and it didn't work. Also, I don't understand commands very well; the shorthand for what you're telling it to do and the labels of the files are confusing af to me.

      • YearOfTheCommieDesktop [they/them]
        ·
        edit-2
        1 year ago

        usermod modifies a user. In this case -aG means it is appending to the list of groups that a user is in (in this case so that said user can access files owned by that group). So you're adding the jellyfin user to the lxc_shares group that was just created in command #1. If it failed, my first guess would be that the jellyfin user doesn't exist. Without the error message the command spat out, or more details on the setup of the jellyfin LXC, it's hard to say.
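
        A quick way to check that, from inside the LXC as root:

        id jellyfin

        If the jellyfin user exists it'll print its uid/gid and groups (lxc_shares should show up there after the usermod); if not it'll say "no such user".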

        I could go into more detail but likely some troubleshooting back and forth would be needed to get it working

        • tactical_trans_karen [she/her, comrade/them]
          hexagon
          ·
          edit-2
          1 year ago

          Good to go: here's what I got so far. I was able to do the first two commands without error so I moved on to the PVE host and started plugging away.

          root@pve:~# mkdir -p /mnt/lxc_shares/nas_rwx
          root@pve:~# echo '//NAS/nas/ /mnt/lxc_shares/nas_rwx cifs _netdev,x-systemd.automount,noatime,uid=100000,gid=110000,dir_mode=0770,file_mode=0770,user=smb_username,pass=smb_password 0 0' | tee -a /etc/fstab
          //NAS/nas/ /mnt/lxc_shares/nas_rwx cifs _netdev,x-systemd.automount,noatime,uid=100000,gid=110000,dir_mode=0770,file_mode=0770,user=smb_username,pass=smb_password 0 0
          root@pve:~# mount /mnt/lxc_shares/nas_rwx

          mount error: could not resolve address for NAS: Unknown error
          mount: (hint) your fstab has been modified, but systemd still uses the old version; use 'systemctl daemon-reload' to reload.

          Edit: if it's relevant, I used a helper script to install JF from this site: https://tteck.github.io/Proxmox/

          • YearOfTheCommieDesktop [they/them]
            ·
            edit-2
            1 year ago

            Okay!

            So, where it says //NAS/nas, that should be replaced by the IP address (or hostname) of the NAS that you've been using to mount the SMB share on other computers, followed by the folder you want to mount, e.g. //192.168.0.5/mypiratedshit. You ought to be able to get this from the windows clients you've successfully mounted it on, I would think.

            And where it says smb_username and smb_password those should be replaced with the username and password for the SMB share (feel free to redact them if you post here of course just make sure they're filled out right)
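
            So with made-up values filled in (the IP, share name, and credentials here are just placeholders), the whole line in /etc/fstab would end up looking something like:

            //192.168.0.5/mypiratedshit /mnt/lxc_shares/nas_rwx cifs _netdev,x-systemd.automount,noatime,uid=100000,gid=110000,dir_mode=0770,file_mode=0770,user=myuser,pass=mypassword 0 0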

            do you have a text editor installed you can use on the PVE host? nano is relatively easy to use if it's installed... The "echo ... | tee ..." thing the original command is doing will append a line to the /etc/fstab file, but since you want to edit a line that's now already there, you should use a text editor instead.

            try running nano /etc/fstab (also as root on the pve host). If it doesn't complain that the file doesn't exist, go down to the line that starts with //NAS/nas and change the parts mentioned above to the correct details for your SMB share. Then press Ctrl-X to quit, y to save changes, and then Enter to keep the same filename you loaded it from.

            (and yeah, nano is clunky but it's more user friendly than other options in that you don't have to remember the shortcuts to do different things, they're displayed on screen)

            then once you've got the /etc/fstab line fixed (if you aren't sure if it saved correctly you can always print the contents of the file with cat /etc/fstab), you can try the mount /mnt/lxc_shares/nas_rwx command again

            • tactical_trans_karen [she/her, comrade/them]
              hexagon
              ·
              1 year ago

              Okay, so it's giving me an error:

              root@pve:~# mount /mnt/lxc_shares/nas_rwx
              mount: /etc/fstab: parse error at line 3 -- ignored
              mount: /mnt/lxc_shares/nas_rwx: can't find in /etc/fstab.
              mount: (hint) your fstab has been modified, but systemd still uses the old version; use 'systemctl daemon-reload' to reload.

              I edited the /etc/fstab file like you said and this is the printout:

              proc /proc proc defaults 0 0
              //192.168.X.XXX/mnt/XXXX/XXXX /mnt/lxc_shares/nas_rwx cifs _netdev,x-systemd.automount,noatime,uid=100000,gid=110000,dir_mode=0770,file_mode=0770,user=XXXX,pass=XXXX 0 0

              If I then enter the echo command, this is what I get:

              root@pve:~# echo 'mp0: /mnt/lxc_shares/nas_rwx/,mp=/mnt/nas' | tee -a /etc/pve/lxc/100.conf
              mp0: /mnt/lxc_shares/nas_rwx/,mp=/mnt/nas

              Looks okay?

              I was able to then go on to jelly and the only directory option it gives me to map is "/mnt/nas" and that's not working.

              What do?

              • YearOfTheCommieDesktop [they/them]
                ·
                edit-2
                1 year ago

                hmm okay...

                The parse error is a bit odd. Do any of the values substituted for XXXX contain spaces, by any chance? /etc/fstab uses spaces to separate the 6 different config parameters, so if any of them contain a space (including the folder names) it messes things up.

                If that's the issue, try substituting each space with \040, i.e.

                //192.168.1.1/My\040Folder/otherfolder /mnt/lxc_shares/nas_rwx cifs _netdev,x-systemd.automount,noatime,uid=100000,gid=110000,dir_mode=0770,file_mode=0770,user=XXXX,pass=XXXX 0 0

                you might also need to run systemctl daemon-reload and try the mount again like the message suggests, honestly don't recall how that interaction works...
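
                (if it comes to that, the sequence would just be, on the pve host as root:)

                systemctl daemon-reload

                mount /mnt/lxc_shares/nas_rwx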

                • tactical_trans_karen [she/her, comrade/them]
                  hexagon
                  ·
                  1 year ago

                  Okay, so I did that. My directory is [IP]/mnt/M14 - Media/Media, so I changed it to /mnt/M14\040-\040Media/Media. Is that correct? It hasn't worked yet; I'm getting this error:

                  mount /mnt/lxc_shares/nas_rwx
                  mount error(2): No such file or directory
                  Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)
                  mount: (hint) your fstab has been modified, but systemd still uses the old version; use 'systemctl daemon-reload' to reload.

  • Maoo [none/use name]
    ·
    edit-2
    1 year ago

    Hi comrade! I hope it's cool to give other advice as well. I'll start with the other advice.

    If you have more than 3 (roughly identical) disks for your NAS, then you may want to consider RAIDZ2, as it will be slightly more robust. If you have at least 6 drives it will also give you substantially more usable storage. If neither of these apply or you just don't wanna, it's not a big deal.
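
    (For reference, and purely as a sketch since the pool name and disk paths are placeholders — on the command line a six-disk RAIDZ2 pool is just:)

    zpool create tank raidz2 /dev/disk/by-id/disk1 /dev/disk/by-id/disk2 /dev/disk/by-id/disk3 /dev/disk/by-id/disk4 /dev/disk/by-id/disk5 /dev/disk/by-id/disk6

    (In TrueNAS you'd do the equivalent through its pool creation UI rather than typing this.)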

    You may also want to consider running ZFS directly on the host rather than from the VM. ZFS likes to have full management of raw disks and can be CPU and memory-intensive at times, really benefiting from various CPU features your actual processor may have. The downside here would be that you'd be using ZFS on Linux rather than the slightly more feature-rich upstream ZFS running on BSD, but imo that tradeoff is worth it. This is not a huge deal either so long as your VM is providing truly raw access to the underlying disks (just a performance tweak).

    Okay so now proxmox stuff.

    Proxmox just makes it slightly easier to install, monitor, and configure various open source software on Debian. Everything you run or concept you use is actually some lower-level thing that you could do separately if you wanted to. For example, you could create lxc containers using lxc on plain Debian, or virtual machines with qemu on plain Debian. This can be helpful when learning, because proxmox just shows you all the knobs you can fiddle with but forgets to tell you that, say, cloud-init is an upstream thing with its own documentation that explains how to use it.
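
    To illustrate (these exact invocations are hypothetical examples, not something you need to run): on plain Debian you could create a container or boot a VM with something like

    lxc-create -n mycontainer -t download -- -d debian -r bookworm -a amd64

    qemu-system-x86_64 -m 2048 -hda mydisk.qcow2

    and proxmox is, roughly, a web UI wrapped around that kind of thing.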

    So, in your case, it may be helpful to forget that it's proxmox and instead think of it this way: you have a NAS and an lxc container and you want to share some files on the NAS with the lxc container (videos and stuff from the NAS to jellyfin running on the lxc container).

    There are actually many valid answers:

    • Mapping a NAS directory to the shared host and then mapping it read-only to the lxc container.
    • Using an NFS share on the NAS (using zfs or directly with NFS)
    • Using a Samba share on the NAS (using zfs or directly with Samba)
    • Using iSCSI on the NAS (with zfs and relevant iSCSI tools)

    I would personally recommend thinking of your NAS and lxc container as if they are two separate computers on the same network, so your job would be to figure out how to share over the network (so not using the first option).

    If your videos are on a single zfs volume/dataset, then using zfs' built-in support for NFS is my recommendation. If you only want to share a subdirectory, then I'd still use NFS but I'd set up /etc/exports (or equivalent) myself. Ideally you'd put a password on the share.
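
    As a rough sketch (the dataset name and subnet here are placeholders), the zfs route is a single property on the dataset:

    zfs set sharenfs=on tank/media

    or, for the /etc/exports route, a line like

    /tank/media 192.168.0.0/24(ro,no_subtree_check)

    followed by exportfs -ra to apply it.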

    Once shared, you can set up your lxc container to read-only mount the NFS share. Do this by pointing it at the (hopefully static!) IP address of your NAS using proxmox tools + docs. If both are running via proxmox, then the bridge interface should already be set up for you on the host, so you just need to make sure both virtualized environments' IPs are on your LAN CIDR, so like 192.168.0.[1-255] or whatever (there are other options but this is simplest). This is under the network tab for the VM or lxc container. You may need to restart the lxc container for the NFS share to be mounted.
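
    On the mounting side it's just a normal NFS mount — as a sketch (the IP and paths are placeholders), an /etc/fstab line like

    192.168.0.5:/tank/media /mnt/media nfs ro,_netdev 0 0

    would do it, with the caveat that an unprivileged lxc container usually can't mount NFS itself, so in that case you'd mount on the proxmox host and pass the directory into the container as a mount point instead.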

    Then just point jellyfin at the local path (on the lxc container) that the NFS share is mounted on.

    One thing to look out for: don't put your jellyfin database (in its config dir) on a rw NFS share. It's an SQLite database and due to how NFS works there is a risk of database corruption. By default the database will still be in your lxc container and not on any NFS share, just wanted to mention this potential future pitfall.

    Edit: I personally use iSCSI or much weirder things for media volume sharing and forgot that NFS (and Samba) doesn't send inotify events by default. So you may want to look into iSCSI or doing specific workarounds that simulate inotify events. inotify is helpful because it tells jellyfin when there are new files available.

    • tactical_trans_karen [she/her, comrade/them]
      hexagon
      ·
      edit-2
      1 year ago

      Alright, I took a few days break to start fresh on it again. Lot of info here and I don't understand a lot of it, so I hope you don't mind but I need to kind of go line by line.

      consider running ZFS directly on the host rather than from the VM.

      I'm doing that too. I had two 500gb 2.5" drives from old laptops that I use for that. The VM has passthrough access through the HBA to two 16tb drives, one 2tb, and one 1tb, and I have an 8tb and a 500gb that I'll plug in later. As for running RAIDZ2, it'll be a while before I have the funds. The data on these things is pirated and replaceable, plus I don't need to maintain uptime. The two 16tb drives are in a mirror right now nevertheless. I should note too, I'm running an i5-4570 with 32gb of 1600 ddr3.

      The downside here would be that you'd be using ZFS on Linux rather than the slightly more feature-rich upstream ZFS running on BSD, but imo that tradeoff is worth it.

      What is BSD?

      Paragraph 3

      What is qemu? What is cloud-init and what does being upstream mean?

      Sorry, lots of jesse-wtf

      Paragraph 4

      Got it, I understand virtualization pretty well.

      Valid answers

      Okay, so I've already set up TrueNAS Scale with an SMB share; I can access it all over my network from windows and load and play things from it without issue, and it loads pretty quick with the 2.5gb Ethernet too! 😀 Setting up permissions in TrueNAS is a little weird to me though, but I've been figuring it out. I followed a tutorial that had me do a lot of what seemed to be unnecessary stuff to the ACL at first. Also, this is my second go around; I tried TrueNAS Core first and couldn't get things quite right, and I also tried jellyfin in a TrueNAS jail and it wasn't working right. Anyway, I digress.

      If your videos are on a single zfs volume/dataset, then using zfs' built-in support for NFS is my recommendation. If you only want to share a subdirectory, then I'd still use NFS but I'd set up /etc/exports (or equivalent) myself. Ideally you'd put a password on the share.

      Wut? I don't really know what NFS is, that's like a file sharing protocol like SMB? I'm pretty sure I want to stick with SMB for ease of use from my windows based clients. Is there a more layman's term for volume/dataset?

      Once shared, you can set up your lxc container to read-only mount the NFS share. Do this by pointing it at the (hopefully static!) IP address of your NAS using proxmox tools + docs. If both are running via proxmox, then the bridge interface should already be set up for you on the host, so you just need to make sure both virtualized environment IPs are on your LAN CIDR, so like 192.178.0.[1-255] or whatever (there are other options but this is simplest). This is under the network tab for the VM or lxc container. You may need to restart the lxc container for the NFS share to be mounted.

      I don't know how to actually change subnet settings and/or assign static IPs. I'm running DD-WRT on my router, which is set as my gateway, and I have DHCP on. I don't know where to click and what to put into these different platforms to make those changes. Initially I had to look on my router to see what it assigned my Jellyfin container and then change the IP in Proxmox in order to access the web interface. And I don't know how to mount that NFS share, let alone the SMB one that I have set up. Like, you say proxmox tools + docs and I don't know what that is. I also don't know how to check if they're on the same LAN CIDR (what is that btw?).

      Then just point jellyfin at the local path (on the lxc container) that the NFS share is mounted on.

      How, where?

      One thing to look out for: don't put your jellyfin database (in its config dir) on a rw NFS share. It's an SQLite database and due to how NFS works there is a risk of database corruption. By default the database will still be in your lxc container and not on any NFS share, just wanted to mention this potential future pitfall.

      Speaking Greek to me here comrade. I think you're saying don't install jellyfin on the share drive? In which case I haven't; it's on the mirrored 500gb drives that I run proxmox off of.

      Edit: I personally use iSCSI or much weirder things for media volume sharing and forgot that NFS (and Samba) doesn't send inotify events by default. So you may want to look into iSCSI or doing specific workarounds that stimulate inotify. inotify is helpful because it tells jellyfin when there are new files available.

      Sounds like a nice feature... would I scrap SMB for iSCSI? If it's a lot more technical stuff to learn I'm not confident it'll be worth it for me. I thought iSCSI was a virtualized pass through on proxmox?

      At any rate, thanks for your input. I mostly just know enough to mess things up a lot of the time. I don't even completely understand Linux filesystems.

      Edit: I used this helper script to install the JF LXC. https://tteck.github.io/Proxmox/ Feel free to jump in on the other comment string too.

      • Maoo [none/use name]
        ·
        1 year ago

        Hi! Hopefully I can make some things make more sense.

        What is BSD?

        Ah well I thought you were using TrueNAS Core which is based on FreeBSD rather than Linux. TrueNAS Scale is Linux though so nevermind!

        What is qemu? What is cloud-init and what does being upstream mean?

        These are just some lower-level tools that Proxmox is using to run VMs. qemu manages the virtual machines and cloud-init is a standardized configuration system for setting up VMs when they boot up. When you look at a VM's settings in proxmox, under the hood some of them are qemu settings and cloud-init settings. Number of cores is a qemu thing. Virtualized Ethernet card settings are a cloud-init thing. I'm only mentioning this because if proxmox things aren't making sense you might want to play around with these tools more directly until they make sense. Proxmox doesn't really make them easier to understand, just easier to discover and set in one interface.

        Wut? I don't really know what NFS is, that's like a file sharing protocol like SMB? I'm pretty sure I want to stick with SMB for ease of use from my windows based clients. Is there a more layman's term for volume/dataset?

        SMB could work just fine! I just default to NFS when all I want is to share a directory between Linux systems. I just mean "share" when I say volume or dataset in terms of these two tools.

        I don't know how to actually change settings to subnet and or assign static IPs. I'm running DD-WRT on my router that is set as my gateway and I have DHCP on.

        It'll be way easier to work with the lxc containers and VMs if they have static IPs, so definitely prioritize this! There are two perfectly valid ways to do this:

        1. Tell DD-WRT's DHCP server to only assign IPs from a limited range, leaving the rest for you to statically assign at the level of each lxc container or VM. For example, if you tell the DHCP server to only hand out addresses from 192.168.0.50 and up, you could have a container configure itself statically at 192.168.0.10 and a VM could configure itself at, say, 192.168.0.26 (there's a sketch of this after the list). I prefer this method because it means I need to do less network fiddling at the level of the router.
        2. Configure the DHCP server on the router to always assign your lxc containers and VMs the same static IPs. They'll automatically get the right IPs. They're recognized via MAC address, so you'd want to verify that your lxc containers and VMs have unique ones for their virtual network cards.
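
        For option 1, the static side can be set from the proxmox host with pct (the container ID, addresses, and bridge name below are just examples — the same settings are on the container's Network tab in the web UI):

        pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.0.10/24,gw=192.168.0.1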

        I use different software than dd-wrt (I use used low power enterprise stuff) so I don't know the exact way to do these in its settings, but it should be able to do either.

        I also don't know how to check if they're on the same LAN CIDR (what is that btw?).

        I really just mean subnet. CIDR is a way of describing IP ranges, doesn't really matter except I think proxmox might want you to use CIDR notation sometimes when doing some network configuration?

        Basically your lxc containers / VMs have network settings somewhere on the proxmox web interface. Maybe under hardware? If you click to configure one of them, it'll pop up a dialog that will let you manually specify a static IP, gateway, and netmask. The static IP is an IP on your LAN. The gateway is the address of your router (like 192.168.0.1). The netmask is almost always 255.255.255.0. You might have to use CIDR notation for the static IP - I forget whether it's at this step or when installing proxmox itself. You'll know because it will raise a little input error warning that says your static IP setting is invalid. Slap a /24 on the end and it'll go away. Like 192.168.0.10/24.

        How, where?

        It depends on how you installed and configured Jellyfin. You may be able to use the admin dashboard built into jellyfin.

        I think you're saying don't install jellyfin on the share drive?

        Jellyfin stores a lot of settings and data in an SQLite database, which is just a file in the jellyfin config directory. You'll be able to avoid an entire category of ways that database can get corrupted if you don't put it on a network share.

        Sounds like a nice feature... would I scrap SMB for iSCSI?

        iSCSI is a bit more challenging to use than smb. It's very good at what it does but if you can make smb work then I'd stick with that. But if you ever get an itch to try something different, including providing other kinds of shared storage for VMs, look into iSCSI.