Title is the TL;DR. More info about what I'm trying to do below.

My daily driver computer is Laptop with an SSD. No possibility to expand.

So for storage of lots n lots of files, I have an old, low resource Desktop with a bunch of HDDs plugged in (mostly via USB).

I can access Desktop files via SSH/SFTP on the LAN. But it can be quite slow.

And sometimes (not too often; this isn't a main requirement) I take Laptop to use elsewhere. I do not plan to make Desktop available outside the network so I need to have a copy of required files on Laptop.

Therefore, sometimes I like to move the remote files from Desktop to Laptop to work on them, making a sort of local cache. This could be individual files or whole directory trees.

But then I have a mess of duplication. Sometimes I forget to put the files back.

Seems like Laptop could be a lot more clever than I am and help with this. Like could it always fetch a remote file which is being edited and save it locally?

Is there any way to have Laptop fetch files, information about file trees, etc, located on Desktop when needed and smartly put them back after editing?

Or even keep some stuff around. Like lists of files, attributes, thumbnails etc. Even browsing the directory tree on Desktop can be slow sometimes.

I am not sure what this would be called.

Ideas and tools I am already comfortable with:

  • rsync is the most obvious foundation to work from but I am not sure exactly what would be the best configuration and how to manage it.

  • luckybackup is my favorite rsync GUI front end; it lets you save profiles, jobs etc which is sweet

  • FreeFileSync is another GUI front end I've used, but I prefer lucky/rsync these days

  • I don't think git is a viable solution here because there are already git directories included, there are many non-text files, and some of the directory trees are so large that they would cause git to choke looking at all the files.

  • syncthing might work. I've been having issues with it lately but I may have gotten these ironed out.
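The rsync idea above boils down to a pull/edit/push cycle. A minimal sketch (the host alias `desktop` and the paths are assumptions):

```shell
# Pull a tree from Desktop into a local cache directory
rsync -avP desktop:/data/projects/foo/ ~/cache/projects/foo/

# ...edit locally, then preview what would change before pushing back...
rsync -avPn ~/cache/projects/foo/ desktop:/data/projects/foo/

# Push for real once the dry run (-n) looks right
rsync -avP ~/cache/projects/foo/ desktop:/data/projects/foo/
```

luckybackup can store each of these as a saved job, which covers the "how to manage it" part.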

Something a little more transparent than the above would be cool but I am not sure if that exists?

Any help appreciated even just idea on what to web search for because I am stumped even on that.

  • Max-P@lemmy.max-p.me
    ·
    6 months ago

    Easiest for this might be NextCloud. Import all the files into it, then you can get the NextCloud client to download or cache the files you plan on needing with you.

    • linuxPIPEpower@discuss.tchncs.de
      hexagon
      ·
      6 months ago

      hmm interesting idea. I don't get the impression that Nextcloud is reliably "easy"; it's kind of a joke how complex it can be.

      Someone else suggested WebDAV which I believe is the filesharing Nextcloud uses. Does Nextcloud add anything relevant above what's available from just WebDAV?

      • Max-P@lemmy.max-p.me
        ·
        6 months ago

        I'd say mostly because the client is fairly good and works about the way people expect it to work.

        It sounds very much like a DropBox/Google Drive kind of use case and from a user perspective it does exactly that, and it's not Linux-specific either. I use mine to share my KeePass database among other things. The app is available on just about any platform as well.

        Yeah, NextCloud is a joke in how complex it is, but you can hide it all away using their all-in-one Docker/Podman container. Still much easier than getting into bcachefs over usbip and the other things I've seen in this thread.
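        For reference, the all-in-one setup really is a single container you start yourself; something close to this is in their README (flags here are from memory, so check the current README before copying):

        ```shell
        # Nextcloud All-in-One master container; it manages the other containers itself
        sudo docker run \
          --init \
          --name nextcloud-aio-mastercontainer \
          --restart always \
          --publish 8080:8080 \
          --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
          --volume /var/run/docker.sock:/var/run/docker.sock:ro \
          nextcloud/all-in-one:latest
        ```

        After that, the rest of the setup happens in the web UI on port 8080.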

        Ultimately I don't think there are many tools that can handle caching, downloads, going offline, reconcile differences when back online, in a friendly package. I looked and there's a page on Oracle's website about a CacheFS but that might be enterprise only, there's catfs in Rust but it's alpha, and can't work without the backing filesystem for metadata.

  • Joël de Bruijn@lemmy.ml
    ·
    6 months ago

    One of the few times I miss Files-on-Demand from Win11. Connect an Office365 library with 500 GB to my laptop with a 128 GB hard drive. It integrates with File Explorer, only caches locally what you open, and after a while you can "free space", meaning it deletes the local cached version. NextCloud has the same on Win11 because it's an OS feature.

  • bloodfart@lemmy.ml
    ·
    edit-2
    6 months ago

    You have two problems.

    Transferring between your laptop and desktop is slow. There's a bunch of reasons that could be. My first thought is that the desktop's got a slow 100 Mbps NIC or not enough memory. You could also be using something resource-intensive and slow like zfs/zpools or whatever. It's also possible your laptop's old 802.11g WiFi is the bottleneck, or that with everything else running at the same time it doesn't have the memory to hold 40 TB worth of directory tree.

    Plug the laptop into the Ethernet and see if that straightens up your problems.

    You want to work with the contents of desktop while away from its physical location. Use a vpn or overlay network for this. I have a complex system so I use nebula. If you just want to get to one machine, you could get away with just regular old openvpn or wireguard.

    E: I just reread your post and the usb is likely the problem. Even over 2.0 it’s godawful. See if you can migrate some of those disks onto the sata connectors inside your desktop.
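    If you do go the vpn route later, a WireGuard point-to-point setup is only a few lines. A sketch (the keys, addresses, and endpoint are placeholders):

    ```shell
    # Generate a keypair on each machine
    wg genkey | tee privatekey | wg pubkey > publickey

    # Minimal /etc/wireguard/wg0.conf on the laptop
    cat > /etc/wireguard/wg0.conf <<'EOF'
    [Interface]
    Address = 10.8.0.2/24
    PrivateKey = <laptop-private-key>

    [Peer]
    PublicKey = <desktop-public-key>
    Endpoint = <home-ip-or-ddns>:51820
    AllowedIPs = 10.8.0.0/24
    PersistentKeepalive = 25
    EOF

    sudo wg-quick up wg0
    ```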

    • linuxPIPEpower@discuss.tchncs.de
      hexagon
      ·
      6 months ago

      Thanks!

      I elaborated on why I'm using USB HDDs in this comment. I have been a bit stuck knowing how to proceed to avoid these problems. I am willing to get a new desktop at some point but not sure what is needed and don't have unlimited resources. If I buy a new device, I'll have to live with it for a long time. I have about 6 or 8 external HDDs in total. Will probably eventually consolidate the smaller ones into a larger drive which would bring it down. Several are 2-4TB, could replace with 1x 12TB. But I will probably keep using the existing ones for backup if at all possible.

      Re the VPN, people keep mentioning this. I am not understanding what it would do though? I mostly need to access my files from within the LAN. Certainly not enough to justify the security risk of a dummy like me running a public service. I'd rather just copy files to an encrypted disk for those occasions and feel safe with my ports closed to outsiders.

      Is there some reason to consider a VPN for inside the LAN?

      • bloodfart@lemmy.ml
        ·
        6 months ago

        You’re getting a lot of advice in this thread and it’s all pretty good, but not all of it seems to answer the problems you have in your order of priority or under your constraints. I’ll try to give an explanation of why I think my advice will do so then give it.

        Getting off usb will speed up file access and increase the number of operations you can do from the laptop on your lan. Some stuff will still need to be copied over locally, normal people like us just can’t afford the kind of infrastructure that lets you do everything over the lan. For those things, rsync is perfectly good, and they’re most likely going to be enough of an edge case that it won’t be very often.

        When you’re ready, and from your responses in this thread I’d say you are, a vpn doesn’t expose you to much security risk if any. There are caveats to that, but if you’re doing something like openvpn or wireguard it’s all encrypted and key based and basically ain’t nobody getting into it unless they were to get a key off an old computer you use and didn’t wipe before throwing out or something. That would solve your remote access bonus problem. No pressure and in your own time, of course.

        You are me twenty years ago.

        Cobbling together solutions from what’s available at the cost of the parts from the hardware store. Serial experiments lain but shot in the trailer park boys set. Hackers with the cast of my name is earl.

        Don’t ever change.

        So you want to kick the bad habit but don’t have enough physical space in your desktops case or enough sata ports! You have a bigger tower case but don’t know if it’ll really hold the drives you want.

        The best bet is to transplant the motherboard and power supply from your sff desktop into the big case. If the big case has at least three 5 1/4 bays you can use a bracket to go from 3 big bays to 4 or 5 smaller 3.5” hdd bays. I’d recommend 4 instead of 5, more on that later.

        If the big tower case has the little dangly 2x 3.5 bay cage hanging down from its cd cage, you can use four strips of sheet metal and a carpenters square (or the square corner of some copy paper) to make a column of hdd mounting space all the way to the floor of the chassis. Just remember to use vibration damping grommets.

        Make sure when you’re filling your tower up with drives to put some fans blowing on em. Drives need to be kept cool for maximum life. Those 3 cd bay to 4 hdd bay adapter brackets are nice for that because they usually have a fan mount or one included.

        Now you need sata (or maybe ide) ports to plug all these in. Someone else already said to use those cheap little sata expanders and those are great. I used an old cheap pc mounted in a salvaged case just like you might with four of em back in the day.

        You’ll actually want to use the towers power supply if it has one and it works and matches the sff desktops connectors because it probably has more power capacity than the sff desktops supply. You may need some molex or sata splitters to get power to all your drives.

        Consider mergerfs and snapraid once you have this wheezing Frankensteins monster operational.

        Mergerfs displays all the drives as one big file system to the system and to users. So if drive one had /pics/dog.jpg and drive two had /pics/cat.jpg then the mergerfs of the two of them would have both pictures in the /pics directory when you open it.

        Snapraid is like RAID5 (really more like zpools, since you can have multiple parity devices), but it only runs once a day or whatever and is basically a snapshot.
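        The pair takes surprisingly little setup. A sketch, with mount points and drive labels as assumptions:

        ```shell
        # Pool two data drives into one view (mfs = place new files on the
        # branch with the most free space)
        mergerfs -o category.create=mfs /mnt/disk1:/mnt/disk2 /mnt/pool

        # Minimal /etc/snapraid.conf: one parity drive, content lists on the data drives
        cat > /etc/snapraid.conf <<'EOF'
        parity /mnt/parity1/snapraid.parity
        content /mnt/disk1/snapraid.content
        content /mnt/disk2/snapraid.content
        data d1 /mnt/disk1
        data d2 /mnt/disk2
        EOF

        snapraid sync   # run from cron once a day; that's the "snapshot"
        ```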

        Anyway sorry for the tangent. Post some pics of the tower case or its model number or whatever and I can give better advice about filling it with drives.

  • Corgana@startrek.website
    ·
    6 months ago

    I have a very similar setup to you, and I use SyncThing without issue for the important files (which I keep in my Documents directory to make it easy to remember).

  • flan [they/them]
    ·
    edit-2
    6 months ago

    I can imagine some kind of LRU cache being reasonably useful for this situation, assuming you have some latency hierarchy. For example, if the desktop has an SSD, HDD, and some USB HDDs attached, I can imagine a smaller cache that keeps more frequently accessed files on the SSD, followed by a bigger one on the internal HDD, and followed again by the USB HDDs as the ultimate origin of the data. Or even just have the SSD as cache and everything else as origin. I don't know if there's software that would do this kind of thing already though.

    You may want to consider zipping files for transfer though, especially if the transfer protocol is creating new tcp connections for every file.
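    That batching is usually done with a tar pipe rather than zip, since it sends the whole tree as one stream over a single connection. A sketch (the host `desktop` is a placeholder), with the same pipe demonstrated locally:

    ```shell
    # Over the network it would look like:
    #   ssh desktop 'tar -C /data -cf - projects' | tar -C ~/cache -xf -

    # The same tar pipe, run locally to show the mechanics:
    mkdir -p /tmp/tarsrc /tmp/tardst
    echo hello > /tmp/tarsrc/a.txt
    tar -C /tmp -cf - tarsrc | tar -C /tmp/tardst -xf -
    cat /tmp/tardst/tarsrc/a.txt   # prints "hello"
    ```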

  • Sims@lemmy.ml
    ·
    6 months ago

    A few ideas/hints: If you are up for some upgrading/restructuring of storage, you could consider a distributed filesystem: https://wikiless.org/wiki/Comparison_of_distributed_file_systems?lang=en.

    Also check fuse filesystems for weird solutions: https://wikiless.org/wiki/Filesystem_in_Userspace?lang=en

    Alternatively perhaps share usb drives from 'desktop' over ip (https://www.linux.org/threads/usb-over-ip-on-linux-setup-installation-and-usage.47701/), and then use bcachefs with local disk as cache and usb-over-ip as source. https://bcachefs.org/
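    For the usb-over-ip part, the linked guide boils down to a few commands on each end; a sketch (the bus id and hostname are assumptions):

    ```shell
    # On 'desktop': export one USB drive
    sudo modprobe usbip_host
    sudo usbipd -D                      # start the usbip daemon
    usbip list -l                       # find the bus id, e.g. 1-1.2
    sudo usbip bind -b 1-1.2

    # On 'laptop': attach it over the LAN
    sudo modprobe vhci-hcd
    usbip list -r desktop.lan
    sudo usbip attach -r desktop.lan -b 1-1.2
    # the drive then appears as a local block device, e.g. /dev/sdX
    ```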

    If you decide to expose your 'desktop', then you could also log in remotely and just work with the files directly on 'desktop'. This of course depends on the usage pattern of the files.

  • bartlbee@lemmy.sdf.org
    ·
    6 months ago

    zerotier + rclone sftp/scp mount w/ vfs cache? I haven't tried using vfs cache with anything other than a cloud mount but it may be worth looking at. rclone mounts work just as well as sshfs; zerotier eliminates network issues

    • linuxPIPEpower@discuss.tchncs.de
      hexagon
      ·
      6 months ago

      What would be the role of Zerotier? It seems like some sort of VPN-type application. What do I need that for?

      rclone is cool and I've used it before. I was never able to get it to work really consistently so I always gave up. But that's probably user error.

      That said, I can mount network drives and access them from within the file system. I think GVFS is doing the lifting for that. There are a couple different ways I've tried including with rclone, none seemed superior performance-wise. I should say the Desktop computer is just old and slow; there is only so much improvement possible if the files reside there. I would much prefer to work on my Laptop directly and move them back to Desktop for safe keeping when done.

      "vfs cache" is certainly an intriguing term

      Looks like maybe the main documentation is rclone mount > vfs-file-caching and specifically --vfs-cache-mode-full

      In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

      So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

      I'm not totally sure what this would be doing, if it is exactly what I want, or close enough? I am remembering now one reason I didn't stick with rclone which is I find the documentation difficult to understand. This is a really useful lead though.
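      As far as I can tell the mount itself only needs a few flags; a sketch (the remote name, paths, and sizes are assumptions):

      ```shell
      # Assumes an sftp remote named "desktop" was created with `rclone config`
      rclone mount desktop:/data ~/desktop-files \
        --vfs-cache-mode full \
        --vfs-cache-max-size 50G \
        --vfs-cache-max-age 720h \
        --dir-cache-time 24h \
        --daemon
      ```

      With `--vfs-cache-mode full`, reads land in the local cache and writes are uploaded back when the file is closed, which sounds close to the "local cache" behaviour I'm after.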

      • bartlbee@lemmy.sdf.org
        ·
        6 months ago

        Zerotier + sshfs is something I use consistently in situations similar to yours, and yes, zerotier is similar to a vpn. Using it for a constant network connection makes it less critical to have everything mirrored locally... But I guess this doesn't solve your speed issue.

        I'm not an expert in rclone. I use it for connecting to various cloud drives and have occasionally used it as an alternative to sshfs. I've used vfs-cache for cloud syncs but not quite in the manner you are trying. I do see there is a vfs-cache read-ahead option that might help? Agreed on the documentation; sometimes their forum helps.

  • bloodfart@lemmy.ml
    ·
    6 months ago

    Hey I’m replying again directly to your post in the hopes that I can push against some of the advice you’re getting. My intent is to do an end run around arguing with the people making these suggestions because they’re very smart and made them for good reasons but their ideas aren’t necessarily good for you and I don’t want you to have to go through a troublesome recovery like I did and many people on the internet have.

    Do not under any circumstances set up raid or zpools for your data drives once you get them inside a case and on the pcie bus somehow.

    In these configurations accessing a file requires spinning up all the drives in the array or pool. Not only is that putting wear and tear on your drives, it increases the temperature of the case and draws much more power. Those conditions lead to drive failure. When your drive fails and you have a spare to use in its place, resilvering (the process of using extra data called parity to rebuild the contents of the failed drive on the spare one) will put those exact conditions on your remaining drives.

    For people like us, who may not have a hot spare, or great cooling, or an offsite backup, an array like that will set us up for failure rather than resilience.

    Please consider using mergerfs or something like it and a snapshot parity system like snapraid instead.

    There are very good use cases for the raid and zpool systems that have been brought up, but you aren’t there. I got there at moderate expense and moved away from them.

    • linuxPIPEpower@discuss.tchncs.de
      hexagon
      ·
      6 months ago

      thanks I appreciate it. I've been around the block enough times to expect maximalist advice in places like this. people who are motivated to be hanging around in a forum just waiting for someone to ask a question about hard drives are coming from a certain perspective. Honestly, it's not my perspective. But the information is helpful in totality even though I'm unlikely to end up doing what any one person suggests.

      RAID is something I've seen mentioned over and over again. Every year or two I go reading about them more intentionally and never get the impression it's for me. Too elaborate to solve problems I don't have.

        • linuxPIPEpower@discuss.tchncs.de
          hexagon
          ·
          6 months ago

          TBD

          I've been struggling with syncthing for a few weeks... It runs super hot on every device. Need to figure out how to chill it out a bit.

          Other than that I'll look at both NFS and WebDAV some more. Then will come back to this page to re read the more intricate suggestions.

  • MNByChoice@midwest.social
    ·
    6 months ago

    NFS and ZeroTier would likely work.

    When at home, NFS will feel similar to a local drive, though a bit slower. Faster than SSHFS. NFS is often used to expand limited local space.

    I expect a cache layer on NFS is simple enough, but that is outside my experience.

    The issue with syncing, is usually needing to sync everything.
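    For completeness, a minimal NFS setup is short on both ends; a sketch (the subnet, hostname, and paths are assumptions):

    ```shell
    # On Desktop: export a directory to the LAN (append to /etc/exports, reload)
    echo '/data 192.168.1.0/24(rw,async,no_subtree_check)' | sudo tee -a /etc/exports
    sudo exportfs -ra

    # On Laptop: mount it
    sudo mount -t nfs desktop.lan:/data ~/desktop-files
    ```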

    • linuxPIPEpower@discuss.tchncs.de
      hexagon
      ·
      6 months ago

      What would be the role of Zerotier? It seems like some sort of VPN-type application. I don't understand what it's needed for though. Someone else also suggested it albeit in a different configuration.

      Just doing some reading on NFS, it certainly seems promising. Naturally ArchWiki has a fairly clear instruction document. But I am having a hard time seeing what it is exactly? Why is it faster than SSHFS?

      Using the Cache with NFS > Cache Limitations with NFS:

      Opening a file from a shared file system for direct I/O automatically bypasses the cache. This is because this type of access must be direct to the server.

      Which raises the question what is "direct I/O" and is it something I use? This page calls direct I/O "an alternative caching policy" and the limited amount I can understand elsewhere leads me to infer I don't need to worry about this. Does anyone know otherwise?
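      If I'm reading that page right, the cache in question is FS-Cache, which needs a local daemon plus a mount option; a sketch (package name and paths are assumptions):

      ```shell
      # On Laptop: a local disk cache for NFS reads (FS-Cache)
      sudo apt install cachefilesd          # or your distro's equivalent
      sudo systemctl enable --now cachefilesd

      # Mount with the fsc option so file data is cached on local disk
      sudo mount -t nfs -o fsc desktop.lan:/data ~/desktop-files
      ```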

      The issue with syncing, is usually needing to sync everything.

      yes this is why syncthing proved difficult when I last tried it for this purpose.

      Beyond the actual files, it would be really handy if some lower-level stuff could be cached/synced between devices. Like thumbnails and other metadata. To my mind, remotely perusing Desktop's filesystem from Laptop should be just as fast as looking through local files. I wouldn't mind dedicating a reasonable chunk of local storage to keeping this available.

      • MNByChoice@midwest.social
        ·
        6 months ago

        If there is sufficient RAM on the laptop, Linux will cache a lot of metadata in other cache layers without NFS-Cache.

      • MNByChoice@midwest.social
        ·
        6 months ago

        ZeroTier allows for a mobile, LAN-like experience. If the laptop is at a café, the files can be accessed as if at home, within network performance limits.

      • MNByChoice@midwest.social
        ·
        6 months ago

        NFS-Cache is a specific cache for NFS, and does not represent all caching that can be done of files over NFS. "Direct I/O" is also a specific thing, and should not be generalized in the meanings of "direct" and "I/O".

        Let's skip those entirely for now as I cannot simply explain either. I doubt either will matter in your use case, but look back if performance lags.

        One laptop accessing one NFS share will have good performance on a quite local network.

        NFS is an old protocol that is robust and used frequently. NFSv3 is not encrypted. NFSv4 has support for encryption. (ZeroTier can handle the encryption.)

        SSHFS is a pseudo file system layered over SSH. SSH handles encryption. SSHFS is maybe 15 years old and is aimed at convenience. SSH is largely aimed at moving streams of text between two points securely. Maybe it is faster now than it was.
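        As a footnote, sshfs does have a few knobs that help on a slow machine or flaky link; a sketch (the host and paths are assumptions, and sshfs passes unknown -o options through to ssh):

        ```shell
        # Reconnect automatically, and use a cheaper cipher for an old CPU
        sshfs desktop:/data ~/desktop-files \
          -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3 \
          -o Ciphers=aes128-ctr,Compression=no
        ```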