I'm writing a program that wraps around dd to try and warn you if you are doing anything stupid. I have thus been giving the man page a good read. While doing this, I noticed that dd supports size suffixes all the way up to quettabytes, a unit orders of magnitude larger than all the data on the entire internet.

This has caused me to wonder: what's the largest storage operation you guys have done? I've taken a couple of images of hard drives that were a single terabyte in size, but I was wondering if the sysadmins among you have had to do something with, e.g., a giant RAID 10 array.
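
For the curious, the kind of guard rails I have in mind boil down to roughly the sketch below. It's very much a sketch: the device patterns and the "suspiciously large block size" check are arbitrary choices for illustration, not anything dd itself defines.

```bash
#!/usr/bin/env bash
# Rough sketch of a dd wrapper: eyeball the arguments, nag about anything
# that looks dangerous, then hand everything off to the real dd unchanged.
set -euo pipefail

for arg in "$@"; do
    case "$arg" in
        of=/dev/sd*|of=/dev/nvme*|of=/dev/mmcblk*)
            dev="${arg#of=}"
            read -r -p "Output target $dev looks like a whole disk. Continue? [y/N] " ans
            [[ "$ans" == [yY] ]] || exit 1
            ;;
        bs=*)
            # dd's man page lists 1024-based suffixes up to R and Q on newer
            # coreutils; a block size in the terabytes or beyond is almost
            # certainly a typo, so flag it.
            case "${arg#bs=}" in
                *[TPEZYRQ]*)
                    echo "Warning: block size '${arg#bs=}' looks suspiciously large." >&2
                    ;;
            esac
            ;;
    esac
done

exec dd "$@"
```

Ending with exec dd "$@" means anything the wrapper doesn't recognise still behaves exactly like plain dd.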

  • Urist@lemmy.ml
    ·
    30 days ago

    I obviously downloaded a car after seeing that obnoxious anti-piracy ad.

  • fuckwit_mcbumcrumble@lemmy.dbzer0.com
    ·
    30 days ago

    Entire drive/array backups will probably be by far the largest file transfer anyone ever does. The biggest I've done was a measly 20TB over the internet, which took forever.

    Outside of that, the largest "file" I've copied was just over 1TB, which was a SQL backup of our main databases at work.

  • Neuromancer49@midwest.social
    ·
    30 days ago

    In grad school I worked with MRI data (hence the username). I had to upload ~500GB to our supercomputing cluster: somewhere around 100,000 MRI images, and I wrote 20 or so different machine learning algorithms to process them. All said and done, I ended up with about 2.5TB on the supercomputer. About 500MB of that ended up being useful and made it into my thesis.

    Don't stay in school, kids.

  • ramble81@lemm.ee
    ·
    30 days ago

    I've done a 1PB sync between a pair of 8-node SAN clusters as one was being physically moved, since it'd be faster to seed the data and start a delta sync rather than try to do it all over a 10Gb pipe.
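
    (For scale: 1 PB is roughly 8×10^15 bits, so even a perfectly saturated 10 Gb/s link needs about 8×10^5 seconds, which is a little over nine days of nonstop transfer before you account for any protocol overhead or contention on the pipe.)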

  • neidu2@feddit.nl
    ·
    edit-2
    29 days ago

    I don't remember how many files, but typically these geophysical recordings clock in at 10-30 GB each. What I do remember, though, was the total transfer size: 4TB. It was a bunch of .segd files, stored in a server cluster that was mounted in a shipping container for easy transport and lifting onboard survey ships. Some geophysics processors needed it on the other side of the world. There was nobody physically heading in the same direction as the transfer, so we figured it would just be easier to rsync it over 4G. It took a little over a week to transfer.
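
    (Back-of-the-envelope: 4 TB in a bit over a week works out to a sustained 50-ish Mbit/s, call it 6-7 MB/s, which is about what you can realistically expect a 4G link to hold over long stretches.)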

    Normally when we have transfers of a substantial size going far, we ship it on LTO. For short-distance transfers we usually run a fiber, and I have no idea how big the largest transfer job has been that way. Must be in the hundreds of TB. The entire cluster is 1.2PB, but I can't recall ever having to transfer everything in one go, as the receiving end usually has a lot less space.

  • Decency8401@discuss.tchncs.de
    ·
    29 days ago

    A few years back I worked at a home. They organised the whole data structure but needed to move to another provider. My colleagues and I moved roughly 15.4 TB. I don't know how long it took, because honestly we didn't have much to do while the data was moving, so we just used the downtime for some nerd time. Nerd time in the sense that we started gaming and had a mini LAN party with our Raspberry Pis and Banana Pis.

    Surprisingly, the data contained information on lots of long-dead people, which is quite scary because it wasn't being deleted.

  • JerkyChew@lemmy.one
    ·
    29 days ago

    My Chia crypto farm at its peak had about 1.5 PB of plots; each plot was, I think, about 100ish gigs? I'd plot them on a dedicated machine and then move them to storage for farming. I think I'd move around 10TB per night.

    It was done with a combination of PowerShell and bash scripts on Windows, Linux, and the built-in Windows Subsystem for Linux (WSL).
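
    The bash side of a setup like that is basically just a mover loop; a minimal sketch (paths and host made up) looks something like this:

    ```bash
    #!/usr/bin/env bash
    # Minimal plot-mover sketch: ship finished .plot files from the plotting
    # box to the farming storage, freeing up the local disk as each one lands.
    PLOT_DIR="/mnt/plotting"
    DEST="farmer@nas:/srv/chia/plots/"

    for plot in "$PLOT_DIR"/*.plot; do
        [ -e "$plot" ] || continue   # glob didn't match: no finished plots yet
        # --remove-source-files deletes the local copy only after rsync has
        # verified the transfer on the far end.
        rsync -a --partial --remove-source-files "$plot" "$DEST"
    done
    ```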

  • Llituro [he/him, they/them]
    ·
    30 days ago

    I've transferred tens of ~300 GB files via manual rsyncs. It was a lot of binary astrophysical data, most of which was noise. Eventually this was replaced by an automated service that bypassed local firewalls with internet-based transfers and AWS stuff.

  • delirious_owl@discuss.online
    ·
    29 days ago

    Upgraded a NAS for the office. It was reaching capacity, so we replaced it. Transfer was maybe 30 TB. Just used rsync. That local transfer was relatively fast. What took longer was for the NAS to replicate to its mirror located in a DC on the other side of the country.

    • CrabAndBroom@lemmy.ml
      ·
      29 days ago

      Yeah, it's kind of wild how fast (and stable) rsync is, especially when you grew up with the extremely temperamental Windows copying thing, which I've seen fuck up a 50MB transfer before.

      The biggest one I've done in one shot with rsync was only about 1TB, but I was braced for it to take half a day and cause all sorts of trouble. But no, it just sent it across perfectly the first time, way faster than I was expecting.

      • delirious_owl@discuss.online
        ·
        29 days ago

        Never dealt with Windows. rsync just makes sense. I especially like that it's idempotent, so I can just run it two or three times and it'll be near instant on the subsequent runs.
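
        A minimal example of what I mean (paths made up):

        ```bash
        # First run copies everything; running the exact same command again only
        # compares file sizes and timestamps and sends nothing, so it returns
        # almost immediately.
        rsync -aP /srv/data/ backup-host:/srv/data/
        rsync -aP /srv/data/ backup-host:/srv/data/   # near-instant no-op
        ```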

  • HappyTimeHarry@lemm.ee
    ·
    29 days ago

    I downloaded that 200GB leak from National Public Data the other day. Maybe not the biggest total, but certainly the largest single text file I've ever messed with.

  • TedvdB@feddit.nl
    ·
    30 days ago

    Today I migrated my data from my old ZFS pool to a new, bigger one; the rsync of 13.5TiB took roughly 18 hours. It's slow spinning-disk storage, so that's fine.
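
    (That's roughly 13.5 TiB / 18 h ≈ 230 MB/s sustained, which is in the ballpark of spinning-disk sequential throughput, so the pool was more or less maxed out the whole time.)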

    The second and third runs of the same rsync took like 5 seconds, blazing fast.

  • Yeahboiiii@lemm.ee
    ·
    30 days ago

    Largest one I ever did was around 4.something TB. New off-site backup server at a friend's place. Took me 4 months due to data limits and an upload speed that maxed out at 3MB/s.

  • Someonelol@lemmy.dbzer0.com
    ·
    30 days ago

    Manually transferred about 7TB to my new RPi4-powered NAS. It took a couple of days because I was lazy and transferred 15 GB at a time, which slowed down the speed for some reason. It could handle small sub-1GB files in half a minute otherwise.

    • milicent_bystandr@lemm.ee
      ·
      29 days ago

      Could the slowdown be down to the HDDs being SMR drives? Those take writes into a small conventionally-recorded (CMR) cache area first and then slowly rewrite that cache into the denser shingled zones.