Permanently Deleted

  • CanYouFeelItMrKrabs [any, he/him]
    ·
    4 years ago

    in short: yeah the numbers will keep going up probably

    I think hard drives are approaching their limits, but SSDs are still improving regularly.

    https://datarecovery.com/rd/rapid-growth-storage-technologies-data-capacities-changed-theyll-keep-changing/

    This article mentions that storage is currently made with transistors as small as 14nm. Right now the latest processors are being made with 5nm transistors, and factories are being built for 2/3nm. But eventually there will be a physical limit to how small they can be made.

    There are also techniques like 3D stacking that get more storage without shrinking anything, by changing how the memory is arranged.

    Typical NAND technologies arrange memory cells side by side; more cells equate to a higher capacity for the drive. To make a NAND drive larger, you’d need to reduce the size of the cells, but due to Moore’s law, there’s a limit to how small the cells can get. Smaller cells are also more prone to memory leak—we won’t get into that here, but the takeaway is that smaller cells are generally less dependable.

    So, how do you store more data in the same amount of physical space without shrinking the cells? Simple (even if it’s not really simple): You stack them. Intel’s 3D NAND technology, as its name implies, allows the memory cells to be stacked on top of one another. This has the obvious advantage of expanding the number of memory cells on each block of semiconducting material—called a die—without reducing the NAND drive’s dependability.
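
    To put rough numbers on the stacking idea, here's a toy calculation; the per-layer cell count and bits-per-cell below are made-up illustrative figures, not real die specs.

    ```python
    # Toy model of why 3D stacking raises capacity without shrinking cells.
    # The per-layer cell count is an illustrative assumption, not a real spec.
    CELLS_PER_LAYER = 4_000_000_000   # planar cells on one layer of a die
    BITS_PER_CELL = 3                 # TLC NAND stores 3 bits per cell

    def die_capacity_gb(layers: int) -> float:
        """Capacity of one die in gigabytes (10^9 bytes)."""
        total_bits = CELLS_PER_LAYER * BITS_PER_CELL * layers
        return total_bits / 8 / 1e9

    for layers in (1, 32, 96, 176):
        print(f"{layers:>3} layers -> {die_capacity_gb(layers):,.1f} GB per die")
    ```

    Same cells, same footprint; capacity scales with the layer count instead of the feature size.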

      • Zoift [he/him]
        ·
        edit-2
        4 years ago

        Kind of. It's more like hitting a barrel wall than a floor; there's enough research being done & just-over-the-horizon tech that the trend will probably hold for the next generation or two.

        Moore's law is just a name though. Eventually yeah, we're going to start hitting the edges of practicality if nothing else.

      • CanYouFeelItMrKrabs [any, he/him]
        ·
        edit-2
        4 years ago

        Yes. For the next few years they'll be able to keep shrinking, but I'm not sure whether it comes to an end in like 5 years or whether 0.5nm is possible. Either way that's a problem for processors more than storage, because the cutting-edge tech goes to processors and GPUs first.

    • SirLotsaLocks [he/him]
      ·
      4 years ago

      yeah from what I understand the problem with HDDs is that the fastest possible (or reasonable) transfer speed has been reached, so packing more storage onto a disk keeps getting riskier: recovering the data after a failure could take actual days at the fastest the drive can go.

      • CanYouFeelItMrKrabs [any, he/him]
        ·
        4 years ago

        Yeah, so even if they increase the capacity, it'll take forever to fill up or empty the drives. Hard drives are still being used to archive things right now, but this is a major issue.
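
        Rough numbers, assuming ~250 MB/s sustained sequential throughput (a ballpark assumption for a modern 3.5" drive; real-world and random-I/O rates are lower):

        ```python
        # Back-of-the-envelope: time to fill (or rebuild) a drive at a
        # sustained sequential rate. 250 MB/s is an assumed ballpark.
        def fill_time_hours(capacity_tb: float, rate_mb_s: float = 250) -> float:
            return capacity_tb * 1e12 / (rate_mb_s * 1e6) / 3600

        for tb in (10, 18, 100, 1000):  # 1000 TB = 1 PB
            h = fill_time_hours(tb)
            print(f"{tb:>5} TB: {h:7.1f} hours ({h / 24:5.1f} days)")
        ```

        So a petabyte drive on an interface like that would take over a month of continuous writing to restore.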

    • eduardog3000 [he/him]
      ·
      4 years ago

      but due to Moore’s law, there’s a limit to how small the cells can get

      That seems like bad wording. It's not "due to" Moore's Law. It's a physical limitation that will be the death of Moore's Law.

  • Mardoniush [she/her]
    ·
    4 years ago

    There are ultra-high-capacity enterprise 100TB SSDs and 18TB HDDs commercially available (though they will cost you). A 1PB drive is well within the physical limits of the next couple of process iterations, but 10PB is probably unlikely because read/write issues come into play.

    I seem to remember looking into the physical limitations of storage: something like pure diamondoid computronium gets you into the 100s of exabytes (good luck reading it at speed tho), and the theoretical physical limit on information storage is about 10^66 bits per cubic cm or something like that. We've got space to grow.
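
    If I'm remembering where that figure comes from, ~10^66 is roughly the Bekenstein–Hawking bound for a centimetre-scale region, i.e. the entropy of a black hole with a 1cm horizon. A quick order-of-magnitude check (my reconstruction, not from the post):

    ```python
    import math

    # Bekenstein-Hawking entropy of a black hole with a 1 cm radius: the
    # hard ceiling on bits storable in a region that size.
    #   bits = A / (4 * l_p^2 * ln 2), where A is the horizon area.
    l_p = 1.616e-35                  # Planck length, metres
    r = 0.01                         # 1 cm, metres
    area = 4 * math.pi * r ** 2
    bits = area / (4 * l_p ** 2 * math.log(2))
    print(f"{bits:.1e} bits")        # ~1.7e66 -> the ~10^66 ballpark
    ```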

    • post_trains [he/him]
      ·
      4 years ago

      This makes me curious: is this because the MTBF of any given cell makes an array that large impossible with current processes, or because you start bumping into write-endurance-vs-capacity issues?

      • Mardoniush [she/her]
        ·
        4 years ago

        Can't make cells smaller than a certain number of electrons with current tech (quantum-dot-based processes are trying to work around this). Write endurance doesn't seem to be a real issue at this point, but I only minored in Materials Science, so shrug.

  • late90smullbowl [they/them]
    ·
    4 years ago

    Prices fluctuate, but the trend has been more GB for your $ as time goes on, generally.

    They could make a petabyte drive now, probably. It would just be as big as a house or something.

      • late90smullbowl [they/them]
        ·
        4 years ago

        Shit, didn't realise. I would never have to delete anything ever again. Paradise. Hoarding into eternity.

        • thefunkycomitatus [he/him,they/them]
          ·
          4 years ago

          That's how I felt about having 500GB years ago. I was like "wow, TB hard drives? One drive will last me forever." Then I started collecting movies and TV shows. It's hard to imagine a PB now, but I can see a person accumulating at least 1PB of data over their life. Maybe not the average person any time soon, but a good data hoarder would make good use of 1PB.

          • late90smullbowl [they/them]
            ·
            4 years ago

            But what about when everything gets re-released in 4000K? 360Vision?

            A movie will be a petabyte and we'll still be complaining about storage lol

            • thefunkycomitatus [he/him,they/them]
              ·
              4 years ago

              There's definitely going to be an increase in quality and stuff that will make file sizes larger. Doing the math, I could store about 10,000 movies in 1PB if they were 100GB each (a dual-layer Blu-Ray tops out at 50GB). I currently have about 1200 movies, so even if all of them were Blu-Ray quality I would still be well under 1PB. At 4K or 8K, I don't know.
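
              Spelling that math out (decimal units, so 1PB = 1,000,000GB):

              ```python
              # Movies per petabyte at a couple of per-file sizes.
              PB_IN_GB = 1_000_000

              for size_gb in (50, 100):   # dual-layer Blu-Ray vs a roomier 100GB
                  print(f"{size_gb} GB/movie -> {PB_IN_GB // size_gb:,} movies per PB")

              library_gb = 1200 * 50      # 1200 movies at Blu-Ray size
              print(f"1200 Blu-Ray-sized movies fill {library_gb / PB_IN_GB:.0%} of a PB")
              ```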

              There's definitely going to be an upper limit on image resolution. Probably not because of a Moore's-law thing, but just because after a certain point it'll be a waste. If your monitor is 35 inches, there's no sense in having a 20K image on it. At that point, all those extra pixels would be too tiny for you to see anyways. At some point there will be diminishing returns in quality. And the same thing for images on bigger screens. You could have a 100" TV but a 40K image would still be way too much for it. High resolution images like that would have a purpose in science, engineering, or novelty entertainment. But it wouldn't be the norm.
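
              Roughly where that "too tiny to see" point sits, assuming the common 20/20 rule of thumb of ~1 arcminute per pixel and a 24-inch viewing distance (both assumptions on my part):

              ```python
              import math

              # PPI beyond which a 20/20 eye (~1 arcminute of resolution)
              # can't distinguish pixels at a given viewing distance.
              def max_useful_ppi(viewing_distance_in: float) -> float:
                  one_arcmin = math.radians(1 / 60)
                  return 1 / (viewing_distance_in * math.tan(one_arcmin))

              ppi = max_useful_ppi(24)                 # ~143 PPI
              width_in = 35 * 16 / math.hypot(16, 9)   # 35" 16:9 panel: ~30.5" wide
              print(f"~{ppi:.0f} PPI -> ~{ppi * width_in:.0f} px of useful width")
              # ~4400 px: even 8K overshoots at this distance, never mind 20K.
              ```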

              Plus you have to transmit those 40K images over the internet or play them as frames in a movie or game. So even if storage is big enough to hold them, our networks and hardware might be too weak to transfer or display them.

              One of the things that's already happening is that instead of producing a true 4K image, lower-resolution images are just upscaled. That's likely how we'll get to "16K" rather than true 16K: we'll get better at making high-quality images from lower ones, the cost will be split between storing the lower-res files and the computing power to turn them into high-res, and we'll coast on that for a long while. TVs already do this in some form, and games are starting to do it too, both when producing the final image on your screen and to conserve disk space for textures.

              But you're right to point out VR-type stuff, because then instead of a single image per movie frame you essentially have six images (top, bottom, left, right, front, back) for each frame. So your movies and home videos and everything else get that much bigger. That would definitely make it easier to get to 1PB.
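
              Rough per-frame sizes under that six-face framing, uncompressed (real codecs cut this by orders of magnitude, so treat these as loose upper bounds):

              ```python
              # Raw size of one frame, flat vs. as a 6-face cubemap.
              # Assumes 3 colour channels at 10 bits each, no compression.
              def raw_frame_mb(w: int, h: int, faces: int = 1) -> float:
                  return w * h * 3 * 10 * faces / 8 / 1e6

              for name, w, h in (("4K", 3840, 2160), ("8K", 7680, 4320)):
                  print(f"{name}: {raw_frame_mb(w, h):6.1f} MB flat, "
                        f"{raw_frame_mb(w, h, faces=6):6.1f} MB as a cubemap")
              ```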

              • late90smullbowl [they/them]
                ·
                edit-2
                4 years ago

                Great post. It's interesting to consider. Foveated rendering is another consideration, which might allow handling ultra-high-res frames on hardware previously thought to be too weak.

          • late90smullbowl [they/them]
            ·
            4 years ago

            😀 also depends on whether you're talking about spinning drives, or solid state, or hybrid, or some other exotic tech.

            the other poster made a much more useful post tbh in reply to you.

    • eduardog3000 [he/him]
      ·
      4 years ago

      the machine in your home is nothing more than a terminal to connect to your real computational resources in the cloud

      Please no.

  • darkcalling [comrade/them, she/her]
    ·
    4 years ago

    Moore's law does not apply to storage. It refers to microprocessors.

    That said, storage is getting cheaper as they figure out how to store more in the same space or cram more stuff into the same size package.

    There are certain limitations to storage density on, say, magnetic platter drives of the 3.5" variety (obviously if you just build giant housings you can fit whatever you want, but almost all PC cases, server cases, storage cases, rack enclosures, etc. have standard sizing, so that isn't appealing): mainly the sensitivity of the read and write heads, and the ability to reliably store information beyond certain densities, owing to the nature of the magnetic changes to the substrate that encode the data. One recent development is so-called shingled magnetic recording (SMR), which partially overlaps the wider write tracks over one another, like roof shingles. This works because the amount of a track that must be present to be read is smaller than the amount used for writing. These drives have serious disadvantages for many normal uses, though: whenever you write to one track the drive is forced to re-write the adjoining tracks, so if you aren't just writing once and then reading (say you're a creative professional, or someone otherwise using a large amount of the drive and regularly changing data), performance can be absolutely abysmal. Article on differences.

    In terms of flash/solid-state storage, there are theoretical limits to the number of bits you can set in the substrate, because a certain amount of material must be present between bits to separate them (much like the issue with processors, where we're now dealing with separations measured in molecules). One big development was vertical NAND, so you're no longer operating on a 2D plane. Another is storing more bits in a given cell, but this reduces lifespan and reliability. It's also why SSD prices have fallen: SLC NAND is still very expensive (although vertical NAND is helping), MLC is pretty reliable but also more expensive, and the cheaper drives use TLC (triple-level cells); there are now drives with QLC, or four bits per cell. Reference on types of flash storage.
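
    The bits-per-cell trade-off in rough numbers; the endurance figures below are commonly cited ballparks and vary a lot by process generation:

    ```python
    # Bits per cell vs. rough program/erase (P/E) endurance for NAND types.
    # Cycle counts are ballpark figures, not specs for any particular drive.
    NAND_TYPES = {
        "SLC": (1, 100_000),
        "MLC": (2, 10_000),
        "TLC": (3, 3_000),
        "QLC": (4, 1_000),
    }

    for name, (bits, pe_cycles) in NAND_TYPES.items():
        levels = 2 ** bits  # distinct voltage levels each cell must hold apart
        print(f"{name}: {bits} bit(s)/cell, {levels:>2} levels, ~{pe_cycles:,} P/E cycles")
    ```

    Each extra bit per cell doubles the voltage levels a cell has to distinguish, which is exactly why capacity gets cheaper as reliability drops.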

    I do not believe you will be able to buy a petabyte drive in the standard 3.5" and 2.5" (or smaller) form factors any time soon, as we are coming up against limits in the materials and the ways we store information. We still have some tricks left to wring more storage out of things, but not the kind that get us from 14TB drives to a petabyte, or even to half that, IMO. I do believe you will see cheaper and cheaper SSDs with larger capacities; in non-enterprise settings I think you can expect 10TB SSDs within a few years. And if SSDs moved up to a larger form factor, say 3.5", and crammed in more and bigger NAND chips, there's no reason you couldn't see much larger than that. But most ordinary people don't need huge amounts of storage, so that's a factor too. With the rise of streaming services and the cloud, there are real questions about how many people wouldn't be served by, say, a 1TB SSD plus a 4TB spinning disk (or a second SSD).

    There are practical questions about drives above a certain size as well: putting-all-your-eggs-in-one-basket kind of questions, plus issues of corruption and bitrot, and of course seek time. From the perspective of a business with critical data, current transfer speeds on existing interface standards are too slow to deal with replacing petabyte-sized drives. Imagine a petabyte drive failing: unless you have another exact copy (which for some applications you don't want to keep, because it's a database or something you rebuild from data held elsewhere), it would take days to refill it. With a RAID array, losing one drive in your petabyte storage cluster doesn't matter much; it's extremely unlikely that a whole petabyte's worth of drives would fail at once, and even if they did you could fill the replacements much more quickly. There's also the data access speed issue with current standards: right now a petabyte of data would be spread across, say, 110 10TB drives, which gives you 110 × (the speed of the interfaces, minus any RAID controller or storage network limitations) instead of one interface's worth. But the biggest issue is the stability.
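
    The arithmetic behind that last point, assuming ~200 MB/s sustained per spinning drive (an assumed figure; controller and network overhead would cut into it):

    ```python
    # Aggregate throughput of ~110 x 10 TB drives vs. one 1 PB drive,
    # assuming ~200 MB/s per spindle and ignoring controller/network limits.
    PER_DRIVE_B_S = 200e6
    PB = 1e15

    array_rate = 110 * PER_DRIVE_B_S          # all spindles in parallel
    print(f"array:  {PB / array_rate / 3600:5.1f} hours to move 1 PB")
    print(f"single: {PB / PER_DRIVE_B_S / 86400:5.1f} days on one interface")
    ```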