Permanently Deleted

  • darkcalling [comrade/them, she/her]
    ·
    4 years ago

    Moore's law does not apply to storage. It refers to transistor counts on microprocessors.

    That said, storage is getting cheaper as manufacturers figure out how to store more in the same space or cram more stuff into the same size package. There are certain limits to the storage density of, say, 3.5" magnetic-platter drives (obviously if you just build giant housings you can fit whatever you want, but almost all PC cases, server cases, storage cases, rack enclosures, etc. have standard sizing, so that isn't appealing). Mainly it's the sensitivity of the read and write heads and the ability to reliably store information beyond certain densities, owing to the nature of the magnetic changes to the substrate that store the data.

    One recent development has been so-called shingled magnetic recording (SMR), where the wider write tracks partially overlap one another like roof shingles. This works because the portion of a track that must be present to be read is narrower than the portion used for writing. These drives have serious disadvantages for many normal uses, though: the drive is forced to read and re-write adjoining tracks whenever you write to one track, so if you're not just writing once to the drive and then reading (if you're a creative professional or someone else using a large amount of the drive and regularly changing data), performance can be absolutely abysmal. Article on differences.

    In terms of flash/solid-state storage, there are theoretical limits to the number of bits you can set in the substrate, due to the need for a certain amount of material between bits to separate them (much like the issue with processors, where we're now dealing with separations measured in molecules). One big development was storing things in vertical NAND, so you aren't just operating on a 2D plane. Another is storing more bits in a given cell, but this reduces lifespan and reliability. (This is also why prices for SSDs have fallen: SLC NAND is still very expensive, although vertical NAND is helping; MLC, which is pretty high in reliability, is also more expensive; the cheaper drives use TLC, or triple-level cells; and there are now drives with QLC, or quad-level cells.) Reference on types of flash storage.
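    To make the bits-per-cell trade-off concrete, here is a minimal Python sketch. The endurance figures are rough, illustrative ballpark numbers of my own (they vary a lot by process and vendor), not taken from the reference above.

    ```python
    # Rough sketch of why more bits per cell is cheaper but less durable:
    # each extra bit doubles the number of voltage states a cell must
    # distinguish, shrinking the margin between states.
    # Endurance values are illustrative ballpark figures only.

    CELL_TYPES = {
        # name: (bits per cell, rough program/erase cycle endurance)
        "SLC": (1, 100_000),
        "MLC": (2, 10_000),
        "TLC": (3, 3_000),
        "QLC": (4, 1_000),
    }

    def describe(name: str, cells: int = 1_000_000_000) -> str:
        bits, endurance = CELL_TYPES[name]
        states = 2 ** bits                      # voltage levels per cell
        capacity_gb = cells * bits / 8 / 1e9    # same silicon, more stored bits
        return (f"{name}: {bits} bit(s)/cell, {states} states, "
                f"~{endurance:,} P/E cycles, {capacity_gb:.2f} GB "
                f"from {cells:,} cells")

    for cell_type in CELL_TYPES:
        print(describe(cell_type))
    ```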

    I do not believe you will be able to buy a petabyte drive in the standard form factors of 3.5" and 2.5" or smaller anytime soon, as we are coming up against limits in the materials and the ways we store information. We still have some tricks left to wring more storage out of things, but not the kind that get us from 14TB drives to a petabyte, or even half that, IMO. I do believe you will see cheaper and cheaper SSDs with larger capacities. I think in non-enterprise settings you can expect to see 10TB SSDs within a few years. If SSDs moved to a larger form factor, say 3.5", and crammed in more (and larger) NAND flash chips, there's no reason you couldn't see capacities much larger than that. But the thing is, most ordinary people don't need huge amounts of storage, so that's a factor. With the rise of streaming services and the cloud, there are real questions about how many people wouldn't be perfectly well served by, say, a 1TB SSD and a 4TB spinning disk, or perhaps a second SSD.

    There are practical questions about drives above a certain size as well. Putting-all-your-eggs-in-one-basket kind of questions. There are also issues of corruption and bitrot, and of course seek time for getting at information. From the perspective of a business with critical data, the current transfer speeds of existing interface standards are too slow to deal with petabyte-sized drives when it comes to replacements. Imagine a petabyte drive failing: unless they have another exact copy (which for some applications you don't want to keep, because it's a database or something you rebuild from data held elsewhere), it would take days to fill it back up. With RAID arrays, losing one drive in your petabyte storage cluster doesn't matter much; it's extremely unlikely a whole petabyte's worth of drives would fail at once, and even if they did, you could fill their replacements much more quickly in parallel. There's also the data access speed issue with current standards. Right now a petabyte of data would be spread out across, say, 110 10TB drives, and that's 110 x (the speed of the interfaces, minus any RAID controller or storage network limitations). But the biggest issue is stability.
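    For a rough sense of scale, here's a back-of-the-envelope sketch in Python. The sustained interface speeds are assumptions on my part (nominal ballpark figures, not benchmarks of any particular drive or controller).

    ```python
    # Back-of-the-envelope rebuild/refill times for the argument above.
    # Rates are assumed sustained transfer speeds, not measured benchmarks.

    PETABYTE = 1e15  # bytes, decimal, as drive vendors count

    interfaces = {
        "SATA III (~550 MB/s)": 550e6,
        "SAS-3 (~1.2 GB/s)": 1.2e9,
        "NVMe PCIe 3.0 x4 (~3.5 GB/s)": 3.5e9,
    }

    print("Refilling a single 1 PB drive:")
    for name, rate in interfaces.items():
        days = PETABYTE / rate / 86_400
        print(f"  {name}: ~{days:.1f} days")

    # Versus the same petabyte spread across many smaller drives:
    drives = 110            # e.g. 110 x 10 TB, as above
    per_drive = 550e6       # assume each on its own SATA III link
    aggregate_gb_s = drives * per_drive / 1e9
    rebuild_hours = 10e12 / per_drive / 3600   # one failed 10 TB member
    print(f"\n110-drive aggregate (ignoring controller/network limits): "
          f"~{aggregate_gb_s:.0f} GB/s")
    print(f"Rebuilding one failed 10 TB drive: ~{rebuild_hours:.1f} hours")
    ```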