I finally have the budget to build my first NAS and upgrade my desktop PC. I have used Linux for quite some time, but am far from an expert.

One of the steps is to move my M.2 NVME system drive (1TB) from my desktop to my NAS. I want to replace it with a bigger NVME drive (2TB). My current motherboard only has a single M.2 slot, that's why I bought a M.2 enclosure.

My goal is to put my new drive into the enclosure, clone my whole system disk onto it and then replace the old drive. At first I found several posts about using clonezilla to clone the whole drive, but some posts mentioned it not working well with btrfs (/ and /home subvolume), which is the bulk of my drive.

I have some ideas for how I might pull it off. My preliminary plan is:

  1. clone my boot partition with clonezilla
  2. use btrfs-clone (or otherwise move my butter) to transfer the btrfs partition
  3. resize the partitions with gparted (and add swap?)

The two aspects I'm uncertain about are:

  1. UUIDs
  2. fstab

I plan to replace the old drive, so the system will never have two drives with the same UUID at the same time. But if the method results in a new UUID, I will need to edit fstab.
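From what I've read so far, both concerns can be checked up front; the commands below only read, nothing gets written:

```shell
# List every filesystem the kernel currently sees, with its UUID
lsblk -o NAME,FSTYPE,UUID,MOUNTPOINT

# Does fstab mount by UUID? A byte-for-byte clone (clonezilla in
# disk-to-disk mode, or dd) preserves filesystem UUIDs, so UUID-based
# entries keep working after the swap; only a tool that creates a
# fresh filesystem on the new drive would hand out new UUIDs.
grep -E '^[^#]*UUID=' /etc/fstab || echo "fstab mounts by device path"
```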

As you can see, I'm not sure how to proceed. Maybe I can just use clonezilla or dd to clone my whole drive? If someone has experience with such a switch or is just a lot more familiar with the procedures, I would love some tips and insight.

Thanks for reading.

////////////////////////////////////////////////////////////////////////////////////////////////////////////

EDIT: Thinking about how to do it might have actually taken longer than the procedure itself. For anyone in a similar situation, I was able to replace the drive with these steps:

  1. clone the whole drive (new drive has a bigger capacity) with clonezilla
  2. physically switch the drives
  3. boot into a live medium and resize the btrfs partition on the new drive with gparted
  4. boot into the main system and adjust the filesystem size with sudo btrfs filesystem resize max /

With two NVME drives (even though one was in a USB M.2 enclosure) everything took about 30 minutes. About 300 gigs of data were transferred. I haven't found any problems with the btrfs partition thus far. Using dd like others recommended might work as well, but I didn't try that option.

  • TheOubliette@lemmy.ml
    ·
    3 months ago

    I would recommend using this as an opportunity to build out and use a backups system. Whenever I get a new laptop, for example, I just make a(nother) backup on the old laptop and restore whatever I want to the new one. If there are any files I want that are normally excluded from backups, I either tweak my rules to include those files/put them in a different directory and repeat the process or just make a new manual external backup copy temporarily.

    If you have good backups then your new drive can be populated from them after creating new partitions. Optionally, you can also take this opportunity to reinstall the OS, which I personally prefer to do because it tends to clean up cruft.

    Also, if you go this route, your data on your old drive is 100% intact throughout the process. You can verify and re-verify that all the files you want are backed up + restored properly before finally formatting the old drive for use in the NAS.

  • rotopenguin@infosec.pub
    ·
    3 months ago

    Do you have pci-e slots? An nvme to pcie card is cheap - it's pretty much just passing from one connector shape to another.

    • minimalfootprint@discuss.tchncs.de
      hexagon
      ·
      3 months ago

      Do you have pci-e slots?

      I had to decide between a M.2 enclosure and a PCIe card. Since I plan to build a new system with more M.2 slots, I will have spare slots in the future anyway. And maybe I will not like the M.2 enclosure and return it. wink

  • Sickos [they/them, it/its]
    ·
    3 months ago

    Personally, if the NAS is up and running, I'd migrate the home directory and anything else important from the desktop to it, with the intention of hosting those folders over the network; set aside the 1TB, install the 2TB, do a fresh install, and see if I can still get to everything happily.

    Alternatively--if you want to preserve stuff locally--put the new drive in the enclosure, attach it to the desktop, boot from an install USB, do a fresh install to the 2TB, reboot from the 2TB, mount the 1TB, migrate your data, then install the 2TB internally. I don't think there should be a UUID problem doing that, but even if there were, you could still boot from the install stick and try to fix it manually.

  • bastion@feddit.nl
    ·
    edit-2
    3 months ago

    If you're feeling adventurous:

    • You can use a thumb drive to boot.
    • Verify the device path of your normal boot disk and of your new drive using GNOME Disks or similar. In this example I'll call them /dev/olddisk0n1 and /dev/newdisksda.
    • Really, really don't mix up the in file and the out file: the in file (if=) is the source; the out file (of=) is the destination.
    • sudo dd if=/dev/olddisk0n1 of=/dev/newdisksda bs=128M
    • or, if you want a progress indicator: sudo sh -c 'pv /dev/olddisk0n1 > /dev/newdisksda' (the sh -c wrapper matters: with a bare sudo pv ... > ..., the redirection is opened by your unprivileged shell, not by root, and fails with permission denied on the device node)
    • wait a long time
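    If you're nervous about mixing up if= and of=, you can rehearse the exact command shape on scratch files first; nothing below touches a real disk (the paths are throwaway temp files):

    ```shell
    # Scratch stand-ins for the two drives -- the real run targets /dev nodes
    src=/tmp/olddisk.img
    dst=/tmp/newdisk.img
    dd if=/dev/urandom of="$src" bs=1M count=4 status=none

    # Same shape as the real clone, plus a built-in progress readout;
    # conv=fsync forces writes to be flushed to the target before dd exits
    dd if="$src" of="$dst" bs=1M status=progress conv=fsync

    # Verify the copy is byte-for-byte identical
    cmp "$src" "$dst" && echo "copies match"
    ```

    For the real run, swap in the actual device paths and a bigger block size (bs=128M as above), and double-check if= and of= one last time before pressing enter.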

    This isn't the recommended method if you're new to the terminal, but it's totally viable if you have limited tools or are comfortable at the command prompt.

    Unless you're using both disks on the same system at once, you don't have to worry about UUIDs, though they will be identical on both drives (dd copies them byte for byte).

    Your system is likely using UUIDs in fstab. If so, you don't have to worry about fstab. If not, there's still a damned good chance you won't have to worry about fstab.

    To be sure, check fstab and make sure it's using UUIDs. If it's not, follow a tutorial for switching fstab over to using UUIDs.
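    To make that concrete, here's an fstab entry by device path next to its UUID form. The UUID value is made up -- get the real one with sudo blkid /dev/yourpartition -- and subvol=@ is just one common btrfs layout; yours may differ:

    ```
    # mounts by device path (breaks if the device name ever changes):
    /dev/nvme0n1p2  /  btrfs  defaults,subvol=@  0  0

    # the same mount by UUID (example value -- substitute your own):
    UUID=3f8c2a10-9d4e-4b7a-8c1d-2e5f6a7b8c9d  /  btrfs  defaults,subvol=@  0  0
    ```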