So I have this external 2.5" drive salvaged from an old laptop of mine. I was trying to use it to back up/store data, but the transfer to the drive fails repeatedly at the ~290GB mark, leading me to believe that there may be a bad sector on the drive. I tried to inspect the drive using smartmontools and smartctl, but since it is an external drive I was not able to. Is there any way for me to inspect and fix this drive? I am on Fedora ublue-main. The HDD is a 1TB Seagate drive.
Edit: I am a Linux noob, so some hand-holding will be appreciated. Also, I am looking to use this drive only for low-priority media files which I don't mind losing, so please help even though it is not the greatest idea to use a failing drive.
Edit 2: It seems my post is not clear about what I am doing. I don't want to recover data from the drive. I want to try to use more of the drive for storing data.
I recommend throwing this drive away, because blocks that are readable and writeable now may fail soon. But if you want to use it anyway, it is possible to collect a list of inaccessible blocks using badblocks and pass it to mkfs to create a filesystem that ignores those blocks. IIRC this is described in man badblocks.
That's not looking good. Usually, when a drive hits a bad sector it will transparently remap the data to a spare sector and mark the bad one internally. If you can see bad sectors, the spares have probably all already been used up.
smartctl should work just fine over USB, unless your USB adapter for the drive is really bad. Make sure you're using sudo as well. Worst comes to worst, try using it in a different computer.
Your next goal would be to get it to run a full self-test with smartctl. A low-level format might help clear some bad state, and it might be okay afterwards with a fresh format that accounts for whatever defects it has built up over time.
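A rough sketch of the smartctl side of that, assuming the drive shows up as /dev/sda (adjust to your device; the -d sat option forces SAT pass-through, which many USB-SATA bridges need):
# Show identity, health and SMART attributes (look at Reallocated_Sector_Ct and Current_Pending_Sector)
sudo smartctl -a -d sat /dev/sda
# Start the long (full-surface) self-test; it runs inside the drive itself
sudo smartctl -t long -d sat /dev/sda
# Check the self-test log once it has finished
sudo smartctl -l selftest -d sat /dev/sda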
I wouldn't recommend it. It might work for a bit and then just die completely.
Also interested! But during my long search I read that bad sectors on a drive are a sign that your drive is failing and that there is nothing you can do about it.
Your drive will probably accumulate more and more bad sectors until it becomes unusable (there is some threshold).
There is, however, a way to "mark" them, but that's just a temporary solution. I wouldn't put important/critical data on it (pictures, backups, OS...).
If you can't check SMART data over USB, plug it into an internal SATA port.
Use the command badblocks -o sus_blocks.txt /dev/your_drive to make a file of your bad blocks. Be 100% sure you're running badblocks on the correct drive. Then partition with fdisk or whatever and use mkfs.ext4 -l sus_blocks.txt /dev/your_device to make a filesystem on there that knows about the bad blocks you found (rough sketch of the full sequence below).
Make 100% sure you’re doing those operations on the target drive.
I checked that this still works using a drive with bad blocks last night. I did not check if mkfs.exfat supports that list though.
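Roughly, that sequence looks like this, assuming the new partition comes up as /dev/sdX1 (substitute your own device; badblocks defaults to 1024-byte blocks while mkfs.ext4 usually uses 4096-byte blocks, so the block sizes have to match for the list to line up):
# Read-only scan of the partition; write suspect block numbers to a file
sudo badblocks -v -b 4096 -o sus_blocks.txt /dev/sdX1
# Create an ext4 filesystem that marks those blocks as unusable
sudo mkfs.ext4 -b 4096 -l sus_blocks.txt /dev/sdX1
Alternatively, mkfs.ext4 -c /dev/sdX1 runs the badblocks check itself while creating the filesystem, which avoids the block-size mismatch entirely.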
You ought to be able to use those tools on an external drive.
You can't "fix" bad sectors. A long time ago you could run badblocks on the drive, pipe the output to a file and feed that file to your mkfs to map around those blocks. Idk if that still works. If you do it on a drive with data it'll destroy the data, I think.
You can look at your logs to see what's failing at the 290GB mark.
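For example, something like this shows the kernel's complaints while a copy is failing (this only reads the logs, nothing is written to the drive):
# Follow kernel messages live while the transfer runs
sudo dmesg -wH
# Or search the journal afterwards for I/O and ATA errors
sudo journalctl -k | grep -iE 'i/o error|ata[0-9]|sector'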
Use ddrescue to copy to a working disk. If I remember correctly, it will retry a number of times and eventually skip the broken sectors, so that at least you have a working filesystem on the copy.
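A minimal sketch of that, assuming the failing drive is /dev/sdX and there is enough space for an image file on a healthy disk (GNU ddrescue):
# First pass: copy everything readable into an image, keeping a map of the problem areas
sudo ddrescue -d /dev/sdX failing_drive.img rescue.map
# Second pass: retry the areas that failed, up to 3 times
sudo ddrescue -d -r3 /dev/sdX failing_drive.img rescue.map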
I think there is a misunderstanding because of my phrasing. I don't want to recover data from the drive. Instead I want to repair the drive to use it for low-priority external storage.
Bad blocks are not fixable. However, you can create a bad block map to make your filesystem skip the bad blocks. If you currently have data on your disk, I would suggest using ddrescue to dump an image of the disk and recover files from it.
Tell the drive to do a secure erase. If there are still bad blocks after that, it is absolutely garbage.
Frankly you should never see bad blocks, but sometimes minor bad things happen and the drive has to tell you that this data is gone forever. If you write over those bad blocks at some point, the drive is supposed to remap them to spare blocks and carry on as if everything is okay. If it has run out of spare blocks, then the bad blocks stay forever. A secure erase might give the drive more wiggle room to re-allocate around a larger bad spot, IDK.
Hey, did you find a solution? I may have found something that could interest you!
Complementing @thebrain's answer: I totally wiped and fixed bad sectors on an old SSD drive I thought was borked because of a lot of unallocated pending sectors (I/O errors).
Keep in mind this is advanced stuff; it might not work in your case and could EVEN brick your hard drive. You will lose all your data and everything will be rewritten.
Manually rewrite sectors
https://leo.leung.xyz/wiki/Hdparm
Full wipe with "--security-erase-enhanced"
https://tinyapps.org/docs/wipe_drives_hdparm.html
This can take some time (3 hours in my case) and it will look like your terminal is stuck; don't worry, just wait until it finishes!
Again, this can be DANGEROUS! Only attempt such measures if you don't mind losing the hard drive (a rough sketch of the hdparm commands is below).
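For reference, the ATA secure erase sequence from those links looks roughly like this, assuming the drive is /dev/sdX and is reported as "not frozen" (the password "p" is a throwaway; it is cleared automatically when the erase completes). Note that many USB-SATA adapters do not pass ATA security commands through, so you may need the drive on an internal SATA port:
# Confirm the drive supports the security feature set and is "not frozen"
sudo hdparm -I /dev/sdX
# Set a temporary user password (required before an erase can be issued)
sudo hdparm --user-master u --security-set-pass p /dev/sdX
# Start the enhanced secure erase; this wipes EVERYTHING and can take hours
sudo hdparm --user-master u --security-erase-enhanced p /dev/sdX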
No, I am still working on this. Thanks for the advice. I was having trouble with hdparm because I didn't have enough information about which sectors are bad. I was trying to use ddrescue to make a map. Thanks for the resources.
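If it helps, one common way to build that map without keeping a copy of the data is to point ddrescue at /dev/null and keep only the map file (assuming the drive is /dev/sdX; this is a read-only scan and does not modify the drive):
# Read the whole drive, discard the data, and record unreadable areas in the map file
sudo ddrescue -f -n /dev/sdX /dev/null badsectors.map
The map file lists byte ranges with a status character; anything not marked '+' could not be read cleanly. Keep in mind these are byte offsets, not filesystem block numbers, so turning them into a badblocks-style list for mkfs takes an extra conversion step.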