How to *quickly* get a list of files that have bad sectors/blocks/clusters/whatever?

Solution 1

As no one ever actually answered your question, the following not-exactly-lightning-fast method may be the quickest way to get what you are looking for.

  1. The utilities you will need run under Linux, so you first need to create a bootable Linux USB key or CD (or attach the disk to a Linux machine).

  2. You then need to run ddrescue from the GNU ddrescue package. This creates a "mapfile", which is essentially a list of the bad sectors on your disk. ddrescue has many options, which among other things control how hard it works to read/recover data from a bad sector. If you want to treat any sector that gives trouble as "bad", and don't want ddrescue to actually recover anything, you can use the "-n" option and specify /dev/null as the target. This is pretty fast: ddrescue reads once through all the sectors of the disk in order, and the mapfile output contains a list of the sectors where the read failed.

  3. You then need to run a utility called ddru_ntfsfindbad on the mapfile and the disk, and this will output what you want: a list of the files on the disk that have parts in one of the bad sectors.

NOTES, however:

  1. If a drive is failing, reading it at all is very likely to make it fail further. So it is quite possible (some would say "close to certain") that some/many/lots of sectors that were good before you read the disk twice via this procedure are now bad. The smart thing to do would be to have a good spare drive on hand and actually recover the data while performing the steps above. If you do this, of course, you might want to use ddrescue's ability to try extra hard to get the data off hard-to-read sectors.

  2. ddru_ntfsfindbad's manual says that you CANNOT run it on the original bad drive UNLESS the file system is/was NTFS. So you're OK in your case, but it will almost certainly be faster to run it on a ddrescue-recovered drive rather than the original. And if the bad sectors fall in certain filesystem metadata, you really will need to do this.
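As an aside, the mapfile ddrescue writes is plain text, so it is easy to post-process yourself. A minimal sketch of extracting the bad areas as sector ranges (this assumes GNU ddrescue's documented mapfile format, where data lines are `position size status` in hexadecimal, status `-` marks a bad area, and the first non-comment line is the current-position line):

```python
def bad_sector_ranges(mapfile_text, sector_size=512):
    """Extract bad areas from a GNU ddrescue mapfile as inclusive
    (first_sector, last_sector) ranges.

    Data lines look like '0x00010000  0x00000200  -' (position, size,
    status, all in hex); status '-' means the area is bad. The first
    non-comment line only records the current position and is skipped.
    """
    data = [ln for ln in mapfile_text.splitlines()
            if ln.strip() and not ln.lstrip().startswith('#')]
    ranges = []
    for line in data[1:]:            # skip the current-position line
        fields = line.split()
        if len(fields) < 3:
            continue
        pos, size, status = int(fields[0], 16), int(fields[1], 16), fields[2]
        if status == '-':
            ranges.append((pos // sector_size,
                           (pos + size - 1) // sector_size))
    return ranges
```

Feed it the contents of the mapfile (e.g. `bad_sector_ranges(open("rescue.map").read())`, where `rescue.map` is whatever name you gave ddrescue) and you get the sector ranges to look up.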

I realize that the original question is very old, but I had this problem recently and thought that others might want to know the answer to the original question.

Solution 2

When it comes to bad sectors on a disk, if there is no backup, the first thing I do is take a backup image of it using a tool called Drive Snapshot:

  Drive Snapshot
  http://www.drivesnapshot.de/

When this tool encounters bad sectors, it keeps track of them in a separate text file (one bad sector per line, so you can simply count the number of lines in the file to determine the total number of bad sectors), which is also used as a cross-reference to find out which files used those sectors.
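Since the report is one sector per line, totalling the bad sectors takes only a few lines of any scripting language. A minimal sketch in Python (the file name is hypothetical, and blank lines are ignored in case the tool writes a trailing newline):

```python
def count_bad_sectors(report_text):
    """Count non-empty lines in a one-sector-per-line report file."""
    return sum(1 for line in report_text.splitlines() if line.strip())
```

Usage would be something like `count_bad_sectors(open("badsectors.txt").read())`.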

Solution 3

I had the same question and did some research: http://www.disktuna.com/finding-out-which-file-is-affected-by-a-bad-sector/.

I am assuming Windows OS and NTFS file system.

So, a bad sector can be part of:

  • Unallocated space. We can ignore this.

  • File system structures. Normally chkdsk should take care of this. Depending on where the file system damage is, chkdsk may not run at all; in that case you'd run a surface scan on the hard disk itself.

  • System Files affected: You could use the Windows System File Checker (SFC.exe). At the command prompt, type the following command, and then press ENTER: sfc /scannow.

  • User data: The Microsoft support tool NFI.exe can be used to convert an LBA sector address to a file path. This way you can determine which files need to be restored from backup after sector reallocation.

    Example:

    C:\Users\admin\Downloads>nfi \Device\Harddisk0\DR0 28521816
    NTFS File Sector Information Utility.
    Copyright (C) Microsoft Corporation 1999. All rights reserved.
    
    
    ***Physical sector 28521816 (0x1b33558) is in file number 5766 on drive C.
    \IMAGES\win7HDD.vhd
    
  • The easiest way is probably HD Sentinel. After running a surface scan HD Sentinel will display a list of files affected by bad sectors.

Solution 4

If you already have a list of bad sectors, the most convenient tool I found to determine which files are potentially affected is Piriform's Defraggler. When clicking on a given block on the volume's map, it displays a list of the files contained in that block (even non-fragmented files). And when clicking on a file name in the “File list” tab (which only displays fragmented files), it highlights all the blocks containing at least one sector belonging to that file. Unfortunately, there is no numerical indication of offset / sector / cluster intervals, and no way to jump directly to a particular offset / sector / cluster value.

(I wrote the company about two years ago to request a few enhancements which would make this great feature more practically usable in this kind of situation, and got a kind reply thanking me for my comments and suggestions; I haven't updated Defraggler in a while, so perhaps some of my suggestions have been implemented since then.)

I provided some more methods here: How do I find if there are files at a specific bad sector?

– nfi.exe X: [sector number]

– fsutil volume querycluster X: [cluster number]

With both of those command line tools it should be relatively easy to write a script so as to load each line of a list of sectors as input and get a list of files as output.

– HD Sentinel, but with a major caveat: it will actually try to access each requested sector and display its contents, which may temporarily freeze the system and worsen the drive's condition.

– R-Studio (same issue as HD Sentinel)

– WinHex (same issue as HD Sentinel)
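The scripting idea above can be sketched in Python. This is only an illustration, not a tested tool: the helper names are mine, the sector and cluster sizes are assumptions (check them with `fsutil fsinfo ntfsinfo X:`), and note that SMART logs report disk-relative LBAs while fsutil wants volume-relative clusters, so the partition's starting offset must be subtracted first:

```python
import subprocess

SECTOR_SIZE = 512     # bytes per sector (assumption; check your drive)
CLUSTER_SIZE = 4096   # bytes per NTFS cluster (assumption; check with fsutil)

def sector_to_cluster(volume_relative_sector,
                      sector_size=SECTOR_SIZE, cluster_size=CLUSTER_SIZE):
    """Map a volume-relative LBA sector to the NTFS cluster holding it."""
    return volume_relative_sector * sector_size // cluster_size

def files_for_sectors(volume, sectors):
    """For each bad sector, ask fsutil which file owns its cluster.

    Must be run from an elevated prompt on Windows; `volume` is e.g. 'X:'.
    Sector numbers must already be relative to the volume start.
    """
    results = {}
    for sector in sectors:
        cluster = sector_to_cluster(sector)
        proc = subprocess.run(
            ["fsutil", "volume", "querycluster", volume, str(cluster)],
            capture_output=True, text=True)
        results[sector] = proc.stdout.strip()
    return results
```

Feeding it the sector list from a scan log then reduces the whole job to one loop.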


Author: user3417528
Updated on September 17, 2022

Comments

  • endolith
    endolith over 8 years
    "is there some tool or procedure that will try reading each file, and upon hitting a bad block, just tell me about it and skip to the next file?" This is exactly what ddrescue does (gnu.org/software/ddrescue). You can run it from a Linux live USB stick such as SystemRescueCd; it will skip bad sectors and read everything it can first, then go back and retry the bad sectors repeatedly.
  • Moab
    Moab over 13 years
    +1. After this is done, run SpinRite as suggested by happy_soil; it may repair bad sectors or recover data and move it to good sectors. When SpinRite is done, make another image using Drive Snapshot.
  • DavidPostill
    DavidPostill over 7 years
    Please quote the essential parts of the answer from the reference link(s), as the answer can become invalid if the linked page(s) change.
  • Joep van Steen
    Joep van Steen over 7 years
    Ok, will do. Stand by ;)
  • Chloe
    Chloe over 6 years
    This doesn't answer the question: How to get a list of files with bad sectors? A list of bad sectors is not the same as a list of file names.
  • Chloe
    Chloe over 6 years
    This doesn't explain how to get the files with bad sectors.
  • GabrielB
    GabrielB about 5 years
    CHKDSK should NEVER be used on a HDD suspected of having physical issues. It only fixes logical issues as far as the NTFS filesystem is concerned, and it might further increase the amount of damage without recovering a single byte in the process. See the reply by Scott Petrack for the best course of action in that kind of situation. As for regularly assessing the condition of storage devices on Windows systems, I highly recommend HD Sentinel: commercial software, but worth every penny considering that it can warn of an impending disaster at the first signs of trouble, and thus prevent it.
  • GabrielB
    GabrielB about 5 years
    Huge drawback of HD Sentinel for that particular feature: it will actually try to access the defective sectors and display their contents when asked which files they belong to. Piriform's Defraggler is excellent for that purpose, in combination with the LBA values of bad sectors provided by HD Sentinel during its scan: point at a block on the map and it lists which files are in that area. But the first step should be cloning/imaging, and as Scott Petrack mentioned, it's possible to get a list of files affected by bad sectors by using ddru_ntfsfindbad in combination with ddrescue.
  • GabrielB
    GabrielB about 5 years
    DO NOT USE SPINRITE ON A FAILING HDD. Its ability to "refresh", let alone "fix", is dubious at best, and while it's running, which is highly stressful for an already defective drive, not a single byte of data is actually recovered. It may be used at the very end, once a full clone has been made with ddrescue / HDDSuperClone, to attempt to salvage some of the sectors that were skipped (but don't count on it, most likely it will just force them to be reallocated, hence the original data will be lost anyway). See Scott Petrack's reply for the best course of action in such a situation.
  • GabrielB
    GabrielB about 5 years
    nfi.exe does not have this issue: it gets its results by analysing the MFT and does not attempt to access the requested sectors, but there seems to be an issue with values beyond 2^31 (2147483648). superuser.com/questions/1267334/… The native multi-purpose Windows tool fsutil doesn't seem to have that issue, and its output is more streamlined, but it's slower (which can be a problem if there are many clusters to query). With nfi use sector numbers, but with fsutil use cluster numbers.
  • mFeinstein
    mFeinstein almost 5 years
    Can this be run in the new WSL on Windows?
  • Klaidonis
    Klaidonis over 4 years
    lol, for operations like this, you want the safest approach possible in terms of software stability and potential bugs.