Bad hard disk drive and robocopy


Solution 1

As you've noticed, /R:3 does three retries per file instead of the default of one million(!).

/W:10 waits 10 seconds between retries (the default is 30).

If you're doing a local copy, I'd do two retries with a 1-second wait between them (I keep the 30-second wait for network copies in case I knock the cable out).

Also, do not use /MIR, as it will delete files in the destination that don't exist in the source (which might matter if you run this command several times); /E is sufficient.

If you're using /COPYALL, remember to run the command prompt as administrator, or it probably won't be able to set all the ACLs properly.
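Putting that together for the drives in the question (G: to J:), the command might look like the following; the exact retry and wait values are a judgment call:

robocopy G: J: /E /COPYALL /ZB /R:2 /W:1

/ZB is kept from the original command: it tries restartable mode first and falls back to backup mode when access is denied.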

Afterwards, chkdsk /r will attempt to recover readable data from bad sectors.
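For example, against the failing source drive from the question (run from an elevated prompt; /r locates bad sectors and recovers readable information, and can take hours on a large disk):

chkdsk G: /r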

Solution 2

I wrote a little PowerShell script to ignore device I/O errors during the copy (replacing unreadable data with zeros). An updated and more advanced version is available here. The code below is one of the first versions and is kept as simple as possible:

## .SYNOPSIS
#########################
## This script copies a file, ignoring device I/O errors that abort the reading of files in most applications.
##
## .DESCRIPTION
#########################
## This script will copy the specified file even if it contains unreadable blocks caused by device I/O errors and the like. Any block that cannot be read is replaced with zeros. The size of that block is determined by the buffer, so to optimize for speed, use a large buffer; to optimize for accuracy, use a small buffer, the smallest sensible value being the cluster size of the partition where the source file resides.
##
## .OUTPUTS
#########################
## Errorcode 0: Copy operation finished without errors.
##
## .INPUTS
#########################
## The source file path can be supplied from the pipeline.
##
## .PARAMETER SourceFilePath
## Path to the source file.
##
## .PARAMETER DestinationFilePath
## Path to the destination file.
##
## .PARAMETER Buffer
## It makes absolutely no sense to set this smaller than the cluster size of the partition. Setting it lower than the cluster size might force rereading a bad sector within a cluster multiple times; adjusting the retry behavior is the better approach. Also, System.IO.FileStream buffers input and output for better performance. (http://msdn.microsoft.com/en-us/library/system.io.filestream.aspx).
##
## .EXAMPLE
## .\Force-Copy.ps1 -SourceFilePath "file_path_on_bad_disk" -DestinationFilePath "destination_path"
##
#########################

[CmdletBinding()]
param(
   [Parameter(Mandatory=$true,
              ValueFromPipeline=$true,
              HelpMessage="Source file path.")]
   [string][ValidateScript({Test-Path -LiteralPath $_ -Type Leaf})]$SourceFilePath,
   [Parameter(Mandatory=$true,
              ValueFromPipeline=$true,
              HelpMessage="Destination file path.")]
   [string][ValidateScript({ -not (Test-Path -LiteralPath $_) })]$DestinationFilePath,
   [Parameter(Mandatory=$false,
              ValueFromPipeline=$false,
              HelpMessage="Buffer size in bytes.")]
   [int32]$BufferSize=512*2*2*2 # 4096: the default Windows cluster size.
)

Write-Host "Working..." -ForegroundColor "Green";

# Create the read buffer and a counter for unreadable bytes.
$Buffer = New-Object System.Byte[] ($BufferSize);
$UnreadableBytes = 0;

# Fetch the source and destination files.
$SourceFile = Get-Item -LiteralPath $SourceFilePath;
$DestinationFile = New-Object System.IO.FileInfo ($DestinationFilePath);

# Copying starts here.
$SourceStream = $SourceFile.OpenRead();
$DestinationStream = $DestinationFile.OpenWrite();

while ($SourceStream.Position -lt $SourceStream.Length) {
    try {

        $ReadLength = $SourceStream.Read($Buffer, 0, $Buffer.Length);
        # If the read succeeds, the stream position advances by the number of bytes read. If an exception occurs, the position is unchanged. (http://msdn.microsoft.com/en-us/library/system.io.filestream.read.aspx)

    }
    catch [System.IO.IOException] {

        Write-Warning "System.IO.IOException at byte $($SourceStream.Position): $($_.Exception.Message)"
        Write-Debug "Debugging...";

        # Write zeros in place of the unreadable block and skip past it.
        $ShouldReadSize = [int][math]::Min([int64] $BufferSize, $SourceStream.Length - $SourceStream.Position);

        $DestinationStream.Write((New-Object System.Byte[] ($BufferSize)), 0, $ShouldReadSize);
        $SourceStream.Position = $SourceStream.Position + $ShouldReadSize;

        $UnreadableBytes += $ShouldReadSize;

        continue;
    }
    catch {
        Write-Warning "Unhandled error at byte $($SourceStream.Position): $($_.Exception.Message)";
        Write-Debug "Unhandled error. You should debug.";
        throw $_;
    }

    $DestinationStream.Write($Buffer, 0, $ReadLength);
}

$SourceStream.Dispose();
$DestinationStream.Dispose();

if ($UnreadableBytes -ne 0) {Write-Host "$UnreadableBytes bytes were unreadable and replaced with zeros." -ForegroundColor "Red";}

Write-Host "Finished copying $SourceFilePath!" -ForegroundColor "Green";
Author: Thalys

Updated on September 17, 2022

Comments

  • Thalys
    Thalys almost 2 years

    After my hard disk drive gave me CRC errors, I wanted to copy one drive to another, so I picked up a new 1 TB hard disk drive.

    I am using the command:

    robocopy G: J: /MIR /COPYALL /ZB
    

    First it tried copying the file a few times (I didn't count; it's no longer in my window) and got an access denied error (error 5). Then it tried again and locked up. I tried copying that specific file (14 MB) manually, and Windows said "can't read from source file or disk".

    I started robocopy again. Hopefully it will skip the file after a failed attempt or two, but what options can I use to tell it to continue to the next file if a copy doesn't work? That seemed to be what it was doing, but on this last file it retried more than four times and finally locked up.

    I'm open to other copy solutions, though I do prefer built-in ones. I am using Windows 7.

    Also, how might I do this without the /MIR option? Is /S /E good enough? Flag reference here.

    I see I can control retries with /R:<Amount>, but I am still open to alternative solutions.

    It seems to take a few minutes before it decides the attempt failed. Can I shorten that? The file has been stuck at 20.8% for quite a while now.

    I tried a data recovery app. It tried to recover the data and did NOT mark it as invalid or corrupted. I did get a message saying sector XYZ had an I/O error ("continue?"), but that didn't give me the name of the corrupted file. I don't want this; the best solution for me is getting all the good files plus the names of the invalid files.

  • Zack
    Zack about 14 years
    Would it try to recover the bad data and give me an invalid, partially recovered file? I'd rather not have that, or at least have it marked as a partial/invalid file.
  • Zack
    Zack about 14 years
    It looks like it will try to recover the file and will not mark it as invalid/corrupted. I don't want this. But good idea with the data recovery app.
  • jacojburger
    jacojburger over 5 years
    Not much more; I included /XO and a text output file. It's always good to verify copy jobs, or to check why something went wrong. His drive is degraded, so restarting a copy job with /XO will skip already-copied files.
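    A resumed run along those lines might look like the following (/XO skips source files that are older than the copy already in the destination; the log path is just an illustration):

    robocopy G: J: /E /COPYALL /ZB /R:2 /W:1 /XO /LOG:C:\robocopy-log.txt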