
Re: dcfldd and bad sectors

  rossetoecioccolato
    Message 1 of 2, May 23, 2008
      BGrundy wrote:

      > The bs=512 option has no effect on this.
      > The test drive had 4 bad sectors. All the Linux based dd
      > tools missed between 200 and 232 sectors. Other tools
      > missed just the 4. When the dd commands were run with
      > the device associated with /dev/raw, they correctly
      > reported 4 bad sectors. The only Linux util that did
      > not suffer the problem was GNU ddrescue (not dd_rescue),
      > as long as the -d flag was used ("direct access" - like
      > using /dev/raw).
      > I really think this has to do with kernel caching. Hence
      > the correct output with /dev/raw. Just a theory. <

      DDrescue with '-d' uses the O_DIRECT flag (unbuffered IO) that was
      introduced with Linux 2.4 kernels (google for "O_DIRECT"):

      int do_rescue()
        {
        const int ides = open( iname, O_RDONLY | O_DIRECT );
        ...
        }

      Most modern operating systems use "buffered" disk IO by default. In
      essence, the operating system reads from the drive using a default
      algorithm and then the application reads from the OS buffer rather
      than directly from the hardware. The DD block size (bs=512) has no
      effect on how data is actually read when using buffered IO. The
      operating system uses its own algorithm, which typically involves
      reading more than one sector at a time for performance reasons.

      Contemporary operating systems also permit you to override the
      default behavior by specifying a flag such as O_DIRECT or
      FILE_FLAG_NO_BUFFERING on Windows. With "direct" or "unbuffered" IO,
      data is read directly into the application buffer exactly as it is
      requested (so bs=512 actually controls how data is read from the
      drive).

      Then there are the design decisions made by drive manufacturers. For
      the drives that I tested, if you request 4 sectors and 1 of the 4
      sectors is bad then you will successfully read 0 sectors. If you
      request each of the 4 sectors one at a time then you will get 3
      sectors and fail to read 1 sector. But a different drive
      manufacturer or architecture (e.g. a flash drive) could implement
      things differently.

      So what you have is a complex interaction between hardware (disk
      drive), OS and application design decisions. Buffered IO greatly
      simplifies things for application developers. But then you have to
      live with the default OS algorithm which is usually optimized for
      performance. Direct IO improves performance and provides greater
      control over how data is read from the drive; but there are special
      rules for read access that are imposed by the limitations of the
      underlying hardware.

      DD was written before the modern buffered vs. unbuffered IO
      distinction. I rather suspect that it predates the advent of
      buffered IO, if there is someone around who is able to remember back
      that far. Adapting DD to use unbuffered (direct) IO was no simple
      task, not least because DD is also supposed to be able to read
      regular files, which may be encrypted, compressed or sparse. That
      is one reason why we chose to rewrite the current released version
      of FAU-DD from scratch.

      Using 'conv=noerror' (or 'conv=noerror,sync' on *nix) is the correct
      algorithm when properly implemented. But what is "proper" could
      change with the next hot fix or service pack or generation of
      drives. So we need to constantly test and retest. Thanks for taking
      the time to test this. Your efforts will benefit the entire
      community.

