
Bug in tempfile handling

  • Todd A. Jacobs
    Message 1 of 4, Jan 6, 2006
      I'm running nanoblogger/stable 3.1-3 on a Debian system, and wanted to
      report a bug that can cause data loss.

      The problem is that nb cleans up all files in /tmp, even if there's been
      an error. In my case, the blog directory's permissions had been
      accidentally changed, and was not writable by the user running "nb -a"
      to create a new entry.

      However, nb attempted to update all files anyway, generating various
      bash errors (as it should). But it didn't check to ensure that the new
      entry had been written to blog_dir before deleting the files in /tmp,
      which resulted in the entire entry being lost and unrecoverable.

      It seems to me that nb should:

      1. Ensure that blog_dir is writable before running the update
      scripts.

      2. Verify that the entry was successfully written to blog_dir/data
      before removing any temporary files.

      3. Have a recovery option of some sort in the event that it bails
      out at some point.

      I haven't looked at the source in detail yet, but #1 is probably as easy
      as:

      if [ -w "$NB_DATA_DIR" ]; then
          # Copy tmpfile to data directory here. Exit with non-zero status
          # if it fails, or perhaps handle the error more gracefully
          # somehow (report name of temp file for manual recovery,
          # perhaps?).
      fi

      while #2 probably just needs to have 0 (the EXIT pseudo-signal)
      removed from the trap handler, so the cleanup doesn't run
      unconditionally on error.
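
      Something along these lines might cover #2 and #3 together (an
      untested sketch; save_entry and the variable names are just
      illustrative, not taken from the actual nb source):

```shell
#!/bin/sh
# Sketch: only discard the temp file once the entry has actually landed
# in the data directory; otherwise keep it and tell the user where it
# is. The function and variable names here are illustrative only.

save_entry() {
    # $1 = temp file holding the entry, $2 = blog data directory
    if [ ! -d "$2" ] || [ ! -w "$2" ]; then
        echo "nb: $2 is not writable; entry kept in $1 for manual recovery" >&2
        return 1
    fi
    cp "$1" "$2"/ && rm -f "$1"
}
```

      The trap handler would then call something like save_entry and
      leave the temp file alone on failure, instead of rm'ing it
      unconditionally.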

      --
      Re-Interpreting Historic Miracles with SED #141: %s/water/wine/g
    • Kevin W.
      Message 2 of 4, Jan 6, 2006
        Todd,

        Thank you for the bug report, but I wish you would've
        submitted it to my sourceforge.net project's page
        instead, or maybe even bugs.debian.org.

        I believe this has already been addressed in more
        recent versions. My solution was simple: new entries
        get written to $BLOG_DIR and moved to $BLOG_DIR/data
        when editing is complete. Failed editing sessions can
        be identified and recovered from files of the form
        nb_edit-newentry-YYYY-MMM-DDTHH_MM_SS.txt in
        $BLOG_DIR. The file can then be edited directly and
        imported with the new --file option. This way the user
        can decide what's best to do, but at least they have
        the option of recovery.

        Using -w sounds like a good idea though.
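
        For the record, the recovery filename is built roughly
        like this (a sketch; the exact date format string in
        the released code may differ):

```shell
#!/bin/sh
# Sketch of the recovery naming scheme described above; the exact
# format string in the released nanoblogger code may differ.
BLOG_DIR="${BLOG_DIR:-$HOME/blog}"
stamp=$(date +%Y-%b-%dT%H_%M_%S)              # e.g. 2006-Jan-06T16_48_23
entry="$BLOG_DIR/nb_edit-newentry-$stamp.txt"
echo "$entry"
```

        A failed session's file could then be re-imported with
        something like: nb --file "$entry" -a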

        Thanks,
        Kevin

      • Todd A. Jacobs
        Message 3 of 4, Jan 6, 2006
          On Fri, Jan 06, 2006 at 04:48:23PM -0800, Kevin W. wrote:

          > Thank you for the bug report, but I wish you would've submitted it to
          > my sourceforge.net project's page instead or maybe even
          > bugs.debian.org.

          I usually try and discuss things on lists first, just in case they've
          already been addressed or are considered "features." :) If it hasn't
          already been done, a Debian bug is probably a good idea too, although
          this problem probably doesn't warrant a new package for Sarge.

          > I believe this has already been addressed in more recent versions. My
          > solution was simple: new entries get written to $BLOG_DIR and moved to
          > $BLOG_DIR/data when editing is complete.

          The problem here is the same, though. Try this:

          vim /tmp/foo/bar/baz

          Assuming that no such directories exist, it will *still* open the
          editing session, and you won't see any sort of error until you
          attempt to write the file to disk. So, vim will still open a file
          under $BLOG_DIR for writing, even if the user doesn't have write
          permissions there.
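
          So the check has to happen before the editor is ever launched,
          e.g. (a sketch; the function name is made up):

```shell
#!/bin/sh
# Sketch: fail before launching $EDITOR if the target directory is
# missing or unwritable, so the error surfaces before any typing
# happens. can_edit_in is a hypothetical helper, not from nb itself.
can_edit_in() {
    [ -d "$1" ] && [ -w "$1" ]
}
```

          nb could call something like this on $BLOG_DIR (and
          $BLOG_DIR/data) right at startup and bail out with a clear
          message.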

          --
          Re-Interpreting Historic Miracles with SED #141: %s/water/wine/g
        • Kevin W.
          Message 4 of 4, Jan 6, 2006
            Good point! I think your idea of checking write
            permissions combined with storing temporary entries to
            $BLOG_DIR will solve this issue.

            Thanks,
            Kevin
