
ds transfer yields larger file

  • Charles White
    Message 1 of 3, Oct 1, 2007

      I attempted to run a ds_transfer on SW 3.2.1 this weekend and got a larger result file than the source file I was compressing. Has anyone seen this behaviour before? My last ds_transfer was roughly 3 months ago. This occurred on my rwo.ds superfile set: the matching gdb superfile set shrank by 7 GB, but the rwo netted out 1.6 GB larger, going from 44 GB to 45.6 GB after the transfer. Being wary of the result, I went back to my pre-compression cold backup. Other datasets compressed as expected, so I'm assuming some sort of corruption, or a problem in some of my alternatives that hasn't raised its ugly head yet.

      The dst object I'm using is below; nothing special, mainly just setting up 4 additional components for the superfile structure and using singleuser_nf to speed the transfer up. Users aren't able to access the server during the operation.

      dst << ds_transfer.new(
          :hot?, hot_backup,          # <- this is set to false
          :from_ds_file, a_file,
          :to_ds_file, a_file,
          :searchpath, {"".concatenation(from_disk, a_dir)},
          :to_params, {:directory, "".concatenation(to_disk, a_dir),
                       :concurrency_mode, :singleuser_nf},
          # one superfile spec listing the four additional component files
          :superfiles, {{_unset, _unset, 16000,
                         {{"".concatenation(to_disk, a_dir, "\", a_file_no_ds, "-1.ds"), 16000},
                          {"".concatenation(to_disk, a_dir, "\", a_file_no_ds, "-2.ds"), 16000},
                          {"".concatenation(to_disk, a_dir, "\", a_file_no_ds, "-3.ds"), 16000},
                          {"".concatenation(to_disk, a_dir, "\", a_file_no_ds, "-4.ds"), 16000}}}})
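      As an aside, the four component entries above differ only in their numeric suffix, so the list could be built in a loop instead of being hard-coded. The following is only a rough sketch: n_components and comp_size are made-up names, the spec layout is copied from the call above, and to_disk, a_dir and a_file_no_ds are assumed to already be in scope.

      # Sketch only: build the component list in a loop rather than
      # hard-coding four entries.
      _block
          _local n_components << 4
          _local comp_size << 16000
          _local comps << simple_vector.new(n_components)
          _for i _over 1.upto(n_components)
          _loop
              # e.g. <to_disk><a_dir>\<a_file_no_ds>-1.ds
              comps[i] << {write_string(to_disk, a_dir, "\", a_file_no_ds, "-", i, ".ds"), comp_size}
          _endloop
          # comps could then stand in for the literal component vector
          # inside the :superfiles argument above.
      _endblock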

       

      Thanks,

      Charles

       

    • jwakefield@washgas.com
      Message 2 of 3, Oct 1, 2007
        I had the same experience with 3.3. The culprits were the checkpoints.

        John
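
        If checkpoints are the suspect, it is worth listing them on the rwo view before re-running the transfer. The sketch below is an assumption rather than a verified 3.2.1 recipe: rwo_view is a made-up name for an already-open ds_version_view on the rwo dataset, and the checkpoints() iterator should be checked against the core documentation for your release.

        # Sketch only -- assumes rwo_view is an open ds_version_view on the
        # rwo dataset and that it answers a checkpoints() iterator; verify
        # the method names for your release before relying on this.
        _block
            _local n << 0
            _for cp _over rwo_view.checkpoints()
            _loop
                write(cp)
                n +<< 1
            _endloop
            write("checkpoints found: ", n)
        _endblock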



        "Charles White"
        <charles.white@se
        coenergy.com> To
        Sent by: <sw-gis@yahoogroups.com>
        sw-gis@yahoogroup cc
        s.com
        Subject
        [sw-gis] ds transfer yields larger
        10/01/2007 10:53 file
        AM


        Please respond to
        sw-gis@yahoogroup
        s.com






        I attempted to run a ds_transfer on SW 3.2.1 this weekend and got a larger
        result file than the source file I was compressing. Has anyone seen this
        behaviour before? My last ds_transfer was around roughly 3 months ago.
        This occoured on my rwo.ds superfile set. The matching gdb superfile set
        shrank by 7 gig. The rwo netted out 1.6 Gig larger, from 44 Gig to 45.6
        after the transfer. Being wary of the result I went back to my before
        compression cold backup. Other datasets compressed as expected. I’m
        assuming some sort of corruption or problem in some of my alternatives that
        hasn’t raised its ugly head yet. The dst object I’m using is below,
        nothing special; mainly just setting up 4 additional compenents for the
        superfile structure and using singleuser_nf to speed the transfer up.
        Users aren’t able to access the server during the operation.

        dst << ds_transfer.new(

        :hot?,hot_backup, # <- this is set to false

        :from_ds_file,a_file,

        :to_ds_file,a_file,

        :searchpath, {"".concatenation(from_disk,a_dir)},

        :to_params, {:directory,
        "".concatenation(to_disk,a_dir),:concurrency_mode,:singleuser_nf},

        :superfiles, { { _unset, _unset, 16000,
        { {"".concatenation(to_disk,a_dir,"\",a_file_no_ds,"-1.ds"), 16000} ,
        {"".concatenation(to_disk,a_dir,"\",a_file_no_ds,"-2.ds"), 16000},
        {"".concatenation(to_disk,a_dir,"\",a_file_no_ds,"-3.ds"),
        16000},
        {"".concatenation(to_disk,a_dir,"\",a_file_no_ds,"-4.ds"),
        16000}
        } } }

        Thanks,
        Charles
      • jwakefield@washgas.com
        Message 3 of 3, Oct 1, 2007
          Btw, Charles: if there is one product I would endorse, it would be realworld dscompressor. I notice you still hard-code your compresses. We have two licenses, so we split the compress into two to save compress time. dscompressor is GUI driven and makes compression so simple.

          John


