
httpd consumes 220MB of RAM when printing a lot of output

  • Jeff Ambrosino
    Message 1 of 5, Sep 22, 2005
      I'm exporting a database table through a mod_perl2 handler. The
      problem is that for large tables, the size of the httpd process
      balloons to consume a lot of RAM. For example, a 299MB MySQL table
      (size of the .MYD file), which creates a 35MB export, causes httpd to
      consume about 220MB of RAM!

      My code is fairly straightforward, so I must be missing something
      about buffering or fetching:

      while ( my $rowref = $sth->fetchrow_arrayref ) {
          $r->print( $rowref->[0] );
          # ...more $r->print() calls for each field...
      }

      Thinking perhaps mod_perl is buffering the entire output, I've tried
      adding "$r->rflush" after each row's print, and I also tried setting
      "local $| = 1", but neither helped.

      Any ideas on how to keep memory utilization under control?

      thanks
      JB
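
      (For context, a self-contained sketch of the kind of handler described
      above, including the per-row rflush that was tried; the package, table,
      and column names are placeholders, not taken from the original post.)

      package My::Export;

      use strict;
      use warnings;
      use Apache2::RequestRec ();
      use Apache2::RequestIO  ();
      use Apache2::Const -compile => qw(OK);
      use DBI;

      sub handler {
          my $r = shift;
          $r->content_type('text/plain');

          my $dbh = DBI->connect( 'dbi:mysql:database=exportdb',
                                  'user', 'pass', { RaiseError => 1 } );
          my $sth = $dbh->prepare('SELECT col1, col2 FROM big_table');
          $sth->execute;

          while ( my $rowref = $sth->fetchrow_arrayref ) {
              $r->print( join( "\t", @$rowref ), "\n" );
              $r->rflush;    # push each row out to the client immediately
          }

          $sth->finish;
          return Apache2::Const::OK;
      }

      1;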
    • Malcolm J Harwood
      Message 2 of 5, Sep 22, 2005
        On Thursday 22 September 2005 11:06 am, Jeff Ambrosino wrote:

        > I'm exporting a database table through a mod_perl2 handler. The
        > problem is that for large tables, the size of the httpd process
        > balloons to consume a lot of RAM. For example, a 299MB MySQL table
        > (size of the .MYD file), which creates a 35MB export, causes httpd to
        > consume about 220MB of RAM!
        >
        > My code is fairly straightforward, so I must be missing something
        > about buffering or fetching:
        >
        > while ( my $rowref = $sth->fetchrow_arrayref ) {
        >     $r->print( $rowref->[0] );
        >     # ...more $r->print() calls for each field...
        > }

        Does it do the same thing if you don't print anything? I believe some of the
        DBDs cache the entire result set rather than getting it a chunk at a time.
        Which DBD are you using?




        --
        "You heard Mr. Garibaldi. You did the right thing."
        "Darling, I did the necessary thing. That is not always the same as
        the right thing."
        - Janice and Laura Rosen in Babylon 5: "The Quality of Mercy"
      • Geoffrey Young
        Message 3 of 5, Sep 22, 2005
          >> while ( my $rowref = $sth->fetchrow_arrayref ) {
          >>     $r->print( $rowref->[0] );
          >>     # ...more $r->print() calls for each field...
          >> }
          >
          >
          > Does it do the same thing if you don't print anything? I believe some of the
          > DBDs cache the entire result set rather than getting it a chunk at a time.
          > Which DBD are you using?

          It's been a while since I played with this, so I'm taxing my memory a bit,
          but Oracle supports cursor references which, if memory serves, keep the
          result set on the server. Does MySQL support something similar?

          --Geoff
        • Jeff Ambrosino
          Message 4 of 5, Sep 22, 2005
            I'm using:

            DBD-mysql-2.9004
            DBI-1.47
            (and Apache::DBI)

            Good question about MySQL cursors... I looked through the DBD::mysql
            docs and didn't find anything about cursors. So this means that if
            you query the whole table ("select * from ....") then you need to have
            as much RAM as the size of the table! (?!) I suppose a workaround
            would be to do multiple SELECTs for different key ranges, and print
            out each subset iteratively.

            JB
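
            (For what it's worth, a rough sketch of that key-range workaround,
            inside the handler; it assumes an integer primary key column named
            `id` plus placeholder table/column names, none of which come from
            the original post.)

            # Export in bounded chunks so only one key range of rows is ever
            # held in client-side memory at a time.
            my $chunk = 10_000;
            my ( $min, $max ) = $dbh->selectrow_array(
                'SELECT MIN(id), MAX(id) FROM big_table');

            my $sth = $dbh->prepare(
                'SELECT col1, col2 FROM big_table WHERE id >= ? AND id < ?');

            for ( my $start = $min; $start <= $max; $start += $chunk ) {
                $sth->execute( $start, $start + $chunk );
                while ( my $rowref = $sth->fetchrow_arrayref ) {
                    $r->print( join( "\t", @$rowref ), "\n" );
                }
            }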



            On 9/22/05, Malcolm J Harwood <mjhlist-modperl@...> wrote:
            > On Thursday 22 September 2005 11:06 am, Jeff Ambrosino wrote:
            >
            > > I'm exporting a database table through a mod_perl2 handler. The
            > > problem is that for large tables, the size of the httpd process
            > > balloons to consume a lot of RAM. For example, a 299MB MySQL table
            > > (size of the .MYD file), which creates a 35MB export, causes httpd to
            > > consume about 220MB of RAM!
            > >
            > > My code is fairly straightforward, so I must be missing something
            > > about buffering or fetching:
            > >
            > > while ( my $rowref = $sth->fetchrow_arrayref ) {
            > >     $r->print( $rowref->[0] );
            > >     # ...more $r->print() calls for each field...
            > > }
            >
            > Does it do the same thing if you don't print anything? I believe some of the
            > DBDs cache the entire result set rather than getting it a chunk at a time.
            > Which DBD are you using?
          • Perrin Harkins
            Message 5 of 5, Sep 22, 2005
              On Thu, 2005-09-22 at 12:02 -0400, Jeff Ambrosino wrote:
              > I'm using:
              >
              > DBD-mysql-2.9004
              > DBI-1.47
              > (and Apache::DBI)
              >
              > Good question about MySQL cursors... I looked through the DBD::mysql
              > docs and didn't find anything about cursors. So this means that if
              > you query the whole table ("select * from ....") then you need to have
              > as much RAM as the size of the table! (?!)

              No. See "mysql_use_result" in the DBD::mysql docs.

              Note that DBD::Oracle has a similar feature for setting the number of
              rows transferred on each fetch, and that's much easier than using
              cursors.

              - Perrin
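
              (A minimal sketch of the mysql_use_result approach Perrin mentions;
              the table and column names are placeholders. Inside a handler you
              would print via $r as in the earlier examples.)

              use DBI;

              my $dbh = DBI->connect( 'dbi:mysql:database=exportdb',
                                      'user', 'pass', { RaiseError => 1 } );

              # With mysql_use_result, DBD::mysql streams rows from the server
              # as they are fetched instead of buffering the whole result set
              # in the httpd process.
              my $sth = $dbh->prepare( 'SELECT col1, col2 FROM big_table',
                                       { mysql_use_result => 1 } );
              $sth->execute;

              while ( my $rowref = $sth->fetchrow_arrayref ) {
                  print join( "\t", @$rowref ), "\n";
              }
              $sth->finish;

              (The trade-off is that the connection is tied up until the last
              row has been fetched, so it's best suited to export-style jobs
              like this one.)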