
Re: Huge files SSD development

  • ivanagugu
    Message 1 of 59 , Aug 1, 2009
      Just be careful: the life cycle of SSD disks goes down rapidly with repeated writing and deleting of files, so check the disk's documentation carefully...
      At the very least, do not use that disk for backups.

      --- In magicu-l@yahoogroups.com, magic@... wrote:
      >
      > Today, I started looking for a machine to handle the 400 million
      > records from which I will have to create 800 million ASCII files in
      > October. As the current program takes 2 days to read the 14 million
      > record database and create 28 million ASCII files, it would take me
      > 57 days to do the same for the 400 million records using the current machine.
      >
      > And since we have established the remaining performance issues to be
      > directly resulting from disc access and file manipulation, I started
      > inquiring into faster hard drives. To my surprise, I found out that
      > there are now available 250 GIG solid state drives (SSD)... which is
      > essentially a huge USB key. I was told the SSD drive access is
      > INSTANTANEOUS since there are no mechanical parts and therefore no
      > mechanical access, hence making them exponentially faster than
      > traditional drives. They are $900 per 250 GIG at the moment, but I
      > was told that the prices are dropping fast.
      >
      > The new SSD drives coupled with a double quad processor and 64 GIG of
      > on board RAM should make the process fly. I will let you all know what happens.
      >
      > The new SSD drive technology will save A LOT of time (I hope)!
      >
      > Andy
      >
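      [Editor's note: the 57-day figure quoted above follows from simple linear scaling of the current run's throughput. A rough back-of-envelope sketch, using only the numbers given in the message (14 million records in 2 days, 400 million expected):]

      ```python
      # Sanity-check the projected runtime by linear scaling of throughput.
      current_records = 14_000_000    # records in the current database
      current_days = 2                # time the current run takes
      target_records = 400_000_000    # records expected in October

      rate = current_records / current_days      # records processed per day
      projected_days = target_records / rate     # days at the same rate

      print(f"throughput: {rate:,.0f} records/day")
      print(f"projected:  {projected_days:,.1f} days")   # ≈ 57 days, as quoted
      ```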
    • Sai Krishna Bolisetti
      Message 59 of 59 , Aug 2, 2009
        Dear Heidi,

        I am looking for both.

        I will not be able to attend your class. Thank you for the information.

        Thanks & Regards,
        Sai
        --- On Thu, 7/30/09, Heidi Schuppenhauer <heidis13@...> wrote:


        From: Heidi Schuppenhauer <heidis13@...>
        Subject: Re: [magicu-l] Any body Have video file for Demo of edeveloper V10
        To: magicu-l@yahoogroups.com
        Date: Thursday, July 30, 2009, 12:49 PM

        I'm not clear what you are asking. By "Web based application", do you mean a
        RIA application? Or an HTML-based browser application?

        I don't have a video of either one, specifically, but can probably put one
        together after the first week in August. I do have a class on the HTML
        style applications you can buy, but it doesn't have RIA, and it isn't
        in V10. However, the HTML style applications are built about the same
        in V10/UP as they were in V8/V9. They are rather fast to build, but
        a lot of the work is in putting together the HTML, which can be very
        text-intensive. In our applications, the HTML part is done by the
        art department of the company in question, while we provide the real-time
        data via UP. You won't get a lot of speed improvement with UP over V8,
        in terms of programming time. UP has better programming tools though,
        like better cross referencing, debugging, and source control.

        RIA web applications are a whole different animal. In that case, you are
        doing programming that is very much like client/server programming, but
        the deployment is a bit different (and some would say more challenging:
        but mainly it's a different set of challenges than client/server). But the
        programming in RIA is a lot faster than programming the HTML merge
        style.

        The third category of "web deployment" in UP is to use Citrix,
        which works very well for some applications. Again, the Citrix style
        of deployment works the same between V8/V9/V10/UP.

        On Wed, Jul 29, 2009 at 6:57 PM, Sai Krishna Bolisetti <saikrishnab2001@yahoo.com> wrote:

        >
        >
        > Dear Friends & Experts,
        >
        > Good morning.
        >
        > Hope you are all doing well.
        >
        > I want to see how fast we can develop a Web based application
        > with Magic eDeveloper V10 or later versions.
        >
        > Also, how fast can we develop a client/server application
        > with Magic eDeveloper V10 or later versions compared to older versions 8
        > or 7?
        >
        >
        > Thanks & Regards,
        > Sai
        >
        > --- On Wed, 7/29/09, Andy Jerison <ajerison@jerison.com> wrote:
        >
        > From: Andy Jerison <ajerison@jerison.com>
        > Subject: RE: [magicu-l] Re: Huge files
        > To: magicu-l@yahoogroups.com
        > Date: Wednesday, July 29, 2009, 11:18 PM
        >
        >
        >
        > Hi Andy,
        >
        > You'll get a dramatic additional improvement if you can replace the
        > subtasks
        > with Block Loops. Since you're no longer using I/O files, you don't have to
        > worry about the issue of opening an I/O file in the current task.
        >
        > Andy J
        >
        > -----Original Message-----
        > From: magicu-l@yahoogroups.com [mailto:magicu-l@yahoogroups.com] On
        > Behalf Of magic@aquari.com
        > Sent: Wednesday, July 29, 2009 8:34 AM
        > To: magicu-l@yahoogroups.com
        > Subject: Re: [magicu-l] Re: Huge files
        >
        > Hi Omar,
        >
        > That would not work because each record has its own 2 ASCII files, so
        > I have to create them in a lower task.
        >
        > Last night at midnight I ran the program after replacing the IO
        > creation of one of the ASCII files with a blb2file method and the
        > thing is at 12 million this morning. Although I was expecting it to
        > run a bit faster, 12 million records in 8 hours versus 14 days (which
        > is how long it took with the IO method), is excellent.
        >
        > A big thanks to Keith for kicking my ego a bit and having me take a
        > step backwards and re-evaluate the way I wrote the program. After
        > over 15 years of using magic I took for granted that my first attempt
        > to write the program would be perfect and assumed that it could not
        > be my programming that slowed down the process. Boy was I wrong...
        > lesson learned!
        >
        > Essentially writing to an IO device (standard file) to sequentially
        > create the lines of the ASCII files slowed the process down
        > dramatically. By replacing this with a task that loops to create the
        > lines to a blob variable and then at the end of the task simply
        > creates the file in one line using blb2file (blob to file) sped up
        > the process incredibly.
        >
        > Thanks to Keith and everyone else for their help.
        > Andy
        >
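        [Editor's note: the pattern Andy describes with Magic's blb2file (accumulate all lines in a blob in memory, then write the file in a single operation, instead of appending line by line through an I/O device) is generic. A minimal sketch of the same idea in Python rather than Magic; the function names and tiny record list are invented for illustration:]

        ```python
        import os
        import tempfile

        def write_line_by_line(path, lines):
            # Slow pattern: one I/O operation per line, flushed each time
            # (analogous to the sequential IO-device writes in the original program).
            with open(path, "w") as f:
                for line in lines:
                    f.write(line + "\n")
                    f.flush()  # force each line out: worst-case I/O behavior

        def write_buffered(path, lines):
            # Fast pattern: build the whole file in memory, write once
            # (analogous to accumulating into a blob and calling blb2file).
            blob = "\n".join(lines) + "\n"
            with open(path, "w") as f:
                f.write(blob)

        # Both produce byte-identical files; only the I/O pattern differs.
        lines = [f"record {i}" for i in range(5)]
        with tempfile.TemporaryDirectory() as d:
            a, b = os.path.join(d, "a.txt"), os.path.join(d, "b.txt")
            write_line_by_line(a, lines)
            write_buffered(b, lines)
            assert open(a).read() == open(b).read()
        ```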
        > At 07:56 AM 7/29/2009, you wrote:
        > >
        > >
        > >Hi Andy,
        > >
        > >As I read the thread this morning (for me), I do concur that it is
        > >likely that a more global look at what this is all about, maybe even
        > >outside what you have been submitted, could permit an approach that
        > >would be all together so much more instantaneous or much closer to it.
        > >
        > >But in the frame of what you are doing, the two Ascii files updated
        > >in two different sub-tasks is what captured my attention.
        > >
        > >Can you detail more as to what you are writing to these files?
        > >
        > >Why can't you 'Output Form' directly in the task where you are
        > >reading the main table? Using 'Bloc loop' if necessary.
        > >
        > >Omar,
        > >
        > >--- In magicu-l@yahoogroups.com,
        > >magic@... wrote:
        > > >
        > > > Actually with Keith's help I have been able to speed up the process.
        > > > If all goes well I should be able to cycle through them in a couple
        > > > of hours, which means it really does not matter.
        > > >
        > > >
        > > > At 10:37 PM 7/28/2009, you wrote:
        > > > >
        > > > >
        > > > >
        > > > >I'd be working on a way of summarising this also... you're just
        > > > >creating a nightmare expecting to process this much data using Magic
        > > > >weekly... SQL Server stored procedures; maybe even then I'd be somehow
        > > > >summarising it, or storing just the changed records for the
        > > > >period and somehow recalcing based on the changes.
        > > > >
        > > > >
        > > > >To: magicu-l@yahoogroups.com
        > > > >From: magic@...
        > > > >Date: Tue, 28 Jul 2009 22:33:54 -0400
        > > > >Subject: Re: [magicu-l] Huge files
        > > > >
        > > > >Hi Steve,
        > > > >
        > > > >In theory you are correct, however, in this case, the small changes
        > > > >impact ALL records and as a result I am required to read them all to
        > > > >determine the combined outcome of the unchanged and changed. I would
        > > > >love to not have to do this.
        > > > >
        > > > >Thanks
        > > > >Andy
        > > > >
        > > > >At 10:29 PM 7/28/2009, you wrote:
        > > > > >
        > > > > >
        > > > > >Andy,
        > > > > >
        > > > > >I've read your exchange with Keith Canniff - good stuff. BUT...
        > > > > >
        > > > > >If none (or a VERY damn small number) of the records change from
        > > > > >week to week to week to week, why - WHY?! - why read those same records
        > > > > >over and over and over again? Repeatedly performing the exact same
        > > > > >series of actions under the exact same group of conditions while
        > > > > >expecting a different outcome each time is the clinical definition
        > > > > >of INSANITY.
        > > > > >
        > > > > >I would submit that you should be looking at a completely different
        > > > > >paradigm wherein you create appropriate summary records so that,
        > > > > >instead of repeatedly reading hundreds of millions of UNCHANGED
        > > > > >records week after week after week, you read a significantly fewer
        > > > > >number of summary records.
        > > > > >
        > > > > >Steve Blank
        > > > > >
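        [Editor's note: Steve's summary-record suggestion amounts to incremental aggregation: keep a running summary and apply only the week's deltas, rather than rescanning every record. A minimal illustration in Python; the integer "amount" values, the dict-as-table, and both function names are hypothetical, purely to show the shape of the idea:]

        ```python
        # Incremental aggregation: maintain a summary instead of re-reading
        # every record each week. Only the changed records are touched.

        def full_scan_total(records):
            # The expensive weekly approach: read every record.
            return sum(records.values())

        def apply_deltas(total, records, changes):
            # The summary approach: adjust the running total with deltas only.
            for key, new_value in changes.items():
                total += new_value - records.get(key, 0)
                records[key] = new_value
            return total

        records = {1: 10, 2: 20, 3: 30}   # hypothetical near-static table
        total = full_scan_total(records)  # computed once up front
        changes = {2: 25, 4: 5}           # the week's few modifications
        total = apply_deltas(total, records, changes)
        assert total == full_scan_total(records)  # same answer, no full rescan
        ```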
        > > > > >At 06:10 PM 7/28/2009, you wrote:
        > > > > > >None are inserted or deleted. In fact very few are even modified.
        > > > > > >The database is almost static.
        > > > > > >
        > > > > > >The program cycles through the records from the main task, does two
        > > > > > >lookups into two other tables which are memory tables, and then calls
        > > > > > >two subtasks. Each of the two subtasks creates a small text file while
        > > > > > >using a memory file which contains a template. The subtasks are set
        > > > > > >as Resident tasks to speed things up, and the two memory files used in
        > > > > > >the subtasks are opened above to speed things up.
        > > > > > >
        > > > > > >Currently the database contains 14 million records and it takes 24
        > > > > > >hours to cycle through 1 million records and generate the small ASCII
        > > > > > >files. This means that it takes 14 days to cycle the database. The
        > > > > > >ASCII files (350 gig of them) are created on an external USB disc
        > > > > > >drive which is then sent off site to another center for more work.
        > > > > > >
        > > > > > >I was told that when I receive the 400 million records in October I
        > > > > > >will have 6 days to do the work... to find a way to do it in 6
        > > > > > >days... no buts!
        > > > > > >
        > > > > > >Any suggestions!
        > > > > > >
        > > > > > >
        > > > > > >At 07:02 PM 7/28/2009, you wrote:
        > > > > > > >
        > > > > > > >
        > > > > > > >How many of these 400 million records are inserted, modified,
        > > > > > > >and deleted during the intervening 6 days?
        > > > > > > >
        > > > > > > >At 04:47 PM 7/28/2009, you wrote:
        > > > > > > > >Hello,
        > > > > > > > >I have just been informed that I will need to load and manage a
        > > > > > > > >table with over 400 million records with my magic 9.4 app running
        > > > > > > > >with Pervasive 10.10 starting in October. I will need to generate
        > > > > > > > >weekly reports which will involve accessing every record to compile
        > > > > > > > >the results, and I will need to be able to produce the report for
        > > > > > > > >every Monday morning, so that means my processing time is a maximum
        > > > > > > > >of 6 days.
        > > > > > > > >
        > > > > > > > >My question is to those of you who currently work with very large
        > > > > > > > >database tables... I would like to know what database system you are
        > > > > > > > >using and if you think Pervasive can handle this size of a file in a
        > > > > > > > >speedy manner. I only have 6 days, so I need the right database to
        > > > > > > > >make this happen. Can Pervasive do this? If not, which one can?
        > > > > > > > >
        > > > > > > > >The app will be running on a Quad processor Windows XP Pro 32 bit
        > > > > > > > >machine with IIS 5.1. Should we switch to 64 bit and install tons
        > > > > > > > >of RAM?
        > > > > > > > >
        > > > > > > > >Any help would be appreciated.
        > > > > > > > >Thanks
        > > > > > > > >Andy
        >
        > [Non-text portions of this message have been removed]
        >
        >
        >

        --
        Heidi Schuppenhauer
        www.Magic-IUG.com
