
HTML/Perl/NTP

  • Ray Shapp
    Message 1 of 5 , Apr 23, 2000
      HTML/Perl/NTP Mavens:

      I want to develop and test Perl scripts on my local PC before I FTP them to
      my web hosting service. I'll be writing about 40 such scripts (ten already
      finished).

      In use, specific HTML pages on my website use a FORM to send data to
      an associated Perl script. The script does a calculation and returns
      the result in an HTML page it creates on the fly, for viewing in a
      browser online.

      I have no trouble writing and testing the calling HTML pages in my browser.
      I can even run Wayne VanWeerthuizen's clip, "Check Syntax of Current
      Perl Script". When I do, I get the "syntax ok" message on the Perl script
      I'm developing.
      My problem is I can't put it all together.

      How do I direct FORM data from the calling HTML page in my local
      browser to execute in my local Perl interpreter, and view (in my
      local browser) the calculated results in an HTML page generated by
      the Perl script?

      Thank you for your help,

      Ray Shapp
      Running Win98, NTP4.8, ActivePerl, and MS IE5 on a Pentium II
    • Marco Bernardini
      Message 2 of 5 , Apr 24, 2000

        At 01.39 on 24/04/2000 -0400, Ray Shapp sent Marco this message:

        >I want to develop and test Perl scripts on my local PC before I FTP them to
        >my web hosting service. I'll be writing about 40 such scripts (ten already
        >finished).
        > <SNIP>
        >How do I direct FORM data from the calling HTML page in my local browser to
        >execute
        >in my local Perl interpreter and view (in my local browser) the calculated
        >results in an HTML
        >page generated by the Perl script ?

        Good thing!
        The Internet is messy enough, and if everybody tested pages *before*
        putting them online, things would go better.
        Moreover, Perl is "open source", so you don't need to deal with
        royalties, licenses and other expensive oddities if you need to share
        your work with the whole world. You can use HTML to create an
        interface good for every platform (Win, Mac, Unix and more).
        And Perl is faster than Visual Basic! I "grep" 15 Meg of text, saving
        the lines that match my query to another file, in just 8 seconds
        (Celeron 300/128 MB RAM) with fewer than 30 lines of code.

        There is only one little drawback: if you're planning to use Perl to
        send mail (very common if you're using HTML forms), Windows programs
        are different from Unix ones, so you can't mail yourself with the
        same code.
        However, I use a handy trick: if the script is executed on my machine
        it saves mail messages as text files; otherwise it sends them out
        directly through the SMTP server (safer than sendmail).
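        That trick can be sketched in a few lines. This is only an
        illustration, not Marco's actual code: the server hostname
        "myserver", the addresses, and the file name are all assumptions.

```perl
#!/usr/bin/perl -w
use strict;
use Sys::Hostname;

# If this is NOT the live server, append the message to a text file;
# otherwise hand it straight to the local SMTP server.
sub deliver {
    my ($to, $subject, $body) = @_;
    if (hostname() ne 'myserver') {          # developing on the local PC
        open MAIL, ">>outgoing_mail.txt" or die $!;
        print MAIL "To: $to\nSubject: $subject\n\n$body\n---\n";
        close MAIL;
    } else {                                 # running on the live server
        require Net::SMTP;
        my $smtp = Net::SMTP->new('localhost') or die "SMTP connect failed";
        $smtp->mail('webmaster@example.com');
        $smtp->to($to);
        $smtp->data("To: $to\nSubject: $subject\n\n$body\n");
        $smtp->quit;
    }
}

deliver('ray@example.com', 'Test', 'It works offline too.');
```

        The same script then runs unchanged on the local PC and on the live
        server; only the branch taken differs.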

        On Win95/98 I install Apache: it works fine.
        If you need to see "local domains" (e.g. www.me.com) you need a
        network card in your computer to "bind" virtual domains to your IP.
        If you don't need a real network (if you have a single computer), an
        old used 10 Mbit card will do: I suppose you can find one at garage
        sales for $5 or less.
        If you need to connect two computers a 100 Mbit card is better (for
        more than two computers you need a hub).

        Here is how it works (it's longer to write than to do!):

        Basic requirements:
        - add a network card to your computer and set your IP to 10.1.1.0
        - install ActivePerl (or old Perl 5.0x)
        - install Apache - http://www.apache.org
        - RTFM (of course you do!)

        Configuration:
        - with NoteTab open the file C:\WINDOWS\LMHOSTS (add it to NoteTab
        favorites: widely used)
        - add the domains you need in this way:
        10.1.1.0 www.me.com #PRE
        10.1.1.0 www.ray.org #PRE
        10.1.1.0 www.mydog.net #PRE
        you can't use more than 16 chars for the domain name, so
        www.mymotherinlawstinks.com can't be used.
        - save LMHOSTS
        - from a DOS box launch the command
        nbtstat -R (case-sensitive, but no restart is needed)
        - with NoteTab open the file C:\PROGRAM FILES\APACHE
        GROUP\APACHE\CONF\HTTPD.CONF (add it to NoteTab favorites: widely used)
        - add these lines to the end of HTTPD.CONF
        NameVirtualHost 10.1.1.0
        <VirtualHost www.me.com>
        DocumentRoot C:/mywebpages/mine
        ServerName www.me.com
        </VirtualHost>
        <VirtualHost www.ray.org>
        DocumentRoot C:/mywebpages/ray
        ServerName www.ray.org
        </VirtualHost>
        Add a VirtualHost block for every virtual domain you need to see.
        - save HTTPD.CONF
        - launch Apache
        - open your browser
        - go to the address www.ray.org
        - enjoy your own site and amaze friends ;-)

        To use CGI scripts in the style http://www.me.com/cgi-bin/program.pl you
        must add to HTTPD.CONF these lines:
        ScriptAlias /cgi-bin/ "C:/Program Files/Apache Group/Apache/cgi-bin/"
        AddHandler cgi-script .cgi
        AddHandler cgi-script .pl
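        With those handlers in place, the kind of script Ray asked about is
        short. This is a minimal sketch, not Ray's code: the form field
        "radius" and the circle-area calculation are invented for the
        example.

```perl
#!/usr/bin/perl -w
use strict;

# Decode application/x-www-form-urlencoded pairs from a GET query string.
sub parse_query {
    my ($qs) = @_;
    my %form;
    foreach my $pair (split /&/, $qs) {
        my ($k, $v) = split /=/, $pair, 2;
        next unless defined $v;
        $v =~ tr/+/ /;
        $v =~ s/%([0-9A-Fa-f]{2})/chr(hex $1)/eg;
        $form{$k} = $v;
    }
    return %form;
}

# Build the result page as one string, so it can also be tested offline.
sub render {
    my (%form) = @_;
    my $r = $form{radius} || 0;
    my $area = sprintf "%.3f", 3.14159265 * $r * $r;
    return "<html><head><title>Result</title></head><body>\n"
         . "<p>A circle of radius $r has area $area.</p>\n"
         . "</body></html>\n";
}

# Under Apache the FORM data arrives through the environment.
if ($ENV{REQUEST_METHOD}) {
    print "Content-type: text/html\n\n";
    print render(parse_query($ENV{QUERY_STRING} || ''));
}
```

        Save it in cgi-bin, point a FORM's ACTION at it (e.g.
        http://www.me.com/cgi-bin/area.pl), and the browser shows the
        generated page.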

        To use server-parsed HTML files (these with Server-Side Include) you must
        add to HTTPD.CONF these lines:
        AddType text/html .shtml
        AddHandler server-parsed .shtml
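        A server-parsed page then mixes ordinary HTML with SSI directives.
        The file names below are made up for the example:

```html
<!-- example.shtml: header.html is a hypothetical shared fragment -->
<html>
<body>
<!--#include virtual="/header.html" -->
<p>Last modified: <!--#flastmod virtual="/example.shtml" --></p>
</body>
</html>
```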

        To add other domains you must add them to LMHOSTS, run nbtstat -R,
        and add them to httpd.conf: a 20-second job.

        If you need to have a custom 404 error add this line to httpd.conf:
        ErrorDocument 404 /err/404.html

        If you need to use a counter on your local pages go to
        http://www.muquit.com/ and download wwwcount for Windows: if you copy
        count.exe as Count.cgi you can have the same syntax used on Unix
        machines. Wwwcount is the most used counter on the web: in case your
        ISP doesn't use it, ask them to install it.

        If you need to check whether your links are well connected you can
        use Xenu - http://www.snafu.de/~tilman/xenulink.html - a free utility
        that checks all your links. It works fine with "local" sites.

        Just another thing about using Apache: by analyzing the access.log
        with a statistics program like Analog - http://www.analog.cx - you
        can see how much time you spend developing a site, so you know how
        much to charge a customer.

        Remember to stop Apache *before* you connect to the Internet, or you
        will see the virtual domains on *your* disk rather than the real
        ones on the Net.

        If you need Perl/CGI scripts mail me: I've a lot of them, for almost every
        common usage (and some odd ones).

        Trivia: to see the home page of Larry Wall (the creator of Perl) go
        to http://kiev.wall.org/~larry/

        And now a question for everybody: why do schools teach Basic rather
        than Perl?

        Bye!

        Marco Bernardini
      • Jody
        Message 3 of 5 , Apr 24, 2000
          Hi Marco,

          > I "grep" 15 Meg of text

          Do you know where I can get a free copy of eGREP or the other
          flavour, fGREP? Plain GREP does not seem to do what I want, at
          least not the version I have.

          If I use the -l switch I get the file name, which is good enough
          since I know the path. However, it strips out the duplicates.
          Although, now that I think about it, perhaps I do want the second
          one below. As you might notice, I am trying to build a workable
          search for a Bible.

          ^$GetDosOutPut(F:\Grep32\grep.exe -l "^?[==In the beginning]" "^$GetDocumentPath$Bible\*.txt")$

          ^$GetDosOutPut(F:\Grep32\grep.exe -S "^?[==In the beginning]" "^$GetDocumentPath$Bible\*.txt")$

          ge.txt
          Jer.txt
          Joh.txt

          E:\NoteTab Pro\Documents\Bible\ge.txt: Ge 1:1 ¶ In the beginning God created the heaven and the earth.
          E:\NoteTab Pro\Documents\Bible\Jer.txt: 26:1 ¶ In the beginning of the reign of Jehoiakim the son of Josiah king of Judah came this word from the LORD, saying,
          E:\NoteTab Pro\Documents\Bible\Jer.txt: 27:1 ¶ In the beginning of the reign of Jehoiakim the son of Josiah king of Judah came this word unto Jeremiah from the LORD, saying,
          E:\NoteTab Pro\Documents\Bible\Joh.txt: Joh 1:1 ¶ In the beginning was the Word, and the Word was with God, and the Word was God.

          I guess ideally I should rename the files to their full names,
          strip the path and extension, keep the reference number (Joh
          1:1), get 5 words before and after the search phrase (if that
          many), and then bring them up in a checkbox list to choose which
          verse(s) to get.

          I would still like to see the other two versions of GREP if they
          are freeware, but I cannot find them.

          Thanks!
          Jody

          Clean-Funnies: click and send...
          mailto:CF@...?subject=Subscribe
        • Marco Bernardini
          Message 4 of 5 , Apr 24, 2000

            At 04.19 on 24/04/2000 -0500, Jody sent Marco this message:

            Hi Jody!

            > > I "grep" 15 Meg of text
            >
            >Do you know where I can get a free copy of eGREP or the other
            >favour, fGREP? Plain GREP does not seem to do what I want, at
            >least the version I have.

            Hmm... I meant Perl 5.0x's "grep" command.
            Here is my script: it reads *every* file in a given directory,
            catches the lines containing the required word or phrase, and
            saves them to another file.
            I use it to perform cross-analysis on my server logs.
            Some variables have Italian names, so I don't mix up variable
            names and keywords ;-)


            #!perl5/perl -w
            # set your Perl path here!

            use strict;

            # insert between "" the word or phrase to search for
            my $query = "In the beginning";

            # set your configuration here
            my $dir = "C:/myfiles";     # directory containing your files
            my $fileout = "result.txt"; # output file - path RELATIVE to $dir

            ###################################################

            my $adesso = time;
            my @bigfile = ();
            my @result = ();

            opendir DIR, $dir or die "Error opening directory $dir: $!\n";

            my @files = grep !/^\./, readdir(DIR);
            closedir DIR;
            # use
            #my @files = ("filename.txt");
            # for a single file, or
            #my @files = ("file1.txt","file2.txt","file_etc.txt");
            # if you need just some files (but it's better to move
            # them to another directory)

            chdir $dir;

            foreach my $logfile (@files) {
                print "reading $logfile\n";
                open (PAGE, "<$logfile") || die $!;
                push (@bigfile, <PAGE>);
                close PAGE;
            }
            chomp @bigfile;

            @result = grep /$query/, @bigfile;

            # to search for a ? use
            #@result = grep /\?/, @bigfile;

            # to search for all except $query use
            #@result = grep !/$query/, @bigfile;

            # to perform multiple queries add
            #@result = grep /second query/, @result;

            open (PAGE, ">$fileout") || die $!;
            foreach my $linea (@result) {
                print PAGE "$linea\n";
            }
            close PAGE;

            my $dopo = time;
            my $tempo = $dopo - $adesso;

            print "Job time: $tempo seconds\n";
            print "Found ", scalar(@result), " lines out of ",
                scalar(@bigfile), " checked\n";
            print "The result was saved as $fileout in the $dir directory\n";

            ### EOF ###

            >I guess ideally I should rename the files to their full name,

            With my script you don't need this!

            Oh, BTW, WAP is "the Internet on a cell phone display" (96 x 65
            pixels, black and white only).
            Now people *must* be concise ;-)


            Bye!

            Marco Bernardini
          • Jody
            Message 5 of 5 , May 3, 2000
              Hi Marco,

              >>> I "grep" 15 Meg of text
              >>
              >> Do you know where I can get a free copy of eGREP or the other
              >> favour, fGREP? Plain GREP does not seem to do what I want, at
              >> least the version I have.
              >
              > Hmm... I mean Perl 5.0x "grep" command.

              Just wanted to say thanks, but that is not what I have been
              looking for.

              Bye for now,
              Jody Adair
              Prov. 3:5-7; 4:23

              http://www.sureword.com/sojourner
              http://www.sureword.com/kjb1611
              http://www.sureword.com/notetab