To expand on that idea, I'll describe what I did to standardize the layout of the pages on my company's intranet.
I wrote a script with the following behaviour:
1. When called with no arguments, it defaults to a certain page designated as the "index page".
2. When called with an argument, the only accepted argument is the address of another HTML page.
3. In both cases, it opens the page as a file and reads its contents.
4. It then searches for anything that looks like a link to another HTML page and does a little replacement:
   original page name: http://blabla/bob.html, or page.html (relative to the currently opened page)
   new link: http://myserver/layout.cgi?page=http://blabla/bob.html or, in the relative case, http://myserver/layout.cgi?page= followed by the resolved address
5. It processes some special cases (images, etc.).
6. It extracts the <head> ... </head> of the file, because some of the pages had scripts or stylesheet information there.
7. It then prints the standard header along with some parts of the original file's header (I drop the background color, for example, because some pages had one that doesn't fit with the header/footer).
8. Then it prints everything between the <body> and </body> tags of the original file.
9. Finally, it prints the standard footer (I added some information such as the last-modified date of the original file, and a few other details).
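The steps above can be sketched roughly as follows. The original was a Perl CGI, so this is only an illustrative Python sketch: the regexes, the header/footer strings, and the use of http://myserver/layout.cgi are simplifications based on the examples above, not the real script.

```python
# Illustrative sketch of the wrapping CGI described above (Python, for
# illustration; the original was Perl). Regexes and header/footer strings
# are simplified placeholders.
import re

WRAPPER = "http://myserver/layout.cgi?page="

def rewrite_links(html):
    # Step 4: point every href that looks like an HTML link at the wrapper.
    def repl(m):
        return '%s="%s%s"' % (m.group(1), WRAPPER, m.group(2))
    return re.sub(r'(href)="([^"]+\.html?)"', repl, html,
                  flags=re.IGNORECASE)

def extract(html, tag):
    # Steps 6 and 8: pull out the contents of <head> or <body>.
    m = re.search(r'<%s[^>]*>(.*?)</%s>' % (tag, tag), html,
                  flags=re.IGNORECASE | re.DOTALL)
    return m.group(1) if m else ''

def wrap(html,
         header='<div id="hdr">Standard header</div>',
         footer='<div id="ftr">Standard footer</div>'):
    # Steps 7-9: standard header, original head parts and body, standard footer.
    body = extract(rewrite_links(html), 'body')
    return '<html><head>%s</head><body>%s%s%s</body></html>' % (
        extract(html, 'head'), header, body, footer)
```

In a real script you would also resolve relative links against the current page's URL and handle the special cases (images, etc.) mentioned in step 5.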
As you can see, this is not trivial, but it's not really that hard either. The difficulty lies in predicting what "looks like a link to an HTML page", and in predicting the format most pages will use (not everyone's HTML code looks the same...).
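To give a flavour of that difficulty: even matching the href attribute alone has to cope with double quotes, single quotes, bare unquoted values, and mixed-case tags. A rough, non-exhaustive sketch (Python, for illustration):

```python
import re

# Hand-written HTML varies: href="x", href='x', HREF=x all occur.
LINK = re.compile(r"""href\s*=\s*
    (?:"([^"]+)"        # double-quoted value
      |'([^']+)'        # single-quoted value
      |([^\s>]+))       # or a bare, unquoted value
    """, re.IGNORECASE | re.VERBOSE)

def find_links(html):
    # Return every href value, whichever quoting style was used.
    return [g1 or g2 or g3 for g1, g2, g3 in LINK.findall(html)]
```

Even this misses cases (links built by JavaScript, unusual attributes), which is exactly the prediction problem described above.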
Now that I've given my experiences, it's up to you to see if you can do it...
----- Original Message -----
From: Damien Carbery
Sent: Friday, November 01, 2002 2:26 PM
Subject: [PBML] Re: Server headers and footers
--- In perl-beginner@y..., "Admin" <admin@w...> wrote:
> How do I add a header and footer to web pages without editing
> the web pages themselves?
If you can 'trap'/redirect calls to the web pages through a script
you'd write then you'd be able to add a header and footer.
So, a request for a page (say '/dir1/dir2/file.htm') would be redirected
to something like a call to your script with that path tacked onto the
end of the URL. You'd extract that path from the end of the URL
(i.e. '/dir1/dir2/file.htm'), print out the header stuff, open the file
and print it, and then print the footer.
I don't know how to do the redirection I suggest.
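For what it's worth, one common way to do that redirection, assuming an Apache server with mod_rewrite enabled (the script name wrapper.cgi is hypothetical):

```apache
# Assumes Apache + mod_rewrite; 'wrapper.cgi' is a hypothetical name.
RewriteEngine On
# Don't rewrite requests that are already going to the script.
RewriteCond %{REQUEST_URI} !^/cgi-bin/
# Send every .htm/.html request through the wrapper, passing the
# original path as the query string.
RewriteRule ^(.*\.html?)$ /cgi-bin/wrapper.cgi?$1 [PT,L]
```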