RE: [NH] checkLinks
>I managed to replicate your error. I think if you ran
>the clip again the error would disappear
Nope.
>Of course this is a timing thing. Write to disk then read from disk.
That sounds very plausible. I'm on a slow machine.
>I'll rewrite it with a delay loop so hopefully this won't happen.
Great. I was thinking your script could be handy for checking whether
bookmarks or any URL lists are still valid.
Anyone alive out there?
I'm wondering if someone knows if w3c has plans on implementing Microsoft's
CSS filters? I know MS has proposed them as an addition to the specs, but I
haven't found anything about w3c's view on this. I've grown a little
infatuated with some of them but it feels pointless to learn how to handle
them if they will never be part of the standard.
> Anyone alive out there?
Hi Lotta,
> I'm wondering if someone knows if w3c has plans on implementing
> CSS filters? I know MS has proposed them as an addition to the
> specs, but I
> haven't found anything about w3c's view on this. I've grown a little
> infatuated with some of them but it feels pointless to learn how
> to handle
> them if they will never be part of the standard.
I think what we as web developers should be aiming for is a cleaner
separation of a document's structure from its presentation.
The meaning of a document ought to be derived from its structure. This, I
think, is the main thrust of the W3C recommendations.
So keep the document structurally clean; 'font' tags are positively bad.
However, CSS 'filters', although not part of the W3C CSS2 recommendation,
are not evil the way 'font' tags are.
They are just a presentational gimmick: whether you leave a filter in or
out, the structural intent of the document is not altered. A filter
doesn't stop a heading from being a heading; it just adds a 'glow' or a
'shadow', etc. I wouldn't feel guilty about using them.
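To illustrate the point, something like the following (an IE-only sketch; the selector and values are just my illustration, not from any spec):

```css
/* IE-only 'filter' extension -- not in the W3C CSS2 recommendation.
   The h1 stays structurally a heading; the filter only changes how it
   is painted, and other browsers simply ignore the declaration. */
h1 {
  width: 100%;  /* IE only applies filters to elements with "layout" */
  filter: glow(Color=#00cc66, Strength=4);
}
```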
However, don't expect NS6 to support the 'filters', and I wish IE would put
more effort into standards compliance, especially better DOM, CSS2 and HTML4
compliance, instead of giving us added presentational extras like colored
scrollbars, filters, etc. I think IE6 is finally going to support 'optgroup'
... how long has the HTML4 recommendation been out? My golly gosh, don't
they work fast and furious at ensuring their browser is W3C compliant.
- Hi Grant,
Sorry, was that re my question? You have lost me. As far as I know the
whole CSS concept is about presentation and all of it can be left out
without altering the structural intent. What's your point? I don't follow.
- Good Evening Lotta,
(least that's what it is here)
I think Grant means that no, Netscape probably won't do the CSS filter
thing, and the W3C is probably not going to even think about it.
I had not heard of them, but figured they are similar to the IE
transitions that were big about a year ago. (Kind of cool but kind of
disgusting too, in my not-too-humble opinion.)
In the CSS2 standard there is a text-shadow property. I've tried it in
every browser and it does not look like anyone supports it: not Explorer,
not Netscape 6, not Opera and not Lynx 8-}
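The CSS2 syntax itself is simple enough; a sketch (the selector and values are my own illustration):

```css
/* CSS2 text-shadow: horizontal offset, vertical offset, blur radius,
   color. Harmless to include -- browsers that don't support it just
   ignore the declaration. */
h1 {
  text-shadow: 3px 3px 2px gray;
}
```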
(just had to voice something on this thread)
> Hi Grant,
> Sorry, was that re my question? You have lost me.
Yeah, I don't follow myself sometimes. It was just a small rave.
> As far as I know the
> whole CSS concept is about presentation and all of it can be left out
> without altering the structural intent. What's your point? I don't follow.
I think I'm better at solving problems than explaining things. ;)
I've just built a webified version of the checkLinks library I posted.
Unlike the NoteTab library version, you have to make a security adjustment
to allow data access across domains for it to work.
It also relies on a regular expression to extract the http links from the
fetched text of the entered URL, instead of the method used in the library.
Tested on IE5.5+, although it should work in NS6+ when support for the
request object is built in, as it is in the latest Mozilla builds.
If it seems to hang, just wait around a little and the results will pop up.
I checked it on a page with about 42 http links and it took about 3 minutes
on my slow connection.
It works by instantiating the httpRequest object and fetching the html text
of the URL entered.
It uses a regexp to extract the URLs and puts the matches into a result array.
Each URL in the array is then requested in turn, and the URL with the
response status text is written into the report box.
All this is done on the client with no server-side script.
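A rough sketch of those steps (the function names and the exact regular expression are my assumptions, not the actual library code):

```javascript
// Step 2: pull http links out of fetched HTML with a regular expression.
// (This pattern is an assumption -- a minimal href matcher, not Grant's.)
function extractLinks(html) {
  var re = /href\s*=\s*["']?(http:\/\/[^"'\s>]+)/gi;
  var links = [];
  var m;
  while ((m = re.exec(html)) !== null) {
    links.push(m[1]);
  }
  return links;
}

// Step 1: instantiate the request object. Old IE goes through ActiveX;
// the latest Mozilla builds expose XMLHttpRequest directly.
function createRequest() {
  if (typeof XMLHttpRequest !== "undefined") {
    return new XMLHttpRequest();
  }
  return new ActiveXObject("Microsoft.XMLHTTP");
}

// Step 3: fetch the page, then request each extracted URL and report
// its status line. Synchronous requests keep the sketch simple, which
// is also why the page appears to hang until the results pop up.
function checkLinks(pageUrl, report) {
  var req = createRequest();
  req.open("GET", pageUrl, false);
  req.send(null);
  var links = extractLinks(req.responseText);
  for (var i = 0; i < links.length; i++) {
    var probe = createRequest();
    probe.open("HEAD", links[i], false);
    probe.send(null);
    report(links[i] + " : " + probe.status + " " + probe.statusText);
  }
}
```

Remember this runs entirely client-side, so the cross-domain security adjustment mentioned above still applies.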
- Hi Grant,
> Yeah I don't follow myself sometimes. It was just a small rave.
I see. The fact that someone could like something not yet implemented in
the standard got you going, huh? <g>
*hint* I don't like the glowing edges either. But I do like the alpha and
maybe one or two more.