
RE: [NH] checkLinks

  • Lotta (Message 1 of 13, Apr 23 7:10 PM)
      Hi Grant and Richard,

      Grant -
      linkChecker generates this (my translation):
      "There is no script engine for the file extension .js / No file found /
      Http files"

      Something missing on my machine? I had a mishap with IEradicator recently
      and maybe everything wasn't restored.

      Richard -
      Take a look at WebSite Cleaner. It does exactly what you want. The downside
      is that it's a little limited if you don't pay up. There are probably
      similar programs around and some link checkers check for orphan files too.
      I *think* I've seen an on line service with that feature, but don't
      remember which one.
      http://octaneware.hypermart.net/

      Lotta
    • Grant (Message 2 of 13, Apr 24 2:41 AM)
        > linkChecker generates this (my translation):
        > "There is no script engine for the file extension .js / No file found /
        > Http files"
        >
        > Something missing on my machine? I had a mishap with IEradicator recently
        > and maybe everything wasn't restored.

        Most likely.
        If you have disabled WSH, or not enabled WSH with a custom browser or os
        install, then the library won't work.
      • Lotta (Message 3 of 13, Apr 24 9:32 AM)
          Grant.

          >If you have disabled wsh or not enabled wsh with custom browser or os
          >install then the library won't work.

          I don't know the first thing about WSH, but the components seem to be
          there. I copied some simple scripts from the MS site and they ran. I'm
          using win95. Maybe your script needs a later version of WSH or modules that
          didn't ship with the OS until later.

          Lotta

        • Grant (Message 4 of 13, Apr 25 3:58 AM)
            Hi lotta
            > linkChecker generates this (my translation):
            > "There is no script engine for the file extension .js / No file found /
            > Http files"


            I managed to replicate your error.
            I think if you ran the clip again the error would disappear

            ^!set %jsFile%=^$getScriptPath()$getLinks.js
            ^!IfFileExist ^%jsFile% Skip ELSE Next
            ^!TextToFile "^%jsFile%" ^$GetClipText(js_getLinks_file_protocol)$
            ^!Set %links%=^$GetOutput(cscript "^%jsFile%" "//nologo" "^%file%" "file:")$

            What is meant to happen is this:
            If the js file doesn't exist, the clip creates the js file and then runs
            it in the host.
            Of course this is a timing thing: write to disk, then read from disk.
            I think what is happening is that, while the file is still being written,
            the next clip command is already executing and trying to read from the
            yet unwritten file.
            I'll rewrite it with a delay loop so hopefully this won't happen.
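            A delay loop of that kind might look roughly like this (a hypothetical
            sketch in plain javascript; the names `waitFor` and `fileReady` are my
            own, and in a real WSH script the wait step would be WScript.Sleep with
            fileReady wrapping FileSystemObject.FileExists):

```javascript
// Hypothetical sketch: poll a readiness check a bounded number of times
// before giving up, so the read does not race the write.
// In a WSH script, fileReady could wrap FileSystemObject.FileExists and
// a WScript.Sleep(100) call would go between attempts.
function waitFor(fileReady, maxTries) {
    for (var i = 0; i < maxTries; i++) {
        if (fileReady()) {
            return true;   // the file appeared; safe to read it now
        }
        // delay/sleep here in a real script
    }
    return false;          // gave up after maxTries attempts
}
```

            The clip would then only run cscript once waitFor reports the js file
            is actually on disk.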

            > I don't know the first thing about WSH, but the components seem to be
            > there. I copied some simple scripts from the MS site and they ran. I'm
            > using win95. Maybe your script needs a later version of WSH or
            > modules that
            > didn't ship with the OS until later.

            The WSH also ships with ie4, 5 and 5.5 etc.
            The main thing with WSH is that you can instantiate COM objects within it.
            Just about everything MS builds is exposed as a COM object.
            The browser is a COM object which can be automated:
            'InternetExplorer.Application'
            The mshtml parser for the browser also has a type library, "Microsoft HTML
            Object Library".
            Within this library is the coClass HTMLDocument, which has a creatable
            progId of 'htmlfile'.
            So in my wsh script I go
            var doc = WScript.GetObject(file);
            which instantiates the html document open in NoteTab as an 'htmlfile'
            object. Now that I have a handle on the 'htmlfile' object, I can use
            normal DOM methods to extract the links:

            var msg = '';
            var max = doc.links.length;
            for (var i = 0; i < max; i++)
            {
                if (doc.links[i].protocol == protocol)
                {
                    if (msg !== '') { msg += '\n'; }
                    msg += doc.links[i].href;
                }
            }
            WScript.Echo(msg);

            As can be seen below, this is the same javascript as I would use in an
            html script block function to do exactly the same thing, except that
            instead of output to the console with WScript.Echo(msg) there is
            window.alert().

            <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
            <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
            <head>
            <title></title>
            <script type="text/javascript">
            function getLinks(protocol)
            {
                var msg = '';
                var max = document.links.length;
                for (var i = 0; i < max; i++)
                {
                    if (document.links[i].protocol == protocol)
                    {
                        if (msg !== '') { msg += '\n'; }
                        msg += document.links[i].href;
                    }
                }
                alert(msg);
            }
            </script>

            </head>
            <body onload="getLinks('http:')">
            <p>
            <a href="http://www.notetab.com">notetab</a>
            </p>
            </body>
            </html>
          • Lotta (Message 5 of 13, Apr 26 8:31 PM)
              Grant,

              you wrote:
              >I managed to replicate your error. I think if you ran
              >the clip again the error would disappear

              Nope.

              >Of course this is a timing thing. Write to disk read then read from disk.

              That sounds very plausible. I'm on a slow machine.

              >I'll rewrite it with a delay loop so hopefully this won't happen.

              Great. I was thinking your script could probably be handy to check if
              bookmarks or any URL lists are still valid.

              Lotta
            • Lotta (Message 6 of 13, Apr 26 8:48 PM)
                Hallo!

                Anyone alive out there?

                I'm wondering if someone knows whether w3c has any plans to adopt
                Microsoft's CSS filters. I know MS has proposed them as an addition to
                the specs, but I haven't found anything about w3c's view on this. I've
                grown a little infatuated with some of them, but it feels pointless to
                learn how to handle them if they will never be part of the standard.

                Bye,
                Lotta
              • Grant (Message 7 of 13, Apr 30 5:18 PM)
                  > Anyone alive out there?
                  >
                  > I'm wondering if someone knows if w3c has plans on implementing
                  > Microsoft's
                  > CSS filters? I know MS has proposed them as an addition to the
                  > specs, but I
                  > haven't found anything about w3c's view on this. I've grown a little
                  > infatuated with some of them but it feels pointless to learn how
                  > to handle
                  > them if they will never be part of the standard.

                  Hi Lotta
                  IMHO, I think what we as web developers should be aiming for is a
                  cleaner separation of the structure of a document from its
                  presentation. The meaning of a document ought to be derived from its
                  structure. This, I think, is the main thrust of the W3C
                  recommendations.
                  So keep documents structurally clean; 'font' tags are positively bad.
                  However, css 'filters', although not part of the W3C css2
                  recommendations, are not evil like fonts are.
                  They are just a 'presentational gimmick', so whether you leave the
                  filter out or in, the structural intent of the document will not be
                  altered. A filter doesn't stop a heading from being a heading; it
                  just adds a 'glow' or a 'shadow' etc. I wouldn't feel guilty about
                  using them.

                  However, don't expect ns6 to support the 'filters', and I wish ie
                  would put more effort into standards compliance, especially better
                  DOM, css2 and html4 compliance, instead of giving us added
                  presentational extras like colored scrollbars, filters etc. I think
                  ie6 is finally going to support 'optgroup' ... how long has the html4
                  recommendation been out... my golly gosh, don't they work fast and
                  furious in ensuring their browser is W3C compliant.
                • Lotta (Message 8 of 13, Apr 30 6:21 PM)
                    Hi Grant,

                    Sorry, was that re my question? You have lost me. As far as I know the
                    whole CSS concept is about presentation and all of it can be left out
                    without altering the structural intent. What's your point? I don't follow.

                    Lotta
                  • Stephen (Message 9 of 13, Apr 30 11:06 PM)
                      Good Evening Lotta,
                      (least that's what it is here)
                      I think Grant means that no, Netscape probably won't do the CSS
                      filter thing, and the W3C is probably not going to even think
                      about it.
                      I had not heard of them, but figured they are similar to the IE
                      transitions that were big about a year ago. (Kind of cool but
                      kind of disgusting too, is my not too humble opinion.)
                      In the CSS2 standard there is a text-shadow property. I've tried
                      it on every browser and it does not look like anyone supports
                      it: not Explorer, not Netscape 6, not Opera and not Lynx 8-}
                      (just had to voice something on this thread)
                      Stephen
                      Lotta wrote:
                      >
                      > Hi Grant,
                      >
                      > Sorry, was that re my question? You have lost me.
                    • Grant (Message 10 of 13, May 1, 2001)
                        > Sorry, was that re my question? You have lost me. As far as I know the
                        > whole CSS concept is about presentation and all of it can be left out
                        > without altering the structural intent. What's your point? I don't follow.

                        Yeah I don't follow myself sometimes. It was just a small rave.
                        I think I'm better at solving problems than explaining things.;)

                        Anyways
                        I've just built a webified version of the checkLinks library I posted
                        earlier.
                        http://www.markup.co.nz/validate/checkLinks.htm

                        Unlike the notetab library version, you have to make a
                        security adjustment to allow data access across domains for
                        it to work.
                        It also relies on a regular expression to extract the http
                        links from the fetched text of the entered url, instead of
                        the method used in the library.
                        Tested on ie5.5+, although it should work in ns6+ once
                        support for the http request object is built in, as it is in
                        the latest moz builds.
                        If it seems to hang, just wait around a little and the
                        results will pop up. I checked a page with about 42 http
                        links and it took about 3 minutes on my slow connection.

                        It works by instantiating the http request object and
                        fetching the html text of the url entered. It uses a reg exp
                        to extract the URLs and puts the regExp results into a result
                        array. Each URL in the array is then requested with an http
                        request, and the url with the response status text is written
                        into the report box.

                        All this is done on the client with no server side script.
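                        The reg-exp extraction step can be sketched like this (a
                        hypothetical plain-javascript version; the function name
                        extractHttpLinks and the exact pattern are my own
                        assumptions, not the code on the page):

```javascript
// Hypothetical sketch: pull http links out of fetched html text with a
// regular expression, roughly as the webified checker is described to do.
function extractHttpLinks(html) {
    // match href="http:..." or href='http:...'; group 1 holds the url
    var re = /href\s*=\s*["'](http:[^"']+)["']/gi;
    var links = [];
    var m;
    while ((m = re.exec(html)) !== null) {
        links.push(m[1]);
    }
    return links;
}
```

                        Each url in the returned array would then be requested in
                        turn and its response status written to the report box.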
                      • Lotta (Message 11 of 13, May 1, 2001)
                          Hi Grant,

                          > Yeah I don't follow myself sometimes. It was just a small rave.

                          I see. The fact that someone could like something not yet
                          implemented in the standard got you going, huh? <g>
                          *hint* I don't like the 'glowing edges' either. But I do
                          like the alpha and maybe one or two more.

                          Lotta