RE: [Clip] How to Open a web-page .txt file in NoteTab
- Sep 5, 2012
At 9/5/2012 12:52 PM, John Shotsky wrote:
>I found a solution that works great. It is a single executable named
>'download.exe' which is executed via a %!ShellWait command, and it does
>the trick just fine, even including overwriting the original. No install
>needed, no Windows rights issues. This will become a permanent part of the
>Here's the command: (It changes to the target folder ahead of this)
>^!Shellwait ^%Command% http://recipetools.gotdns.com/files/commas.txt
>Here's the location of this little gem:
>John Shotsky wrote:
> > There are no really easy ways around this on the browser side

Perhaps you had a direct link to an old offering. All I see at your URL is
the $15 "File Downloader" - no 'download.exe'. However, the site has a
bunch of utils that look interesting. Unfortunately, all the good ones seem
to be paidware.
In any case, there are lots of freeware file download apps available. Here
is a comparison chart for nine of them (curl, snarf, wget, pavuk, fget,
fetch, lftp, aria2 and HTTrack): http://curl.haxx.se/docs/comparison-table.html
Two that I have tried are cURL and WGet.
cURL: http://curl.haxx.se/ [Home]
From the cURL home page:
"curl is a command line tool for transferring data with URL
syntax, supporting DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS,
IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP,
SMTP, SMTPS, Telnet and TFTP. curl supports SSL certificates,
HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload,
proxies, cookies, user+password authentication (Basic, Digest,
NTLM, Negotiate, kerberos...), file transfer resume, proxy
tunneling and a busload of other useful tricks."
The most recent stable version of curl is version 7.27.0,
released on July 27, 2012.
cURL has an extensive [huge!!!] set of options for both input and output. I
used it to directly load web files into NTB for further clip processing. It
was quick and simple. The clip opens a new document first to receive the
downloaded text:
^!Toolbar New Document
To get a web file and store in a local file with a specific name:
^!ShellWait "^%curlPath%curl" "^%webURL%" -o "^%localName%"
[Note: not tested, stores file in current local directory]
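For reference, here is roughly what the curl command line looks like outside of a clip. This is a hedged sketch, not from the original post: the -s and -L flags are my additions (they silence the progress meter and follow redirects), and the URL is the one quoted earlier in this thread.

```shell
# Download a web file to a specific local name with curl.
# -s  silent mode (no progress meter)
# -L  follow HTTP redirects, if the server issues any
# -o  write the body to the named file instead of stdout
curl -s -L -o commas.txt "http://recipetools.gotdns.com/files/commas.txt"

# Without -o, curl writes the body to stdout, which is handy for
# piping into another tool instead of saving a file:
curl -s "http://recipetools.gotdns.com/files/commas.txt" | wc -l
```

The same flags should work unchanged when the command is wrapped in ^!ShellWait, since the clip just hands the whole line to the shell.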
WGet: http://www.gnu.org/software/wget/ [Home]
http://gnuwin32.sourceforge.net/packages/wget.htm [Windows DLs]
"GNU Wget is a free network utility to retrieve files from
the World Wide Web using HTTP and FTP, the two most widely
used Internet protocols. It works non-interactively, thus
enabling work in the background, after having logged off.
The recursive retrieval of HTML pages, as well as FTP sites
is supported -- you can use Wget to make mirrors of archives
and home pages, or traverse the web like a WWW robot (Wget
understands /robots.txt).
Wget works exceedingly well on slow or unstable connections,
keeping getting the document until it is fully retrieved.
Re-getting files from where it left off works on servers
(both HTTP and FTP) that support it. Matching of wildcards
and recursive mirroring of directories are available when
retrieving via FTP. Both HTTP and FTP retrievals can be
time-stamped, thus Wget can see if the remote file has
changed since last retrieval and automatically retrieve
the new version if it has."
The most recent Windows version of WGet is version 1.11.4,
released on Dec 31, 2008.
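To make the quoted feature list concrete, here are a few WGet invocations covering resume, time-stamping, and mirroring. These are my own hedged examples, not from the post; the flags are standard wget options, and the example.com URLs are placeholders.

```shell
# Resume a partially downloaded file (-c = continue where it left off,
# on servers that support ranged requests):
wget -c http://example.com/big-file.zip

# Re-download only if the remote copy is newer than the local one
# (-N = time-stamping, as described in the quote above):
wget -N http://example.com/commas.txt

# Mirror a site recursively; -m turns on options suitable for mirroring
# (equivalent to -r -N -l inf --no-remove-listing):
wget -m http://example.com/archive/
```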
I can't remember why I chose cURL over WGet and have no usage info.