RE: End User Apps, versus SW Support Tools
- I certainly did not mean to suggest that "an end user application is only an end user application if someone makes money with it". I agree, that is a very silly argument. In the longer term, I'm not so interested in any old end user application, just 'successful' ones. By "SUCCESSFUL END USER APPLICATION", I mean something like this:
1. End users are not SW techno-geeks; they don't need or want to know or care about RDF, or ontologies, or any of that crap, any more than the web-browsing masses want to know or care about HTML, or Google users want to know about the finer statistical points of latent semantic indexing (or whatever else they use)
2. It should be a source of revenue or cost-savings because it makes something easier/faster/cheaper to do, either for the core business area of some company, or for a substantial market of buying customers who purchase the software for their own personal needs.
I applaud you for building your own successful applications that you personally use. But unless SW applications get wide usage by the masses, the SW will never take off. Personal tools, applications and play-things used by small numbers of individuals will have no more impact than research prototype tools, applications and play-things that are produced in academia - perhaps less.
To my knowledge, there remains a dearth of well-developed use cases with well-developed descriptions or specifications of end user SW applications. What are the killer applications of the SW? You seem to think there are many; can you please show me documents describing them? From my perspective, it seems much more technology driven - build it and they will come. It appears that VC firms won't touch RDF. This is certainly interesting.
I respectfully disagree with your view that: "I think that the distinction between SW support code and SW [END USER] application is a *valid* one, but in practise is too difficult to define to be of any use." It may of course be no use to you, but I would expect that something akin to this distinction will certainly be very important for VC funders. As long as there is no VC funding for the SW, I expect that it will remain a play-thing for interested individuals and academics.
From: Sean B. Palmer [mailto:sean@...]
Sent: Thursday, September 06, 2001 1:36 PM
To: Uschold, Michael F
Cc: firstname.lastname@example.org; Thompson, John A; Clark, Peter E
Subject: Re: End User Apps, versus SW Support Tools
> In this light, do you still wish to defend your position to not
> make this distinction? From everything you have said, I see
> CWM as being ONLY a SW support tool. If it is an end
> user application, then show me someone making money
> with it?

Well, I could read that one of two ways... either, "an end user application is only an end user application if someone makes money with it", or "the Semantic Web will only be gauged as a public success if it brings a wide utility of tools that can benefit people in any situation, including ones for economic success". I am going to choose the latter, because the former would be one of the weakest arguments that I'd ever have the misfortune to come across on a mailing list. The latter, OTOH, is a very good point.
CWM is useful - I use it to provide statistical summaries for my server logs, which I record in RDF. If I had public server logs, the utility would be increased, because people could do their own queries on them, rather than trust my summaries (but I don't make them public at the moment). It saves me having to buy a custom server statistics reporting tool, and hence it saves me money. Is that satisfactory? Of course, the W3C site disagrees with me, so this is clearly just an opinion of mine :-)
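[To make the log-summary use concrete, here is a minimal sketch of the kind of aggregation described - not CWM itself, and the triple format and predicate name are hypothetical stand-ins for whatever vocabulary the real logs use.]

```python
# Hypothetical sketch: summarise server logs recorded as simple
# N-Triples-style statements, counting hits per page. The
# example.org URIs and the #page predicate are made up.
from collections import Counter

LOG = """\
<http://example.org/hit/1> <http://example.org/terms#page> </index.html> .
<http://example.org/hit/2> <http://example.org/terms#page> </index.html> .
<http://example.org/hit/3> <http://example.org/terms#page> </about.html> .
"""

def summarise(ntriples):
    """Count how often each object appears for the #page predicate."""
    pages = Counter()
    for line in ntriples.splitlines():
        parts = line.rstrip(" .").split(None, 2)
        if len(parts) == 3 and parts[1].endswith("#page>"):
            pages[parts[2]] += 1
    return pages

print(summarise(LOG).most_common())
# -> [('</index.html>', 2), ('</about.html>', 1)]
```

A real deployment would of course run a proper RDF query over the logs rather than string-matching predicates, but the shape of the task is the same.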
But CWM can also be used as an SW support tool. I already use notation3.py as a module for my SWIPT RDF parser/SW tool, so it can be used as a base module on top of which people can build larger applications. I recently wrote a Wiki based upon the SWIPT Doc class and a whole lot of CGI hacking. It's not difficult to come up with good SW applications; we just need people with the time and programming skills to do so. And there is a lot that we still need to develop before the larger applications can even be started: we have to get it just right. XML RDF in its current state has a number of bugs in it that need to be resolved.
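[The layering being described - a parser module reused as the base of a higher-level tool - can be sketched like this. The `parse` function and `Doc` class here are illustrative only; the real notation3.py and SWIPT Doc class have different, richer interfaces.]

```python
# Hedged sketch of building a larger tool on top of a parser module.
# parse() stands in for a base module like notation3.py; Doc stands
# in for a convenience layer like the SWIPT Doc class.

def parse(source):
    """Base layer: turn 'subject predicate object' lines into triples."""
    triples = []
    for line in source.splitlines():
        parts = line.split(None, 2)
        if len(parts) == 3:
            triples.append(tuple(parts))
    return triples

class Doc:
    """Higher layer: convenience queries built on top of parse()."""
    def __init__(self, source):
        self.triples = parse(source)

    def objects(self, predicate):
        """All objects of statements using the given predicate."""
        return [o for s, p, o in self.triples if p == predicate]

doc = Doc(":Sean :hasHomepage <http://purl.org/net/sbp/>")
print(doc.objects(":hasHomepage"))
# -> ['<http://purl.org/net/sbp/>']
```

The point is the architecture, not the code: once the base module is right, a Wiki, a log summariser, or a query tool is mostly glue on top of it.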
I think that the distinction between SW support code and SW application is a *valid* one, but in practise is too difficult to define to be of any use.
In any case, res ipsa loquitur; CWM is useful as it is, and as a module for larger applications. Trying to package it up into a nice little category just to impress managers as to how it fits in with your current business plans is like pissing into the wind at this stage in the development of the Semantic Web.
Sean B. Palmer
@prefix : <http://webns.net/roughterms/> .
:Sean :hasHomepage <http://purl.org/net/sbp/> .
> From: Uschold, Michael F
If you can make "a user friendly website interface representing the contents of collected RDF information [that] would allow the user the ability to make choices that end up driving the user to a provider's website", and if it makes people's lives easier, then by all means do so. If you know
someone who can improve on search using better [perhaps even 'semantic'] metadata, please ask them
to do so. I would love to see something that could work better and faster than Google, which amazes
me almost on a daily basis. Not because it would improve my life noticeably - I do not experience
problems finding the things I need, but only because it would be even more amazing! I would also
love to see this empirically proven with lots of data. It is an open question whether search using
semantic metadata will be better than traditional methods. More likely, there will be some niche
where it is preferred, but in many other circumstances Google-like approaches may be preferred.
> Mike

Hi Mike,
When I made my comments, I understood the semantic web to be basically what you describe above: that
it was a relational representation of RDF that referred to a website on the net. After a deeper look
into the SW's goals and the comments made in this discussion group, I have a different perspective.
I'm going to make a giant guess that Google is nothing more than a pre-indexed database that stores
data on click-throughs as well as other factors that influence where a site's information is
presented in the results window of a search.
What I'm proposing, which is different from Google, is an RDF description of website portals in a
returned document. The results of this search would be in the form of several RDF Dublin Core
descriptions of portals that fit the search parameters, all transmitted at once as a single
document. These documents would then be searchable based on the enhanced capabilities of the
browser that you are using to view the internet. I accomplish this now with the sub-browser as the
enhancement tool. The thing that is missing is webmasters providing a human-readable set of search
terms and an RDF Dublin Core description of the portal that they have created for this purpose. My
solution is MTML in their HTML combined with an RDF Dublin Core description of their portal
information. If enough interest were generated for this to evolve, then, at that time, an indexed
database form for the RDF would be required.
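[One of the Dublin Core portal descriptions proposed above might look roughly like this - a sketch in the same Notation3 style used earlier in this thread, with every URI, title, and term invented for illustration:]

```
@prefix dc: <http://purl.org/dc/elements/1.1/> .

<http://example.org/portal/>
    dc:title       "Example Portal" ;
    dc:description "A webmaster-curated collection of pages on one topic" ;
    dc:subject     "widgets, widget repair, widget suppliers" ;
    dc:creator     "The portal's webmaster" .
```

A search engine could then return several such descriptions concatenated into one document, and the sub-browser would filter among them locally.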
This is a symbiotic relationship of information gathering between a results document in a search
engine and a results report based on a human-readable set of search terms provided in that document.
So after a selection is made from the search engine document, the sub-browser could download an
entire portal's HTML site, which could include several web pages found on the internet. This portal
would be the full resource of information provided by the webmaster. This downloaded information
could use MTML to facilitate a relational text-gathering system. Since the sub-browser acts on
MTML information as if it were pre-indexed by the author of the site, there are many interesting
forms of textual display that can be created using the human-readable search terms provided by the
author.
I know that this is very different from creating chunks of information that will be assembled
somehow by some application designed to serve that purpose. I just can't see the semantic web idea
yet. I mean, how does a webmaster include their information in the results of a semantic web query?