
Re: [rest-discuss] Re: WADL pushback

  • Henry Story
    Message 1 of 108 , Jul 13, 2007
      On 13 Jul 2007, at 02:46, Elliotte Harold wrote:
      > A. Pagaltzis wrote:
      > > Impendance mismatch with my language’s data model is not a
      > > feature, it’s a liability.
      > Serialized formats that are tied to one language are a liability,
      > not a
      > feature.
      I agree very much. Languages are usually defined as a syntax with a
      semantics. What is needed is to disassociate the
      syntax from the semantics. If we keep the semantics stable, as we do
      by choosing to work with URIs (which are universal names - names
      name things), we can change the syntax and allow a natural selection
      of syntaxes. JSON with a good mapping could be an interesting syntax.
      But in fact JSON is only half way there. Let us look at an example
      from the JSON wikipedia page:

      "firstName": "John",
      "lastName": "Smith",
      "address": {
      "streetAddress": "21 2nd Street",
      "city": "New York",
      "state": "NY",
      "postalCode": 10021
      "phoneNumbers": [
      "212 732-1234",
      "646 123-4567"

      (Reading the spec now, it says that an "object is an unordered set of
      name-value pairs". That means it cannot be a hash map, since a hash
      map forces keys to be unique, and there is no such restriction in the
      spec. It seems to me that if you map that into an object in a
      simplistic way, you are going to lose data.)
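      The duplicate-key hazard is easy to demonstrate. A minimal sketch in
      Python, whose `json` module maps JSON objects to hash maps (dicts):

```python
import json

# JSON text with a repeated key -- the spec's "unordered set of
# name-value pairs" wording does not forbid this.
doc = '{"firstName": "John", "firstName": "Johann"}'

parsed = json.loads(doc)

# The parser keeps only the last value; the earlier pair is silently lost.
print(parsed)  # → {'firstName': 'Johann'}
```

      Any parser that maps objects straight onto unique-key hash maps will
      discard data the same way, with no error raised.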

      Now the problem with JSON, it seems to me, is that it has the
      beginnings of a semantics (it has types for true, false, and numbers)
      and at the same time it does not have enough. The spec is completely
      at the syntactic level. What semantics it has comes from being so
      closely tied to JavaScript, which has a procedural semantics. Numbers
      refer to numbers because that's the way JavaScript will interpret them.

      As I mention elsewhere, the other problem is the lack of identity of
      things. "firstName" is a relation relating what we can clearly see to
      be a Person object to the string "John". What if somewhere else I
      find a French site that has "prenom" and "nom" instead? How can I say
      that these two words refer to the same relation? I can't really,
      because so much is underspecified.

      In fact, the more I look at the example, the more I notice the same
      provincialism that led to all the problems with XML-RPC [1], such as
      XML-RPC not defining a time zone for dates. When working on the
      internet you have to think globally, or else you are still stuck in a
      client-server mode of thinking. Just as the XML-RPC folk never seemed
      to have realized that the world may have more than one time zone, so
      it is clear that JSON was not designed with the thought that data
      would be travelling in a global space with no context (on the web you
      can link any two resources together, so you don't know ahead of time
      where people are going to come from, or how they are going to mesh up
      the data). This should not be surprising, because JavaScript is a
      scripting language that is meant to work only within a page. Other
      provincialisms of the example above are assuming that a postal code
      is a number (clearly they have never lived in the UK!) and giving out
      phone numbers without a country prefix. Oh my god! People on the
      internet may find this information from another country? Well, should
      they not know that the US prefix is +1? Yeah! I was in Prague
      recently, and the people there assumed everywhere that you knew not
      just the country prefix, but what the local Prague prefix was meant
      to be. You want to book a hotel in Prague from another country? You
      must be crazy.
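      The postal-code point is concrete, not rhetorical. A sketch in Python
      (the Boston ZIP code 02134 is a hypothetical example):

```python
import json

# A US ZIP code with a leading zero cannot even be written as a JSON
# number: JSON's number grammar forbids leading zeros, so a strict
# parser rejects it outright.
try:
    json.loads('{"postalCode": 02134}')
except json.JSONDecodeError as err:
    print("rejected:", err)

# And a UK postcode is not a number at all; modelling the field as a
# number makes it unrepresentable, so it has to be a string anyway:
print(json.dumps({"postalCode": "SW1A 1AA"}))
```

      Either way, typing the field as a number was the wrong design; only
      a string survives contact with the rest of the world.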

      Let me rewrite the above example in the Turtle subset of N3, and also
      give the person a name:

      @prefix : <http://xmlns.com/foaf/0.1/> .
      @prefix contact: <http://www.w3.org/2000/10/swap/pim/contact#> .

      <http://eg.com/joe#p>
          a :Person;
          :firstName "John";
          :family_name "Smith";
          contact:home [
              contact:address [
                  contact:city "New York";
                  contact:country "USA";
                  contact:postalCode "10021";
                  contact:street "21 2nd Street"
              ]
          ];
          :phone <tel:+1-212-732-1234>, <tel:+1-646-123-4567> .

      Now this may require a little learning curve - but frankly not that
      much - to understand. But it has the following advantages:

      1. You can find out what any of the terms mean by clicking on them
      (append the name to the prefix) and doing a GET.
      2. You can make statements of equality between relations and things,
      such as
      :firstName = frenchfoaf:prenom .
      3. You can infer things from the above, such as that
      <http://eg.com/joe#p> a :Agent .
      4. You can mix vocabularies from different namespaces as above, just
      as in Java you can mix classes developed by different organisations.
      There does not even seem to be the notion of a namespace in JSON, so
      how would you reuse the work of others?
      5. You can split the data about something into pieces. So you can
      put your information about <http://eg.com/joe#p> at the
      "http://eg.com/joe" URL, in a RESTful way, and other people can talk
      about him at their URLs. I could for example add the following to my
      foaf file:
      <http://bblfish.net/people/henry/card#me> :knows <http://eg.com/joe#p> .
      You can't do that in a standard way in JSON, because it does not
      have a URI as a base type (weird for a language that wants to be a
      web language to miss the core element of the web, yet have true,
      false and numbers!)
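      The vocabulary-equivalence idea in point 2 can be sketched even
      without an RDF library. The mapping tables and field names below are
      hypothetical, purely to illustrate that once local names are mapped
      to shared relation URIs, "firstName" and "prenom" become the same
      relation (a sketch in Python):

```python
# Hypothetical: two sites use different local JSON field names, but both
# map them onto shared URI-identified relations.
FOAF = "http://xmlns.com/foaf/0.1/"

EN_SITE = {"firstName": FOAF + "firstName", "lastName": FOAF + "family_name"}
FR_SITE = {"prenom": FOAF + "firstName", "nom": FOAF + "family_name"}

def to_relations(record, mapping):
    """Rewrite a parsed JSON object's keys into URI-named relations."""
    return {mapping[key]: value for key, value in record.items()}

a = to_relations({"firstName": "John", "lastName": "Smith"}, EN_SITE)
b = to_relations({"prenom": "John", "nom": "Smith"}, FR_SITE)

print(a == b)  # the two records now state the same relations
```

      Plain JSON gives you no standard place to publish such a mapping;
      with URIs the mapping is itself just more data.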

      Now that does not mean JSON can't be made to work right, as the
      SPARQL JSON result set serialisation does [2]. But it does not do the
      right thing by default. A bit like languages before Java that did not
      have unicode support by default. You could do the right thing if you
      knew a lot. But most people just got into bad habits instead.
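      For illustration, the SPARQL result-set serialisation mentioned in
      [2] types every value explicitly, so URIs survive the trip through
      JSON. A sketch in Python (the binding shown is a hypothetical
      example, not output from a real endpoint):

```python
import json

# One result row: each value carries an explicit "type" field, so a
# consumer can tell a URI apart from a plain string literal.
result = {
    "head": {"vars": ["person", "name"]},
    "results": {"bindings": [{
        "person": {"type": "uri", "value": "http://eg.com/joe#p"},
        "name": {"type": "literal", "value": "John Smith"},
    }]},
}

row = json.loads(json.dumps(result))["results"]["bindings"][0]
print(row["person"]["type"])  # → uri
```

      The cost is that the "right thing" lives in a convention layered on
      top of JSON, not in the format itself.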


      [1] "Some of the many limitations of the MetaWeblog API":
      [2] http://www.w3.org/TR/rdf-sparql-json-res/

      > > If hash maps in my language can only
      > > have unique keys, I want a format that enforces this constraint
      > > at the parser level, so that ill-formed messages are defined out
      > > of existence, freeing me from ever having to deal with them at a
      > > higher level in the application.
      > Serialized formats that restrict what you can say are a liability.
      > XML usually lets you express what the data actually is, without too
      > many
      > contortions (at least until overlap rears its head). Hashtables
      > don't. :-(

      Home page: http://bblfish.net/
      Sun Blog: http://blogs.sun.com/bblfish/
      Foaf name: http://bblfish.net/people/henry/card#me
    • A. Pagaltzis
      Message 108 of 108 , Jul 30, 2007
        * Elliotte Harold <elharo@...> [2007-07-21 14:52]:
        > A. Pagaltzis wrote:
        > > * Elliotte Harold <elharo@...> [2007-07-17 01:00]:
        > >> You're not supposed to put arbitrary JavaScript into
        > >> JSON, but people can and do.
        > >
        > > Then it’s not JSON anymore and JSON parsers will choke on it.
        > > JSON is a computation-free subset of Javascript.
        > Crackers don't play by the rules. They do not send only
        > well-formed messages that adhere to the spec. Secure software
        > has to be ready for absolutely any input, not just input that
        > follows the spec.

        Sure. All of the software I’ve written to date will spit stuff
        back out if it purports to be JSON but contains Javascript code.
        Because *none* of my code that uses JSON is Javascript.

        Now how does that fit into your world view?

        (And if it were JS, my statement would still hold true because
        I wouldn’t use `eval` anyway.)

        Let the crackers have at it. They’re not disturbing my sleep.
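        A strict parser really does make this cheap. A minimal sketch in
        Python (not the poster's own code; any compliant JSON parser
        behaves the same way):

```python
import json

# Something that "purports to be JSON" but smuggles in JavaScript code:
payload = '{"callback": alert(document.cookie)}'

try:
    json.loads(payload)
except json.JSONDecodeError as err:
    # JSON's computation-free grammar has nowhere to hide code: the
    # parser rejects the whole message, and no eval ever runs.
    print("rejected:", err)
```

        The ill-formed message is defined out of existence at the parser
        level, exactly as argued upthread.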

        > That XML is so complex that you really need a true parser to
        > handle it is a feature, not a bug. It discourages and mostly
        prevents the use of poor-quality, hand-written solutions to
        > handle it.

        Right. That’s why we had the billion laughs attack, and why XML
        parsers can be caused to participate in a DDoS or to violate the
        privacy of their users if you provide an address for an external
        entity.
        Give me a JSON parser any day.

        I know, the former is fixed in most parsers. The latter generally
        has to be manually disabled from client code; most app developers
        forget to toggle it appropriately.

        > No one takes an arbitrary XML document and throws it into a
        > JavaScript interpreter. People do this with JSON all the time,

        That’s a bug. Such code will fail to reject things that are not
        JSON.
        > the language was deliberately designed to make this possible.

        Did Crockford actually say that somewhere? Citation?

        And if that were so, why would JSON forbid syntactical variants
        (such as keys with no quotes around them) that are valid in
        JavaScript?
        Aristotle Pagaltzis // <http://plasmasturm.org/>