Re: [json] Re: JSON and the Unicode Standard
To save people looking it up:
ECMA-262, section 7.6:
Two IdentifierName that are canonically equivalent according to the
Unicode standard are not equal unless they are represented by the
exact same sequence of code units (in other words, conforming
ECMAScript implementations are only required to do bitwise comparison
on IdentifierName values). The intent is that the incoming source text
has been converted to normalised form C before it reaches the
compiler.
ECMAScript implementations may recognize identifier characters defined
in later editions of the Unicode Standard. If portability is a
concern, programmers should only employ identifier characters defined
in Unicode 3.0.
There then follows a syntax definition, which expressly precludes
reserved keywords from being used as identifiers.
Looks like the most interesting attacks on JSON, from a security
viewpoint, would be using keywords as object member names; parsers
that eval the text as ECMAScript would be the most at risk.
I think it's fairly clear that a JSON parser has ABSOLUTELY NO
BUSINESS poking around with actual data strings; Douglas has been very
clear that you are to pass them bit-identical to the recipient. On the
other hand, there's an argument for some kind of sanitisation when it
comes to object member names.
I'm really tempted by the idea of a JSON-secure spec, which clamps
down on these details.
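Just to make the idea concrete, here's a rough sketch in Python of the kind of member-name sanitisation such a spec might require. The names (`parse_strict`, `strict_pairs`) are hypothetical and the keyword list is deliberately incomplete; this is only an illustration, not a proposal for the actual rules:

```python
import json
import unicodedata

# Partial list of ECMA-262 reserved words, for illustration only.
ECMA_KEYWORDS = {"if", "else", "for", "while", "function", "return",
                 "var", "new", "delete", "this", "typeof", "in"}

def strict_pairs(pairs):
    """Reject member names that are reserved words, are not already in
    Unicode Normalisation Form C, or collide with another name after
    NFC normalisation."""
    seen = set()
    obj = {}
    for name, value in pairs:
        if name in ECMA_KEYWORDS:
            raise ValueError(f"reserved word used as member name: {name!r}")
        nfc = unicodedata.normalize("NFC", name)
        if nfc != name:
            raise ValueError(f"member name is not in NFC: {name!r}")
        if nfc in seen:
            raise ValueError(f"duplicate member name: {name!r}")
        seen.add(nfc)
        obj[name] = value
    return obj

def parse_strict(text):
    return json.loads(text, object_pairs_hook=strict_pairs)
```

So `parse_strict('{"a": 1}')` succeeds, while `parse_strict('{"if": 1}')` and a member name spelled with a combining diaeresis (NFD) both raise.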
Arguing the Unicode details is decidedly NOT compatible with the
"spirit" of JSON, which Douglas has been very clear about; a
lightweight, simple, modern data representation.
I think it speaks to the merit of JSON as a format that you (@Johne)
want to consider the security details.
But I think what you need might well be a branch and a new spec?
I'm probably speaking way out of turn here, so please do accept my
apologies if I've overstepped any bounds.
On Wed, Mar 2, 2011 at 6:22 PM, Dave Gamble <davegamble@...> wrote:
> This seems to be the same question, in practical terms.
> On Wed, Mar 2, 2011 at 6:21 PM, Dave Gamble <davegamble@...> wrote:
>> Would it be too much to specify that key names are to be ASCII top-bit-unset strings?
>> i.e. in the definition of an object, designate that the "string" there is a "simplestring" which uses a restricted definition of char?
>> As far as I can see, this is the only case where the Unicode interpretation is potentially dangerous.
>> In usage of strings as data, I believe they are to be delivered unprocessed to the user of the data.
>> Maybe designate this json_littlebitmoresecure.
>> On Wed, Mar 2, 2011 at 4:46 AM, johne_ganz <john.engelhart@...> wrote:
>>> --- In firstname.lastname@example.org, John Cowan <cowan@...> wrote:
>>> > johne_ganz scripsit:
>>> > > In fact, for my parser (JSONKit), which is Objective-C based and uses
>>> > > NSString to represent the JSON String objects, it is not practical
>>> > > for me to create a JSON parser that "respects the data stored in the
>>> > > JSON byte stream". The NSString class makes no such guarantees in its
>>> > > documentation, nor does the Unicode Standard. It would be extremely
>>> > > non-trivial for me to meet a "respects the data stored in the JSON
>>> > > byte stream" requirement, at least in the sense that the behavior
>>> > > is deterministic.
>>> > Normalization is non-trivial, and I doubt if any existing Unicode library
>>> > imposes it on all strings at creation/modification time. Certainly ICU
>>> > does not; it provides the ability to normalize, that's all.
>>> The Foundation framework (specifically the NSString class) on Mac OS X and iPhone / iPad does. Not sure if 90+ million iPhones count for much, though.
>>> In particular, [@"Ä" compare:@"Ä"] is zero, or "identical", whereas [@"Ä" isEqual:@"Ä"] is "no". Each has different semantics, and -compare: is preferred when dealing with strings because it has the right semantics in that context.
>>> if("Ä" == "Ä") // True
>>> if("Ä" === "Ä") // False
>>> in the same way that ("1" == 1) is true, but ("1" === 1) is false.
>>> And just in case things get mangled along the way, the first string is "\u00c4" and the second string is "\u0041\u0308". In fact, if they do get mangled.... I think that should serve as a warning that these things can and do happen behind your back when dealing with Unicode.
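For anyone who wants to try this outside Objective-C, the same comparison can be sketched in Python with the standard unicodedata module. Bitwise equality plays the role of -isEqual: (or ===), and comparison after canonical normalisation plays the role of -compare: (or ==):

```python
import unicodedata

precomposed = "\u00c4"        # LATIN CAPITAL LETTER A WITH DIAERESIS
decomposed = "\u0041\u0308"   # "A" followed by COMBINING DIAERESIS

# Bitwise (code-unit) comparison: the two spellings are NOT equal.
print(precomposed == decomposed)                     # False

# Comparison after NFC normalisation: the two spellings ARE equal.
print(unicodedata.normalize("NFC", precomposed) ==
      unicodedata.normalize("NFC", decomposed))      # True
```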
--- In email@example.com, Dave Gamble <davegamble@...> wrote:
>There is another relevant section (ECMA-262, 8.4 The String Type, pg 28)
> To save people looking it up:
> ECMA-262, section 7.6:
> Two IdentifierName that are canonically equivalent according to the
> Unicode standard are not equal unless they are represented by the
> exact same sequence of code units (in other words, conforming
> ECMAScript implementations are only required to do bitwise comparison
> on IdentifierName values). The intent is that the incoming source text
> has been converted to normalised form C before it reaches the
> compiler.
> ECMAScript implementations may recognize identifier characters defined
> in later editions of the Unicode Standard. If portability is a
> concern, programmers should only employ identifier characters defined
> in Unicode 3.0.
When a String contains actual textual data, each element is considered to be a single UTF-16 code unit. Whether or not this is the actual storage format of a String, the characters within a String are numbered by their initial code unit element position as though they were represented using UTF-16. All operations on Strings (except as otherwise stated) treat them as sequences of undifferentiated 16-bit unsigned integers; they do not ensure the resulting String is in normalised form, nor do they ensure language-sensitive results.
NOTE The rationale behind this design was to keep the implementation of Strings as simple and high-performing as possible. The intent is that textual data coming into the execution environment from outside (e.g., user input, text read from a file or received over the network, etc.) be converted to Unicode Normalised Form C before the running program sees it. Usually this would occur at the same time incoming text is converted from its original character encoding to Unicode (and would impose no additional overhead). Since it is recommended that ECMAScript source code be in Normalised Form C, string literals are guaranteed to be normalised (if source text is guaranteed to be normalised), as long as they do not contain any Unicode escape sequences.
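A quick illustration of that code-unit view: a character outside the Basic Multilingual Plane is one code point but two UTF-16 code units, and ECMA-262's String type counts the latter. Python indexes strings by code point, so the sketch below makes the UTF-16 length visible by encoding:

```python
clef = "\U0001D11E"  # MUSICAL SYMBOL G CLEF, outside the BMP

# One Unicode code point...
print(len(clef))                                 # 1

# ...but two UTF-16 code units, which is what an ECMAScript
# String's length and indexing are defined over.
utf16_units = len(clef.encode("utf-16-le")) // 2
print(utf16_units)                               # 2
```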
> I think it's fairly clear that a JSON parser has ABSOLUTELY NO
> BUSINESS poking around with actual data strings; Douglas has been very
> clear that you are to pass them bit-identical to the recipient. On the
> other hand, there's an argument for some kind of sanitisation when it
> comes to object member names.
I disagree with your first statement. The ECMA-262 standard, at least in my opinion, tries to sidestep a lot of these issues. It makes a fairly clear distinction between "what happens inside the ECMA-262 environment (which it obviously has near total control over)" and "what happens outside the ECMA-262 environment".
> I'm really tempted by the idea of a JSON-secure spec, which clamps
> down on these details.
IMHO, the ECMA-262 standard advocates that "stuff that happens outside the ECMA-262 environment should be treated as if it is NFC".
Since the sine qua non of JSON is the interchange of information between different environments and implementations, it must address any issues that can and will cause difficulties. Like it or not, the fact that it's Unicode means these things can and will happen, and it's simply not practical to expect or insist that every implementation treat JSON Strings as "just a simple array of Unicode Code Points".
> Arguing the Unicode details is decidedly NOT compatible with the
> "spirit" of JSON, which Douglas has been very clear about; a
> lightweight, simple, modern data representation.
I completely agree that these details are NOT compatible with the "spirit" of JSON.
But.... so what? Unicode is not simple. I'm not the one who made it that way, but the way that RFC 4627 is written, you must deal with it. There are ways RFC 4627 could have been written such that the JSON to be parsed is considered a stream of 8 bit bytes, and therefore stripped of its Unicode semantics (if any). However, it very clearly and plainly says "JSON text SHALL be encoded in Unicode.", which pretty much kills the idea that you can just treat it as raw bytes.
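For what it's worth, RFC 4627 section 3 even spells out how to tell which Unicode encoding a JSON text uses: since the first two characters are always ASCII, the pattern of null octets in the first four bytes identifies UTF-8, UTF-16 (BE/LE), or UTF-32 (BE/LE). A rough Python sketch of that heuristic (assuming no BOM and at least four octets):

```python
def detect_json_encoding(octets: bytes) -> str:
    """Guess the Unicode encoding of a JSON text from the null-octet
    pattern in its first four bytes, per RFC 4627 section 3.
    Assumes no BOM and that the text starts with two ASCII characters."""
    if len(octets) < 4:
        return "utf-8"  # too short for the heuristic; fall back to UTF-8
    a, b, c, d = octets[:4]
    if a == 0 and b == 0 and c == 0:
        return "utf-32-be"   # 00 00 00 xx
    if a == 0 and c == 0:
        return "utf-16-be"   # 00 xx 00 xx
    if b == 0 and c == 0 and d == 0:
        return "utf-32-le"   # xx 00 00 00
    if b == 0 and d == 0:
        return "utf-16-le"   # xx 00 xx 00
    return "utf-8"           # xx xx xx xx
```

For example, `detect_json_encoding('{"a":1}'.encode("utf-16-le"))` returns "utf-16-le".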
There's a saying about formalized standards: The standard is right. Even its mistakes.
As an aside, there is an RFC for "Unicode Format for Network Interchange", RFC 5198 (http://tools.ietf.org/html/rfc5198). It is 18 pages long; RFC 4627 is just 9 pages.
Actually, I would encourage people to read RFC 5198. I'm not sure I agree with all of it, but it goes over a lot of the issues I think are very relevant to this conversation. It's great background info if you're not familiar with the details.