I forgot to add that encoders should only use the big number format if the
number is too big to fit in an int64 (or int32, depending on which ends up
being the largest integer type in the spec) or a double. That way, a decoder
that can't handle numbers larger than int64 anyway does not need to implement
decoding of big numbers -- you don't want a number that fits in an int32 put
into the big number format anyway.
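To make that rule concrete, here is a minimal sketch in Python (the function
and type names are placeholders of mine, not markers from the spec):

    def pick_number_type(value):
        # Sketch: use the smallest native type that fits, and fall back
        # to the big number format only when nothing native can hold it.
        if isinstance(value, int):
            if -2**31 <= value < 2**31:
                return 'int32'
            if -2**63 <= value < 2**63:
                return 'int64'
            return 'bigint'   # too big for int64
        return 'double'       # floats go in the 8-byte double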
On Thu, Sep 22, 2011 at 7:15 AM, Don Owens <don@...> wrote:
> Yes, that is what I was getting at. But see comments embedded.
> On Wed, Sep 21, 2011 at 7:50 PM, rkalla123 <rkalla@...> wrote:
>> I see your point. The way I understand it is that this would require 2 new
>> data types, effectively BigInt and BigDecimal.
>> So say something along these lines:
>> bigint - marker 'G'
>> [G][129 big-endian ordered bytes representing a BigInt]
> It should be mentioned that they are signed ints, but doing two's
> complement and such is probably too much work. Maybe just specify that the
> first bit always represents the sign (0 for plus, 1 for minus).
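> To illustrate, a minimal Python sketch of that sign-magnitude scheme (the
> function name is mine; a length prefix would be needed in practice but is
> left out here since it hasn't been settled):
>
>     def encode_bigint(n):
>         # Sketch: high bit of the first byte is the sign (0 = plus,
>         # 1 = minus); the remaining bits are the big-endian magnitude.
>         sign = 1 if n < 0 else 0
>         mag = abs(n)
>         # the +8 reserves a spare high bit for the sign
>         body = mag.to_bytes((mag.bit_length() + 8) // 8, 'big')
>         body = bytes([body[0] | (sign << 7)]) + body[1:]
>         return b'G' + body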
>> bigdouble - marker 'W'
>> [W][222 big-endian ordered bytes representing a BigDecimal]
> BigDecimal should probably be renamed to something like BigFloat, since
> "decimal" is ambiguous (it gets used to mean both base-10 and floating
> point). I'm less familiar with large floating point, but I think a floating
> point number should consist of a sign bit plus two integers (one for the
> mantissa/significand and one for the exponent). In the interest of space
> savings, I think the sign bit should just be folded into the exponent
> field, with the fields ordered so they look similar to the IEEE 754
> layout, e.g.,
> [W][3 big-endian ordered bytes (where first bit is sign bit) of
> exponent][222 big-endian ordered bytes of mantissa]
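> A byte-layout sketch of the above in Python (the names and the 3-byte
> exponent width are illustrative only; how the exponent is biased is left
> open here):
>
>     def encode_bigfloat(sign, exponent, mantissa):
>         # Sketch: [W][3-byte exponent, top bit = sign of the number]
>         #            [big-endian mantissa bytes]
>         exp = exponent.to_bytes(3, 'big')   # must fit in 23 bits
>         exp = bytes([exp[0] | (sign << 7)]) + exp[1:]
>         mant = mantissa.to_bytes((mantissa.bit_length() + 7) // 8 or 1, 'big')
>         return b'W' + exp + mant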
> In terms of the documentation, I think the big integers and floats should
> be qualified with a "should implement" instead of a "must implement", since,
> as others have mentioned, not every encoder and decoder will be able to
> handle these. I think this matches JSON implementations well. If an
> encoder does not handle large numbers, it could just throw an error, just as
> it should throw an error now if an oversized number is encountered in JSON.
> The same goes for the decoder side. If there is no good way to represent a
> large number in the language you are working in, throw an error indicating
> that the number is too large.
> Have you looked into using variable-length integers for length specifiers?
> If you have a lot of short strings (or big numbers, etc.) in your data,
> these could significantly reduce your space usage (at the cost of more
> complexity for the developer and CPU). There should be a balance between
> space efficiency and complexity. Thoughts?
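> For reference, a minimal LEB128-style sketch in Python (the same scheme
> Protocol Buffers uses for its varints; the function name is mine):
>
>     def encode_varint(n):
>         # 7 bits per byte, low-order group first; the high bit marks
>         # "more bytes follow". Lengths up to 127 cost a single byte.
>         out = bytearray()
>         while True:
>             byte = n & 0x7f
>             n >>= 7
>             if n:
>                 out.append(byte | 0x80)
>             else:
>                 out.append(byte)
>                 return bytes(out)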
>> --- In firstname.lastname@example.org, Don Owens <don@...> wrote:
>> > I've seen very large numbers used in JSON. In Perl, those can be
>> > represented as a Math::BigInt object. And that is the way I have
>> > implemented it in my JSON module for Perl (JSON::DWIW). Python has
>> > arbitrary length integers built-in. For my own language that I'm
>> > working on, I'm using libgmp in C to handle arbitrary length integers.
>> > JSON is used as a data exchange format. I want to be able to do a
>> > roundtrip, e.g., Python -> encoded -> Python with native integers (with
>> > arbitrary length in this case). In JSON, this just works, as far as the
>> > encoding is concerned. I see the need for this in any binary JSON
>> > format as well. If a large number is represented as a string, then on
>> > the decoding side, you don't know if that was a number or a string
>> > (just because it looks like a number doesn't mean that the sender means
>> > it's a number). If, when decoding JSON, the library can't handle large
>> > numbers, it has to throw an error anyway. The same should go for
>> > binary JSON.
>> > ./don
> Don Owens