The spec is too vague; it's "agnostic". It says a number can include a fractional part, but it doesn't say whether fractionless numbers are to be treated as integers. It leaves that to the implementation, which is a terrible idea for an interchange format.
Ruby, IMHO, errs on the side of consistency (fractionless numbers become Fixnum, so encode(decode("1")) => "1"), whereas Go goes the opposite way (all numbers become float64, so encode(decode("1")) => "1.0", unless you enable the magic json.Number type).
JavaScript is an interesting case, because JS doesn't even have integer numbers, which means it has no choice in the matter. Interestingly, Node.js/V8 drops the fraction when it can: JSON.stringify(JSON.parse("1.0")) => "1".
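For another data point, Python's stdlib `json` module sits on the Ruby side of this split: fractionless numbers decode to `int` and the rest to `float`, so the textual distinction survives a round-trip. A quick sketch:

```python
import json

# Fractionless numbers come back as int, numbers with a fraction as float,
# so encode(decode(x)) preserves the int/float distinction in the text.
print(json.dumps(json.loads("1")))    # -> 1
print(json.dumps(json.loads("1.0")))  # -> 1.0
```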
Thanks for your analysis (and the Ruby and JS reports). The more I think about it, the more I think parsers should not parse to numbers by default, but instead to something like this (a Haskell-based pseudo-type):
```haskell
data Sign = Positive | Negative

data Digit = Zero | One | Two | Three | Four | Five | Six | Seven | Eight | Nine

data JSONNumber = JSONNumber
  { _sign           :: Maybe Sign
  , _integerPart    :: [Digit]
  , _fractionalPart :: Maybe [Digit]
  }
```
Actually this should include exponents as well -- basically parsers should just report exactly what the spec allows, without coercing to float or whatever.
Of course, users of the parsers could ask to have a number converted to a float, but that wouldn't be the default.
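To make the idea concrete, here is a minimal sketch in Python of such a lossless representation, with the exponent included as suggested above. The names (`JSONNumber`, `parse_number`, `to_float`) are hypothetical; the regex follows the number grammar in RFC 8259:

```python
import re
from typing import NamedTuple, Optional

# Hypothetical lossless number representation, mirroring the Haskell
# sketch above (plus the exponent).
class JSONNumber(NamedTuple):
    sign: Optional[str]             # "-" or None
    integer_part: str               # digits before the "."
    fractional_part: Optional[str]  # digits after the ".", or None
    exponent: Optional[str]         # e.g. "+3", "-2", "10", or None

# RFC 8259 number grammar: -?(0|[1-9][0-9]*)(.[0-9]+)?([eE][+-]?[0-9]+)?
_NUMBER = re.compile(r"(-)?(0|[1-9][0-9]*)(?:\.([0-9]+))?(?:[eE]([+-]?[0-9]+))?\Z")

def parse_number(text: str) -> JSONNumber:
    """Report exactly what the spec allows, without coercing to float."""
    m = _NUMBER.match(text)
    if m is None:
        raise ValueError(f"not a JSON number: {text!r}")
    sign, integer, fraction, exponent = m.groups()
    return JSONNumber(sign, integer, fraction, exponent)

# Conversion to float is available on request -- but it is not the default.
def to_float(n: JSONNumber) -> float:
    s = (n.sign or "") + n.integer_part
    if n.fractional_part is not None:
        s += "." + n.fractional_part
    if n.exponent is not None:
        s += "e" + n.exponent
    return float(s)
```

With this, `parse_number("1")` and `parse_number("1.0")` are distinct values, and re-serializing each reproduces the original text.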
We're a long way from that though. At the moment I don't even feel like I can write a protocol that expects implementations to distinguish `1` and `1.0`. This should not be the case.
> At the moment I don't even feel like I can write a protocol that expects implementations to distinguish `1` and `1.0`.
Indeed you can't, among other reasons because that distinction doesn't exist in JavaScript, but ... why would you want to make such a subtle distinction?
If you really want what you described above, you can get it by using a string.
The distinction may be familiar to us, but if you ask someone on the street, "1" and "1.0" are the same number. The habit we have of giving them different types is somewhat arbitrary.
It sounds like what you're saying is that it should be deserialized as Java's BigDecimal, or whatever the equivalent of that is in a given language/framework; and if there isn't one, then the JSON parser should provide that equivalent.
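Some parsers already expose a hook for this. Python's `json.loads`, for instance, accepts a `parse_float` callback, so you can deserialize into `decimal.Decimal` (roughly Python's BigDecimal) and keep `1.0` distinct from `1`:

```python
import json
from decimal import Decimal

# Route every JSON float through Decimal instead of a binary float;
# Decimal("1.0") remembers its trailing zero, unlike float.
doc = json.loads('{"a": 1, "b": 1.0}', parse_float=Decimal)
print(doc)            # -> {'a': 1, 'b': Decimal('1.0')}
print(str(doc["b"]))  # -> 1.0
```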
> Ruby, IMHO, errs on the side of consistency (fractionless numbers become Fixnum, so encode(decode("1")) => "1"), whereas Go goes the opposite way (all numbers become float64, so encode(decode("1")) => "1.0", unless you enable the magic json.Number type).
>
> JavaScript is an interesting case, because JS doesn't even have integer numbers, which means it has no choice in the matter. Interestingly, Node.js/V8 drops the fraction when it can: JSON.stringify(JSON.parse("1.0")) => "1".
It's a mess.