JSON decoding

P T Withington ptw at pobox.com
Fri Nov 3 10:26:19 PST 2006


On 2006-11-03, at 12:58 EST, Bob Ippolito wrote:

> On 11/3/06, P T Withington <ptw at pobox.com> wrote:

[...]

>> I understand that the 'from JSON' mechanism must protect itself
>> against such.  But there is no point in preventing the 'to JSON'
>> mechanism from writing such, since 'from JSON' must validate
>> anyways.  More important is to signal an error if an object that
>> cannot be round-tripped is passed to 'to JSON'.  The end user would
>> be justifiably upset if 'to JSON' gave the impression that it
>> preserved their data, only to find out (much) later that 'from JSON'
>> cannot reconstruct it.  These are almost the same condition, but
>> there is an important difference in emphasis:  rather than preventing
>> the user from using 'to JSON' in a way that breaks 'from JSON', you
>> are preventing the user from writing data using 'to JSON' that can
>> not be reconstructed accurately using 'from JSON'.
>
> toJSON does a better job at preserving data than toJSONString. Yes,
> it's possible that you can encode data in a way such that you don't
> get the same thing back out. So what? At least you get a valid
> document.

Hm.  Well, as a user, I would not want to use a serialization  
mechanism that guarantees my document can be de-serialized but not  
that it contains the data I serialized.  As an implementer, such a  
spec is easy to implement, though:  the serializer can just ignore  
its input, write an empty document, and return success.
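To make the lossiness concrete, here is a small Python sketch (Python is chosen only for illustration; its stock json module happens to behave exactly this way): the serializer produces a perfectly valid document, yet the round trip silently alters the data.

```python
import json

# A dict with an integer key and a tuple value: both are legal Python,
# and json.dumps happily produces a valid JSON document from them.
original = {1: (2, 3)}
text = json.dumps(original)   # '{"1": [2, 3]}' -- a valid document

# But decoding gives back a *different* value: the key became a string
# and the tuple became a list.  Nothing signaled an error on the way out.
restored = json.loads(text)
assert restored == {"1": [2, 3]}
assert restored != original
```

The document validates, so by the "at least you get a valid document" standard the serializer succeeded; the user only discovers the loss when 'from JSON' hands back something other than what was saved.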

> Your second point doesn't make any sense. toJSONString has no
> advantage whatsoever in reconstruction than toJSON. It is unsafe,
> however.
>
> The discussion hasn't been "should we allow custom representations"
> but *how* we should allow custom representations.

We're missing each other here.  I am not arguing toJSONString vs.
toJSON.  I am saying that we should not create a facility that gives
the illusion of preserving your data when it does not.  I believe it
would be better to have no extension mechanism at all than to have
one that is incomplete.

[...]

>> I understand this to mean that you intend to support building, on top
>> of the JSON substrate, a general dump/load that would preserve
>> arbitrary values, not just those that can be directly represented as
>> literals.  I don't quite see how you can do that, since it would mean
>> that you would have to use some part of the literal space that can be
>> represented by JSON to represent these additional values.  At the
>> very least, you need to define an escape in JSON to do that, no?
>> Otherwise how do you distinguish between a literal that represents,
>> say, an instance of a class, and the same pattern meant to represent
>> just that literal value?  As a concrete example, if we did not have
>> date literals and you chose to implement 'to JSON' of a date object
>> as `{date: <iso-date-string>}`, how can 'from JSON' tell whether it
>> is to reconstruct a Date vs. reconstruct the Object
>> `{date: <iso-date-string>}`?
>
> You can do that quite easily. JSON-RPC for example. You just need to
> use an appropriate decoding object hook that understands how the
> objects should be decoded.

In other words, you have to have an out-of-band agreement on the
encoding.  Once again, this creates the issue that you could save
your data in a file and not be able to recover it: because the
encoding scheme is not in the file, if the two become separated you
are out of luck.
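The ambiguity can be sketched with Python's json module, whose object_hook parameter is the kind of "decoding object hook" described above (the lone-"date"-key convention here is the hypothetical encoding from the earlier example, not anything in the JSON spec):

```python
import json
from datetime import date

def hook(obj):
    # Out-of-band convention: an object with a single "date" key is
    # taken to be a serialized Date.  This rule lives only in the code,
    # not in the document itself.
    if set(obj) == {"date"} and isinstance(obj["date"], str):
        return date.fromisoformat(obj["date"])
    return obj

# The writer may have meant this literally, as the Object
# {"date": "2006-11-03"} -- but the hook has no way to know that,
# so it is "reconstructed" as a Date regardless.
value = json.loads('{"date": "2006-11-03"}', object_hook=hook)
assert value == date(2006, 11, 3)
```

A reader without the hook (or with a different one) decodes the same bytes into different data, which is exactly the separation problem: the convention travels with the code, not with the file.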

Summary:  I don't think JSON is powerful enough (in its current
specification) to support an extension to general object
serialization, so I don't think we should try to layer that feature
on it.  Either JSON has to evolve, or a different representation
should be chosen.

Sorry about being a curmudgeon, but it's in my job description.





More information about the Es4-discuss mailing list