Consistency in The Negative Result Values Through Expansion of null's Role

Rick Waldron waldron.rick at
Wed Aug 15 15:35:59 PDT 2012

On Wed, Aug 15, 2012 at 6:02 PM, Erik Reppen <erik.reppen at> wrote:

> This topic has probably been beaten to death years before I was even aware
> of es-discuss but it continues to get mentioned occasionally as a point of
> pain so I thought I'd see if I couldn't attempt to hatch a conversation and
> maybe understand the design concerns better than I likely do now.
> Consistent Type Return for Pass and Fail?
> The principle of consistent type-return has occasionally skewered me as
> somebody who came to non-amateur levels of understanding code primarily
> through JavaScript. I can see the value in maintaining consistent types for
> positive results but not so much for indicators that you didn't get
> anything useful. For instance:
> * [0,1].indexOf('wombat'); // returns an index on success or -1 to
> indicate failure. -1, passed on to many other array methods, of course
> indicates the last element. If you'd asked me the day I made that mistake I
> could have told you indexOf probably returns -1 on failure to find
> something but it didn't occur to me in the moment.

It would be far worse to have a different type of value as a return, right?
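For instance (a quick sketch; the -1-as-slice-argument trap is the one described above):

```javascript
const animals = ['cat', 'dog'];

// indexOf always returns a number: the index on success, -1 on failure.
const i = animals.indexOf('wombat');
console.log(i); // -1

// The trap: -1 is a perfectly valid *argument* to methods like slice,
// where it means "last element", so a failed lookup can silently
// select real data instead of failing loudly.
console.log(animals.slice(i)); // ['dog']

// The type-consistent guard is an explicit comparison:
if (i !== -1) {
  console.log('found at', i);
}
```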

> * 'wombat'.charAt(20); //returns an empty string, but that's a concrete
> value whereas 'wombat'[20] returns undefined

For the same reason indexOf always returns a number, charAt always returns
a string.

"wombat"[20] is an ordinary property access at an index that doesn't exist;
the string has no property named "20", so the result is undefined.
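To illustrate (nothing here beyond standard string behavior):

```javascript
const s = 'wombat';

// charAt is specified to return a string for any argument,
// so an out-of-range index yields the empty string.
console.log(s.charAt(20));        // ''
console.log(typeof s.charAt(20)); // 'string'

// Bracket access is plain property lookup: the string has no
// property named '20', so the result is undefined.
console.log(s[20]); // undefined

// charCodeAt preserves its numeric return type the same way,
// which is why its out-of-range result is NaN rather than null.
console.log(s.charCodeAt(20)); // NaN
```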

> Is consistent type return a heuristic carried over from more
> strictly-typed paradigms or would it murder performance of the native
> methods to do the logic required to return something like null in these
> cases? In a dynamic language, why not focus on more consistent return types
> across the board for an indicator that you won't be getting particularly
> handy results?

It would break the web.
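Concretely, deployed code depends on the exact sentinel values. A small sketch of idioms that would change meaning if, say, indexOf returned null on failure (the bitwise trick is a common real-world pattern, not something from this thread):

```javascript
const list = ['a', 'b'];

// Long-standing idiom: ~(-1) === 0, so ~indexOf(...) is falsy
// exactly when the search fails.
if (~list.indexOf('a')) {
  console.log('present'); // runs today, because indexOf returns 0
}

// Numeric comparisons on the result would also flip:
console.log(list.indexOf('zzz') < 0); // true today; null < 0 is false
```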

> Generic Fail Values
> I suspect I'm in the minority but I actually like the variety in the more
> generic negative-result/failure values like undefined, null and NaN since
> they can help you understand the nature of a problem when they show up in
> unexpected places but more consistency of implementation and clarity in
> terms of what they mean would definitely be valuable.
> Here are my assumptions about the intent of the following values. Please
> correct me if I'm wrong:
> * undefined - Makes sense to me as typically implemented (possibly 100%
> consistently, as I can't think of exceptions). You tried to access something
> that wasn't there. Only happens when a function actually returns a
> reference to something holding that value or doesn't define something to
> return in the first place, or via any property access attempt that doesn't
> resolve for the indicated property name/label.
> * NaN - Something is expected to evaluate as a number but that's not
> really possible due to the rules of arithmetic or a type clash. In some
> cases it seems as if the idea is to return NaN any time a number return was
> expected but for some reason couldn't be achieved, which as a heuristic
> doesn't seem like such a hot idea to me.
> * null - Indicates an absence of value. There were no regEx matches in a
> string, for instance.
> How I'd prefer to see them:
> * undefined - as is. It seems like the most consistently implemented of
> the lot and when I spot an undefined somewhere unexpected it only takes 1-2
> guesses to sort out what's going wrong typically.
> * NaN - It can tell you a lot about what kind of thing went wrong but
> given its not-equal-to-itself nature it can be a nasty return value when
> unexpected. For instance, 'wombat'.charCodeAt(20) returns NaN. How does
> this make sense in the context of JavaScript? Yes, I'm trying to get a
> number but from what I would assume (in complete ignorance of unicode
> evaluation at the lower level) is some sort of look-up table. I'm not
> trying to divide 'a' by 2, parseInt('a') or get the square root of a
> negative number. It's as counter-intuitive as indexOf returning a NaN on a
> failure to find a matching element value. A highly specific return value
> like NaN only seems ideal to me when the user-placed value responsible is
> an operand or as a single argument for a simpler method that is one step
> away from evaluating the arg as a number or failing to do so.
> * null - As typically implemented but more universally and broadly. I'd
> like to see null in core methods acting more as a catch-all when dealing
> with something like a NaN that resulted from operations that don't directly
> hit a single obvious argument.  Essentially a message from core methods
> telling you, "There's no error but I can't do anything useful with these
> arguments." Examples: There is no index for a value that can't be found in
> an array. No matches were possible with that regEx. A more complicated
> method that could be attempting to access something in its instance that's
> not there or have trouble with a number of args runs into trouble and
> returns null on the principle that it's better to be general than misdirect.
> An overly explicitly named method to make my point:
> someImaginaryCoreMethodThatGetsAnArrayValueViaSomeArrayKeyAndDividesByTwo(someArrayKey)
> So basically when the method takes that array key, gets an undefined value
> with it, tries to divide undefined by 2, and gets NaN, what's the most
> helpful return value from a less experienced user's perspective? Is the
> array key undefined or not a number? Is the array element undefined or not
> a number?
> Or would it be easier to branch your logic consistently when you can only
> expect to worry about specifics like NaN at an operator context or for very
> concise/straightforward casting/conversion basic operation methods like
> parseInt or someMethodThatDividesArgByTwo(arg). And then typically expect
> null when there are multiple args or multiple steps to a process where a
> number of things could result in a confusing NaN or other more specific
> return value to indicate the last thing in an often unobvious cascade of
> things to go wrong.
> With a broader complexity-catch-all-null-on-fail-return policy, branching
> for these sorts of values becomes simpler and more predictable. NaN is more
> likely to be something happening in one of my own functions where I can
> actually see the operation or the very straightforward convert-arg-style
> core-method call responsible, and undefined is almost always the result of
> a property typo, a failure to declare a property, or an array index for an
> element that doesn't exist.
> And null results would mean I'm likely dealing with less simple core
> methods that have more than one argument or multiple-step processes. Since
> in such cases I'm most likely going to need to check the args I'm passing
> and the state of the object a method is acting on anyway, why should I
> worry about whether I should be handling empty strings, NaN, -1, undefined
> or null when all but null could easily suggest a specific problem that
> isn't actually the first link in the chain in the first place?
> So for the sake of consistency/sanity in future methods, at least, how
> about establishing the following guidelines somewhere on the usage of these
> values?
> * More specific negative-result values are reserved for simple statements
> and very simple one-arg methods that operate directly on the value of your
> argument
> * Just about anything else returns null for non-positive-result scenarios
> where more specific returns don't necessarily clarify and could confuse
> things.
> * Ditch consistent typing approaches if that's not a lower-level perf
> thing.

I'm confident that abandoning existing ES precedents will only create
unnecessary confusion.
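Each existing sentinel already pairs with an established, type-appropriate check. A summary sketch, all current behavior with no proposed changes (Number.isNaN, shown last, is from the ES6 drafts; the classic check is n !== n):

```javascript
console.log([1, 2].indexOf(3) === -1);     // true: array search miss
console.log('wombat'.charAt(20) === '');   // true: out-of-range charAt
console.log('wombat'[20] === undefined);   // true: missing property
console.log('wombat'.match(/z/) === null); // true: regex found no match

// NaN's self-inequality is exactly why it needs its own predicate:
const n = 'wombat'.charCodeAt(20);
console.log(n === n);         // false, the classic NaN test
console.log(Number.isNaN(n)); // true
```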


> _______________________________________________
> es-discuss mailing list
> es-discuss at
