use decimal

Sam Ruby rubys at intertwingly.net
Wed Sep 17 19:07:52 PDT 2008


2008/9/17 Maciej Stachowiak <mjs at apple.com>:
>
> Were we to adopt this, then I think "use decimal" should condition only
> whether an unqualified numeric literal be interpreted as binary or decimal
> floating point. We should then have a suffix which means binary floating
> point, so you can say it explicitly. Nothing else about the numerics should
> be conditioned by the pragma.
>
> Perhaps that suffix could be 'f' as in C/C++.

'f' in languages like C/C++ and ECMA 334 (C#) means single-precision floating point.

'd' in ECMA 334 means double precision (C/C++ have no 'd' suffix at all; an unsuffixed literal is already a double).

And, yes, ECMA 334 has 'd' for binary floating point and 'm' for
decimal floating point.
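
For illustration, a minimal C sketch of the suffix convention (C and
C++ have 'f' for single precision but, unlike ECMA 334, no decimal
literal suffix, so only the binary side is shown here):

    #include <stdio.h>

    int main(void) {
        /* 'f' marks a single-precision (float) literal;
           an unsuffixed literal such as 0.1 is a double. */
        float  f = 0.1f;
        double d = 0.1;

        /* Neither binary type can hold 0.1 exactly, which is what
           a decimal type (and an 'm'-style suffix) would address. */
        printf("float : %.20f\n", f);
        printf("double: %.20f\n", d);
        printf("0.1 + 0.2 == 0.3 ? %s\n",
               0.1 + 0.2 == 0.3 ? "yes" : "no");
        return 0;
    }

With any C99 compiler the last line prints "no", which is exactly the
rounding behaviour the decimal proposal is meant to avoid.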

> Regards,
> Maciej

- Sam Ruby

