unnah | 5 months ago

Such algorithms have been around since the 1990s, so nowadays you can expect your language's standard library to use them for float-to-decimal and decimal-to-float conversions. All you need to do in code is print the float without any special formatting instructions, and you'll get the shortest unique decimal representation.
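
To make that concrete, here is a minimal brute-force sketch in C (my own illustration, not how the library algorithms work internally): it simply tries increasing precisions until the printed string parses back to exactly the same double.

  #include <stdio.h>
  #include <stdlib.h>

  /* Illustrative sketch only: find the shortest decimal string that
     parses back to exactly the same double by brute force. Real
     shortest-representation algorithms (Grisu, Ryu, ...) compute the
     digits directly instead of trying precisions one by one. */
  static void print_shortest(double x) {
      char buf[64];
      for (int prec = 1; prec <= 17; prec++) {  /* 17 digits always suffice for binary64 */
          snprintf(buf, sizeof buf, "%.*g", prec, x);
          if (strtod(buf, NULL) == x) {         /* round-trips exactly? */
              puts(buf);
              return;
          }
      }
  }

  int main(void) {
      print_shortest(3.14);       /* 3.14 */
      print_shortest(0.1);        /* 0.1 */
      print_shortest(1.0 / 3.0);  /* 0.3333333333333333 */
      return 0;
  }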

lifthrasiir | 5 months ago

Except that C specifies that floating-point numbers are printed at a fixed precision (six digits after the decimal point) when no precision is given. Internally the C libraries do use some sort of float-to-decimal algorithm [1], but you can't get the shortest representation out of them.

[1] Some (e.g. the Windows CRT) do use the shortest representation as a basis, in which case you can actually extract it by asking for a large enough precision (all subsequent digits will be zeros). But many libcs instead print the exact decimal expansion of the stored binary value (e.g. 3.140000000000000124344978758017532527446746826171875 for `printf("%.51f", 3.14)`), so they are useless for our purpose.
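
For illustration, a minimal sketch (my own addition, not part of the comment) that lets you check which kind of libc you have:

  #include <stdio.h>

  int main(void) {
      /* Depending on the libc, this prints either 3.14 padded with
         trailing zeros (shortest-based, e.g. the Windows CRT) or the
         exact decimal expansion of the stored binary value quoted
         above (e.g. glibc). */
      printf("%.51f\n", 3.14);
      return 0;
  }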

Sharlin | 5 months ago

That's what the %g format specifier is for.

  printf("%f\n", 3.14); // 3.140000
  printf("%g\n", 3.14); // 3.14