Set precision for floating-point to string conversions
The precision argument is a decimal integer specifying the number of significant digits to include when converting floating-point numbers to strings. The default is 6 significant digits. Setting precision to 17 is sufficient for IEEE double-precision numbers: every double-precision value can then be converted to a string and back to binary with no loss of precision.
This function replaces the tcl_precision variable, which is not recognized by expr.
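The round trip can be checked directly from a Tcl interpreter. The sketch below is illustrative only: the section does not give the actual call syntax, so the invocation "precision 17" is an assumption standing in for whatever this function is really named.

    precision 17                 ;# assumed call syntax: request 17 significant digits
    set x [expr {1.0 / 3.0}]
    set s "$x"                   ;# forces the double-to-string conversion
    puts [expr {$s == $x}]       ;# prints 1: the string parses back to the same double

With the default of 6 significant digits, $s would be 0.333333, which does not parse back to the original value, so the same comparison would print 0.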