
ns_set_precision

Overview

Set precision for floating-point to string conversions

Syntax

ns_set_precision precision

Description

The precision argument is a decimal integer specifying the number of significant digits to include when converting floating-point numbers to strings. The default is 6 significant digits. Set precision to 17 when working with IEEE floating-point numbers to allow double-precision values to be converted to strings and back to binary with no loss of precision.
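
For example, the following sketch sets the precision to 17 before formatting a double. The exact digit string produced for a given value depends on the platform's floating-point formatting, so no specific output is assumed here.

```tcl
# Use 17 significant digits so IEEE doubles round-trip
# through their string form without loss of precision.
ns_set_precision 17

set x [expr {1.0 / 3.0}]

# Converting $x to a string (e.g., when logging or writing
# to a page) now preserves all 17 significant digits.
ns_log Notice "x = $x"
```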

Notes

This function replaces the Tcl tcl_precision variable, which is not recognized by expr.


Copyright © 1998-99 America Online, Inc.