After inserting a value, for example 0.12346789123456789123456789, into a table that has a float-type column, I query it back and get 0.1234567891234568, which contains 17 digits. I have three questions:
- How can I backtrack the binary representation of the input and the output? The documentation says float uses 53 bits by default. I am using SQL Server Management Studio, and I don't know how to specify the n value (as in float(n)) when declaring my column type.
- The number 17 isn't mentioned in the documentation; I'd like to know where it comes from.
- On big-endian and little-endian systems, how is such an input treated and translated into the output at the low-level byte level?

If anyone knows an explanation, I would be thankful.
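To make the question concrete, here is a small sketch in Python rather than T-SQL, assuming (as the SQL Server documentation states) that the default float is an IEEE 754 binary64 double. It shows the raw bytes in both byte orders and where the number 17 plausibly comes from:

```python
import math
import struct

# SQL Server's default float is an IEEE 754 binary64 double:
# 1 sign bit, 11 exponent bits, 53 significand bits (52 stored + 1 implicit).
x = 0.12346789123456789123456789  # the literal from the question

# Raw 8-byte representation: big-endian (most significant byte first)
# and little-endian (the in-memory order on x86/x64 machines).
big = struct.pack('>d', x)
little = struct.pack('<d', x)
print(big.hex())
print(little.hex())  # the same 8 bytes, reversed

# Why 17? 53 significand bits carry 53*log10(2) ≈ 15.95 decimal digits,
# and ceil(53*log10(2)) + 1 = 17 decimal digits are always enough to
# round-trip any double back to the exact same bits.
print(math.ceil(53 * math.log10(2)) + 1)  # 17

# Python, like many tools, prints the shortest decimal that round-trips,
# which is why the echoed value looks truncated relative to the input.
print(repr(x))
```

This is only an illustration of the underlying IEEE 754 behavior, not of how SQL Server itself formats its output.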