Question:
How do computers know which numbers are floating point?
2011-02-16 22:52:00 UTC
How would it differentiate between a 32-bit floating point number and a 32-bit integer? Since everything in the computer is zeroes and ones, how does it know where to put the floating point?
Three answers:
Shadow Wolf
2011-02-16 23:15:04 UTC
It doesn't really know. Programmers write programs that handle different types of numbers, and it is the programmer's job to keep track of which ones are what. At the most basic level, a floating point instruction simply uses a memory location or a register to perform its operation. There are no checks or tests to see whether those bits form a valid floating point number; they are just a set of bits the operation is performed on.

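As a minimal C sketch of that idea (assuming IEEE 754 single precision floats, which is what virtually all current hardware uses), the same 32 bits are "a float" or "an integer" only because of how the program chooses to read them:

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    float f = 1.0f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);  /* copy the raw bits, no conversion */

    printf("as a float:  %f\n", f);         /* 1.000000 */
    printf("as raw bits: 0x%08X\n", bits);  /* 0x3F800000, not 1 */
    return 0;
}
```

Nothing in memory marks those bits as a float; only the instructions chosen to operate on them give them that meaning.
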
In a similar way, this is what sometimes gets C/C++ programmers into trouble. If the compiler doesn't catch it, you can write code that does the wrong things with the wrong memory locations. When that happens, the results are unpredictable, and the error usually shows up as bad data or a wrong result from a calculation.

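A classic sketch of that kind of mistake in C: printf is told to read an int's bits as if they were a double. Many compilers flag this with a format warning, but if it slips through, the result is exactly the unpredictable garbage described above.

```c
#include <stdio.h>

int main(void) {
    int n = 3;
    /* Wrong format specifier: %f makes printf interpret its argument
       as a double, but an int was passed. This is undefined behavior
       and typically prints garbage rather than 3.000000. */
    printf("%f\n", n);
    return 0;
}
```
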
As you get to the higher level languages, there are more checks to make sure you are working with the right kind of numbers. Some of them still let you make mistakes, though. In many forms of BASIC, you can mix variable types with no real consequence other than that your answers may be wrong. The language keeps track internally of what type of number is stored in each variable, and everything usually defaults to floating point unless the program says otherwise.

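C allows a milder version of the same silent mixing: integer and floating point values can be combined in ways that compile cleanly but quietly lose information. A small sketch:

```c
#include <stdio.h>

int main(void) {
    float average = (3 + 4) / 2;  /* integer division runs first: 7 / 2 == 3 */
    int count = 9.7;              /* the fraction is silently discarded */
    printf("%f %d\n", average, count);  /* prints 3.000000 9 */
    return 0;
}
```
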
In general, an integer or a floating point number has a somewhat standard form: integers are usually stored in two's complement, and floating point numbers usually follow the IEEE 754 layout of a sign bit, an exponent, and a fraction. The details can vary some depending on the microprocessor or programming language involved, but these are the forms you'll probably use when studying how numbers are stored in computers.

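As a sketch of that layout (again assuming IEEE 754 single precision), the three fields of a 32-bit float can be pulled apart with shifts and masks:

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    float f = -6.25f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);

    uint32_t sign     = bits >> 31;           /* 1 bit */
    uint32_t exponent = (bits >> 23) & 0xFF;  /* 8 bits, biased by 127 */
    uint32_t fraction = bits & 0x7FFFFF;      /* 23 bits */

    /* -6.25 == -1.5625 * 2^2: sign=1, exponent=129 (127+2), fraction=0x480000 */
    printf("sign=%u exponent=%u fraction=0x%06X\n", sign, exponent, fraction);
    return 0;
}
```
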
Bits only have the meaning placed upon them by programmers. Otherwise a bit is still a bit and it can only be on or off.

Shadow Wolf
boyland
2017-01-11 20:21:31 UTC
There are a couple of distinct strategies by which you can store them. One way is to store the upper half in one memory location and the lower half in another (there are proper names for the upper and lower halves, by the way). Another way is to store the digits in sequence and then append a sign bit and a number that tells you where to place the decimal point. Which one is used depends on the programming application, the type, and the performance needed. That is a general answer, but that's how it is done. On specific processors, one storage form over another will result in better performance.
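
A sketch of that first strategy in C, splitting a 64-bit value into its upper and lower halves (commonly called the high word and low word):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t value = 0x1122334455667788ULL;

    uint32_t high = (uint32_t)(value >> 32);  /* high word: upper half */
    uint32_t low  = (uint32_t)value;          /* low word: lower half */
    printf("high=0x%08X low=0x%08X\n", high, low);

    /* Reassemble the two halves to confirm nothing was lost. */
    uint64_t again = ((uint64_t)high << 32) | low;
    printf("match: %s\n", again == value ? "yes" : "no");
    return 0;
}
```

The second strategy, a sign bit plus a number saying where the point goes, is essentially the sign and exponent fields of the IEEE 754 layout shown in the first answer.
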
2011-02-16 23:06:56 UTC
It depends on the programming language, but usually, if a number is written with a decimal point, it's floating point.
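
In C, for example, that is literally how the compiler decides: the spelling of the literal determines its type.

```c
#include <stdio.h>

int main(void) {
    /* 2 is an int literal; 2.0 is a double literal. The decimal
       point alone changes which kind of division is performed. */
    printf("%d\n", 7 / 2);     /* 3: integer division */
    printf("%f\n", 7 / 2.0);   /* 3.500000: floating point division */
    return 0;
}
```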


This content was originally posted on Y! Answers, a Q&A website that shut down in 2021.