It doesn't really know. Programmers write programs that handle different types of numbers, and it is the programmer's job to keep track of which ones are what. At the most basic level, a floating-point instruction simply operates on the contents of a memory location or a register. There are no checks or tests to see whether those contents are a valid floating-point number; they are just a set of bits that the operation consumes.
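To make that concrete, here is a minimal C sketch (the bit values and the union trick are just illustrations; the exact output depends on your platform's float format, though IEEE 754 is nearly universal):

    #include <stdio.h>

    int main(void) {
        /* A union lets us hand the FPU an arbitrary bit pattern.
           The add happens without any check that the bits were
           ever "meant" to be a float. */
        union { unsigned int bits; float f; } u;

        u.bits = 0x40490FDBu;        /* happens to decode to ~3.14159 */
        printf("%f\n", u.f + 1.0f);  /* prints about 4.141593 */

        u.bits = 0xFFFFFFFFu;        /* this pattern is a NaN */
        printf("%f\n", u.f + 1.0f);  /* NaN in, NaN out; still no error */

        return 0;
    }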
In a similar way, this is sometimes what gets C/C++ programmers into trouble. If the compiler doesn't catch it, you can write code that does the wrong things with the wrong memory locations. When that happens, the results are unpredictable, but they usually show up as bad data or a wrong result from a calculation.
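As a small illustration of that kind of trouble, here is a C sketch; it uses memcpy so the example itself stays well-defined, but the effect is the same as reading memory through the wrong type:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        double d = 123.456;
        int i;

        /* Copy the first bytes of the double into an int. Nothing
           stops us; we simply get whatever bits happen to be there. */
        memcpy(&i, &d, sizeof i);

        /* Garbage with no obvious relation to 123.456; the exact
           number depends on byte order and the float format. */
        printf("%d\n", i);

        return 0;
    }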
As you move to higher-level languages, there are more checks to be sure you are working with the right kind of numbers. Some of them still let you make mistakes, though. In many forms of BASIC, you can mix variable types with no real consequence other than that your answers may be wrong. The interpreter keeps track internally of what type of number is stored in a given variable, and everything usually defaults to floating point unless it is programmed otherwise.
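C has a milder version of the same problem. A minimal sketch, using nothing beyond standard C, where mixing integer and floating-point arithmetic silently gives a wrong answer:

    #include <stdio.h>

    int main(void) {
        int sum = 7, count = 2;

        /* Integer division happens first, so the fraction is lost
           before the result is ever converted to a double. */
        double avg = sum / count;
        printf("%f\n", avg);             /* prints 3.000000, not 3.5 */

        /* Promoting one operand first gives the intended answer. */
        double avg2 = (double)sum / count;
        printf("%f\n", avg2);            /* prints 3.500000 */

        return 0;
    }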
In general, an integer or a floating-point number has a fairly standard form. The details can vary with the microprocessor or programming language involved, but most machines today store integers in two's complement and floating-point numbers in the IEEE 754 format: a sign bit, a biased exponent, and a mantissa (fraction). These are the forms you'll probably encounter when studying how numbers are stored in computers.
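For instance, assuming the machine uses IEEE 754 single-precision floats (1 sign bit, 8 exponent bits, 23 mantissa bits), a short C sketch can pull a float apart into those fields:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        float f = -6.25f;
        unsigned int bits;

        memcpy(&bits, &f, sizeof bits);  /* grab the raw bit pattern */

        unsigned int sign     = bits >> 31;          /* 1 bit */
        unsigned int exponent = (bits >> 23) & 0xFF; /* 8 bits, biased by 127 */
        unsigned int mantissa = bits & 0x7FFFFF;     /* 23 bits of fraction */

        printf("sign=%u exponent=%u mantissa=0x%06X\n",
               sign, exponent, mantissa);
        /* -6.25 = -1.5625 * 2^2, so this prints:
           sign=1 exponent=129 mantissa=0x480000 */

        return 0;
    }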
Bits only have the meaning placed upon them by programmers. Otherwise a bit is still a bit and it can only be on or off.
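One last sketch to drive that home: the very same 32 bits, read two different ways:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        unsigned int bits = 0x3F800000u;
        int   as_int;
        float as_float;

        memcpy(&as_int,   &bits, sizeof as_int);
        memcpy(&as_float, &bits, sizeof as_float);

        /* Identical bits, two meanings: the type chosen by the
           programmer decides what they represent. */
        printf("as int:   %d\n", as_int);    /* 1065353216 */
        printf("as float: %f\n", as_float);  /* 1.000000   */

        return 0;
    }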
Shadow Wolf