Variable costs are usually broken out in terms of memory and execution time. There are other costs, though, such as precision and whether a type follows the rules of real-number algebra, such as the distributive property (which floating point violates).
Direct memory cost is simple. You can use the sizeof() operator to find the size of any variable in C++; a larger result means a larger memory cost. Indirect memory costs are not so simple. When a compiler generates machine code (or .NET IL code, I suppose) for something you write, that code also takes up memory, and some types of variables under some operations require more effort and more code. For example, if you use a 64-bit integer and your target is an 8-bit embedded processor, it's almost certain to take a lot of instructions just to add a value to such a variable. And that is a "memory" cost as well as an execution-time cost. Floating-point usage may also haul in an entire floating-point library, particularly on 8-bit embedded microcontrollers, which almost certainly have no hardware support for floating point. So even a single use might incur many thousands of bytes of memory cost just to drag in that library, plus all the execution time it requires.
Execution-time costs are usually higher for floating point, but not always. On some Intel CPUs, floating point actually took FEWER clock cycles -- but that was because some of the integer operations were routed through the floating-point unit and converted back. Execution time is also related to how many memory operations are required to read and/or write the entire variable, and larger variables are more at risk here. Other time costs come from any library calls that may be required and from the number of instructions needed to complete an operation.
Floating point also does not follow some rules you'd normally expect. For example, a*(b+c) is not necessarily the exact same value as a*b + a*c. With integers it always is (so long as ranges aren't exceeded), but in floating point it only sometimes is. As another example, if you sum a list of numbers after sorting them from smallest to largest, you can easily get a different result than if you had sorted them from largest to smallest. So this is sometimes a cost, as well.
Precision and dynamic range are other details I will just hand-wave at, so you can look them up. There are significant differences between integers and floating point here as well, and these are also trade-offs to be weighed.