Floating point cannot represent all decimal values exactly. Many must be rounded or truncated. The same is true for constants you might use in expressions. So you have inexact values combined with binary operators, whose operations themselves are also inexact and must round or truncate their results. These errors accumulate throughout your calculations.
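You can see this in a few lines with IEEE 754 doubles. Here's a quick Python illustration (the exact printed digits may vary by platform, but the effect won't):

```python
# 0.1 has no exact binary representation, so the error appears
# immediately and accumulates as the operation is repeated.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)         # 0.9999999999999999, not 1.0
print(total == 1.0)  # False
```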
For example, in math, a*(b+c) = a*b + a*c. But on a computer, in floating point, that is NOT a true statement. You have to be aware of this when you decide to use floating point.
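A minimal sketch in Python, with values picked so the two sides actually disagree on ordinary IEEE 754 doubles:

```python
a, b, c = 100.0, 0.1, 0.2

left = a * (b + c)     # 30.000000000000004
right = a * b + a * c  # 30.0

print(left == right)   # False: distributivity fails in floating point
```

Other values of a, b, and c may happen to round the same way on both sides, which is exactly what makes this kind of bug intermittent and hard to spot.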
There are "easy ways" to patch things up. For example, if you know a priori that a calculation cannot produce a negative value, then you could just force negative values to zero. It's kludgy, but it may get you by.
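Here's what that kludge might look like in Python. The tolerance is an assumption you'd tune to your own problem, and the helper name safe_sqrt is just for illustration:

```python
import math

def safe_sqrt(x: float, tol: float = 1e-12) -> float:
    # We "know" a priori the true value can't be negative, so treat a
    # tiny negative input as rounding noise and clamp it to zero.
    if -tol < x < 0.0:
        x = 0.0
    return math.sqrt(x)  # a genuinely negative input still raises

print(safe_sqrt(-1e-15))  # 0.0 instead of a ValueError
```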
A somewhat more drastic method is to simply not use floating point at all. Instead, use integers. In the integer domain on computers, it is true that a*(b+c) = a*b + a*c, so the algebraic rules hold there (modulo overflow, for fixed-width integer types). But of course, this can add other kinds of complications dealing with input and output of values. Also, integers can't cope well with wide dynamic ranges. So if you need that, you are forced back into floating point, like it or not.
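For instance, money handled as integer cents stays exact, and the distributive law really does hold (Python's integers are arbitrary precision; fixed-width integers in a language like C trade this guarantee for overflow):

```python
# Integer cents instead of float dollars: every value is exact.
price_cents = 1999          # $19.99
subtotal = 3 * price_cents  # exactly 5997, no drift

# Distributivity holds in integer arithmetic.
a, b, c = 7, 10**18 + 3, 10**18 + 9
assert a * (b + c) == a * b + a * c
```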
Another solution is to carefully design all your algorithms, logic, and math to deal with floating point error. This means going through everything you plan to do, working out how errors propagate, and setting up reasonable methods for handling them that are appropriate to the circumstances. This field is called "numerical methods," and there are courses on the subject (and books) to help you. The reason you need training and books is that the work involves more than just programming: you have to analyze what you are doing, and quite often calculus and a lot of partial-derivative work is required just to find out what kinds of programming statements will be needed when you do program things up, since you are dealing with finite-difference approximations to reality. Perhaps z-space vs. s-space work, too.
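A classic example of an algorithm designed around floating point error is Kahan's compensated summation, which carries a correction term alongside the running sum. A sketch (the outputs are what you'd typically see with IEEE 754 doubles):

```python
def kahan_sum(values):
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - comp            # apply the previous correction
        t = total + y
        comp = (t - total) - y  # recover what the addition just dropped
        total = t
    return total

vals = [0.1] * 1000
print(sum(vals))        # a bit off from 100.0: error grew with each add
print(kahan_sum(vals))  # 100.0: the compensation absorbed it
```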
An area where this can become quite complex is dealing with wide dynamic ranges. Imagine the problem of deciding whether or not 4 points in 3D space are coplanar. Suppose two of the points have very large values for x, y, and z, that a third point is very close to (0,0,0) but not exactly there, and that the remaining point is also VERY NEAR (0,0,0) but not exactly where the third point is. From the perspective of the first two points, which are located nearly at infinity away from (0,0,0), the last two points "appear" to be at the same place, so all calculations will either show all 4 points to be coplanar (if you allow for a tiny error band) or not coplanar (if you don't allow any error band). But you will NEVER be able to determine the actual answer, because floating point is a lousy tool for questions like this. You have to DESIGN FOR IT. And that means thinking hard.
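Here's a sketch of that scenario using the standard scalar-triple-product coplanarity test, with coordinates invented to trigger the failure. In exact arithmetic these four points are NOT coplanar (the triple product works out to about 2e15), but in doubles the tiny coordinates get swallowed completely during the subtractions:

```python
def triple(u, v, w):
    # u . (v x w): zero exactly when the three edge vectors are coplanar
    return (u[0] * (v[1]*w[2] - v[2]*w[1])
          + u[1] * (v[2]*w[0] - v[0]*w[2])
          + u[2] * (v[0]*w[1] - v[1]*w[0]))

def edge_vectors(p0, p1, p2, p3):
    return ([p1[i] - p0[i] for i in range(3)],
            [p2[i] - p0[i] for i in range(3)],
            [p3[i] - p0[i] for i in range(3)])

p0 = (1e12, 1e12, 1e12)   # two points out near "infinity"...
p1 = (1e12, -1e12, 1e12)
p2 = (1e-9, 2e-9, 3e-9)   # ...two points crowded around the origin
p3 = (2e-9, 1e-9, 3e-9)

vol = triple(*edge_vectors(p0, p1, p2, p3))
print(vol)  # 0.0: the 1e-9 coordinates vanished in p2 - p0 and p3 - p0,
            # so no tolerance, large or small, can recover the truth
```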
In many situations where it seriously matters, programmers will keep additional values in parallel with each floating point value that track the error bounds, expanding them as calculations proceed. Yes, it is a pain. But... it works.
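A bare-bones sketch of the idea in Python. A rigorous implementation would round each lower bound down and each upper bound up at every step (directed rounding); that detail is omitted here to keep the sketch short:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float  # lower bound on the true value
    hi: float  # upper bound on the true value

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product's bounds come from the extreme corner products.
        corners = [self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi]
        return Interval(min(corners), max(corners))

# Each value drags its error band along; the bands only ever widen.
a = Interval(0.09999999999999999, 0.1000000000000001)
b = Interval(0.2, 0.2)
print(a + b)  # the true sum lies somewhere inside this interval
```

This is essentially interval arithmetic, and there are libraries that do it rigorously if you'd rather not roll your own.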
Anyway, welcome to the world of floating point. Enjoy.