OK, enough of all this misinformation going around here.
All electric digital calculating machines use the presence or absence of electricity to represent 1s and 0s. This includes every CPU, microcontroller, microcomputer, and everything else. This is because it is much easier to design hardware that only needs to handle two possible values at a time, and you can easily write software to do everything else.
All of these binary, hexadecimal, octal, and decimal notations are there for our convenience, so we humans can interpret these flows and patterns of electricity. The mathematical foundation for all of them is the number base; see http://en.wikipedia.org/wiki/Numeral_system and http://en.wikipedia.org/wiki/Positional_notation .
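To make the "same number, different notation" point concrete, here's a small sketch (in Python, chosen purely for illustration): the literals and `oct()`/`hex()`/`bin()` calls are just different textual spellings of one and the same value.

```python
# One value, four notations. The quantity stored in memory never changes;
# only the way we write it down for humans does.
value = 0b101010        # written as a binary literal
print(value)            # 42       (decimal)
print(oct(value))       # 0o52     (octal)
print(hex(value))       # 0x2a     (hexadecimal)
print(bin(value))       # 0b101010 (binary)

# Positional notation spelled out for base 2:
# 1*32 + 0*16 + 1*8 + 0*4 + 1*2 + 0*1 = 42
assert value == 1*32 + 0*16 + 1*8 + 0*4 + 1*2 + 0*1
```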
Binary notation is the closest to what the machine is handling; that's why people sometimes loosely say "computers use binary".
Modern mathematics implicitly uses base-10 notation (decimal) unless otherwise noted, but humans have used other bases in the past. The Babylonians are known to have used base-60 notation, and Roman numerals are, in a sense, a base-1 (unary) notation with abbreviations (the abbreviations are a bit of a mix between base-5 and base-10). We still use base 12/24 when dealing with hours, base 12 for months, base 60 for minutes and seconds, and 360 degrees for a full turn of an angle.
Decimal notation is the one you'd use for everyday computation, simply because it's the one we humans are typically most familiar with: it's the first number base we're taught, in preschool, by finger counting up to ten. Once you get past preschool and start dealing with larger numbers, some schools may teach you to finger count in binary ( http://en.wikipedia.org/wiki/Finger_binary ), which can represent numbers up to 1023 with two hands.
Since binary is (essentially) how computers work, and decimal is how most humans work, why bother with any other base, you ask? Because binary is long and inconvenient for humans to work with, so we need a middle ground when we have to write numbers down for the computer: a single octal digit represents exactly 3 binary digits, and a single hexadecimal digit represents exactly 4 binary digits. A decimal digit falls somewhere between 3 and 4 binary digits (about 3.32), so it takes a bit more work to convert between decimal and binary/hex/octal.
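Here's a small sketch (again Python, just for illustration) of why octal and hex line up so neatly with binary while decimal doesn't:

```python
n = 0b110101101   # nine binary digits

# Octal: group the bits in threes from the right -> 110 101 101 -> 6 5 5
print(oct(n))     # 0o655

# Hexadecimal: group the bits in fours -> 1 1010 1101 -> 1 A D
print(hex(n))     # 0x1ad

# Decimal: no clean grouping exists, since one decimal digit carries
# about 3.32 bits; conversion needs actual division, not regrouping.
print(n)          # 429
```

So converting between binary and octal or hex is just regrouping digits, while converting to or from decimal requires real arithmetic.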
In short, the notations are all there just for our convenience.