Well, you recognize the use of binary, which is good. The computer just translates the hexadecimal to binary in the end. Since 16 (the base of hexadecimal) is a power of 2, that conversion is incredibly simple for a computer. The same goes for octal.
From Wikipedia: Each hexadecimal digit represents four binary digits (bits). As such, the primary use of hexadecimal notation is a human-friendly representation of binary-coded values in computing and digital electronics.
For example, consider the following two equivalent representations of the same three bytes:
01010111 11010000 01100110
and
57 D0 66
Which would be easier to remember? Which would be easier to debug? Which would be easier to double check? Which would be easier to read in someone else's code?
In all cases, hexadecimal is better ... for the human. For the computer, each hexadecimal digit is just shorthand for four binary digits, so translating it back to binary is trivial.
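To make that concrete, here is a minimal sketch (Python, purely for illustration) that splits the 3-byte value from the example above into 4-bit groups; each group lines up with exactly one hexadecimal digit:

    # Each hex digit maps directly to one 4-bit group, so conversion
    # is just a per-digit table lookup.
    value = 0x57D066                        # the 3-byte value from the example

    bits = format(value, "024b")            # 24-bit binary form
    nibbles = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    print(nibbles)             # ['0101', '0111', '1101', '0000', '0110', '0110']
    print(format(value, "X"))  # 57D066 -- one hex digit per 4-bit group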
Hexadecimal means that a byte can be represented by just two digits. That is especially useful when setting particular byte values, whether in an array, in code, as RGB values, or elsewhere.
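For instance, here is a short sketch (Python assumed, values chosen only for illustration) of the kinds of places hex literals typically show up:

    # Setting byte values with hex literals: one byte is always exactly two digits.
    header = bytes([0xFF, 0xD8, 0xFF])   # e.g. the first bytes of a JPEG file
    rgb = (0x57, 0xD0, 0x66)             # an RGB colour, one byte per channel
    mask = 0x0F                          # low-nibble mask -- hard to misread

    print(header.hex())   # 'ffd8ff'
    print(rgb)            # (87, 208, 102)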
The same arguments can be made for octal. Octal, however, has declined in use because modern computers no longer use 12-, 24-, or 36-bit words (whose sizes divide evenly into 3-bit octal digits), favoring 16-, 32-, or 64-bit words instead (which do not).
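As a quick illustration (Python again, with hypothetical values): a 24-bit word splits into exactly eight 3-bit octal digits, while a 16-bit word does not divide evenly into groups of three bits:

    # Octal groups bits in threes, so it only lines up neatly with
    # word sizes divisible by 3 (12, 24, 36 bits).
    word24 = 0o12345670                # 24 bits -> exactly 8 octal digits
    print(format(word24, "o"))         # '12345670'

    word16 = 0xABCD                    # 16 bits -> 6 octal digits, but the
    print(format(word16, "o"))         # leading digit covers only 1 bit: '125715'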