Question:
Binary, Hexadecimal, Octal, Decimal?
2010-04-24 03:08:01 UTC
Hello
I've been doing a bit of research in C++ and was reading about binary, hexadecimal, octal and decimal. I found the formulas very interesting but I'm still not one hundred percent sure why we use these terms. From the research I've done, it seems they are basically different ways of writing the same numbers. A good explanation of why we use them would really help as I'm new to all this.
Thanks
Eight answers:
?
2010-04-25 00:49:02 UTC
To be very very short.



Binary - base 2

Hex - base 16

Decimal - base 10, our everyday numbers

Octal - base 8
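
To see all four in action, here is a minimal C++ sketch that prints one value in each base using the standard library:

    #include <bitset>
    #include <iostream>

    int main() {
        int n = 255;  // one value, four notations
        std::cout << "Binary:  " << std::bitset<8>(n) << '\n'   // 11111111
                  << "Octal:   " << std::oct << n << '\n'       // 377
                  << "Decimal: " << std::dec << n << '\n'       // 255
                  << "Hex:     " << std::hex << n << '\n';      // ff
    }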
Matti E
2010-04-24 04:03:42 UTC
^

as above, but to add: binary is vital because of the way computers handle and store information. A digital signal is either high or low. Two states. Bi. Binary. True or False. 1 or 0.



Binary means base 2 (digits 0..1)

Octal means base 8 (digits 0..7) - also used in computing, since it maps neatly onto binary.

Decimal means base 10 (digits 0..9) - that's what we use in everyday math.

Hexadecimal means base 16 (digits 0..F) - most commonly used for displaying binary data.



To give you an example, a sequence of 8 binary digits is called a byte (a single binary digit is called a bit).

A byte can always be written as exactly two hex digits. The same byte in decimal takes anywhere from one to three digits, because 10 isn't a power of two (powers of two: 2, 4, 8, 16, 32, 64..).





For example, the text "bi" encoded in ASCII can be represented as follows:

String: bi

Binary: 01100010 01101001

Hex: 62 69

Deci: 98 105
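
A minimal C++ sketch that reproduces this table (assuming any C++11 compiler):

    #include <bitset>
    #include <iostream>
    #include <string>

    int main() {
        std::string text = "bi";
        for (unsigned char c : text)
            std::cout << std::bitset<8>(c) << "  "    // binary,  e.g. 01100010
                      << std::hex << int(c) << "  "   // hex,     e.g. 62
                      << std::dec << int(c) << '\n';  // decimal, e.g. 98
    }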





The maximum value of a single byte:

Binary: 11111111

Hex: FF

Deci: 255



As you can see, decimal doesn't line up neatly with how the computer works. There have been computer systems that tried a base-10 approach, but they have generally been slower and less practical.



Just one more thing, to see another connection: the ASCII encoding of text uses one byte to represent a single character. Thus possible character codes run from 00 to FF hex, which means a one-byte encoding can hold only 256 different characters (0-255); standard ASCII itself actually defines just the first 128.



This was a major limitation in earlier times, because the encoding was built around English and caused trouble for just about every other language. UTF-8 is the favoured encoding today: a variable-length encoding that supports pretty much every written language in the world.
deonejuan
2010-04-24 04:30:48 UTC
All a computer is, is a gazillion switches. Those switches have two states -- ON / OFF. That is all a computer is. As the early computers evolved, there were competing methods for grouping binary digits into convenient numbers. A single transistor switch by itself is not a number.



Grouping switches in threes gives you Octal: each group of three is one digit, so a chunk of binary is counted 0,1,2,3,4,5,6,7, then 10. Eight is also a convenient divisor of RAM sizes, so when we have memory full of 010101010..., we read it 8 binary switches at a time -- a group called a byte.



Base-16 became very convenient because 16 is a power of 2: one hex digit stands for exactly four switches, so two hex digits describe a whole byte. Such numbers let you ask the computer to fetch even bigger chunks of switches at once.



In real-world practice, 8-bit groups are called octets. A dump of a file's contents looks something like 1A 91 33 4A..., in other words two hex digits for each 8-bit byte. Binary is the state of the switch.



A computer does nearly all of its math via addition. It can do so because these numbering systems treat groups of binary digits like the columns of an old Chinese abacus.
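
As a rough illustration, a hex dump like the one described above can be produced in C++ with a few lines (the byte values here are just the ones quoted):

    #include <cstdio>

    int main() {
        unsigned char data[] = {0x1A, 0x91, 0x33, 0x4A};
        for (unsigned char byte : data)
            std::printf("%02X ", byte);  // two hex digits per 8-bit byte
        std::printf("\n");               // prints: 1A 91 33 4A
    }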
?
2016-04-12 09:05:49 UTC
Converting hex to binary and octal to binary are direct conversions. A hex digit expands to a binary number of 4 bits (binary digits); an octal digit expands to a binary number of 3 bits. So 2AD becomes 0010 1010 1101, and 234 octal becomes 010 011 100.

To convert between hex and octal, it is therefore easiest to go via binary, because each step is a simple regrouping. Taking my first hex example: 2AD is 0010 1010 1101; group this into threes from the right, 001 010 101 101, which is 1255 octal.

You can also use the Windows calculator in scientific mode to check your answers. I hope this helps.
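
Here is a minimal C++ sketch of that method -- expand each hex digit into 4 bits, then regroup the bits in threes to read off the octal digits:

    #include <iostream>
    #include <string>

    int main() {
        std::string hex = "2AD";

        // hex -> binary: each hex digit becomes exactly 4 bits
        std::string bits;
        for (char c : hex) {
            int d = (c <= '9') ? c - '0' : (c & ~0x20) - 'A' + 10;
            for (int i = 3; i >= 0; --i)
                bits += ((d >> i) & 1) ? '1' : '0';
        }
        std::cout << "Binary: " << bits << '\n';  // 001010101101

        // binary -> octal: pad to a multiple of 3, then read groups of 3
        while (bits.size() % 3 != 0) bits.insert(bits.begin(), '0');
        std::string oct;
        for (std::size_t i = 0; i < bits.size(); i += 3)
            oct += char('0' + (bits[i]-'0')*4 + (bits[i+1]-'0')*2 + (bits[i+2]-'0'));
        std::cout << "Octal: " << oct << '\n';    // 1255
    }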
2010-04-24 03:41:34 UTC
Binary, Hexadecimal, Octal, Decimal

All are number systems.



Just as we humans understand the decimal system...

Computers & microprocessor chips ultimately understand only binary; hexadecimal & octal are compact ways of writing that same binary.

They don't understand decimal nos. directly.



In computers, we enter decimal nos., but software converts the input into binary before the Arithmetic Logic Unit [ALU] operates on it.



Computers use binary code [base is 2, digits are 0 & 1]



Hexadecimal nos. are widely used when programming microprocessors [base is 16, digits are 0 1 2 3 4 5 6 7 8 9 A B C D E F]



Octal codes appear on some micro-controllers and older systems [base is 8, digits are 0 1 2 3 4 5 6 7]



We humans use decimal [base is 10, digits are 0 - 9]



Gotcha!
Lie Ryan
2010-04-24 04:28:22 UTC
Ok, enough of all this misinformation going around here.



All electronic digital calculating machines use the presence or absence of electricity to represent 1s and 0s. This includes every CPU, microcontroller, microcomputer, and everything else. This is because it is much easier to design hardware that only needs to handle two possible values at a time, and you can easily write software to do everything else.



All of these binary, hexadecimal, octal, and decimal notations are there for the convenience of us humans in interpreting those flows and patterns of electricity. The mathematical foundation of these is the number base: http://en.wikipedia.org/wiki/Numeral_system and http://en.wikipedia.org/wiki/Positional_notation .



Binary notation is the closest to what the machine is actually handling; that's why people sometimes loosely say "computers use binary".



Modern mathematics implicitly uses base-10 notation (decimal) unless otherwise noted, but humans have used other bases in the past. The Babylonians are known to have used base-60 notation. And Roman numerals are, in a sense, a base-1 notation with abbreviations (the abbreviations are a bit of a mix between base-5 and base-10). We still use base 12/24 when dealing with hours, base 12 for months, base 60 for seconds and minutes, and 360 divisions for the degrees of a circle.



Decimal notation is the one you'd use for everyday computation, simply because we humans are typically most familiar with it: it's the first number base we're taught, back in preschool, by finger counting up to ten. Once you get past preschool and start dealing with large numbers, some schools may teach you to finger count using binary notation http://en.wikipedia.org/wiki/Finger_binary which can represent numbers up to 1023 with two hands.



Since binary is (essentially) how a computer works, and decimal is how most humans work, why bother with all the other bases, you ask? Because binary is long and inconvenient for humans to work with. We need a middle ground when representing numbers to a computer: a single octal digit stands for exactly 3 binary digits, and a single hexadecimal digit stands for exactly 4 binary digits. A decimal digit falls between 3 and 4 binary digits' worth of information, so converting between decimal and binary/hex/octal takes a little more work.



In short, the notations are there purely for our convenience.
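
For what it's worth, C++ source code lets you write a value in any of these notations (a small sketch; the 0b binary prefix requires C++14):

    #include <iostream>

    int main() {
        int b = 0b101010;  // binary      (0b prefix, C++14)
        int o = 052;       // octal       (leading 0)
        int d = 42;        // decimal
        int h = 0x2A;      // hexadecimal (leading 0x)
        std::cout << std::boolalpha
                  << (b == o && o == d && d == h) << '\n';  // prints: true
    }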
Castle
2010-04-24 03:18:10 UTC
We use binary because it is simple. A 0 means CLOSED or OFF, whereas a 1 means OPEN or ON; it's one way or the other, and there is no other way to interpret it. In a group of binary digits, each digit is a place holder worth 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, etc. If there is a 1 in a place-holder spot, you add that value into the total; if there is a 0, you don't.
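
That place-holder rule translates directly into code. A minimal C++ sketch (the function name is just for illustration):

    #include <iostream>
    #include <string>

    // Add up the place values wherever the bit is 1, as described above.
    int binaryToDecimal(const std::string& bits) {
        int value = 0;
        int placeValue = 1;  // 1, 2, 4, 8, 16, ...
        for (int i = int(bits.size()) - 1; i >= 0; --i) {
            if (bits[i] == '1') value += placeValue;
            placeValue *= 2;
        }
        return value;
    }

    int main() {
        std::cout << binaryToDecimal("1100010") << '\n';  // prints 98
    }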
?
2010-04-24 03:41:19 UTC
Basically, these number systems are used in hardware.

Binary digits (0 & 1) represent off & on, high or low, etc., usually to describe how IC circuits are working: gates, transistors, etc.

Hex & oct are used to denote or represent addresses (memory, bus, pipe, etc.) or peripherals, e.g. the keyboard.

In assembly language you pass arguments or commands in hex.
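
You can see this in C++ too: pointer values (memory addresses) print in hexadecimal by default. A tiny sketch:

    #include <iostream>

    int main() {
        int x = 42;
        std::cout << &x << '\n';  // prints the address in hex,
                                  // e.g. 0x7ffee4c2a9fc (varies per run)
    }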



Decimal is used in the real world, by us.


This content was originally posted on Y! Answers, a Q&A website that shut down in 2021.