Question:
Why do computers have 8 bit words?
html0000000000000
2009-03-11 07:23:55 UTC
Computers use word lengths of 8 bits. Why? Why not have word lengths of 12, 5, or even 9 bits?
Six answers:
2009-03-11 09:28:10 UTC
Some complex and good answers here, but there is a very simple reason. The heart of the computer is binary arithmetic: something is on or off, there is power or there isn't, so it is optimal to have everything in powers of 2.



If you look at the history, machine word sizes went 4, 8, 16, 32, 64, etc. Not sure if we are at 128, as I stopped following hardware a long time ago.



There was an attempt at a ternary computer where everything came in 3's. In that case you would have 3, 9, 27, etc. It didn't stick around.
MDC
2009-03-11 14:36:45 UTC
Historical reasons. The "word" is the physical number of wires making up the bus. So an 8-bit processor performs actions on 8 bits (8 physical voltage values of 0 or 5 V) at a time. The 8-bit processor (and thus the 8-bit word) is something you might find in your wristwatch or your old Atari. Modern computers are more likely to have 32-bit or 64-bit words; however, programming languages or other software may define a word as 8 bits for simplicity or backward compatibility.



As to why word sizes are 8, 16, 32, etc.: it comes from the way computers perform arithmetic. Computers store numbers in one of two states--a 1 or a 0--which is a binary (base-2) numbering system. Word sizes therefore tend to be powers of 2.
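To illustrate the base-2 point above, here is a minimal Python sketch (Python chosen only for illustration; the thread names no language):

```python
# Each extra bit doubles the number of representable patterns,
# so an n-bit word can hold 2**n distinct values.
for n in (8, 16, 32, 64):
    print(f"{n}-bit word: {2**n} possible values")

# bin() shows the underlying base-2 digits of a number.
print(bin(255))  # -> 0b11111111
```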



It's actually pretty fascinating how it all works. Read up on "Binary Numeral System" and "Computer Architecture" for more information.



Cheers.
The Phlebob
2009-03-12 02:52:32 UTC
Word lengths do not have to be powers of 2. Back in the '50s and '60s, the most common character size was actually 6 bits, which was enough to handle 26 alphabetic characters (all upper case), 10 numeric digits, and a bunch of punctuation characters. Words tended to be multiples of six bits.
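A quick check of the 6-bit character arithmetic above, as a Python sketch:

```python
# 6 bits give 2**6 = 64 code points: room for 26 upper-case
# letters, 10 digits, and a couple dozen punctuation marks.
code_points = 2 ** 6
print(code_points)                 # -> 64
print(code_points - 26 - 10)       # -> 28 slots left for punctuation
```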



Eight bits became popular with IBM's Extended Binary Coded Decimal Interchange Code (EBCDIC) character set and the 7-bit ASCII set.



But Control Data machines stayed with 6-bit characters and had word sizes of 12, 18, 36 and 60 bits.



Hope that helps.
Bob M
2009-03-11 14:48:57 UTC
All of the data bit lengths actually come from the architecture. Initially these were named in the days of assembler programming.



A 'register' in a CPU could hold 8 bits, which we all called a byte (as in bite); this allowed for a value from 0 to 255 (unsigned).



Often in assembler you don't need a whole byte, so you had 4 bits, which was a nibble.



Then, probably just as often, you only wanted one piece of a byte: a bit.



This matched our computers: the Intel architecture kept true to the programming model. So when increasing the size of a register, they kept to multiples of bytes: 16-bit, 32-bit and so on.



The registers in your CPU are larger now (64-bit), but the principle remains the same:



1 bit = 1 of the 8 bits in a byte, or one of the 4 in a nibble

4 bits = nibble = 1/2 the length of a byte.

8 bit = byte

16 bit = WORD = 2 x length of a byte

32 bit = DWORD (Double word)
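Those units map directly onto bitwise operations. A hypothetical Python sketch of slicing a 16-bit WORD into its bytes, nibbles, and bits (the value 0xABCD is just an arbitrary example):

```python
value = 0xABCD  # a 16-bit WORD, written in hex for readability

low_byte   = value & 0xFF          # lowest 8 bits  -> 0xCD
high_byte  = (value >> 8) & 0xFF   # upper 8 bits   -> 0xAB
low_nibble = value & 0xF           # lowest 4 bits  -> 0xD
bit0       = value & 1             # lowest single bit -> 1

print(hex(low_byte), hex(high_byte), hex(low_nibble), bit0)
```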



So it started with the size of a single CPU register, measured in 'bits'.



Notice the multiples of 2; this comes from the binary numbering system.

The bits in binary 11111111 are read from right to left, each position worth twice the previous one:

128, 64, 32, 16, 8, 4, 2, 1. Add the values together and you get our 255.



So here is another example: 11011011

This is:

1x128 + 1x64 + 0x32 + 1x16 + 1x8 + 0x4 + 1x2 + 1x1, which added together = ????
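The right-to-left conversion Bob describes can be sketched in a few lines of Python (the built-in `int(bits, 2)` does the same thing):

```python
# Convert a string of binary digits to decimal, exactly as described:
# read right to left, each position worth twice the previous one.
def from_binary(bits: str) -> int:
    total = 0
    for i, digit in enumerate(reversed(bits)):
        total += int(digit) * 2 ** i
    return total

print(from_binary("11111111"))  # -> 255, as worked out above
print(from_binary("11011011"))  # the answer to the example
```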
el1986
2009-03-11 14:29:41 UTC
Actually, memory is managed in pages of about 4096 bytes, but that's the page size, not the word size. 8 bits = 1 byte = 256 values. I honestly don't know why 8 bits exactly; it's just a convenient power of 2.
Chris G
2009-03-11 14:29:27 UTC
Because it has to be a power of 2 (the number of possibilities in a binary sequence that is X characters long is equal to 2^X)
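The 2^X claim above can be verified by brute force; a small Python sketch that simply enumerates every binary sequence of a given length:

```python
from itertools import product

# Enumerate every binary sequence of length X; there are 2**X of them.
X = 3
sequences = list(product("01", repeat=X))
print(len(sequences))  # -> 8, which is 2**3
print(sequences)
```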


This content was originally posted on Y! Answers, a Q&A website that shut down in 2021.