Some possible reasons for 8-bit bytes
-
Julia Evans[^]:
I’ve been working on a zine about how computers represent things in binary, and one question I’ve gotten a few times is – why does the x86 architecture use 8-bit bytes? Why not some other size?
Because 9 would have been too much?
-
Julia Evans[^]:
I’ve been working on a zine about how computers represent things in binary, and one question I’ve gotten a few times is – why does the x86 architecture use 8-bit bytes? Why not some other size?
Because 9 would have been too much?
She should subscribe to the CP news...
Quoting a couple of messages below:
That prime numbers and powers of 2 fascinate many people comes as no surprise.
Ta-Taaaa.... mystery solved.
4 bits - 16 (too few combinations)
8 bits - 256 (a nice number; allows a lot, and is low enough to be used mentally or even on paper)
16 bits - 65536 (too many combinations for mental / paper work)
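For the record, those counts follow directly from n bits giving 2^n combinations; a quick C sanity check (the numbers in the table are the only thing it assumes):

```c
#include <stdio.h>

/* n bits give 2^n distinct combinations */
int main(void)
{
    int widths[] = { 4, 8, 16 };
    for (int i = 0; i < 3; i++)
        printf("%2d bits -> %6u combinations\n", widths[i], 1u << widths[i]);
    return 0;
}
/* Prints:
 *  4 bits ->     16 combinations
 *  8 bits ->    256 combinations
 * 16 bits ->  65536 combinations
 */
```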
M.D.V. ;)
If something has a solution... why do we have to worry about it? If it has no solution... for what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you.
Rating helpful answers is nice, but saying thanks can be even nicer.
-
Julia Evans[^]:
I’ve been working on a zine about how computers represent things in binary, and one question I’ve gotten a few times is – why does the x86 architecture use 8-bit bytes? Why not some other size?
Because 9 would have been too much?
Because they tried 4 but it wasn't enough.
PartsBin, an Electronics Part Organizer - an updated version is available!
JaxCoder.com
Latest Article: ARM Tutorial Part 1: Clocks
-
Julia Evans[^]:
I’ve been working on a zine about how computers represent things in binary, and one question I’ve gotten a few times is – why does the x86 architecture use 8-bit bytes? Why not some other size?
Because 9 would have been too much?
On the 36-bit Univac 1100 series, you had the choice between 9-bit and 6-bit bytes. 6-bit "Fieldata" was the original character set of the EXEC-8 OS, uppercase only. 9-bit bytes were introduced when ISO-646 (in the US of A known as ASCII) became popular.

DEC-10 and DEC-20 (mainframe relatives of the far more well-known PDP series) also had a 36-bit word length, but they stuffed five 7-bit bytes into a word, with one bit to spare. They handled ISO-646 from the beginning, but lots of programmers thought it was an odd format. Note that both the U-1100 and the DEC-10/20 were word addressable in memory, not byte addressable. Memory size was commonly expressed in K (you didn't have machines with mega-memories then!), meaning K's of addressable units, i.e. words.

Then came the IBM 360 architecture in 1964, the first major CPU family that was octet addressable. (Even the IBM predecessors to the 360, the 704/709/7090, were 36-bit word-addressable machines.) IBM marketed their 360 memory by the number of addressable units, misleading lots of customers into thinking it was super-cheap compared to e.g. Univac or DEC - but 1 K of IBM octets was just 22% as much memory (measured in bits) as 1 K of Univac/DEC 36-bit words.

In the 1960s-70s, IBM had something like 80% of the computer market in the US of A. They were not quite that dominating in Europe, but they were still The dominating manufacturer. When IBM went for 8-bit bytes, everybody else followed suit, at least for new architectures.

(One area where IBM did not manage to dominate the world: their EBCDIC character set was never adopted by others - except for communication with IBM mainframes. I see two major reasons for that: there were more national variants of EBCDIC than we have Linux file systems today, and, for historical reasons, A-Z did not fill 26 consecutive code values - mixed in with the alphabetics were other, non-alphabetic characters. Working with ISO-646 was just so much more convenient!)
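To make the five-to-a-word packing concrete, here's a minimal C sketch of the DEC-10/20 layout (a uint64_t stands in for the 36-bit word, and the function name is made up for illustration):

```c
#include <stdint.h>
#include <stdio.h>

/* Pack five 7-bit ISO-646 characters into one 36-bit word, DEC-10/20
 * style: 5 * 7 = 35 bits of text, with the low bit left spare. */
static uint64_t pack_word(const char c[5])
{
    uint64_t word = 0;
    for (int i = 0; i < 5; i++)
        word = (word << 7) | (uint64_t)(c[i] & 0x7F); /* append next char */
    return word << 1;                                 /* spare low bit */
}

int main(void)
{
    uint64_t w = pack_word("HELLO");
    printf("packed: %012llo (octal)\n", (unsigned long long)w);
    return 0;
}
```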
-
Julia Evans[^]:
I’ve been working on a zine about how computers represent things in binary, and one question I’ve gotten a few times is – why does the x86 architecture use 8-bit bytes? Why not some other size?
Because 9 would have been too much?
She obviously doesn't know anything about computer history. Several of the early computer systems (Multics, the LISP Machine, etc.) used 36-bit words. This allowed the hardware to handle things such as dynamic memory management, garbage collection, and other OS-level functions that we take for granted today, but that used a different solution to the same problem of managing a finite resource such as memory.
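As a rough illustration of how those extra word bits could help: a few high bits can carry a type tag that the hardware or garbage collector inspects, with the rest of the word holding a pointer or immediate value. A minimal C sketch with a made-up 4-bit tag layout (real machines differed in tag width and placement):

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical tagged 36-bit word: 4 tag bits on top, 32 payload bits. */
#define WORD_BITS 36
#define TAG_BITS  4
#define ADDR_MASK ((1ULL << (WORD_BITS - TAG_BITS)) - 1)

enum tag { TAG_FIXNUM = 1, TAG_CONS = 2, TAG_SYMBOL = 3 };

static uint64_t make_word(enum tag t, uint64_t payload)
{
    return ((uint64_t)t << (WORD_BITS - TAG_BITS)) | (payload & ADDR_MASK);
}

static enum tag word_tag(uint64_t w)
{
    return (enum tag)(w >> (WORD_BITS - TAG_BITS));
}

int main(void)
{
    uint64_t w = make_word(TAG_CONS, 01234567); /* a "pointer" to a cons cell */
    printf("tag=%d payload=%llo\n", word_tag(w),
           (unsigned long long)(w & ADDR_MASK));
    return 0;
}
```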