



reply to Kearnstd

Re: Hmm...

said by Kearnstd:

I do have to wonder, why did computer science go with divisions of 8?

as in, why was a byte not engineered as 10 bits.

Because of the binary nature of computer hardware, everything is based on powers of 2 (a bit being a binary digit). 10 is not a power of 2, so it doesn't fall on a whole bit-length boundary: the closest whole bit-lengths are 3 (2^3 = 8, addressing the zero-referenced values 0-7 exactly) and 4 (2^4 = 16, addressing the values 0-15 exactly).
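To make that concrete, here's a quick sketch (mine, not from the thread): selecting one bit out of an n-bit byte takes ceil(log2(n)) select bits, and only power-of-2 widths use every combination of those select bits, so a 10-bit byte would waste addressing combinations.

```python
import math

# For each candidate byte width, count the select bits needed to address a
# single bit within the byte, and how many select-bit combinations go unused.
for width in (8, 10, 16):
    select_bits = math.ceil(math.log2(width))
    combos = 2 ** select_bits
    wasted = combos - width
    print(f"{width}-bit byte: {select_bits} select bits, "
          f"{combos} combinations, {wasted} wasted")
```

An 8-bit byte uses all 8 combinations of its 3 select bits; a 10-bit byte would need 4 select bits and waste 6 of the 16 combinations.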

Mountain View, CA

I took Kearnstd's question to mean: why didn't we go with a bit length other than 8? That is, what's special about the value 8? Why must a byte range from 0-255 rather than, say, 0-63 (6-bit), 0-1023 (10-bit), or even something strange like 0-8191 (13-bit)?
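The ranges above all follow the same rule, which a one-liner can verify: an n-bit byte holds the values 0 through 2^n - 1.

```python
# Value range of an n-bit byte: 0 through 2**n - 1.
for n in (6, 8, 10, 13):
    print(f"{n}-bit: 0-{(1 << n) - 1}")
```

This prints 0-63, 0-255, 0-1023, and 0-8191, matching the ranges quoted above.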

There are architectures (old and present-day) that define a byte as something other than 8 bits. The examples I've seen cited are the Intel 4004 (1 byte = 4 bits), the PDP-8 (1 byte = 12 bits), the PDP-10 (byte length was variable, from 1 to 36 bits), and present-day DSPs (which often just use the term "word", where a single word can be something like 60 bits).
Making life hard for others since 1977.
I speak for myself and not my employer/affiliates of my employer.


Chesterfield, MO
reply to TelecomEng

I was about ready to answer that way too, but then I read his posts again and thought about it more. Why couldn't a character have originally been defined as 10 bits? Perhaps because 10-bit boundaries would have been really wacky and inefficient for an address controller?
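One way to see the address-controller point (my sketch, assuming the usual byte-address-to-bit-offset conversion): with a power-of-2 byte width, converting a byte address to a bit offset is a pure shift, essentially free in hardware; with a width like 10, it needs a general multiply.

```python
def bit_offset_pow2(byte_addr: int) -> int:
    """Bit offset of a byte address with 8-bit bytes: a plain shift."""
    return byte_addr << 3      # times 8 -- just rewiring, no multiplier

def bit_offset_10bit(byte_addr: int) -> int:
    """Bit offset with hypothetical 10-bit bytes: needs a real multiply."""
    return byte_addr * 10

print(bit_offset_pow2(5))   # byte 5 starts at bit 40
print(bit_offset_10bit(5))  # byte 5 would start at bit 50
```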