Kearnstd
Space Elf
Premium Member
join:2002-01-22
Mullica Hill, NJ

to koitsu

Re: Hmm...

I do have to wonder: why did computer science go with divisions of 8?

As in, why was a byte not engineered as 10 bits?

I figure that at some point in the design of electronic computers, somebody had to decide to use base 8 instead of base 10, which would have allowed computer data to fall in line with the metric system.

TelecomEng
@rr.com
Anon
said by Kearnstd:

I do have to wonder: why did computer science go with divisions of 8?

As in, why was a byte not engineered as 10 bits?

Because of the binary nature of computer equipment, everything is based on powers of two. Ten is not a power of two, so a range of 10 values doesn't fill a whole bit-length: the closest fits are 3 bits (2^3 = 8 values, 0 through 7) and 4 bits (2^4 = 16 values, 0 through 15).
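
To make that concrete, here's a minimal C sketch (mine, not from the thread) printing how many values each whole bit-length can hold:

    #include <stdio.h>

    int main(void) {
        /* Each whole bit-length n holds exactly 2^n distinct values. */
        for (int bits = 1; bits <= 4; bits++) {
            unsigned values = 1u << bits;  /* 2^bits */
            printf("%d bits -> %u values (0..%u)\n", bits, values, values - 1);
        }
        /* A decimal digit needs 10 values, so it takes 4 bits and
           leaves 16 - 10 = 6 bit patterns unused. */
        return 0;
    }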
rradina
Member
join:2000-08-08
Chesterfield, MO

to Kearnstd
Early engineers needed to represent the alphabet (upper and lower case), 10 digits, various common symbols (+ - / " % $ ...), and control characters. This required 7 bits, and it was the birth of the ASCII character set. An 8th bit was added for parity error detection. Anything more than this was wasteful, and in those early days core memory was ridiculously expensive and in very short supply.
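
A quick C sketch of how that 8th parity bit works (illustrative only; the function name is mine):

    #include <stdio.h>

    /* Even parity: set bit 7 so the total count of 1 bits is even,
       letting the receiver detect (not correct) a single flipped bit. */
    unsigned char add_even_parity(unsigned char ascii7) {
        unsigned ones = 0;
        for (int i = 0; i < 7; i++)
            ones += (ascii7 >> i) & 1u;
        return (unsigned char)(ascii7 | ((ones & 1u) << 7));
    }

    int main(void) {
        /* 'C' is 0x43 = 1000011 in binary: three 1 bits, so the
           parity bit gets set and the byte on the wire is 0xC3. */
        printf("0x%02X\n", add_even_parity('C'));
        return 0;
    }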

Binary coded decimal (BCD) also requires multiples of four bits. Even though four bits can hold 16 values, only 10 of the 16 possible values are needed to represent a digit in BCD. Since 3 bits can only represent 8 unique values, four bits, with a bit of waste, are necessary. (Storing two such digits per byte is also called packed decimal.)
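
A minimal C sketch of that packing, assuming two digits per byte (the function name is mine):

    #include <stdio.h>

    /* Two BCD digits packed into one byte, one per 4-bit nibble.
       Each nibble could hold 0..15 but only 0..9 is ever used. */
    unsigned char bcd_pack(unsigned hi, unsigned lo) {
        return (unsigned char)((hi << 4) | lo);
    }

    int main(void) {
        unsigned char b = bcd_pack(4, 2);  /* decimal 42 */
        printf("packed: 0x%02X\n", b);     /* prints 0x42 */
        printf("digits: %u%u\n", (unsigned)(b >> 4), (unsigned)(b & 0x0F));
        return 0;
    }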

koitsu
MVM
join:2002-07-16
Mountain View, CA
Humax BGW320-500

to TelecomEng
I took Kearnstd to mean: why didn't we go with a bit length of some size other than 8? Meaning, what's special about the value 8? Why must a byte range from 0-255 rather than, say, 0-63 (6-bit), 0-1023 (10-bit), or even something strange like 0-8191 (13-bit)?

There are architectures (old and present-day) which define a byte as something other than 8 bits. The examples I've seen cited are the Intel 4004 (1 byte = 4 bits), the PDP-8 (1 byte = 12 bits), the PDP-10 (byte length in bits was variable, from 1 to 36), and present-day DSPs (which often just use the term "word", where a single word can represent something like 60 bits).
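
For a sense of how such machines are handled today, here's a rough C sketch (my own, assuming PDP-8-style 12-bit words) of the mask-after-every-operation approach an emulator might use:

    #include <stdint.h>
    #include <stdio.h>

    /* A 12-bit "byte", as on the PDP-8, holds 0..4095. An emulator on
       modern 8-bit-byte hardware typically stores each 12-bit word in
       a uint16_t and masks after every operation so arithmetic wraps
       at 12 bits instead of 16. */
    #define WORD_MASK 07777u  /* octal 7777 = 4095, all 12 bits set */

    static uint16_t add12(uint16_t a, uint16_t b) {
        return (uint16_t)((a + b) & WORD_MASK);
    }

    int main(void) {
        printf("%u\n", (unsigned)add12(07777, 1));  /* 4095 + 1 wraps to 0 */
        return 0;
    }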
rradina
Member
join:2000-08-08
Chesterfield, MO

to TelecomEng
I was about ready to answer that way too, but then I read his posts again and thought about it more. Why couldn't a character have originally been defined as 10 bits? Perhaps it's because 10-bit boundaries would have been really wacky and inefficient in terms of an address controller.
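
One way to see that point: finding where character n starts in a bit-addressed memory is a single shift with 8-bit characters, but a genuine multiply with 10-bit ones. A rough C sketch (illustrative only):

    #include <stdio.h>

    int main(void) {
        unsigned n = 1000;  /* character index */
        unsigned offset8  = n << 3;               /* n * 8: one shift      */
        unsigned offset10 = (n << 3) + (n << 1);  /* n * 10: shifts + add  */
        printf("8-bit char %u starts at bit %u\n",  n, offset8);
        printf("10-bit char %u starts at bit %u\n", n, offset10);
        return 0;
    }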
gkloepfer
Premium Member
join:2012-07-21
Austin, TX

to Kearnstd
The most likely reason for 8-bit bytes was that a decimal digit could be represented by 4 bits (a "nybble" or "nibble", as it was called). It probably made sense to increase the data bus size in increments of 4 bits. Early processors even had special instructions for decimal arithmetic, to avoid having to convert an 8-bit binary number into up to 3 groups of 4 bits (which were generally displayed via a hardware decoder onto a 7-segment display).
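
That binary-to-digits conversion is the expensive step those instructions helped avoid. A minimal C sketch of it (mine, not from the post):

    #include <stdio.h>

    int main(void) {
        unsigned v = 237;          /* an 8-bit value, 0..255 */
        unsigned digit[3];
        digit[0] = v / 100;        /* hundreds */
        digit[1] = (v / 10) % 10;  /* tens     */
        digit[2] = v % 10;         /* ones     */
        /* Each 4-bit digit would feed one 7-segment decoder. */
        printf("%u -> %u %u %u\n", v, digit[0], digit[1], digit[2]);
        return 0;
    }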

The first large minicomputer I used (PDP-10/PDP-20) had a 36-bit "word" (data size), which likewise gives heartburn to those who write emulators on modern hardware geared more toward 32 bits and multiples thereof.

In any case, increasing widths by powers of 2 has some special advantages at the machine level over other sizes, which is most likely why 8 was chosen over 10 bits as the size of a byte.
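
One such machine-level advantage, sketched in C (illustrative, assuming an 8-bit byte): divide and modulo by a power of two collapse into a shift and a mask.

    #include <stdio.h>

    int main(void) {
        unsigned bit = 12345;  /* absolute bit number in memory */
        printf("byte index:  %u\n", bit >> 3);  /* bit / 8 */
        printf("bit in byte: %u\n", bit & 7u);  /* bit % 8 */
        /* With 10-bit bytes, bit/10 and bit%10 need true division. */
        return 0;
    }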