|reply to Kearnstd |
Early engineers needed to represent the alphabet (upper and lower case), 10 digits, various common symbols (+-/"%$...), and control characters. This required 7 bits, and it was the birth of the ASCII character set. An 8th bit was added for parity error detection (parity can flag a single flipped bit, though it cannot correct it). Anything more than this was wasteful, and in those early days core memory was ridiculously expensive and in very short supply.
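Just to make that concrete, here's a minimal sketch (the function names are my own, not from any standard library) of how a transmitter might fold an even-parity bit into the unused 8th bit of a 7-bit ASCII code:

```c
#include <stdio.h>

/* Hypothetical sketch: compute an even-parity bit for a 7-bit
 * ASCII code and place it in the 8th (high) bit, so the total
 * number of 1 bits in the byte comes out even. */
unsigned char add_even_parity(unsigned char c)
{
    unsigned char bits = c & 0x7F;   /* keep only the 7 ASCII bits */
    int ones = 0;
    for (int i = 0; i < 7; i++)
        ones += (bits >> i) & 1;     /* count the 1 bits */
    return bits | (unsigned char)((ones & 1) << 7);
}

int main(void)
{
    /* 'C' is 0x43 = 0100 0011: three 1 bits (odd), so the
     * parity bit gets set and the result is 0xC3. */
    printf("0x%02X\n", add_even_parity('C'));
    return 0;
}
```

The receiver just re-counts the 1 bits: if the count comes out odd, some bit got flipped in transit, but parity alone can't tell you which one.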
Binary coded decimal (BCD) also requires multiples of four bits. Even though four bits can hold 16 values, only 10 of the 16 are needed to represent a digit in BCD. However, since 3 bits can only represent 8 unique values, four bits with a bit of waste are necessary. (Storing two such digits per byte is called packed decimal.)
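A quick sketch of the packing, again with made-up function names, assuming a two-digit value fits in one byte with one decimal digit per nibble:

```c
#include <stdio.h>

/* Hypothetical sketch: pack a two-digit decimal value (0..99)
 * into one byte as packed decimal -- tens digit in the high
 * nibble, units digit in the low nibble. */
unsigned char to_packed_bcd(unsigned int n)
{
    return (unsigned char)(((n / 10) << 4) | (n % 10));
}

unsigned int from_packed_bcd(unsigned char b)
{
    return ((b >> 4) * 10) + (b & 0x0F);
}

int main(void)
{
    unsigned char b = to_packed_bcd(42);
    /* Prints "0x42 -> 42": the hex digits mirror the decimal
     * digits, which is the whole appeal of BCD. */
    printf("0x%02X -> %u\n", b, from_packed_bcd(b));
    return 0;
}
```

Note the waste mentioned above: nibble values 0xA through 0xF never occur in a valid BCD digit, so 6 of the 16 codes per nibble go unused.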