If you're keen to go down the Wikipedia hole, https://en.wikipedia.org/wiki/Six-bit_character_code and then https://en.wikipedia.org/wiki/BCD_(character_encoding) explain that IBM created a 6-bit card-punch encoding for alphanumeric data in 1928, that this code was adopted by other manufacturers, and that the word sizes of IBM's early electronic computers were based on it. (Hazarding a guess: perhaps to take advantage of existing manufacturing processes for card-handling hardware, or for compatibility with customers' existing card-handling equipment, teletypes, etc.) So backward compatibility is likely the most historically accurate answer. Fewer bits wouldn't have been compatible, and more bits might not have been usable!
(Why 6-bit codes for punch cards in 1928? Dunno. Perhaps it was merely the physical properties of paper cards and the hardware for reading them. This article talks about that stuff: https://web.archive.org/web/20120511034402/http://www.ieeegh...)
tablespoon|3 years ago
I'm guessing it was the smallest practical size to encode alphanumeric data, and making it bigger than it needed to be would have added mechanical complexity and expense.
https://en.wikipedia.org/wiki/Six-bit_character_code: "Six bits can only encode 64 distinct characters, so these codes generally include only the upper-case letters, the numerals, some punctuation characters, and sometimes control characters."
There was apparently a 5-bit code in use since the 1870s, but that was only enough for alphas: https://en.wikipedia.org/wiki/Baudot_code
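The arithmetic behind those size limits is easy to check. A minimal Python sketch (illustrative only; `code_capacity` and the character counts are my own, not anything historical):

```python
# Capacity arithmetic for fixed-width character codes.

def code_capacity(bits: int) -> int:
    """Number of distinct values an n-bit fixed-width code can represent."""
    return 2 ** bits

# Rough character inventory for early alphanumeric data processing:
LETTERS = 26  # upper-case A-Z only
DIGITS = 10   # 0-9

# 5 bits (e.g. Baudot): 32 codes -- enough for the alphabet alone, but not
# letters plus digits in a single code plane (Baudot used shift codes for that).
assert code_capacity(5) == 32
assert code_capacity(5) >= LETTERS
assert code_capacity(5) < LETTERS + DIGITS

# 6 bits: 64 codes -- room for letters, digits, and some punctuation,
# without resorting to shift states.
assert code_capacity(6) == 64
assert code_capacity(6) >= LETTERS + DIGITS

print(code_capacity(5), code_capacity(6))  # 32 64
```

So 6 bits really is the smallest power-of-two-friendly width that fits letters and digits together without shift states.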
DougMerritt|3 years ago
But note that I asked why six characters, not why six bits per character. Your note is perhaps suggestive, though: maybe the six-character limit has the same origin as the six-bit character after all -- something established (possibly for mechanical reasons) in 1928? Perhaps?
lapsed_lisper|3 years ago