adjacent bits, usually eight, processed by a computer as a unit.
the combination of bits used to represent a particular letter, number, or special character.
a group of bits, usually eight, processed as a single unit of data
the storage space in a memory or other storage device that is allocated to such a group of bits
a subdivision of a word
A sequence of adjacent bits operated on as a unit by a computer. A byte usually consists of eight bits. Amounts of computer memory are often expressed in terms of megabytes (1,048,576 bytes) or gigabytes (1,073,741,824 bytes).
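The memory quantities quoted above are powers of two; as a quick illustration (not part of the dictionary entry itself), a few lines of Python confirm the figures:

```python
# Megabyte and gigabyte in the binary sense used above are powers of two.
megabyte = 2 ** 20   # 1,048,576 bytes
gigabyte = 2 ** 30   # 1,073,741,824 bytes

print(megabyte)   # 1048576
print(gigabyte)   # 1073741824
```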
Our Living Language: The word bit is short for binary digit. A bit consists of one of two values, usually 0 or 1. Computers use bits because their system of counting is based on two options: switches on a microchip that are either on or off. Thus, a computer counts to seven in bits as follows: 0, 1, 10, 11, 100, 101, 110, 111. Notice that the higher the count, the more adjacent bits are needed to represent the number. For example, it requires two adjacent bits to count from 0 to 3, and three adjacent bits to count from 0 to 7. A sequence of bits can represent not just numbers but other kinds of data, such as the letters and symbols on a keyboard. The 0s and 1s that make up data are usually counted in groups of 8, and these groups of 8 bits are called bytes. The word byte is a deliberate respelling of bite, altered to avoid confusion with bit. Transmitting one keystroke on a typical keyboard requires one byte of information (8 bits), and transmitting a three-letter word requires three bytes (24 bits).
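The counting and byte arithmetic described above can be sketched in Python (the variable names here are illustrative, not from the entry):

```python
# Counting from 0 to 7 in binary: each value needs at most three bits.
for n in range(8):
    print(format(n, "b"))   # 0, 1, 10, 11, 100, 101, 110, 111

# Two adjacent bits suffice to count from 0 to 3; three bits from 0 to 7.
assert max(len(format(n, "b")) for n in range(4)) == 2
assert max(len(format(n, "b")) for n in range(8)) == 3

# A three-letter word in an 8-bit encoding occupies three bytes, i.e. 24 bits.
word = "cat".encode("ascii")
print(len(word), len(word) * 8)   # 3 24
```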
A popular computing magazine.
/bi:t/ (B) A component in the machine data hierarchy larger than a bit and usually smaller than a word; now nearly always eight bits and the smallest addressable unit of storage. A byte typically holds one character.
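A minimal sketch of "the smallest addressable unit of storage" and "a byte typically holds one character", using Python's bytes type (an assumption of an 8-bit byte and ASCII encoding, as is standard on modern machines):

```python
# One ASCII character occupies exactly one byte, and indexing a bytes
# object addresses individual bytes, yielding their integer values.
data = b"A"
print(len(data))   # 1 -- a single byte
print(data[0])     # 65 -- the ASCII code of 'A'
```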
A byte may be 9 bits on 36-bit computers. Some older architectures used “byte” for quantities of 6 or 7 bits, and the PDP-10 and IBM 7030 supported “bytes” that were actually bit-fields of 1 to 36 (or 64) bits! These usages are now obsolete, and even 9-bit bytes have become rare in the general trend toward power-of-2 word sizes.
The term was coined by Werner Buchholz in 1956 during the early design phase for the IBM Stretch computer. It was a mutation of the word “bite” intended to avoid confusion with “bit”. In 1962 he described it as “a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units”. The move to an 8-bit byte happened in late 1956, and this size was later adopted and promulgated as a standard by the IBM System/360 computer family (announced April 1964).
James S. Jones adds:
I am sure I read in a mid-1970’s brochure by IBM that outlined the history of computers that BYTE was an acronym that stood for “Bit asYnchronous Transmission E..?” which related to width of the bus between the Stretch CPU and its CRT-memory (prior to Core).
Terry Carr says:
In the early days IBM taught that a series of bits transferred together (like so many yoked oxen) formed a Binary Yoked Transfer Element (BYTE).
[True origin? First 8-bit byte architecture?]
See also nibble, octet.