All photos, text documents and programs are stored in computer memory as bits and bytes. What are these tiny units of information and how many bits are in a byte?

Storing data in memory
Computer memory is a huge array of cells filled with zeros and ones. A cell is the smallest unit of data that the hardware can address. Physically, in modern computers, each binary digit is held by a flip-flop or an equivalent storage element so small that it is hard to see even under a microscope. Each cell has a unique address by which a program finds it.
In most cases, a cell is understood to be one byte, although depending on the bit width of the architecture, a machine may read 2, 4, or 8 bytes at a time. Electronic devices treat a byte as a whole, but it actually consists of even smaller units: bits. One byte can encode a character, such as a letter or a digit, while a single bit is not enough for that.
Controllers rarely operate on individual bits, though technically it is possible. Instead, they address whole bytes or even groups of bytes.

What is a bit?
A bit is often described as the unit of measurement of information. That definition is not quite precise, because the very concept of information is rather vague. It is more accurate to call a bit a letter of the computer alphabet. The word "bit" comes from the English phrase "binary digit".
The computer alphabet is simple, consisting of only two characters: 1 and 0 (the presence or absence of a signal, true or false). This set is quite enough to describe anything logically. A supposed third state, the computer's "silence" (the absence of any signaling), is a myth.
By itself, the letter carries no meaning: looking at a single one or zero, it is impossible to tell what kind of data the value belongs to. Photos, texts, and programs all ultimately consist of ones and zeros, so a single bit is inconvenient as an independent unit. Bits must therefore be combined to encode useful information.

What is a byte?
If a bit is a letter, then a byte is like a word. One byte can hold a text character, a small integer, part of a larger number, two very small numbers, and so on. Thus a byte already carries meaningful information, albeit in a small amount.
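A minimal sketch of these interpretations of a single byte, using only Python's built-in bytes and integer operations:

```python
# One byte, read three different ways: as a character, as an integer,
# and as two small 4-bit numbers ("nibbles").

b = bytes([65])            # one byte with the value 65
print(b.decode("ascii"))   # as a text character: 'A'
print(b[0])                # as an integer: 65

# The same byte split into two 4-bit halves:
high = b[0] >> 4           # upper four bits
low = b[0] & 0x0F          # lower four bits
print(high, low)           # 4 1  (since 65 is 0100 0001 in binary)
```

The nibble split is one way a byte can hold "two small numbers"; each half can store a value from 0 to 15.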
Beginner programmers and simply inquisitive users wonder how many bits there are in 1 byte. In modern computers, one byte always equals eight bits.
If a bit can take only two values, then a combination of eight bits yields 256 different combinations. The number 256 is two raised to the eighth power, the exponent being the number of bits in a byte.
One bit is 1 or 0. Two bits already form combinations: 00, 01, 10, and 11. With 8 bits there are exactly 256 combinations of zeros and ones, in the range 00000000 … 11111111. If you remember how many values a bit can take and how many bits one byte contains, this figure is easy to recall.
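The counts above can be checked directly in Python, enumerating the combinations with the standard library:

```python
# n bits give 2**n combinations; enumerate them to confirm the counts.
from itertools import product

two_bits = ["".join(p) for p in product("01", repeat=2)]
print(two_bits)                            # ['00', '01', '10', '11']

print(2 ** 8)                              # 256
print(len(list(product("01", repeat=8))))  # 256, counted explicitly
```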
Each combination of bits can carry different information depending on the encoding (ASCII, Unicode, etc.). That is why text entered in Russian is sometimes displayed as a jumble of strange characters.
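A short sketch of how the same bytes yield different text under different encodings, which is exactly what produces the "intricate characters" described above:

```python
# The same sequence of bytes decoded under two different encodings.
# UTF-8 and Latin-1 are used here purely as illustrative examples.
data = "Привет".encode("utf-8")   # Cyrillic text stored as UTF-8 bytes

print(data.decode("utf-8"))       # correct: Привет
print(data.decode("latin-1"))     # garbled: each byte read as one Latin-1 char
```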

Features of the binary number system
The binary system has all the same properties as the familiar decimal system: numbers made of ones and zeros can be added, subtracted, multiplied, and so on. The only difference is that it uses not 10 digits but only 2, which is exactly what makes it convenient for encoding information.
In any positional system, numbers consist of digit positions: units, tens, hundreds, and so on. In the decimal system the maximum value of one digit is 9; in the binary system it is 1. Since one digit can take only two values, binary numbers grow in length quickly. For example, the familiar number 9 is written as 1001: four characters, with one binary character corresponding to one bit.
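The decimal-to-binary correspondence is easy to verify with Python's built-in conversion functions:

```python
# Converting between decimal and binary notation with built-ins.
print(bin(9))             # '0b1001'  -> the four binary digits 1001
print(int("1001", 2))     # 9         -> back from binary to decimal

# Arithmetic works the same way in either notation:
print(9 + 3, bin(9 + 3))  # 12 0b1100
```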
Why is information encoded in binary form?
The decimal system is convenient for entering and displaying information, while the binary system is convenient for organizing its processing. Systems with eight and sixteen digits (octal and hexadecimal) are also very popular: they present machine codes in a more readable form.
The binary system is the most convenient in terms of logic. One conventionally means "yes": there is a signal, the statement is true, and so on. Zero is associated with "no": the statement is false, there is no signal. Any open-ended question can be converted into one or more questions with "yes" or "no" answers. A third option, such as "unknown", would add nothing useful.
In the course of computing history, three-state storage units, called trits, were also developed. A trit can take three values: 0 (the container is empty), 1 (half full), and 2 (full). However, the binary system proved simpler and more logical, so it gained far more popularity.
How many bits were there before in a byte?
In the past, it was impossible to say unambiguously how many bits a byte contained. Originally, a byte was understood as a machine word, that is, the number of bits the computer can process in one working cycle. Before computers reached ordinary offices, different processors worked with bytes of various sizes: a byte could consist of 6 bits, and some early machines used 9-bit bytes.
Today, 8-bit bytes are so commonplace that definitions of the byte often simply say it is a unit of information consisting of 8 bits. However, on some architectures a byte is 32 bits and coincides with the machine word. Such architectures are found in some supercomputers and signal processors, not in everyday computers, laptops, and mobile phones.
Why did the eight-bit standard win?

Bytes acquired their eight-bit size largely thanks to IBM's System/360 family, which adopted the 8-bit byte in the 1960s, and to the popularity of 8-bit microprocessors such as the Intel 8080 in the 1970s. The prevalence of these machines made 8 bits per byte the de facto standard.
The eight-bit standard is convenient because it allows two decimal digits to be stored in 1 byte, four bits each. In a 6-bit scheme only one digit fits, with 2 bits left over; in 9 bits two digits fit, but one bit is wasted. The number 8 is itself a power of two (2³), which adds further convenience.
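Storing two decimal digits in one byte is the idea behind packed binary-coded decimal (BCD). A minimal sketch, with hypothetical helper names chosen for illustration:

```python
# Packed BCD sketch: two decimal digits (0-9) in one 8-bit byte,
# four bits per digit. pack_bcd/unpack_bcd are illustrative names.
def pack_bcd(tens: int, units: int) -> int:
    """Pack two decimal digits into a single byte."""
    return (tens << 4) | units

def unpack_bcd(byte: int) -> tuple:
    """Recover the two digits from a packed byte."""
    return byte >> 4, byte & 0x0F

b = pack_bcd(4, 2)
print(b, bin(b))       # 66 0b1000010  (hex 0x42: digits visible as 4 and 2)
print(unpack_bcd(b))   # (4, 2)
```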
Uses of bits and bytes
Many users wonder how not to confuse bits and bytes. First of all, pay attention to how the designation is written: a byte is abbreviated with a capital letter "B", while a lowercase "b" denotes a bit.
However, there is always the possibility that the case is wrong (for example, some programs automatically convert all text to lower or upper case). In that case, it helps to know what is customarily measured in bits and what in bytes.

Traditionally, volumes are measured in bytes: the size of a hard drive, a flash drive, or any other medium is given in bytes and larger units, such as gigabytes.
Bits are used to measure speed. The amount of information passing through a channel, Internet connection speed, and so on are measured in bits and derived units, such as megabits. Connection speeds advertised by providers are quoted in bits per second, although many programs display download speed in bytes per second instead.
If desired, you can convert bits to bytes and vice versa. It is enough to remember how many bits are in a byte and do simple arithmetic: divide by 8 to convert bits to bytes, and multiply by 8 for the reverse.
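The conversion described above is a one-line calculation; a small sketch with illustrative helper names:

```python
# Bits <-> bytes conversion: 8 bits per byte.
BITS_PER_BYTE = 8

def bits_to_bytes(bits: int) -> float:
    return bits / BITS_PER_BYTE

def bytes_to_bits(nbytes: int) -> int:
    return nbytes * BITS_PER_BYTE

# e.g. a "100 megabit" connection moves at most 12.5 million bytes per second:
print(bits_to_bytes(100_000_000))  # 12500000.0
print(bytes_to_bits(1024))         # 8192
```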
What is a machine word?

A machine word is the information stored in one memory cell. It represents the maximum sequence of bits that the processor handles as a single unit.
The word length corresponds to the processor's bit width, which for a long time was 16 bits. In most modern computers it is 64 bits, though both shorter (32-bit) and longer machine words exist. In these cases, the number of bits forming a machine word is a multiple of eight and is easily converted into bytes.
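On a running system, the native word (pointer) size can be queried with Python's standard library; this is a sketch, and the printed value depends on the platform the interpreter was built for:

```python
# Query the platform's native pointer size via the struct module.
import struct
import sys

pointer_bits = struct.calcsize("P") * 8   # size of a native pointer, in bits
print(pointer_bits)                       # typically 64 on a modern desktop

# On a 64-bit CPython build, sys.maxsize is 2**63 - 1:
print(sys.maxsize == 2**63 - 1)
```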
For a particular computer, the word length is fixed and is one of the most important characteristics of its hardware.