Binary is the language computers speak natively — not because engineers chose it arbitrarily, but because transistors have two stable states (on and off, 1 and 0), and a number system built on two symbols maps perfectly onto physical hardware. Every decimal number you type, every character you see on screen, every color in an image, and every instruction a processor executes exists somewhere in the system as a sequence of binary digits (bits). Understanding how to convert between binary and decimal — and understanding why the conversion works the way it does — is the foundation of understanding how computers represent any information at all.
Binary Arithmetic
Binary addition follows the same column-by-column rules as decimal addition; the only difference is that a carry is triggered at 2 instead of 10. Decimal: 1 + 1 = 2. Binary: 1 + 1 = 10₂ (write 0, carry 1). The full binary addition rules: 0+0=0, 0+1=1, 1+0=1, 1+1=0 carry 1, 1+1+1 (with carry) = 1 carry 1.
Add 1011₂ + 0110₂: rightmost column: 1+0=1. Second column: 1+1=0 carry 1. Third column: 0+1+1(carry)=0 carry 1. Fourth column: 1+0+1(carry)=0 carry 1. Fifth column: carry 1. Result: 10001₂. Verify: 11₁₀ + 6₁₀ = 17₁₀, and 10001₂ = 16 + 1 = 17₁₀. Correct. CPU addition circuits implement exactly this process at the bit level, billions of times per second.
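The column-by-column walkthrough above can be sketched in Python as a ripple-carry addition over binary strings. This is an illustrative sketch, not hardware-accurate code; the function name `add_binary` is invented for this example.

```python
def add_binary(a: str, b: str) -> str:
    """Add two binary strings column by column, ripple-carry style."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)  # pad to equal width
    carry = 0
    result = []
    for i in range(width - 1, -1, -1):     # rightmost column first
        total = int(a[i]) + int(b[i]) + carry
        result.append(str(total % 2))      # sum bit for this column
        carry = total // 2                 # carry into the next column
    if carry:
        result.append("1")                 # final carry becomes a leading bit
    return "".join(reversed(result))

print(add_binary("1011", "0110"))  # 10001  (11 + 6 = 17)
```

Each loop iteration applies exactly the rules listed above: a column total of 2 or 3 produces a sum bit of 0 or 1 with a carry of 1 into the next column.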
Two's Complement: How Computers Store Negative Numbers
Positive binary numbers are straightforward. Negative numbers use two's complement representation — a specific encoding that lets the CPU add and subtract without special-case logic. To find the two's complement of a number: flip all bits (the one's complement), then add 1.
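The flip-and-add-one procedure can be expressed directly with bit operations. A minimal sketch, assuming an 8-bit width; `twos_complement` is an illustrative name, not a library function.

```python
def twos_complement(value: int, bits: int = 8) -> int:
    """Encode -value in `bits`-bit two's complement."""
    mask = (1 << bits) - 1          # 0xFF for 8 bits
    ones = value ^ mask             # flip all bits (one's complement)
    return (ones + 1) & mask        # add 1, keep only `bits` bits

print(format(twos_complement(13), "08b"))  # 11110011
```

The final mask mirrors fixed-width hardware: any carry out of the top bit is simply discarded.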
To represent -13 in 8-bit two's complement: 13₁₀ = 00001101₂. Flip all bits: 11110010. Add 1: 11110011. So -13 is stored as 11110011. To verify, -13 + 13 should equal 0: 11110011 + 00001101 = 100000000 (9 bits, but the 9th bit overflows and is discarded in 8-bit storage) → 00000000. Zero. Correct. This elegant trick means CPUs can perform subtraction using their addition circuits — subtraction is just addition of the two's complement. It's why computer hardware can be simplified dramatically: one circuit type handles both operations.
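The verification above, and the subtraction-via-addition trick, can be demonstrated by masking sums to 8 bits, which discards the overflowing 9th bit just as fixed-width hardware does. A sketch under that 8-bit assumption:

```python
MASK = 0xFF                             # keep only the low 8 bits

neg_13 = ((13 ^ MASK) + 1) & MASK       # two's complement of 13: 0b11110011
print((neg_13 + 13) & MASK)             # 0: -13 + 13 overflows and wraps to zero
print((20 + neg_13) & MASK)             # 7: 20 - 13, computed with addition only
```

The second print is the payoff: the same addition that verified -13 + 13 = 0 also computes 20 - 13, with no dedicated subtraction circuit.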