Number Base Converter

Convert between binary, octal, decimal, hexadecimal, and custom bases (2–36) with positional notation breakdown and two's complement display.

Input

Conversions

BIN (2)
OCT (8)
DEC (10)
HEX (16)

Two's Complement

8-bit
Signed value
Unsigned

ASCII / Character

parseInt(n, fromBase) → num.toString(toBase).toUpperCase()
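In plain JavaScript that round trip can be sketched as follows (the function name `convertBase` is illustrative, not the tool's actual code):

```javascript
// Convert a digit string between arbitrary bases (2–36).
// parseInt handles the "from" side; toString handles the "to" side.
function convertBase(digits, fromBase, toBase) {
  const value = parseInt(digits, fromBase); // decimal intermediate
  if (Number.isNaN(value)) {
    throw new Error(`"${digits}" is not a valid base-${fromBase} number`);
  }
  return value.toString(toBase).toUpperCase();
}
```

For example, `convertBase("FF", 16, 2)` yields `"11111111"`.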

Positional Notation Breakdown

Enter a number in the Converter tab to see its breakdown.

Bit Pattern

Decimal 0–255 Quick Reference

Dec | Bin | Oct | Hex

ASCII Printable Characters (32–126)

Dec | Hex | Char

Common Hex Colors

Frequently Asked Questions

What is a number base (radix)?

A number base (or radix) defines how many unique digits are used in a positional numeral system. Base 10 (decimal) uses 0–9. Base 2 (binary) uses only 0 and 1. Base 16 (hexadecimal) uses 0–9 plus A–F.

How does binary work?

Binary uses only 0 and 1. Each digit (bit) represents a power of 2. The rightmost bit is 2⁰=1, next is 2¹=2, then 2²=4, etc. Binary 1010 = (1×8) + (0×4) + (1×2) + (0×1) = 10 in decimal.
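That positional expansion can be sketched in a few lines of JavaScript (the helper name `binaryToDecimal` is hypothetical):

```javascript
// Expand a binary string digit by digit: the bit at position i from the
// right contributes bit × 2^i to the total.
function binaryToDecimal(bits) {
  return [...bits].reduce(
    (sum, bit, i) => sum + Number(bit) * 2 ** (bits.length - 1 - i),
    0
  );
}
```

`binaryToDecimal("1010")` returns `10`, matching the expansion above.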

Why is hexadecimal used in computing?

Hex is compact — one hex digit represents exactly 4 bits (a nibble). Two hex digits represent a full byte. It is much easier to read and write memory addresses, color codes (#FF5733), and machine code in hex than binary.

What is two's complement?

Two's complement is how computers represent negative integers. To negate a number: flip all bits (one's complement) then add 1. In 8-bit: +5 = 00000101, −5 = 11111010 + 1 = 11111011. This makes addition and subtraction use the same hardware circuit.
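The flip-and-add-one rule for 8 bits can be sketched directly with bitwise operators (the helper name is illustrative):

```javascript
// 8-bit two's complement: invert all bits, add 1, mask to 8 bits.
// The & 0xFF mask discards everything above the low byte.
function twosComplement8(n) {
  return (~n + 1) & 0xFF;
}
```

`twosComplement8(5).toString(2).padStart(8, "0")` gives `"11111011"`, the −5 pattern shown above.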

How do you convert decimal to binary?

Divide the number by 2 repeatedly, recording each remainder; the binary digits are those remainders read from last to first. Example: 13 ÷ 2 = 6 R1, 6 ÷ 2 = 3 R0, 3 ÷ 2 = 1 R1, 1 ÷ 2 = 0 R1 → reading the remainders back upward gives 1101.
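The repeated-division algorithm translates to a short loop (the helper name is illustrative):

```javascript
// Repeated division by 2: each remainder becomes the next binary digit,
// prepended so the last remainder ends up leftmost.
function decimalToBinary(n) {
  if (n === 0) return "0";
  let bits = "";
  while (n > 0) {
    bits = (n % 2) + bits; // prepend the remainder
    n = Math.floor(n / 2);
  }
  return bits;
}
```

`decimalToBinary(13)` returns `"1101"`, as in the worked example.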

What is octal and where is it used?

Octal (base 8) uses digits 0–7. It was used in early computing because 3 bits map to one octal digit. Today it appears in Unix/Linux file permissions (chmod 755 means rwxr-xr-x — each digit is 3 permission bits).
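Decoding an octal mode into rwx triplets can be sketched like this (the helper `octalToRwx` is hypothetical, not how chmod works internally):

```javascript
// Each octal digit is 3 permission bits: read = 4, write = 2, execute = 1.
function octalToRwx(mode) {
  return [...String(mode)].map(d => {
    const bits = parseInt(d, 8);
    return (bits & 4 ? "r" : "-")
         + (bits & 2 ? "w" : "-")
         + (bits & 1 ? "x" : "-");
  }).join("");
}
```

`octalToRwx(755)` returns `"rwxr-xr-x"`.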

How are hex color codes structured?

#RRGGBB format: each pair of hex digits represents red, green, blue intensity from 00 (0) to FF (255). #FF0000 = pure red, #00FF00 = pure green, #0000FF = pure blue, #FFFFFF = white, #000000 = black.
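Splitting a #RRGGBB code into its three channels can be sketched with bit shifts (the helper name is illustrative):

```javascript
// Parse the six hex digits as one 24-bit number, then shift and mask
// out each byte-sized channel.
function hexToRgb(hex) {
  const n = parseInt(hex.replace("#", ""), 16);
  return { r: (n >> 16) & 0xFF, g: (n >> 8) & 0xFF, b: n & 0xFF };
}
```

`hexToRgb("#FF5733")` returns `{ r: 255, g: 87, b: 51 }`.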

What does IEEE 754 floating point mean?

IEEE 754 is the standard for representing real (fractional) numbers in binary floating point. A 32-bit float has 1 sign bit, 8 exponent bits, and 23 mantissa bits, giving approximately 7 significant decimal digits of precision. Example: 3.14 as a 32-bit IEEE 754 float is 0 10000000 10010001111010111000011.
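One way to inspect that bit pattern from JavaScript is a `DataView` over a 4-byte buffer (a sketch; the function name is illustrative):

```javascript
// Write a number as a 32-bit IEEE 754 float, then read the same four
// bytes back as an unsigned integer to expose the raw bit pattern.
function floatToBits(x) {
  const view = new DataView(new ArrayBuffer(4));
  view.setFloat32(0, x);
  return view.getUint32(0).toString(2).padStart(32, "0");
}
```

`floatToBits(3.14)` returns `"01000000010010001111010111000011"`: sign 0, exponent 10000000, mantissa 10010001111010111000011.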

Can any base be used?

Positional notation works for any base of 2 or more, but this converter supports bases 2 through 36 (using digits 0–9 and letters A–Z). Base 36 (alphanumeric) is used for URL shorteners. Base 60 (sexagesimal) survives in timekeeping (60 seconds, 60 minutes) and angles (360°), though it has no standard single-character digit set.
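JavaScript's built-ins cover that 2–36 range directly, so a base 36 round trip needs no custom digit table (the helper name is illustrative):

```javascript
// Base 36 uses digits 0–9 followed by letters A–Z; both toString and
// parseInt accept a radix up to 36.
function toBase36(n) {
  return n.toString(36).toUpperCase();
}
```

A round trip such as `parseInt(toBase36(1234567890), 36)` recovers the original number; the base 36 form is only six characters long.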

What is the maximum safe integer in JavaScript?

JavaScript's Number.MAX_SAFE_INTEGER is 2⁵³ − 1 = 9,007,199,254,740,991. Beyond this, integer precision is lost. For larger numbers, BigInt must be used. This calculator handles values up to MAX_SAFE_INTEGER accurately.
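Past that limit, conversion has to go through BigInt. A minimal parsing sketch (assuming the digit string is valid in the given base; the function name is hypothetical):

```javascript
// Accumulate digits left to right entirely in BigInt arithmetic, so no
// precision is lost even far beyond 2^53 − 1.
function bigIntFromBase(digits, base) {
  const B = BigInt(base);
  return [...digits.toLowerCase()].reduce(
    (acc, ch) => acc * B + BigInt(parseInt(ch, base)),
    0n
  );
}
```

`bigIntFromBase("ffffffffffffffff", 16)` returns `18446744073709551615n` (2⁶⁴ − 1), which a plain Number could not hold exactly.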

How do you convert between bases without going through decimal?

The most practical approach is to convert to decimal as an intermediate step. However, binary and hex have a shortcut: since 16 = 2⁴, each hex digit maps directly to exactly 4 binary digits. Similarly, octal digits map to 3 binary digits each.
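The digit-by-digit shortcut can be sketched without any full-number decimal conversion (the helper name is illustrative):

```javascript
// Hex → binary without a decimal intermediate: each hex digit expands
// independently to its fixed 4-bit pattern.
function hexToBinary(hex) {
  return [...hex]
    .map(d => parseInt(d, 16).toString(2).padStart(4, "0"))
    .join("");
}
```

`hexToBinary("2F")` returns `"00101111"`: the digit 2 becomes 0010 and F becomes 1111, each expanded in isolation.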

What is a nibble and a byte?

A nibble is 4 bits (one hex digit, values 0–15). A byte is 8 bits (two hex digits, values 0–255). A word is typically 16 bits (4 hex digits, 0–65535). These are fundamental units in computer architecture.

Why do programmers count from zero?

Zero-based indexing aligns with binary addressing. The first address is 0 (all bits zero). An array of N elements occupies addresses 0 through N−1, so the element count is (last − first) + 1 = (N−1 − 0) + 1 = N, and an element's offset from the start of the array is simply its index.

What is BCD (Binary Coded Decimal)?

BCD encodes each decimal digit separately as 4 bits. The decimal number 59 in BCD is 0101 1001 (5 = 0101, 9 = 1001). BCD is used in financial applications and digital displays where exact decimal representation is needed without floating-point rounding.
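BCD encoding can be sketched in a couple of lines (the helper name is illustrative):

```javascript
// BCD: each decimal digit becomes its own 4-bit group, joined here with
// spaces for readability.
function toBCD(n) {
  return [...String(n)]
    .map(d => Number(d).toString(2).padStart(4, "0"))
    .join(" ");
}
```

`toBCD(59)` returns `"0101 1001"`, matching the example above.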

How are negative numbers represented in binary?

Three main methods exist: sign-magnitude (leftmost bit = sign), one's complement (flip all bits), and two's complement (flip all bits + 1). Modern computers universally use two's complement because it avoids having two representations of zero and simplifies arithmetic circuits.
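The three encodings of the same negative value can be compared side by side in 8 bits (a sketch; the function name is hypothetical):

```javascript
// Encode −n (for positive n < 128) under all three schemes.
function negativeEncodings8(n) {
  const bits = x => x.toString(2).padStart(8, "0");
  return {
    signMagnitude:  bits(0x80 | n),          // set the sign bit, keep magnitude
    onesComplement: bits(~n & 0xFF),         // flip every bit
    twosComplement: bits((~n + 1) & 0xFF),   // flip every bit, then add 1
  };
}
```

For n = 5 this yields sign-magnitude `10000101`, one's complement `11111010`, and two's complement `11111011`. Note that sign-magnitude and one's complement each have two zeros (e.g. `00000000` and `10000000` in sign-magnitude), the flaw two's complement removes.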