Epoch Converter
Convert Unix timestamps to dates and back.
What is Unix epoch time?
Unix epoch time is the count of seconds elapsed since 1970-01-01 00:00:00 UTC. The reference instant is fixed, the unit is fixed, and the value is always measured in UTC. There is no such thing as "epoch in PST" or "epoch in IST". The number itself is timezone-independent. What varies is the wall-clock string the renderer prints, which depends on whichever zone the rendering code happens to convert into.
Worked example: the value 1777010400 resolves to 2026-04-24 06:00:00 UTC. The same number resolves to 2026-04-24 11:00:00 +05:00 in Pakistan Standard Time and to 2026-04-23 23:00:00 -07:00 in Pacific Daylight Time. Three valid wall-clock displays, one underlying instant. The number on the wire never changed; only the renderer's timezone setting did. This property is exactly why systems use epoch on the storage and transport layer and only convert to a local string at the presentation edge.
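The three renderings above can be reproduced in a few lines. A minimal sketch in Python, assuming the stdlib `zoneinfo` module has access to the system tzdata (on some platforms the `tzdata` package must be installed separately):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

EPOCH = 1777010400  # the instant from the worked example

# One number, three wall-clock strings -- only the zone handed to the
# renderer changes, never the value itself.
for tz in (timezone.utc, ZoneInfo("Asia/Karachi"), ZoneInfo("America/Los_Angeles")):
    print(datetime.fromtimestamp(EPOCH, tz=tz).strftime("%Y-%m-%d %H:%M:%S %z"))
# 2026-04-24 06:00:00 +0000
# 2026-04-24 11:00:00 +0500
# 2026-04-23 23:00:00 -0700
```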
Seconds vs milliseconds vs nanoseconds
Epoch time ships in four common precisions and the digit count gives the unit away. A 10-digit value is seconds, the classic Unix convention used by POSIX, most server logs, and JWT exp claims. A 13-digit value is milliseconds, the form returned by JavaScript's Date.now() and most browser APIs. A 16-digit value is microseconds, the form Postgres now() truncates to and what most modern observability pipelines emit. A 19-digit value is nanoseconds, the form Go's time.Now().UnixNano() returns and what high-resolution profilers record.
Quick rule for sanity-checking any timestamp from the current era: a 2026 instant lands at roughly 1.78x10^9 in seconds and 1.78x10^12 in milliseconds. If a value reads 1.78x10^15, it is microseconds. If it reads 1.78x10^18, it is nanoseconds. Anything outside those magnitudes is either a different era or a unit-mismatch bug.
Worked example of the bug: a backend emits 1777010400000 (milliseconds), the consumer parses it as seconds, and the resulting date renders to roughly the year 58,000. That kind of absurd output is the clearest possible "something is wrong" smell, and it always traces back to a mismatched precision contract between two services. Our converter auto-detects the unit from the digit count, so a paste of either form resolves correctly without a manual toggle.
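The digit-count heuristic is easy to sketch. The `detect_unit` helper below is a hypothetical illustration of the idea, not the converter's actual code:

```python
def detect_unit(ts: int) -> tuple[str, float]:
    """Guess the precision of an epoch value from its digit count and
    return (unit_name, value_in_seconds). Heuristic only: values far
    from the current era will be misclassified."""
    digits = len(str(abs(ts)))
    if digits <= 10:
        return "seconds", float(ts)
    if digits <= 13:
        return "milliseconds", ts / 1_000
    if digits <= 16:
        return "microseconds", ts / 1_000_000
    return "nanoseconds", ts / 1_000_000_000

# The mismatch bug from the paragraph above: a 13-digit value is
# milliseconds, and both forms normalize to the same instant in seconds.
print(detect_unit(1777010400))     # ('seconds', 1777010400.0)
print(detect_unit(1777010400000))  # ('milliseconds', 1777010400.0)
```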
Common pitfalls
Epoch time looks deceptively simple. It is one number. The pitfalls below all stem from treating that number as if it carried context it does not, or from quietly mixing two precisions in a single code path. Each one has cost real production hours in postmortems we have read or written.
- Treating "epoch in local time" as a concept. It is not one. Epoch is always UTC by definition, and the wall-clock display is a rendering choice made at the presentation layer. A timestamp does not carry a timezone, and trying to "store epoch in EST" means storing a wrong number that will be misread the moment it crosses a process boundary.
- The Y2038 problem. A 32-bit signed Unix timestamp overflows at 2147483647, which is 2038-01-19 03:14:07 UTC. Past that instant the value wraps negative and dates appear in 1901. The active risk concentrates in embedded systems with 32-bit `time_t`, older C code compiled before the kernel's 64-bit migration, and MySQL `TIMESTAMP` columns. Audit before the cliff, not after.
- Mixing seconds and milliseconds in the same code path. A multiplication or division by 1000 inserted in one branch and forgotten in another produces silently wrong timestamps that pass every type check. The bug stays invisible in development (durations of seconds look the same as milliseconds at small scale) and surfaces in production when a cache TTL fires 1000x too soon or a token expires 1000x too late.
- Using `Date.now() / 1000` and forgetting to `Math.floor` the result. The output is a fractional second that some APIs reject outright. AWS request signing is the canonical example: a JWT or signed request with `iat: 1777010400.123` fails validation at the gateway with a useless "invalid signature" error. Always floor the value when seconds precision is required.
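The Y2038 wraparound is cheap to reproduce without waiting for 2038. A sketch using Python's `struct` to reinterpret an unsigned value as a signed 32-bit `time_t` (note: converting negative timestamps may raise `OSError` on some platforms, e.g. Windows):

```python
import struct
from datetime import datetime, timezone

T_MAX = 2_147_483_647  # largest 32-bit signed value, 2038-01-19 03:14:07 UTC

# One second past the cliff: pack as unsigned 32-bit, reinterpret as
# signed -- the classic two's-complement wraparound.
wrapped = struct.unpack("<i", struct.pack("<I", (T_MAX + 1) & 0xFFFFFFFF))[0]
print(wrapped)                                      # -2147483648
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))  # 1901-12-13 20:45:52+00:00
```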
When to use this tool
We built the converter for three concrete workflows. The first is debugging a server log that prints raw epoch timestamps with no human-readable side: paste the number, get the UTC and local wall-clock instantly, and stop interrupting the investigation to open a Python REPL. The second is setting an OAuth token's exp claim correctly. JWT exp is seconds since epoch, not milliseconds, and emitting Date.now() directly into that field is one of the most common bugs in fresh auth code. The third is sanity-checking a database created_at against the wall-clock time of a known event. If a row claims to have been written at 1777010400 but the deploy happened at 12:00 UTC the same day, the converter shows the gap in one paste.
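The second workflow — emitting a correct `exp` claim — takes one line to get right and one subtle mistake to get wrong. A minimal sketch (`token_claims` and its `lifetime_s` parameter are illustrative names, not a real library API):

```python
import time

def token_claims(lifetime_s: int = 3600) -> dict:
    """Build iat/exp JWT claims as whole seconds since the epoch.
    int() drops the fractional part that strict validators reject;
    emitting Date.now()-style milliseconds here is the classic bug."""
    now = int(time.time())  # seconds, floored -- never time.time() * 1000
    return {"iat": now, "exp": now + lifetime_s}

claims = token_claims(3600)  # expires one hour from issuance
```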
Frequently asked
- Why is my timestamp showing the wrong time?
- Two causes account for nearly every case. First, the renderer is converting to a different timezone than expected: epoch is always UTC and the wall-clock display is a rendering choice, so a server in PST and a browser in IST will print different strings for the same number. Second, confusing seconds with milliseconds: a 13-digit value parsed as seconds resolves to year 58000, while a 10-digit value parsed as milliseconds resolves to 1970.
- How do I tell if a timestamp is in seconds or milliseconds?
- Count digits against the current year. A 2026 timestamp lands at roughly 1.78x10^9 in seconds, 1.78x10^12 in milliseconds, 1.78x10^15 in microseconds, and 1.78x10^18 in nanoseconds. So 10 digits = seconds (classic Unix), 13 digits = milliseconds (JavaScript `Date.now`), 16 digits = microseconds (Postgres `now()` truncated), 19 digits = nanoseconds (Go `time.Now().UnixNano()`). Our converter auto-detects from the digit count and warns when the value sits between two valid ranges.
- What is the Y2038 problem?
- 32-bit signed Unix timestamps overflow at 2147483647, which corresponds to 2038-01-19 03:14:07 UTC. Past that instant, the value wraps to a large negative number and dates appear in 1901. The risk is concentrated in three places: embedded systems and IoT devices with 32-bit `time_t`, legacy C code compiled before the Linux kernel's 2020 64-bit time_t migration, and MySQL `TIMESTAMP` columns (still 32-bit through MySQL 9). Audit and migrate before the cliff.
- Does my database store timestamps in UTC?
- Depends on the column type. Postgres `TIMESTAMPTZ` stores instants in UTC and converts to the session timezone on read: this is the only choice we use in production. Postgres `TIMESTAMP` (without TZ) stores wall-clock with no zone info and is genuinely ambiguous. MySQL `TIMESTAMP` stores UTC under the hood but is 32-bit (Y2038-vulnerable). MySQL `DATETIME` is 64-bit but naive wall-clock with no zone. Always pick the TZ-aware variant: `TIMESTAMPTZ` on Postgres, `TIMESTAMP` on MySQL until 64-bit lands.
- How do I convert between epoch and ISO 8601?
- JavaScript: `new Date(epochMs).toISOString()` returns a Z-suffixed UTC string. Python: `datetime.fromtimestamp(epochSeconds, tz=timezone.utc).isoformat()`. Bash on Linux: `date -u -d @1777010400 +'%Y-%m-%dT%H:%M:%SZ'`; on macOS swap `-d` for `-r`. Postgres: `to_timestamp(1777010400) AT TIME ZONE 'UTC'`. ISO 8601 always preserves the UTC offset (either `Z` or `+HH:MM`), so round-tripping epoch -> ISO -> epoch is lossless to the second.
- What's the difference between epoch time and Unix time?
- Same thing. POSIX defines the value formally as "seconds since 1970-01-01 00:00:00 UTC" in IEEE Std 1003.1, and "Unix time", "Unix epoch time", "POSIX time", and plain "epoch" are interchangeable in casual usage. Strict reading: "epoch" is the reference instant (1970-01-01), and "Unix time" is the count from that instant. Loose reading also calls a 13-digit JavaScript `Date.now()` value "epoch", which is technically "milliseconds since the Unix epoch".