Unix Timestamp: Seconds vs Milliseconds, and Why It Matters

The single most common timestamp bug in modern apps. How to spot the difference, convert between them, and never get bitten again.

5 min read · by ToolsWalla
Tags: timestamps, developers, tutorial

You're chasing a bug where the "created date" in your app shows January 1970, or your script crashes with a "year out of range" error. Or you're reading a CSV export where every row has a timestamp like 1729345200, but the date you parse out of it is nowhere near right. All of these are the same bug: confusing Unix timestamps in seconds with Unix timestamps in milliseconds.

The core distinction

The POSIX standard defines a Unix timestamp as the number of seconds since 1970-01-01 00:00:00 UTC. But Java, JavaScript, and a lot of modern tooling use milliseconds instead. So you have two flavors floating around:

  • Seconds: 10 digits for any recent date. Example: 1729345200
  • Milliseconds: 13 digits. Example: 1729345200000

A quick rule: if the number has 10 digits, it's almost certainly seconds. If it has 13, milliseconds. You rarely see other lengths in the wild.
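
A quick way to convince yourself the two forms describe the same instant is to convert both, as in this JavaScript sketch (the values are the examples above; outputs are shown as comments):

javascript
// Both numbers describe the same moment; only the unit differs.
new Date(1729345200 * 1000).toISOString()  // "2024-10-19T13:40:00.000Z" (seconds, scaled up)
new Date(1729345200000).toISOString()      // "2024-10-19T13:40:00.000Z" (milliseconds, used as-is)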

The ways this breaks

Your date shows up as 1970

You got a timestamp in seconds and passed it to new Date() in JavaScript, which expects milliseconds. The result: something around 1970-01-20 because 1729345200 milliseconds is about 20 days after the epoch.

Fix: multiply by 1000.

javascript
new Date(1729345200 * 1000)  // correct
new Date(1729345200)         // wrong, gives 1970

Your script crashes with "year out of range"

You got a timestamp in milliseconds and passed it to Python's datetime.fromtimestamp(), which expects seconds.

Fix: divide by 1000.

python
from datetime import datetime

datetime.fromtimestamp(1729345200000 / 1000)  # correct
datetime.fromtimestamp(1729345200000)         # wrong, raises ValueError: year out of range

Your database stores one but your API expects the other

This is especially common in microservice systems: your Postgres column stores epoch seconds (a common convention), but your Node.js service assumes milliseconds because that's what Date.now() returns. Reads come back as dates near 1970; writes land far in the future.

The fix is to pick one convention and enforce it at the boundary. The safest choice is ISO-8601 strings ("2024-10-19T12:00:00Z") at API boundaries, converting to whatever your storage needs internally.
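
As a minimal sketch of that boundary rule, assuming a Node.js service whose database column holds epoch seconds (the helper names are illustrative, not a specific library's API):

javascript
// Outbound: epoch seconds from storage -> ISO-8601 string in the API response.
function secondsToIso(epochSeconds) {
  return new Date(epochSeconds * 1000).toISOString();
}

// Inbound: ISO-8601 string from a client -> epoch seconds for storage.
function isoToSeconds(isoString) {
  return Math.floor(new Date(isoString).getTime() / 1000);
}

secondsToIso(1729345200)               // "2024-10-19T13:40:00.000Z"
isoToSeconds("2024-10-19T13:40:00Z")   // 1729345200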

How to tell which you're looking at

If you have a timestamp and aren't sure:

  • Fewer than 10 digits: seconds, pointing at a date before September 2001. Probably legitimate, just old.
  • Exactly 10 digits starting with 1 or 2: seconds, a date between 2001 and the mid-2060s.
  • 13 digits starting with 17 or 18: milliseconds, a date between late 2023 and early 2030.
  • Anything weirder: ask whoever sent it.
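
If you have to handle raw epoch numbers anyway, a small normalizer keeps the guesswork in one place. This is a sketch built on the digit-count heuristic above, not a standard API; the function name and cutoff are just that rule of thumb:

javascript
// Normalize an epoch number to milliseconds using the digit-count heuristic.
function toEpochMilliseconds(ts) {
  const digits = String(Math.trunc(Math.abs(ts))).length;
  return digits >= 13 ? ts : ts * 1000;  // 13+ digits: assume ms; otherwise assume seconds
}

new Date(toEpochMilliseconds(1729345200)).toISOString()     // "2024-10-19T13:40:00.000Z"
new Date(toEpochMilliseconds(1729345200000)).toISOString()  // same instant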

The ToolsWalla Timestamp Converter auto-detects the format by length so you never have to multiply or divide by 1000 manually.

Timezone is a separate problem

Epoch time is always UTC. If your display says "3pm" and you expected "9am", that's a timezone problem, not a seconds-vs-milliseconds problem. The underlying moment in time is the same.

Store epoch values (which are UTC by definition) and convert to the user's local timezone only for display. Mixing the two creates bugs that are much harder to catch than the multiply-by-1000 kind.
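
Here's a small JavaScript sketch of that habit; the timezone name is just an example, and the exact display format depends on the runtime and locale:

javascript
const createdAt = 1729345200;            // epoch seconds, inherently UTC
const d = new Date(createdAt * 1000);

d.toISOString()                                              // "2024-10-19T13:40:00.000Z" -- store and log this
d.toLocaleString("en-US", { timeZone: "America/New_York" })  // e.g. "10/19/2024, 9:40:00 AM" -- display only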

One more trap: leap seconds

POSIX time does not count leap seconds. That means there is technically no POSIX timestamp for a leap second. Most real systems pretend this isn't a problem and everything works. If you're building a high-precision financial or scientific system, this matters. For the other 99% of apps, you can ignore it.

Summary

The seconds-vs-milliseconds bug is preventable with two habits:

1. At API and CSV boundaries, prefer ISO-8601 strings.
2. When dealing with raw epoch numbers, assume 10 digits is seconds and 13 is milliseconds.

If you get into the habit of running suspicious timestamps through a converter before trusting them, you'll catch the issue in seconds instead of discovering it when a bug hits production.
