Conversion from UNIX time to timestamp starting in January 1, 2000
Solution 1
The time 1 January 1970 00:00:00 UTC is the UNIX epoch. So, if you want to convert from UNIX time to a timestamp with an epoch of January 1, 2000 (call it the "2000 epoch"), the simplest way is to subtract the UNIX time of January 1, 2000 from the UNIX time:
<2000 time> = <UNIX time> - <January 1, 2000 UNIX time>
<UNIX time> = <2000 time> + <January 1, 2000 UNIX time>
where the UNIX time of January 1, 2000 is 946684800.
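The two formulas above can be sketched in Python. This is a minimal sketch; the constant and function names are just for illustration:

```python
# Minimal sketch of the constant-offset approach: shift Unix timestamps
# to a 2000-01-01T00:00:00Z epoch and back.
EPOCH_2000_OFFSET = 946_684_800  # Unix time of 2000-01-01 00:00:00 UTC

def unix_to_2000(unix_time):
    """Seconds since 2000-01-01 00:00:00 UTC."""
    return unix_time - EPOCH_2000_OFFSET

def from_2000_to_unix(t2000):
    """Seconds since the Unix epoch (1970-01-01 00:00:00 UTC)."""
    return t2000 + EPOCH_2000_OFFSET

print(unix_to_2000(1456979510))  # 510294710, close to the OP's 510294713
```

Applied to the OP's numbers, this yields 510294710 rather than the observed 510294713; the remaining few seconds are the delay/clock-skew discussed below.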
EDIT: The docs do say:
Absolute time is measured in seconds from midnight, 1st January 2000.
So 946684800 is the exact time difference to use in the calculation. The few seconds of difference that you calculated can be attributed to network delay or other delays.
Solution 2
EDIT: The OP added details specifying that the time starts at midnight, so it is an absolute time different from J2000, which starts at noon. But since the title states "timestamp starting in January 1, 2000", I am leaving this answer up for future readers.
ANSWER:
The timestamp you have mentioned appears to be the J2000.0 mentioned here
Since the Unix and J2000 epoch times are constants, you could define a constant to store the difference.
If you have a mathematical inclination, the following links provide some information about the conversion:
- http://www.giss.nasa.gov/tools/mars24/help/algorithm.html (refer to step A-2)
- http://onegeek.org/software/smeg/current/src/time.c (C file); the #define section in the C file contains the following:
#define J2000 2451545.0 /* you-know-when */
#define U1970 -10957.5 /* unix epoch relative to J2000 */
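Those two C constants can be used directly in Python. This is a sketch of converting a Unix timestamp into days since J2000.0 (note it is not the OP's case, since J2000 starts at noon, not midnight, and leap seconds are ignored here):

```python
# Constants from the C file above.
J2000 = 2451545.0   # Julian date of the J2000.0 epoch ("you-know-when")
U1970 = -10957.5    # Unix epoch expressed in days relative to J2000

SECONDS_PER_DAY = 86400

def unix_to_j2000_days(unix_seconds):
    """Days elapsed since the J2000.0 epoch (leap seconds ignored)."""
    return unix_seconds / SECONDS_PER_DAY + U1970

# Noon on 2000-01-01 UTC is Unix time 946728000, i.e. 0 days after J2000.
print(unix_to_j2000_days(946728000))  # 0.0
```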
Solution 3
Well, there are 946684800 seconds between 2000-01-01T00:00:00Z and 1970-01-01T00:00:00Z. So, you can just define a constant for 946684800 and add or subtract it from your Unix timestamps.
The variation you are seeing in your numbers has to do with the delay in sending and receiving the data, and could also be due to clock synchronization, or lack thereof. Since these are whole seconds, and your numbers are 3 to 4 seconds off, then I would guess that the clocks between your computer and your device are also 3 to 4 seconds out of sync.
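Rather than hardcoding 946684800, you can derive it. This sketch uses timezone-aware datetimes, which sidestep the local-timezone pitfall discussed elsewhere on this page:

```python
from datetime import datetime, timezone

# Derive the offset between the Unix epoch and midnight 2000-01-01 UTC.
offset = datetime(2000, 1, 1, tzinfo=timezone.utc).timestamp()
print(int(offset))  # 946684800

unix_ts = 1456979510
ts_2000 = unix_ts - int(offset)  # seconds since 2000-01-01 00:00:00 UTC
print(ts_2000)  # 510294710
```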
Solution 4
Unix epoch: 1970-1-1, 00:00:00 UTC
J2000 epoch: 2000-1-1, 12:00:00 UTC
The time gap is 946728000 seconds (946684800 seconds to midnight of 2000-01-01, plus 43200 seconds to noon).
The pitfall is that Python will use the local timezone when converting from datetime to timestamp, as shown below:
import datetime
import time, calendar
dt = datetime.datetime(1970, 1, 1, 0, 0) # naive datetime; mktime() interprets it in the computer's local time zone
tt = dt.timetuple()
print(tt) # time.struct_time
print("Unix Timestamp: ",time.mktime(tt)) # struct_time in local time
print("Unix Timestamp: ",calendar.timegm(tt)) # struct_time in UTC
So if you want to convert a timestamp back to a datetime, use this code:
t = 0 # Unix timestamp
t2000 = t+946728000
#value = datetime.datetime.fromtimestamp(t2000) # from local time
dt= datetime.datetime.utcfromtimestamp(t2000) # from UTC time
print(dt.timetuple())
print(dt.strftime('%Y-%m-%d %H:%M:%S'))
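As an aside (not part of the original answer): utcfromtimestamp() is deprecated as of Python 3.12, so an equivalent, forward-compatible spelling uses a timezone-aware conversion:

```python
from datetime import datetime, timezone

t = 0                       # timestamp with a 2000-01-01 noon (J2000) epoch
t_unix = t + 946728000      # shift to the Unix epoch
dt = datetime.fromtimestamp(t_unix, tz=timezone.utc)
print(dt.strftime('%Y-%m-%d %H:%M:%S'))  # 2000-01-01 12:00:00
```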
There is a nice conversion tool online: https://www.unixtimestamp.com/index.php
Solution 5
The difference between the two times is indeed about 30 years:
>>> import datetime
>>> d1 = datetime.datetime.fromtimestamp(1456979510)
>>> d1.ctime()
'Wed Mar 2 20:31:50 2016'
>>> d2 = datetime.datetime.fromtimestamp(510294713)
>>> d2.ctime()
'Mon Mar 3 20:31:53 1986'
Creating a variable to hold the difference makes the conversion easy in either direction:
>>> conv_factor = (d1 - d2).total_seconds()
>>> conv_factor
946684797.0
>>> conv_time = d2 + datetime.timedelta(seconds=conv_factor)
>>> conv_time
datetime.datetime(2016, 3, 2, 20, 31, 50)
>>> conv_time.ctime()
'Wed Mar 2 20:31:50 2016'
Subtracting the conv_factor works to convert in the other direction.
Comments
-
Tom Sitter almost 2 years
I am trying to interact with an API that uses a timestamp that starts at a different time than UNIX epoch. It appears to start counting on 2000-01-01, but I'm not sure exactly how to do the conversion or what the name of this datetime format is.
When I send a message at 1456979510 I get a response back saying it was received at 510294713.
The difference between the two is 946684796 (sometimes 946684797) seconds, which is approximately 30 years. Can anyone let me know the proper way to convert between the two? Or whether I can generate them outright in Python?
Thanks
Edit
An additional detail I should have mentioned is that this is an API to a Zigbee device. I found the following datatype entry in their documentation:
1.3.2.7 Absolute time
This is an unsigned 32-bit integer representation for absolute time. Absolute time is measured in seconds from midnight, 1st January 2000.
I'm still not sure of the easiest way to convert between the two.
-
Matt Johnson-Pint about 8 yearsJ2000.0 is used in astronomy, and is based on Julian days. It uses an epoch of noon on 2000-01-01. The OP's data is based on midnight, so this is incorrect. Good try though.
-
Tom Sitter about 8 yearsThis is naively correct, but there's going to be difference (see my post about how the difference is actually 946684797) I'm not sure if this is due to leap seconds perhaps? I was hoping there would be a more accurate way to convert, particularly using Python.
-
JRodDynamite about 8 years@TomSitter - Well, the docs do say "Absolute time is measured in seconds from midnight, 1st January 2000". So, 946684800 is the exact time difference to use in the calculation. The few seconds could be attributed to network delay or other delays. -
raghav710 about 8 years@MattJohnson i typed the answer before the edit. Should it be edited or deleted now?
-
Tom Sitter about 8 yearsMakes sense, I'll do that.
-
Matt Johnson-Pint about 8 yearsUp to you. You can leave it if you like.
-
jfs about 8 years@TomSitter: it may be incorrect. The device may count elapsed seconds, while Unix time is usually POSIX time and therefore leap seconds are NOT counted. Here's a code example on how to convert GPS time (1980 epoch) to UTC (POSIX). In your case the formula is:
utc_time = datetime(2000, 1, 1) + timedelta(seconds=510294713 - (36 - 32))
which is earlier by one second than the sent time (1456979510): sent (host) time 4:31:50, received (device) time 4:31:49, if we assume POSIX time on the host and TAI with a 2000-01-01 UTC epoch on the device. -
jfs about 8 yearsanother possibility is that the device time uses TAI scale with 2000UTC epoch
-
jfs about 8 yearsthe ZIGBEE spec uses UTCTime for the type: "UTCTime is an unsigned 32 bit value representing the number of seconds since 0 hours, 0 minutes, 0 seconds, on the 1st of January, 2000 UTC". But it doesn't mention leap seconds at all and therefore it is still possible that the device measures elapsed seconds. -
Matt Johnson-Pint about 8 years@J.F.Sebastian - Yes, it would be more correct if the spec added "excluding leap seconds" at the end of that sentence, but that is generally implied. Since these sorts of devices usually get their time set via NTP, leap seconds are already accounted for, and thus the need for the developer to concern themselves with leap seconds is mitigated. In my experience, it is quite rare to see any device following TAI (UTC without leap seconds), unless it's directly involved with timekeeping. GPS time is an exception, but it is not strictly TAI either (being 19 seconds behind).
-
Matt Johnson-Pint about 8 yearsWell, we could speculate all day :) But Occam's razor... It's likely just UNIX Time (aka POSIX time) offset 30 years. Anything fancier than that requires specially designed software or hardware.
-
jfs about 8 years@MattJohnson if the device can sync its time with UTC then it is likely the timestamp doesn't count leap seconds and the naive formula should work (both host and the device use UTC scale but with different epochs). If the device does nothing (no syncing) then by default leap seconds are counted. You have to enable syncing to avoid counting leap seconds. In practice, the device's clock accuracy is poor and therefore such fine points as leap seconds are irrelevant (if the error of several seconds is acceptable) -- UTC and TAI are indistinguishable in this case.
-
jfs about 8 yearsspeaking of Occam's razor: do you understand that not counting leap seconds is more work (more complex) unless the clocks are in sync with an external source (in that case it doesn't matter what time scale is used)? So let's not speculate. 2- Unix time (the value returned by time.time()) is not the same as POSIX time e.g., set TZ=right/UTC and see what happens.
-
Matt Johnson-Pint about 8 yearsI think we're saying the same things, just in inverse contexts. The time on the device does include leap seconds that have been applied up to when it synced. But the UTCTime value is the number of seconds elapsed not counting them. We're on the same page. :) -
Matt Johnson-Pint about 8 yearsSee my other comment, but also, Wikipedia disagrees that Unix time and Posix time are not the same. en.wikipedia.org/wiki/Unix_time I couldn't find any other reference. Care to elaborate on that? Cheers!
-
jfs about 8 years1- "right/UTC" is not POSIX. Agree? 2- Linux is a "Unix" for the purpose of this discussion. Agree? time.time() returns "seconds since epoch" (Unix time). Agree? time.time() returns a non-POSIX timestamp in "right" timezones (fact). 3- Wikipedia is useful if you know nothing about a non-controversial subject, i.e., you can link to it to explain something, but you should not refer to it as an authoritative source. You know, it is like name vs variable in Python: if we are not discussing name issues specifically, there is nothing wrong in using the term "variable"; otherwise use the exact term: name.