How to efficiently determine changes between rows using SQL
Solution 1
You might try this - I'm not going to guarantee that it will perform better, but it's my usual way of correlating a row with a "previous" row:
SELECT
* --TODO, list columns
FROM
data d
left join
data d_prev
on
d_prev.time < d.time --TODO - Other key columns?
left join
data d_inter
on
d_inter.time < d.time and
d_prev.time < d_inter.time --TODO - Other key columns?
WHERE
d_inter.time is null AND
(d_prev.value is null OR d_prev.value <> d.value)
(I think this is right - could do with some sample data to validate it).
Basically, the idea is to join the table to itself, and for each row (in d), find candidate rows (in d_prev) for the "previous" row. Then do a further join, to try to find a row (in d_inter) that exists between the current row (in d) and the candidate row (in d_prev). If we cannot find such a row (d_inter.time is null), then that candidate was indeed the previous row.
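To see the triple self-join in action without a MySQL instance, here is a minimal sketch using Python's built-in sqlite3 (the column list, sample timestamps, and values are invented for illustration; the SQL is the same shape as the answer's query):

```python
import sqlite3

# Toy dataset, one reading per minute; the value changes at 16:09 and 16:11.
rows = [
    ("2011-05-23 16:05:00", 1.0),
    ("2011-05-23 16:06:00", 1.0),
    ("2011-05-23 16:07:00", 1.0),
    ("2011-05-23 16:09:00", 2.0),
    ("2011-05-23 16:10:00", 2.0),
    ("2011-05-23 16:11:00", 2.5),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (time TEXT NOT NULL, value REAL NOT NULL)")
conn.executemany("INSERT INTO data VALUES (?, ?)", rows)

# d_prev is a candidate "previous" row; d_inter hunts for any row strictly
# between d_prev and d. If none exists (d_inter.time IS NULL), the candidate
# really is the immediately preceding row.
changes = conn.execute("""
    SELECT d.time, d.value, d_prev.value AS previous_value
    FROM data d
    LEFT JOIN data d_prev ON d_prev.time < d.time
    LEFT JOIN data d_inter ON d_inter.time < d.time
                          AND d_prev.time < d_inter.time
    WHERE d_inter.time IS NULL
      AND (d_prev.value IS NULL OR d_prev.value <> d.value)
    ORDER BY d.time
""").fetchall()

for time, value, prev in changes:
    print(time, value, prev)
```

Only the first row (no predecessor) and the two change points survive the WHERE clause; the repeated readings are filtered out.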
Solution 2
I suppose it's not an option for you to switch DB engines. In case it is, window functions would allow you to write things like this:
SELECT d.*
FROM (
SELECT d.*, lag(d.value) OVER (ORDER BY d.time) as previous_value
FROM data d
) as d
WHERE d.value IS DISTINCT FROM d.previous_value;
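The window-function variant can be tried out with SQLite (which has supported lag() since 3.25, bundled with any recent Python). SQLite spells the null-safe comparison IS NOT rather than IS DISTINCT FROM; the sample rows are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (time TEXT NOT NULL, value REAL NOT NULL)")
conn.executemany("INSERT INTO data VALUES (?, ?)", [
    ("2011-05-23 16:05:00", 1.0),
    ("2011-05-23 16:06:00", 1.0),
    ("2011-05-23 16:09:00", 2.0),
    ("2011-05-23 16:11:00", 2.5),
])

# lag() pulls the previous row's value in time order; "IS NOT" compares
# null-safely, so the very first row (previous_value NULL) is kept too.
changes = conn.execute("""
    SELECT time, value, previous_value
    FROM (
        SELECT time, value,
               lag(value) OVER (ORDER BY time) AS previous_value
        FROM data
    )
    WHERE value IS NOT previous_value
    ORDER BY time
""").fetchall()
```

A single pass over the data, no self-join: this is why window functions are the first choice whenever the engine offers them.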
If not, you could try to rewrite the query like so:
select data.*
from data
left join (
select data.measure_id,
data.time,
max(prev_data.time) as prev_time
from data
left join data as prev_data
on prev_data.measure_id = data.measure_id
and prev_data.time < data.time
group by data.measure_id, data.time, data.value
) as prev_data_time
on prev_data_time.measure_id = data.measure_id
and prev_data_time.time = data.time
left join data as prev_data_value
on prev_data_value.measure_id = data.measure_id
and prev_data_value.time = prev_data_time.prev_time
where data.value <> prev_data_value.value or prev_data_value.value is null
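This rewrite can also be checked against SQLite. Note the sensor key (measure_id) is included in every join condition here, an assumption on my part since the question omits the sensor column; the sample rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (measure_id INTEGER, time TEXT, value REAL)")
conn.executemany("INSERT INTO data VALUES (?, ?, ?)", [
    (1, "2011-05-23 16:05:00", 1.0),
    (1, "2011-05-23 16:06:00", 1.0),
    (1, "2011-05-23 16:09:00", 2.0),
    (1, "2011-05-23 16:11:00", 2.5),
])

# The inner aggregate finds, per (measure_id, time), the latest earlier
# timestamp; the outer join then fetches the value at that timestamp.
changes = conn.execute("""
    SELECT data.*
    FROM data
    LEFT JOIN (
        SELECT data.measure_id, data.time,
               max(prev_data.time) AS prev_time
        FROM data
        LEFT JOIN data AS prev_data
          ON prev_data.measure_id = data.measure_id
         AND prev_data.time < data.time
        GROUP BY data.measure_id, data.time
    ) AS prev_data_time
      ON prev_data_time.measure_id = data.measure_id
     AND prev_data_time.time = data.time
    LEFT JOIN data AS prev_data_value
      ON prev_data_value.measure_id = data.measure_id
     AND prev_data_value.time = prev_data_time.prev_time
    WHERE data.value <> prev_data_value.value
       OR prev_data_value.value IS NULL
    ORDER BY data.time
""").fetchall()
```

The GROUP BY/max() aggregate replaces the "no row in between" anti-join of Solution 1, which is why it tends to be cheaper: the optimizer can satisfy the max() with an index scan instead of a second self-join.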
cg.

Updated on June 19, 2022

Comments
-
cg. almost 2 years
I have a very large MySQL table containing data read from a number of sensors. Essentially, there's a time stamp and a value column. I'll omit the sensor id, indexes and other details here:
CREATE TABLE `data` ( `time` datetime NOT NULL, `value` float NOT NULL )
The value column rarely changes, and I need to find the points in time when those changes occur. Suppose there's a value every minute, the following query returns exactly what I need:

SELECT d.*,
       (SELECT value FROM data WHERE time < d.time ORDER BY time DESC LIMIT 1) AS previous_value
FROM data d
HAVING d.value <> previous_value OR previous_value IS NULL;

+---------------------+-------+----------------+
| time                | value | previous_value |
+---------------------+-------+----------------+
| 2011-05-23 16:05:00 | 1     | NULL           |
| 2011-05-23 16:09:00 | 2     | 1              |
| 2011-05-23 16:11:00 | 2.5   | 2              |
+---------------------+-------+----------------+
The only problem is that this is very inefficient, mostly due to the dependent subquery. What would be the best way to optimize this using the tools that MySQL 5.1 has to offer?
One last constraint is that the values are not ordered before they are inserted into the data table and that they might be updated at a later point. This might affect any possible de-normalization strategies.
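For reference, the question's correlated-subquery baseline can be reproduced in SQLite as well (SQLite rejects HAVING without GROUP BY, so the subquery result is wrapped in a derived table instead; the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (time TEXT NOT NULL, value REAL NOT NULL)")
conn.executemany("INSERT INTO data VALUES (?, ?)", [
    ("2011-05-23 16:05:00", 1.0),
    ("2011-05-23 16:06:00", 1.0),
    ("2011-05-23 16:09:00", 2.0),
    ("2011-05-23 16:11:00", 2.5),
])

# The dependent subquery runs once per row of data d - the source of the
# inefficiency the question is asking about.
result = conn.execute("""
    SELECT * FROM (
        SELECT d.*,
               (SELECT value FROM data
                WHERE time < d.time
                ORDER BY time DESC LIMIT 1) AS previous_value
        FROM data d
    )
    WHERE value <> previous_value OR previous_value IS NULL
    ORDER BY time
""").fetchall()
```

It produces the same three change rows as the faster variants, just at O(n) subquery executions.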
-
Johan almost 13 years @Denis, note that group by already orders the elements listed in it, so the last order by .. is not needed.
-
Denis de Bernardy almost 13 yearsTrue, but that ordering is an implementation side-effect, rather than the SQL standard. You never know when MySQL will drop the side-effect (Oracle did). :-)
-
ypercubeᵀᴹ almost 13 years You can also experiment with an index on (value,time) or (sensor_id,value,time) and see the query plan using this index.
-
cg. almost 13 years @Denis, Thanks a lot for your time! Could you please explain the column measure_id in your example? Is that supposed to be the primary key for the data table or a foreign key?
-
Denis de Bernardy almost 13 years @cg: the primary key of the data table.
-
cg. almost 13 years @Denis: Ok, I just created the data table using your column names and inserted a few rows for testing purposes. The sequence of test values ordered by time is (3,2,1,1,1,2,2.5,2.5,2). When I execute your query, I'm getting all the rows and not just those that mark a value change. Looking at the query, I don't quite see why it should work. Maybe I'm missing some critical point...
-
Denis de Bernardy almost 13 years The new one should work as expected, and the join on the aggregate should make it marginally faster than the current one. :-|
-
cg. almost 13 years Great! This is actually the kind of "trick" that I was looking for. Your query is orders of magnitude faster than the original one. It's still not fast enough to be used directly, but it could be the basis for the data aggregation I need. Thank you very much for your answer.
-
cg. almost 13 years I'll vote it up now and accept it in a few days if no better solution comes up.
-
user1383092 about 8 years I think you may also technically need OR d.value is null in that last bracketed statement of the WHERE clause.
-
Damien_The_Unbeliever about 8 years @user1383092 - from the question - value float NOT NULL. We only end up generating NULLs in columns from the right-hand side of LEFT JOINs. But d is on the left-hand side of those joins. Therefore, its value can never be NULL.
-
youcantryreachingme about 7 years Should the line "left join prev_data_value" actually read "left join data prev_data_value"?