Postgres Hash Join speed
FYI: sanitised query:
SELECT pd.patient_id
, de.serial_number
FROM reads re
INNER JOIN devices de ON de.id = re.device_id
INNER JOIN patient_devices pd ON pd.device_id = de.id
AND re.read_datetime >= pd.issuance_datetime -- changed this from '>' to '>='
AND (re.read_datetime < pd.unissuance_datetime OR pd.unissuance_datetime IS NULL)
WHERE re.read_datetime BETWEEN '2012-01-01 10:30:01.000000' AND '2013-05-18 03:03:42'
GROUP BY de.serial_number, pd.patient_id
LIMIT 10
;
UPDATE: without the original typos:
EXPLAIN ANALYZE
SELECT pd.patient_id
, de.serial_number
FROM reads re
INNER JOIN devices de ON de.id = re.device_id
INNER JOIN patient_devices pd ON pd.device_id = de.id
AND re.read_datetime >= pd.issuance_datetime
AND (re.read_datetime < pd.unassignment_datetime OR pd.unassignment_datetime IS NULL)
WHERE re.read_datetime BETWEEN '2012-01-01 10:30:01.000000' AND '2013-05-18 03:03:42'
GROUP BY de.serial_number, pd.patient_id
LIMIT 10
;
UPDATE: this is about 6 times as fast here (on synthetic data, and with a slightly altered data model)
-- Modified data model + synthetic data:
CREATE TABLE devices
( id serial NOT NULL
, serial_number character varying(20) NOT NULL
-- , created_at timestamp without time zone NOT NULL
-- , updated_at timestamp without time zone NOT NULL
, CONSTRAINT devices_pkey PRIMARY KEY (id )
, UNIQUE (serial_number)
) ;
CREATE TABLE reads
-- ( id serial NOT NULL PRIMARY KEY -- You don't need this surrogate key
( device_id integer NOT NULL REFERENCES devices (id)
, value bigint NOT NULL
, read_datetime timestamp without time zone NOT NULL
-- , created_at timestamp without time zone NOT NULL
-- , updated_at timestamp without time zone NOT NULL
, PRIMARY KEY ( device_id, read_datetime)
) ;
CREATE TABLE patient_devices
-- ( id serial NOT NULL PRIMARY KEY -- You don't need this surrogate key
( patient_id integer NOT NULL -- REFERENCES patients (id)
, device_id integer NOT NULL REFERENCES devices(id)
, issuance_datetime timestamp without time zone NOT NULL
, unassignment_datetime timestamp without time zone
-- , created_at timestamp without time zone NOT NULL
-- , updated_at timestamp without time zone NOT NULL
, PRIMARY KEY (device_id, issuance_datetime)
, UNIQUE (device_id, unassignment_datetime)
) ;
-- CREATE INDEX index_patient_devices_on_unassignment_datetime ON patient_devices (device_id, unassignment_datetime );
-- may need some additional indices later
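-- For example (a speculative sketch, not part of the original answer; index name is illustrative):
-- an index leading with read_datetime would support the global date-range filter,
-- since the reads PK (device_id, read_datetime) only helps per-device lookups.
-- CREATE INDEX reads_read_datetime_idx ON reads (read_datetime);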
-- devices -- ~500 rows
INSERT INTO devices(serial_number) SELECT 'No_' || gs::text FROM generate_series(1,500) gs;
-- reads -- ~100K rows
INSERT INTO reads(device_id, read_datetime, value)
SELECT de.id, gs
, (random()*1000000)::bigint
FROM devices de
JOIN generate_series('2012-01-01', '2013-05-01' , '1 hour' ::interval) gs
ON random() < 0.02;
-- patient_devices -- ~25,000 rows
INSERT INTO patient_devices(device_id, issuance_datetime, patient_id)
SELECT DISTINCT ON (re.device_id, read_datetime)
re.device_id, read_datetime, pa
FROM generate_series(1,100) pa
JOIN reads re
ON random() < 0.01;
-- close the open intervals
UPDATE patient_devices dst
SET unassignment_datetime = src.issuance_datetime
FROM patient_devices src
WHERE src.device_id = dst.device_id
AND src.issuance_datetime > dst.issuance_datetime
AND NOT EXISTS ( SELECT *
FROM patient_devices nx
WHERE nx.device_id = src.device_id
AND nx.issuance_datetime > dst.issuance_datetime
AND nx.issuance_datetime < src.issuance_datetime
)
;
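-- Sanity check (an illustrative sketch, not in the original): after closing the
-- intervals, no two assignments for the same device should overlap.
-- An empty result set means every interval was closed correctly.
SELECT p1.device_id, p1.issuance_datetime AS first_issued, p2.issuance_datetime AS overlapping_issued
FROM patient_devices p1
JOIN patient_devices p2 ON p2.device_id = p1.device_id
AND p2.issuance_datetime > p1.issuance_datetime
AND p2.issuance_datetime < COALESCE(p1.unassignment_datetime, 'infinity'::timestamp)
;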
VACUUM ANALYZE patient_devices;
VACUUM ANALYZE devices;
VACUUM ANALYZE reads;
-- EXPLAIN ANALYZE
SELECT pd.patient_id
, de.serial_number
--, COUNT (*) AS zcount
FROM reads re
INNER JOIN devices de ON de.id = re.device_id
INNER JOIN patient_devices pd ON pd.device_id = de.id
AND re.read_datetime >= pd.issuance_datetime
AND re.read_datetime < COALESCE(pd.unassignment_datetime , 'infinity'::timestamp)
WHERE re.read_datetime BETWEEN '2012-01-01 10:30:01' AND '2013-05-18 03:03:42'
GROUP BY de.serial_number, pd.patient_id
LIMIT 10
;
Jon
Updated on June 16, 2022
Comments
-
Jon almost 2 years
I have 3 tables which I wish to join using inner joins in Postgres 9.1: reads, devices, and patient_devices. Below is an abbreviated schema for each table.
reads -- ~250,000 rows
CREATE TABLE reads
( id serial NOT NULL
, device_id integer NOT NULL
, value bigint NOT NULL
, read_datetime timestamp without time zone NOT NULL
, created_at timestamp without time zone NOT NULL
, updated_at timestamp without time zone NOT NULL
, CONSTRAINT reads_pkey PRIMARY KEY (id)
) WITH ( OIDS=FALSE );
ALTER TABLE reads OWNER TO postgres;
CREATE INDEX index_reads_on_device_id ON reads USING btree (device_id);
CREATE INDEX index_reads_on_read_datetime ON reads USING btree (read_datetime);
devices -- ~500 rows
CREATE TABLE devices
( id serial NOT NULL
, serial_number character varying(20) NOT NULL
, created_at timestamp without time zone NOT NULL
, updated_at timestamp without time zone NOT NULL
, CONSTRAINT devices_pkey PRIMARY KEY (id)
) WITH ( OIDS=FALSE );
ALTER TABLE devices OWNER TO postgres;
CREATE UNIQUE INDEX index_devices_on_serial_number ON devices USING btree (serial_number COLLATE pg_catalog."default");
patient_devices -- ~25,000 rows
CREATE TABLE patient_devices
( id serial NOT NULL
, patient_id integer NOT NULL
, device_id integer NOT NULL
, issuance_datetime timestamp without time zone NOT NULL
, unassignment_datetime timestamp without time zone
, created_at timestamp without time zone NOT NULL
, updated_at timestamp without time zone NOT NULL
, CONSTRAINT patient_devices_pkey PRIMARY KEY (id)
) WITH ( OIDS=FALSE );
ALTER TABLE patient_devices OWNER TO postgres;
CREATE INDEX index_patient_devices_on_device_id ON patient_devices USING btree (device_id);
CREATE INDEX index_patient_devices_on_issuance_datetime ON patient_devices USING btree (issuance_datetime);
CREATE INDEX index_patient_devices_on_patient_id ON patient_devices USING btree (patient_id);
CREATE INDEX index_patient_devices_on_unassignment_datetime ON patient_devices USING btree (unassignment_datetime);
patients -- ~1,000 rows
CREATE TABLE patients
( id serial NOT NULL
, first_name character varying(50) NOT NULL
, middle_name character varying(50)
, last_name character varying(50) NOT NULL
, created_at timestamp without time zone NOT NULL
, updated_at timestamp without time zone NOT NULL
, CONSTRAINT participants_pkey PRIMARY KEY (id)
) WITH ( OIDS=FALSE );
ALTER TABLE patients OWNER TO postgres;
Here is my abbreviated query.
SELECT device_patients.patient_id, serial_number
FROM reads
INNER JOIN devices ON devices.id = reads.device_id
INNER JOIN patient_devices ON device_patients.device_id = devices.id
WHERE (reads.read_datetime BETWEEN '2012-01-01 10:30:01.000000' AND '2013-05-18 03:03:42')
AND (read_datetime > issuance_datetime)
AND ((unassignment_datetime IS NOT NULL AND read_datetime < unassignment_datetime) OR (unassignment_datetime IS NULL))
GROUP BY serial_number, patient_devices.patient_id
LIMIT 10
Ultimately this will be a small part of a larger query (the larger query has no LIMIT; I only added it here to prove to myself that the long runtime was not due to returning a bunch of rows). I've done a bunch of experimenting and determined that this is the slow part of the larger query. When I run EXPLAIN ANALYZE on this query I get the following output (also viewable here)
Limit  (cost=156442.31..156442.41 rows=10 width=13) (actual time=2815.435..2815.441 rows=10 loops=1)
  ->  HashAggregate  (cost=156442.31..159114.89 rows=267258 width=13) (actual time=2815.432..2815.437 rows=10 loops=1)
        ->  Hash Join  (cost=1157.78..151455.79 rows=997304 width=13) (actual time=30.930..2739.164 rows=250150 loops=1)
              Hash Cond: (devices.device_id = devices.id)
              Join Filter: ((reads.read_datetime > patient_devices.issuance_datetime) AND (((patient_devices.unassignment_datetime IS NOT NULL) AND (reads.read_datetime < patient_devices.unassignment_datetime)) OR (patient_devices.unassignment_datetime IS NULL)))
              ->  Seq Scan on reads  (cost=0.00..7236.94 rows=255396 width=12) (actual time=0.035..64.433 rows=255450 loops=1)
                    Filter: ((read_datetime >= '2012-01-01 10:30:01'::timestamp without time zone) AND (read_datetime <= '2013-05-18 03:03:42'::timestamp without time zone))
              ->  Hash  (cost=900.78..900.78 rows=20560 width=37) (actual time=30.830..30.830 rows=25015 loops=1)
                    Buckets: 4096  Batches: 1  Memory Usage: 1755kB
                    ->  Hash Join  (cost=19.90..900.78 rows=20560 width=37) (actual time=0.776..20.551 rows=25015 loops=1)
                          Hash Cond: (patient_devices.device_id = devices.id)
                          ->  Seq Scan on patient_devices  (cost=0.00..581.93 rows=24893 width=24) (actual time=0.014..7.867 rows=25545 loops=1)
                                Filter: ((unassignment_datetime IS NOT NULL) OR (unassignment_datetime IS NULL))
                          ->  Hash  (cost=13.61..13.61 rows=503 width=13) (actual time=0.737..0.737 rows=503 loops=1)
                                Buckets: 1024  Batches: 1  Memory Usage: 24kB
                                ->  Seq Scan on devices  (cost=0.00..13.61 rows=503 width=13) (actual time=0.016..0.466 rows=503 loops=1)
                                      Filter: (entity_id = 2)
Total runtime: 2820.392 ms
My question is: how do I speed this up? Right now I'm running this on my Windows machine for testing, but ultimately it will be deployed on Ubuntu; will that make a difference? Any insight into why this takes over 2 seconds would be greatly appreciated.
Thanks
It has been suggested that the LIMIT might be altering the query plan. I also ran the same query without the LIMIT; the slow part still appears to be the Hash Join.
Also, here are the relevant tuning parameters. Again, I'm only testing this on Windows for now, and I don't know what effect these would have on a Linux machine.
shared_buffers = 2GB
effective_cache_size = 4GB
work_mem = 256MB
random_page_cost = 2.0
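(These can also be inspected and overridden per session with standard Postgres commands; a minimal sketch, with values mirroring the settings above:)
-- Inspect the settings the planner is actually using:
SHOW work_mem;
SHOW random_page_cost;
-- Try a different value for the current session only:
SET work_mem = '256MB';
SET random_page_cost = 2.0;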
Here are the statistics for the reads table
Statistic                    Value
Sequential Scans             130
Sequential Tuples Read       28865850
Index Scans                  283630
Index Tuples Fetched         141421907
Tuples Inserted              255450
Tuples Updated               0
Tuples Deleted               0
Tuples HOT Updated           0
Live Tuples                  255450
Dead Tuples                  0
Heap Blocks Read             20441
Heap Blocks Hit              3493033
Index Blocks Read            8824
Index Blocks Hit             4840210
Toast Blocks Read
Toast Blocks Hit
Toast Index Blocks Read
Toast Index Blocks Hit
Last Vacuum                  2013-05-20 09:23:03.782-07
Last Autovacuum
Last Analyze                 2013-05-20 09:23:03.91-07
Last Autoanalyze             2013-05-17 19:01:44.075-07
Vacuum counter               1
Autovacuum counter           0
Analyze counter              1
Autoanalyze counter          6
Table Size                   27 MB
Toast Table Size             none
Indexes Size                 34 MB
Here are the statistics for the devices table
Statistic                    Value
Sequential Scans             119
Sequential Tuples Read       63336
Index Scans                  1053935
Index Tuples Fetched         1053693
Tuples Inserted              609
Tuples Updated               0
Tuples Deleted               0
Tuples HOT Updated           0
Live Tuples                  609
Dead Tuples                  0
Heap Blocks Read             32
Heap Blocks Hit              1054553
Index Blocks Read            32
Index Blocks Hit             2114305
Toast Blocks Read
Toast Blocks Hit
Toast Index Blocks Read
Toast Index Blocks Hit
Last Vacuum
Last Autovacuum
Last Analyze
Last Autoanalyze             2013-05-17 19:02:49.692-07
Vacuum counter               0
Autovacuum counter           0
Analyze counter              0
Autoanalyze counter          2
Table Size                   48 kB
Toast Table Size             none
Indexes Size                 128 kB
Here are the statistics for the patient_devices table
Statistic                    Value
Sequential Scans             137
Sequential Tuples Read       3065400
Index Scans                  853990
Index Tuples Fetched         46143763
Tuples Inserted              25545
Tuples Updated               24936
Tuples Deleted               0
Tuples HOT Updated           0
Live Tuples                  25547
Dead Tuples                  929
Heap Blocks Read             1959
Heap Blocks Hit              6099617
Index Blocks Read            1077
Index Blocks Hit             2462681
Toast Blocks Read
Toast Blocks Hit
Toast Index Blocks Read
Toast Index Blocks Hit
Last Vacuum
Last Autovacuum              2013-05-17 19:01:44.576-07
Last Analyze
Last Autoanalyze             2013-05-17 19:01:44.697-07
Vacuum counter               0
Autovacuum counter           6
Analyze counter              0
Autoanalyze counter          6
Table Size                   2624 kB
Toast Table Size             none
Indexes Size                 5312 kB
Below is the full query that I'm trying to speed up. The smaller query is indeed faster, but I was unable to make the full query faster. As suggested, I added 4 new unique indices: UNIQUE(device_id, issuance_datetime), UNIQUE(device_id, unassignment_datetime), UNIQUE(patient_id, issuance_datetime), UNIQUE(patient_id, unassignment_datetime); a sketch of their declarations follows the query.
SELECT first_name
     , last_name
     , MAX(max_read) AS read_datetime
     , SUM(value) AS value
     , serial_number
FROM (
    SELECT pa.first_name
         , pa.last_name
         , value
         , first_value(de.serial_number) OVER (PARTITION BY pa.id ORDER BY re.read_datetime DESC) AS serial_number -- I'm not sure if this is a good way to do this, but I don't know of another way
         , re.read_datetime
         , MAX(re.read_datetime) OVER (PARTITION BY pd.id) AS max_read
    FROM reads re
    INNER JOIN devices de ON de.id = re.device_id
    INNER JOIN patient_devices pd ON pd.device_id = de.id
          AND re.read_datetime >= pd.issuance_datetime
          AND re.read_datetime < COALESCE(pd.unassignment_datetime , 'infinity'::timestamp)
    INNER JOIN patients pa ON pa.id = pd.patient_id
    WHERE re.read_datetime BETWEEN '2012-01-01 10:30:01' AND '2013-05-18 03:03:42'
) AS foo
WHERE read_datetime = max_read
GROUP BY first_name, last_name, serial_number
ORDER BY value DESC
LIMIT 10
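(For reference, a sketch of how those four constraints could be declared; the constraint names are illustrative, not from the original:)
-- Illustrative declarations of the four unique constraints mentioned above:
ALTER TABLE patient_devices ADD CONSTRAINT pd_dev_iss_key UNIQUE (device_id, issuance_datetime);
ALTER TABLE patient_devices ADD CONSTRAINT pd_dev_unass_key UNIQUE (device_id, unassignment_datetime);
ALTER TABLE patient_devices ADD CONSTRAINT pd_pat_iss_key UNIQUE (patient_id, issuance_datetime);
ALTER TABLE patient_devices ADD CONSTRAINT pd_pat_unass_key UNIQUE (patient_id, unassignment_datetime);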
Sorry for not posting this earlier, but I thought this query would be too complicated, and I was trying to simplify the problem; apparently I still can't figure it out. It seems like it would be a LOT quicker if I could limit the results returned by the nested select using the max_read variable, but according to numerous sources, that isn't allowed in Postgres.
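(One standard workaround for that restriction is DISTINCT ON; the sketch below, against the synthetic schema from the answer and untested on the real data, keeps the most recent read per patient/device assignment:)
-- Sketch only: DISTINCT ON picks the latest read per (patient_id, device_id),
-- avoiding the window-function-in-WHERE restriction.
SELECT DISTINCT ON (pd.patient_id, pd.device_id)
       pd.patient_id
     , de.serial_number
     , re.read_datetime
     , re.value
FROM reads re
INNER JOIN devices de ON de.id = re.device_id
INNER JOIN patient_devices pd ON pd.device_id = de.id
      AND re.read_datetime >= pd.issuance_datetime
      AND re.read_datetime < COALESCE(pd.unassignment_datetime, 'infinity'::timestamp)
WHERE re.read_datetime BETWEEN '2012-01-01 10:30:01' AND '2013-05-18 03:03:42'
ORDER BY pd.patient_id, pd.device_id, re.read_datetime DESC
;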
-
Jon almost 11 years
Those also popped out at me, and I have btree indices for all of them, so I'm not sure what else I can do. However, it would seem that all the time is spent in the Hash Join.
-
Jon almost 11 years
How will this help me?
-
wildplasser almost 11 years
It might not help you, but it might help others who try to help you (and who will probably start with exactly the same rewrite).
-
Jon almost 11 years
OK, well hopefully it does ;)
-
Jon almost 11 years
Also, I'm not sure which index function you are referring to. After reading the documentation, I thought a b-tree was the type of index I wanted, but if that isn't true, I'd love to know which type I do want.
-
Jon almost 11 years
I've updated my question to include my full query. Your suggestions did indeed speed up the original query I posted. I've tried to incorporate your suggestions into the full query but am not seeing the speedups.
-
wildplasser almost 11 years
Most of my answer is not about the query per se but about the data model: I removed useless (IMHO) surrogate keys, and added sane (IMHO) foreign key constraints. The underlying problem is still a three-dimensional triangle model: {device, patient, datetime} form a 2/3-dimensional PK on your junction table patient_devices. It might indicate a BCNF violation, but that would depend on the actual contents of the data.
-
Jon almost 11 years
I added an additional index that I see I had forgotten, and now the full query is down to about 1.2 seconds. Honestly, I'm not a DBA, and I'm learning a lot about DBs now. I think I'll have to read more about BCNF (as I have never heard of it). I was under the impression that using an intermediary table for a many-to-many relationship was common practice. If this is not the industry standard, what do you recommend?