[PDB-2424] DELETE FROM edges is extremely slow Created: 2016/02/11  Updated: 2016/04/20  Resolved: 2016/03/01

Status: Closed
Project: PuppetDB
Component/s: None
Affects Version/s: None
Fix Version/s: PDB 3.2.5, PDB 4.0.0

Type: Bug Priority: Normal
Reporter: Daniele Sluijters Assignee: Ryan Senior
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Template:
Story Points: 1
Sprint: PuppetDB 2016-03-09, PuppetDB 2016-02-24

 Description   

We were seeing a lot of DELETE FROM edges queries taking 4-6s or more. We added some indices on edges, which seem to have solved it:

    "edges_certname_idx" btree (certname)
    "edges_source_hex_idx" btree (encode(source, 'hex'::text))
    "edges_target_hex_idx" btree (encode(target, 'hex'::text))
    "edges_type_idx" btree (type)



 Comments   
Comment by Russell Mull [ 2016/02/22 ]

edges appears to be pretty seriously under-indexed at the moment; this fix does sound plausible.

Comment by Ryan Senior [ 2016/02/22 ]

Daniele Sluijters I looked at this and I don't see the issue. We have a unique index on (certname, source, target, type). Our current delete edges code calls a delete using (certname, source, target), which still hits the unique index. There are only a few types of edges that we support, so it doesn't seem like that would ever be slow. One thing I do see is that we issue separate deletes for each edge, which is really slow. Instead we should issue one delete statement that includes all the edges we need to delete.

That doesn't seem to match up with what you're seeing. Do you have any more info on the issue?
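
The batching Ryan suggests can be sketched outside PuppetDB. This is a minimal illustration using sqlite3 in place of PostgreSQL, with a trimmed, hypothetical edges schema (column and table names mirror the ticket but the data is made up):

```python
import sqlite3

# Minimal stand-in for the edges table (sqlite3 instead of PostgreSQL).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edges (certname TEXT, source BLOB, target BLOB, type TEXT, "
             "UNIQUE (certname, source, target, type))")
conn.executemany("INSERT INTO edges VALUES (?, ?, ?, ?)",
                 [("node1", b"\x01", b"\x02", "contains"),
                  ("node1", b"\x03", b"\x04", "requires"),
                  ("node1", b"\x05", b"\x06", "contains")])

to_delete = [("node1", b"\x01", b"\x02"), ("node1", b"\x05", b"\x06")]

# Slow shape: one DELETE statement per edge.
# for row in to_delete:
#     conn.execute("DELETE FROM edges WHERE certname=? AND source=? AND target=?", row)

# Faster shape: a single DELETE statement covering every edge to remove.
clause = " OR ".join(["(certname=? AND source=? AND target=?)"] * len(to_delete))
cur = conn.execute("DELETE FROM edges WHERE " + clause,
                   [v for row in to_delete for v in row])
print(cur.rowcount)  # number of edges deleted by the one statement
print(conn.execute("SELECT count(*) FROM edges").fetchone()[0])  # edges left
```

The single statement trades N round trips and N statement executions for one, which is the win Ryan describes at scale.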

Comment by Daniele Sluijters [ 2016/02/24 ]

Ryan Senior ~~From what we could see, the DELETE included encode(source) in the query. Since the index created by PuppetDB only covers the original source value, not the value produced by encode, the index you mention doesn't get used. We also noticed that the encode and decode operations were actually pretty expensive; creating the indices cooled that down, as we had a lot of queries using those functions.

Unfortunately I can't see the queries anymore since the indices took care of removing them from sight, and I don't feel comfortable dropping them to resurface the issue.~~

Comment by Ryan Senior [ 2016/02/24 ]

Maybe it's the encode thing; we switched to that recently. I'll investigate that side of it.

Comment by Ryan Senior [ 2016/02/24 ]

I have figured out the issue and I have a fix for it. We're converting every value in the column so that we can compare it with the value we're trying to delete. This means we'll decode every source and target hash for the certname before we compare it to the value we have in hand (the thing we're trying to delete). I have a PR up that switches it around: we convert the source/target hash we're trying to delete to bytea, then run the query. This means we convert the one value we have in hand, not each of the values in the column.

This is not only WAY faster, it also hits the unique index that is already in place on the edges table. Daniele Sluijters, with this patch you won't need your indexes anymore; they won't hurt anything, but you'll find they won't be used.
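
The difference Ryan describes can be demonstrated in miniature. Below, sqlite3 stands in for PostgreSQL and hex() stands in for encode(..., 'hex'); the trimmed schema and index name are made up for illustration. Wrapping the column in a function forces the planner to evaluate it for every row (a scan), while converting the in-hand value to the column's type allows a plain comparison that uses the existing index (a search):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edges (source BLOB)")  # hypothetical, trimmed schema
conn.execute("CREATE INDEX edges_source_idx ON edges (source)")
conn.execute("INSERT INTO edges VALUES (x'deadbeef')")

def plan(sql, params):
    """Return the EXPLAIN QUERY PLAN summary for a statement."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql, params).fetchall()
    return " ".join(row[-1] for row in rows)

# Function applied to the column: hex() must run on every row, so no index is used.
slow = plan("SELECT * FROM edges WHERE hex(source) = ?", ("DEADBEEF",))
# Value converted before the query: plain comparison against the indexed column.
fast = plan("SELECT * FROM edges WHERE source = ?", (bytes.fromhex("deadbeef"),))

print(slow)  # a SCAN of the table
print(fast)  # a SEARCH using edges_source_idx
```

The same reasoning explains why Daniele's expression indexes on encode(source, 'hex') papered over the problem: they matched the function-on-column query shape, whereas the fix restores the plain shape that the existing unique index already serves.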

Comment by Daniele Sluijters [ 2016/02/25 ]

Thanks for figuring this out Ryan Senior! Sorry I couldn't get you a better description of what we were seeing.

Generated at Mon Oct 21 08:18:18 PDT 2019 using JIRA 7.7.1#77002-sha1:e75ca93d5574d9409c0630b81c894d9065296414.