Details
- Type: Improvement
- Status: Closed
- Priority: Normal
- Resolution: Fixed
- Team: Data Platform
- Release Notes: Not Needed; see PDB-3546 instead.
Description
The problem
Purging a lot of nodes at once can cause a performance slowdown in PuppetDB because deletes on the certnames table cascade to a lot of other tables.
See PDB-2415 for details on that.
The suggestion
I would like to be able to pass the admin/cmd API a purge_nodes payload that also specifies the maximum number of nodes to delete.
https://docs.puppet.com/puppetdb/4.2/api/admin/v1/cmd.html
That way I could do something really slow, like purging 1 node every 5 minutes or 10 nodes every 10 minutes.
And if I decommission 100 nodes, I don't have to worry about the next GC cycle bogging down PuppetDB.
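To make this concrete, here is a rough sketch of the kind of request I'm imagining (Python with the requests library; the "clean" command name, the max_nodes field, and the example host are placeholders for whatever shape the real API ends up with, not the current endpoint):

    import requests

    PDB = "http://puppetdb.example.com:8080"

    resp = requests.post(
        f"{PDB}/pdb/admin/v1/cmd",
        json={
            "command": "clean",
            "version": 1,
            # Hypothetical: cap this purge pass at 10 nodes.
            "payload": [{"purge_nodes": {"max_nodes": 10}}],
        },
        timeout=30,
    )
    resp.raise_for_status()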
A possible future improvement would be to specify both the maximum number of nodes to delete and the size of the batches to run. Say I want to delete 100 nodes but do it 5 nodes at a time: PuppetDB would then delete 5 nodes 20 times in a row. It would also be nice if the default behavior of node-purge-ttl were to delete nodes in batches, so that even if you turn it on with thousands of nodes eligible for deletion, other traffic isn't completely blocked out while the purge runs.
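Until PuppetDB batches this internally, the same effect could be approximated from the outside with a loop that issues small purge requests and sleeps in between. Again just a sketch, reusing the hypothetical max_nodes field and placeholder host from above:

    import time

    import requests

    PDB = "http://puppetdb.example.com:8080"
    TOTAL = 100     # nodes I want purged
    BATCH = 5       # nodes per purge pass
    PAUSE = 5 * 60  # seconds to wait between passes

    for _ in range(TOTAL // BATCH):
        resp = requests.post(
            f"{PDB}/pdb/admin/v1/cmd",
            json={
                "command": "clean",
                "version": 1,
                # Hypothetical: purge at most BATCH nodes in this pass.
                "payload": [{"purge_nodes": {"max_nodes": BATCH}}],
            },
            timeout=30,
        )
        resp.raise_for_status()
        time.sleep(PAUSE)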