[PDB-156] Handle large catalog sizes gracefully Created: 2013/12/04  Updated: 2016/12/16  Resolved: 2015/12/11

Status: Closed
Project: PuppetDB
Component/s: None
Affects Version/s: None
Fix Version/s: PDB 3.2.3

Type: New Feature Priority: Normal
Reporter: redmine.exporter Assignee: Unassigned
Resolution: Fixed Votes: 0
Labels: redmine
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
blocks PDB-2230 Include header information in the com... Closed
blocks PDB-2231 Memory improvement spike Closed
supports SERVER-1695 Add ability to reject API submissions... Resolved
Epic Link: Memory Use - Phase 1
Story Points: 3
Sprint: PuppetDB 2015-12-02, PuppetDB 2015-12-16


If PuppetDB receives a large catalog and the JVM heap is not sized to accommodate it, the JVM will crash with an OutOfMemoryError. This is not graceful, and we need a better way of handling it.

We should handle this either through a configurable maximum catalog size enforced at the earlier HTTP stage (initially), or through something more heuristic (long term). Either way, we should avoid throwing an OOM if we can.
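A minimal sketch of the initial approach described above: check the request's declared size against a configured limit before buffering the body, and reject oversized submissions up front rather than letting the JVM run out of heap. The class and method names here are illustrative, not PuppetDB's actual API, and a real implementation would read the limit from PuppetDB's configuration.

```java
// Hypothetical fail-fast size guard at the HTTP layer.
// Checking Content-Length before reading the body means an oversized
// catalog is rejected cheaply instead of triggering an OOM mid-parse.
public class CatalogSizeGuard {
    // Illustrative configurable limit in bytes; PuppetDB would source
    // this from its config file rather than a constructor argument.
    private final long maxCatalogBytes;

    public CatalogSizeGuard(long maxCatalogBytes) {
        this.maxCatalogBytes = maxCatalogBytes;
    }

    /**
     * Returns true if a submission with the given Content-Length should
     * be accepted. A negative length (header absent) is rejected here
     * for simplicity; a real server might fall back to a counting
     * stream that aborts once the limit is exceeded.
     */
    public boolean accept(long contentLength) {
        return contentLength >= 0 && contentLength <= maxCatalogBytes;
    }
}
```

An HTTP handler using this guard would respond with 413 (Payload Too Large) when `accept` returns false, which is the fail-fast behavior the comment below settles on.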

Comment by Ryan Senior [ 2015/11/18 ]

Eventually we want to support arbitrarily large catalogs, but that will be a significant change. Changing this ticket to fail fast on catalogs that are too large.

Generated at Sun Jul 12 22:31:23 PDT 2020 using Jira 8.5.2#805002-sha1:a66f9354b9e12ac788984e5d84669c903a370049.