[PDB-156] Handle large catalog sizes gracefully Created: 2013/12/04  Updated: 2016/12/16  Resolved: 2015/12/11

Status: Closed
Project: PuppetDB
Component/s: None
Affects Version/s: None
Fix Version/s: PDB 3.2.3

Type: New Feature Priority: Normal
Reporter: redmine.exporter Assignee: Unassigned
Resolution: Fixed Votes: 0
Labels: redmine
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Blocks
blocks PDB-2230 Include header information in the com... Closed
blocks PDB-2231 Memory improvement spike Closed
Relates
Support
supports SERVER-1695 Add ability to reject API submissions... Resolved
Template:
Epic Link: Memory Use - Phase 1
Story Points: 3
Sprint: PuppetDB 2015-12-02, PuppetDB 2015-12-16

 Description   

If PuppetDB receives a large catalog and the JVM heap is not sized to accommodate it, the process crashes with an OutOfMemoryError. This is not graceful, and we need a better way of handling it.

We should handle this either through a configurable maximum catalog size enforced at the earlier HTTP stage (initially), or through something more heuristic (long term). Either way, we should avoid throwing an OutOfMemoryError if we can; a rough sketch of the HTTP-stage check is included below.
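As a minimal sketch of the fail-fast idea at the HTTP stage, a servlet filter could reject command submissions whose declared size exceeds a configured limit before any of the body is buffered. The filter name, constructor wiring, and the limit itself are illustrative assumptions, not PuppetDB's actual API:

{code:java}
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

/**
 * Illustrative filter: rejects requests whose declared Content-Length
 * exceeds a configurable limit, returning 413 instead of letting the
 * JVM try to buffer the payload and risk an OutOfMemoryError.
 */
public class MaxBodySizeFilter implements Filter {
    private final long maxBodyBytes; // hypothetical limit, e.g. read from config

    public MaxBodySizeFilter(long maxBodyBytes) {
        this.maxBodyBytes = maxBodyBytes;
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpReq = (HttpServletRequest) req;
        long declared = httpReq.getContentLengthLong();
        if (declared > maxBodyBytes) {
            // Fail fast before reading the body into memory.
            ((HttpServletResponse) resp).sendError(413,
                    "Catalog exceeds configured maximum size");
            return;
        }
        chain.doFilter(req, resp);
    }

    @Override public void init(FilterConfig cfg) {}
    @Override public void destroy() {}
}
{code}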



 Comments   
Comment by Ryan Senior [ 2015/11/18 ]

Eventually we want to support arbitrarily large catalogs, but that will be a significant change. Changing this ticket to fail fast on catalogs that are too large.
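For requests that arrive without a Content-Length header (e.g. chunked transfer encoding), the same fail-fast behaviour could be approximated by counting bytes as the body is streamed and aborting once the limit is crossed. A rough, hypothetical sketch (not PuppetDB code):

{code:java}
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

/** Wraps a request body stream and aborts once more than maxBytes have been read. */
public class BoundedInputStream extends FilterInputStream {
    private final long maxBytes;
    private long readSoFar = 0;

    public BoundedInputStream(InputStream in, long maxBytes) {
        super(in);
        this.maxBytes = maxBytes;
    }

    @Override
    public int read() throws IOException {
        int b = super.read();
        if (b != -1 && ++readSoFar > maxBytes) {
            throw new IOException("Catalog exceeded maximum allowed size of " + maxBytes + " bytes");
        }
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        if (n > 0 && (readSoFar += n) > maxBytes) {
            throw new IOException("Catalog exceeded maximum allowed size of " + maxBytes + " bytes");
        }
        return n;
    }
}
{code}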
