Puppet Server / SERVER-2978

Puppet Server keeps several redundant copies of request data



    • Type: Bug
    • Status: Accepted
    • Priority: Normal
    • Resolution: Unresolved
    • Affects Version: SERVER 7.0.3
    • Fix Version: None
    • Component: Puppet Server
    • Team: Froyo
    • Customer Feedback
    • Major
    • Zendesk Ticket IDs: 44196, 49876
    • 2
    • 1,500
    • Needs Assessment


      When processing agent requests such as fact submissions, catalog requests,
      or report uploads, Puppet Server creates several copies of the request data
      as part of processing. However, many of these copies outlive their useful
      context and are retained in memory until a response is delivered to the
      client and the request is closed.

      The following reproduction case examines report submission — which creates
      at least 7 copies of the request data by the time the report is handed
      off to PuppetDB. This behavior magnifies the impact of large run reports
      and makes it easier for a single agent or group of agents to exhaust the
      memory available to Puppet Server.

      Reproduction Case

      • Install Puppet Server 7 on a CentOS 7 node:

      yum install -y http://yum.puppetlabs.com/puppet7-release-el-7.noarch.rpm
      yum install -y puppetserver
      source /etc/profile.d/puppet-agent.sh
      puppet config set server $(hostname -f)
      puppetserver ca setup
      systemctl start puppetserver

      • Install PuppetDB 7 and configure it as a report processor:

      puppet module install puppetlabs-puppetdb
      puppet apply <<'EOF'
      class { 'puppetdb':
        postgres_version => '11',
      }
      class { 'puppetdb::master::config':
        enable_reports          => true,
        manage_report_processor => true,
      }
      EOF

      • Next, configure Puppet Server to use one JRuby instance with a 1 GB JVM heap, and configure JRuby to produce extra debugging information in heap dumps:

      puppet module install puppetlabs-hocon
      puppet apply <<'EOF'
      service { 'puppetserver':
        ensure => running,
      }
      ini_subsetting {
        default:
          ensure            => present,
          path              => '/etc/sysconfig/puppetserver',
          section           => '',
          key_val_separator => '=',
          setting           => 'JAVA_ARGS',
          notify            => Service['puppetserver'],
        ;
        'puppetserver min ram':
          subsetting => '-Xms',
          value      => '1g',
        ;
        'puppetserver max ram':
          subsetting => '-Xmx',
          value      => '1g',
        ;
        'reify jruby classes':
          subsetting => '-Djruby.reify.classes',
          value      => 'true',
        ;
        'reify jruby instance variables':
          subsetting => '-Djruby.reify.variables',
          value      => 'true',
        ;
      }
      hocon_setting { 'puppetserver jruby instances':
        ensure  => present,
        path    => '/etc/puppetlabs/puppetserver/conf.d/puppetserver.conf',
        setting => 'jruby-puppet.max-active-instances',
        value   => 1,
        notify  => Service['puppetserver'],
      }
      EOF

      • Generate certificates for a test node and configure it to recursively purge a deep directory tree in order to generate a large report:

      curl -L https://raw.githubusercontent.com/LLNL/fdtree/master/fdtree.bash -o /usr/local/bin/fdtree
      mkdir -p /tmp/recursion_test
      # Create 1 level of 14 directories, with 999 files per directory and 0 bytes per file
      bash /usr/local/bin/fdtree -C -l 1 -d 14 -f 999 -s 0 -o /tmp/recursion_test
      puppetserver ca generate --certname recursion.test
      cat <<'EOF' >/etc/puppetlabs/code/environments/production/manifests/site.pp
      node default {}
      node 'recursion.test' {
        file { '/tmp/recursion_test':
          ensure  => directory,
          recurse => true,
          purge   => true,
          noop    => true,
        }
      }
      EOF
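      For context on why this produces a large report, here is a back-of-the-envelope estimate (an illustrative Python sketch, not part of the reproduction) of the resource count the noop purge has to log:

      ```python
      # Illustrative estimate: fdtree -l 1 -d 14 -f 999 creates one level of
      # 14 directories holding 999 zero-byte files each.
      levels = 1
      dirs_per_level = 14
      files_per_dir = 999

      directories = dirs_per_level ** levels   # 14 directories
      files = directories * files_per_dir      # 13,986 zero-byte files

      # With recurse + purge + noop, each unmanaged path becomes a report entry.
      resources = directories + files
      print(resources)  # → 14000
      ```

      Roughly 14,000 file resources, each generating log entries and resource statuses in the run report.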

      • Install mitmproxy and configure it to dump the Puppet Server heap when a report is submitted to PuppetDB:

      yum install -y java-1.8.0-openjdk-devel python3-pip
      pip3 install mitmproxy
      useradd --create-home mitmproxy
      cat $(puppet config print hostprivkey) $(puppet config print hostcert) >/home/mitmproxy/cert_bundle.pem
      # Allow mitmproxy to execute commands as puppet, to satisfy Java security policies
      cat <<'EOF' >/etc/sudoers.d/mitmproxy
      Defaults:mitmproxy !requiretty
      mitmproxy ALL=(puppet) NOPASSWD: ALL
      EOF
      cat <<'EOF' >/home/mitmproxy/dump_heap.py
      import subprocess
      import sys

      def request(flow):
        if flow.request.query.get('command') == 'store_report':
          sys.stderr.write("Dumping Puppet Server heap on PuppetDB store_report request.\n")
          subprocess.call(['sudo',
                           '-u', 'puppet',
                           '/bin/bash', '-c',
                           '/usr/bin/jmap -dump:live,format=b,file=/tmp/$(hostname)-$(date +%Y%m%d%H%M%S).hprof $(systemctl show -p MainPID puppetserver|cut -d= -f2)'])
      EOF
      cat <<'EOF' >/etc/systemd/system/mitm-heapdump.service
      [Unit]
      Description=mitmproxy configured to dump puppetserver heap upon report submission

      [Service]
      ExecStart=/usr/sbin/runuser -u mitmproxy -- /usr/local/bin/mitmdump --certs /home/mitmproxy/cert_bundle.pem --set client_certs=/home/mitmproxy/cert_bundle.pem --ssl-insecure --mode transparent --listen-port 9000 --scripts /home/mitmproxy/dump_heap.py
      ExecStartPost=/usr/sbin/iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner mitmproxy --dport 8081 -j REDIRECT --to-port 9000
      ExecStopPost=-/usr/sbin/iptables -t nat -D OUTPUT -p tcp -m owner ! --uid-owner mitmproxy --dport 8081 -j REDIRECT --to-port 9000
      EOF
      systemctl daemon-reload
      systemctl start mitm-heapdump
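      The addon's trigger condition can be checked offline. This standalone sketch (plain Python, no mitmproxy required; the URLs and the second command name are illustrative) shows which PuppetDB submissions would fire the heap dump, assuming the command name arrives as a query parameter as above:

      ```python
      from urllib.parse import parse_qs, urlparse

      def should_dump(url):
          """Mirror the addon's predicate: dump only on store_report submissions."""
          query = parse_qs(urlparse(url).query)
          return query.get('command', [''])[0] == 'store_report'

      print(should_dump('https://puppetdb:8081/pdb/cmd/v1?command=store_report&version=8'))   # → True
      print(should_dump('https://puppetdb:8081/pdb/cmd/v1?command=replace_facts&version=5'))  # → False
      ```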

      • Run puppet agent to enforce the resource and submit a report (this will take about 5 minutes):

      # Direct output to /dev/null to avoid spamming the console
      puppet agent -t --certname recursion.test &>/dev/null

      • Analyze the *.hprof file written to /tmp


      At the time data is being handed off to PuppetDB, 8 of the 10 largest objects on the heap are related to the report processing request:

      • A java.lang.String instance containing the HTTP request body submitted by the puppet agent and received by Java. The content of this string is UTF-16 encoded, which means it uses twice the memory a UTF-8 encoded string would need to store the same ASCII data. Retains 39,835,016 bytes.
      • An org.jruby.RubyString instance containing a copy of the HTTP request body after conversion from Java to a Puppet::Network::HTTP::Request. Retains 21,909,328 bytes.
      • An org.jruby.RubyHash instance representing the report data after the Puppet::Network::HTTP::Request body is parsed to create a Puppet::Transaction::Report instance. Retains 26,189,328 bytes.
      • An org.jruby.RubyArray instance holding the log entries of the report. Created when the Puppet::Transaction::Report instance is duplicated before processing by PuppetDB. Retains 8,705,088 bytes.
      • An org.jruby.RubyHash instance representing a copy of the report data, transformed by the PuppetDB report processor. Retains 27,387,848 bytes.
      • An org.jruby.RubyString instance created by serializing the above hash to JSON for submission to PuppetDB. Retains 13,334,192 bytes.
      • An org.jruby.RubyString instance created by duplicating the above string and adding some metadata. Used solely for computing a PuppetDB command checksum. Retains 13,334,272 bytes.
      • A com.puppetlabs.http.client.RequestOptions instance used to make the actual POST request to PuppetDB that contains a copy of the above strings as the request body. The request body in this object is a java.lang.String which also pays the UTF-16 tax. Retains 26,668,368 bytes.

      End result: a 19,917,508 byte report submission by the agent is magnified to 177,142,440 bytes of memory usage for the Puppet Server by the time the data is handed off to PuppetDB and the request starts closing out — an overhead of nearly 10x.
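      The UTF-16 tax is easy to verify: doubling the 19,917,508-byte submission gives exactly the 39,835,016 bytes retained by the java.lang.String. A standalone Python sketch (illustrative only):

      ```python
      # Illustrative sketch of the UTF-16 tax: Java stores String contents as
      # UTF-16 code units, so ASCII data doubles in size when read off the wire.
      body = 'x' * 19_917_508  # same length as the agent's report submission

      utf8_bytes = len(body.encode('utf-8'))       # on-the-wire size
      utf16_bytes = len(body.encode('utf-16-le'))  # in-memory java.lang.String size

      print(utf16_bytes)  # → 39835016, twice the size of the submitted body
      ```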

      Expected Outcome

      Puppet Server retains minimal copies of large data blocks while serving agent requests.

      Engineering outcomes:

      Dig through this information and create tickets describing any work we can do to streamline this.
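      As one example of the kind of streamlining available (an illustrative Python sketch, not Puppet Server's actual code; the payload, metadata format, and SHA-1 choice are assumptions): the duplicated checksum string described above could be avoided by feeding the same buffers to an incremental digest instead of concatenating them into a second copy.

      ```python
      import hashlib

      # Hypothetical report payload and checksum prefix, for illustration only.
      payload = b'{"certname": "recursion.test", "logs": ["..."]}'
      metadata = b'store_report:recursion.test:'

      # Current pattern: duplicate the payload into a new string just to checksum it.
      duplicated = metadata + payload
      checksum_with_copy = hashlib.sha1(duplicated).hexdigest()

      # Copy-free alternative: update the digest in place over the same buffers.
      digest = hashlib.sha1()
      digest.update(metadata)
      digest.update(payload)
      checksum_streaming = digest.hexdigest()

      assert checksum_with_copy == checksum_streaming  # identical result, one fewer copy
      ```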


      Assignee: Unassigned
      Reporter: Charlie Sharpsteen (chuck)
      Votes: 3
      Watchers: 14