Module Version: Originally reported against 4.1.0; also reproduces on the latest release, 4.3.2
Puppet Version: 4.10.5 (PE 2017.2.3 master/agent)
OS Name/Version: Windows Server 2012 R2
This bug was raised from a support ticket in which the customer was using the IIS module to manage IIS application pools in one of their environments. They wish to control the IIS app pool start/stop state:
"When we provide the variable values for the parameters ‘state’ as ‘started’ and ‘auto_start’ as ‘true’. Everything works fine. The app pool gets created and subsequent puppet runs are errorless.
When we provide the variable values for the parameters ‘state’ as ‘stopped’ and ‘auto_start’ as ‘false’, we observe, that puppet run works fine first time. An app pool gets created and it is stopped. However, the subsequent puppet run throws an error message"
Desired Behavior: Applying the application pool resource should succeed with no errors, no matter what state the application pool is currently in.
Actual Behavior: I have replicated this in my 2017.2.3 lab environment, using both version 4.1.0 and the latest, 4.3.2, of the IIS module.
Below is the code I used to recreate this, which is more or less what the customer is using.
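The exact manifest from the ticket is not preserved here; a minimal sketch along the lines the customer describes — using the module's `iis_application_pool` type with the `state` and `auto_start` parameters, and the pool name `Pool1` used in my testing below — would be:

```puppet
# Declare an app pool that should exist but stay stopped,
# with autostart disabled (values per the customer's report).
iis_application_pool { 'Pool1':
  ensure     => 'present',
  state      => 'stopped',
  auto_start => false,
}
```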
When I apply this to my 2012r2 agent, the app pool 'Pool1' is created just fine: its state is stopped and auto_start is set to false. Subsequent puppet runs also complete with no errors.
However, when I manually set the state of Pool1 to started and then trigger a puppet run, I hit the same error the customer receives:
The error appears to be largely cosmetic, as Pool1 is still set back to stopped and auto_start is still set back to false.
It seems that something outside of Puppet set the app pool's state to started on the customer's agent, and I have communicated this to them. Regardless, no matter what state the app pool is in, we should be able to apply the resource to correct it with no errors.