- Type: Bug
- Status: Resolved
- Priority: Normal
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: None
- Component/s: None
- Labels: None
- Team: Enterprise in the Cloud
- Sub-team: None
- Story Points: 1
- Sprint: WC 2016-10-05
If a request is made on a clj-http-client instance while the process has exhausted the number of open file descriptors it is allowed to allocate, that request hangs indefinitely, even after the number of open file descriptors drops back below the maximum allowed. I observed this problem during heavy load testing of Puppet Server: once descriptors had been exhausted, Puppet Server could not make any further HTTP requests to PuppetDB, even after the number of open descriptors had dropped. The only known way to recover to a state where HTTP requests can be made again is to restart the Java process.
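As a partial caller-side guard, the wait on an async response can be bounded so that a calling thread never blocks forever on a wedged request. Here is a minimal sketch against the underlying Apache HttpAsyncClient Java API (which clj-http-client wraps); the URL and the 30-second timeout are placeholders, not values from the load test:
{code:java}
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
import org.apache.http.impl.nio.client.HttpAsyncClients;

public class BoundedRequest {
    public static void main(String[] args) throws Exception {
        try (CloseableHttpAsyncClient client = HttpAsyncClients.createDefault()) {
            client.start();
            // execute() returns immediately; the hang described above shows up
            // as a Future that never completes. (Placeholder URL.)
            Future<HttpResponse> future =
                client.execute(new HttpGet("http://localhost:8080/pdb/query/v4"), null);
            try {
                // Bound the wait so the calling thread cannot block forever.
                HttpResponse response = future.get(30, TimeUnit.SECONDS);
                System.out.println(response.getStatusLine());
            } catch (TimeoutException e) {
                future.cancel(true);
                System.err.println("request did not complete within 30s");
            }
        }
    }
}
{code}
Note that this only converts a silent hang into a visible timeout that can be logged and retried; it does not un-wedge the client, so the process restart described above remains the only known full recovery.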
I believe the root of this issue lies in how the Apache HttpAsyncClient library handles the inability to make a request when file descriptors are exhausted. It would be reasonable for an individual HTTP request to fail in this case, but subsequent requests should be able to succeed once file descriptors become available again, without having to restart the process.
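For reference, here is a rough reproduction sketch of what I believe is happening, assuming the process runs with a deliberately low descriptor limit (e.g. ulimit -n 256) and that http://localhost:8080/status is a reachable placeholder endpoint: exhaust the descriptors, issue a request, release the descriptors, and issue another. With the behavior described above, both attempts time out instead of the second one succeeding:
{code:java}
import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
import org.apache.http.impl.nio.client.HttpAsyncClients;

public class FdExhaustionRepro {
    public static void main(String[] args) throws Exception {
        CloseableHttpAsyncClient client = HttpAsyncClients.createDefault();
        client.start();

        // Step 1: exhaust this process's file descriptors.
        List<FileInputStream> hogs = new ArrayList<>();
        try {
            while (true) {
                hogs.add(new FileInputStream("/dev/null"));
            }
        } catch (IOException expected) {
            System.err.println("descriptors exhausted: " + expected.getMessage());
        }

        // Step 2: this request cannot allocate a socket; it should fail fast,
        // but its Future never completes.
        attempt(client, "during exhaustion");

        // Step 3: release the descriptors. Requests should now succeed, but
        // they continue to hang until the JVM is restarted.
        for (FileInputStream hog : hogs) {
            hog.close();
        }
        attempt(client, "after release");

        client.close();
    }

    private static void attempt(CloseableHttpAsyncClient client, String label)
            throws InterruptedException {
        // Placeholder endpoint; any reachable HTTP server demonstrates the point.
        Future<HttpResponse> future =
            client.execute(new HttpGet("http://localhost:8080/status"), null);
        try {
            System.out.println(label + ": " + future.get(10, TimeUnit.SECONDS).getStatusLine());
        } catch (TimeoutException e) {
            System.out.println(label + ": timed out (hung)");
        } catch (ExecutionException e) {
            System.out.println(label + ": failed: " + e.getCause());
        }
    }
}
{code}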
I filed a ticket with the Apache project about this problem: https://issues.apache.org/jira/browse/HTTPASYNC-99. I'll follow up there and comment further on this ticket as that conversation progresses.