Linear increase in app pool threads in use with no requests being served

Yesterday afternoon, after some marketing efforts, we received higher-than-average (but by no means dangerously high) traffic throughout the day.

After this traffic had been going on for a while, we were alerted that one of our APIs (a micro-service that is hit at least once on each request) was hanging for a while and then returning 503 response codes.

These responses turned out to be from the Load Balancer as it couldn’t get a response from either of the two (identical) instances.

We then looked into IIS on the instances, where we found that the number of threads in use was increasing rapidly, almost in line with the rate of incoming requests (about 10/second), with none of the requests receiving responses.
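
For reference, here is a minimal sketch of how those numbers could be captured for later analysis, assuming the built-in Windows typeperf tool is available; the counter paths and the w3wp instance name are assumptions that vary per machine, .NET version, and number of worker processes, not something from our actual setup:

    # Sketch: log the counters in question so thread growth can be correlated
    # with request rate. Counter paths / instance names are assumptions.
    import subprocess

    COUNTERS = [
        r"\Process(w3wp)\Thread Count",                           # threads in the worker process
        r"\ASP.NET\Requests Queued",                              # requests waiting for a thread
        r"\ASP.NET Applications(__Total__)\Requests Executing",   # requests currently running
    ]

    def sample_counters(interval_s=5, samples=120, out_file="iis-threads.csv"):
        """Poll the counters every interval_s seconds and write them to a CSV via typeperf."""
        cmd = ["typeperf", *COUNTERS,
               "-si", str(interval_s), "-sc", str(samples),
               "-f", "CSV", "-o", out_file, "-y"]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        sample_counters()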

We checked the error logs, expecting to see at least a few timeouts or other errors, but there was nothing to point us in the right direction: no relevant errors were being logged at all.

The strangest thing about the whole shebang is that the problem seemed to arise out of nowhere (albeit alongside the increased traffic), and then everything just started working again after hours of this going on. This could mean we were over some threshold that caused IIS to behave strangely with threads, or it could just be coincidence.
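
To help rule the threshold theory in or out, one thing worth checking is the configured queue and concurrency limits on the application pool. A minimal sketch, assuming appcmd at its default location; the pool name is a placeholder, not our actual pool:

    # Hypothetical check: dump every attribute of an application pool
    # (queueLength, recycling settings, etc.) via appcmd.
    # "MyApiPool" is a placeholder name.
    import subprocess

    APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"  # default appcmd location

    def dump_app_pool_config(pool_name="MyApiPool"):
        """Print all settings for the given app pool so limits can be compared with observed load."""
        result = subprocess.run(
            [APPCMD, "list", "apppool", pool_name, "/text:*"],
            capture_output=True, text=True, check=True,
        )
        print(result.stdout)

    if __name__ == "__main__":
        dump_app_pool_config()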

After the issue resolved itself it did not crop up again on the same instance, but the same problem later manifested for a while on another instance running a different micro-service.

Has anyone seen a pattern like this before? Or anything similar?
