So I'm running Tomcat behind Apache and using ProxyPass to pass through all traffic. Everything works fine, but if I do nothing for 60 seconds and then hit the server again, I get an 8-20 second delay, as if Apache were creating a new process to handle the request.
My configuration is pretty much the default, with the addition of the proxy directives, which I believe are the culprit:
TimeOut 600
ProxyPass /static/ !
ProxyPass / http://localhost:8088/
ProxyPassReverse / http://localhost:8088/
I added the /static/ exemption to see whether the same problem would happen with static files being served, and apparently it does. I narrowed it down further by commenting out all the ProxyPass directives and verifying that my static file always loads fast. Then I uncommented the ProxyPass directives and requested only my static file, and it still always returned fast. But once I hit a URL that goes through the proxy, wait a minute, then hit it again, something goes horribly wrong. Below is network monitor output for two requests for the static file, each made a second time after a one-minute delay: the first before the proxy had been used, the other after the proxy had been used twice with a delay between the proxy requests.
3501 4:17:48 PM 10/21/2015 104.2752287 httpd.exe HTTP HTTP:Request, GET /static/index.html
3502 4:17:48 PM 10/21/2015 104.2760830 httpd.exe HTTP HTTP:Response, HTTP/1.1, Status: Not modified, URL: /static/index.html
After:
24232 4:26:13 PM 10/21/2015 608.7355960 httpd.exe HTTP HTTP:Request, GET /static/index.html
24775 4:26:20 PM 10/21/2015 616.0896861 httpd.exe HTTP HTTP:Response, HTTP/1.1, Status: Not modified, URL: /static/index.html
I'm noticing more of these SynReTransmit lines after things initially broke; not sure if it's relevant:
24226 4:26:13 PM 10/21/2015 608.7286692 httpd.exe TCP TCP:[SynReTransmit #24107]Flags=......S., SrcPort=61726, DstPort=HTTP(80), PayloadLen=0, Seq=1157444168, Ack=0, Win=8192 ( Negotiating scale factor 0x2 ) = 8192
But basically every call, whether to a static file or through the proxy, takes forever to get a response if it's been over 60 seconds since the last call!
Any ideas?
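Those SynReTransmit entries make me wonder whether httpd is trying to reuse a pooled backend connection that the other end has already dropped. One thing that might be worth trying (just a guess on my part, and the values below are placeholders, not something I've tested against your setup) is disabling connection reuse for the backend, or giving pooled connections a short ttl:

```apache
# Option 1: don't pool connections to the backend at all
ProxyPass /static/ !
ProxyPass / http://localhost:8088/ disablereuse=On
ProxyPassReverse / http://localhost:8088/

# Option 2: keep pooling, but drop idle pooled connections
# before the backend (or anything in between) silently does;
# ttl is in seconds
# ProxyPass / http://localhost:8088/ ttl=50
```

Both `disablereuse` and `ttl` are documented ProxyPass connection parameters in mod_proxy; 50 is picked only because it's under the 60-second idle window you're seeing.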
I have no idea, but I am wondering if it has something to do with this being discussed on the dev list:
http://marc.info/?t=144529112000008&r=1&w=2
Hmm, that does sound similar. I also just confirmed that using AJP doesn't solve the problem.
I'm looking at how to compile my own binary, but I don't know if I'm equipped to do that, so I'm not sure how I can test the patch against my problem :(
It seems like a pretty serious issue though; any chance of this patch being included in a binary any time soon?
If it is about chunked transfer, you can force the reverse proxy to use HTTP/1.0, which is not capable of chunked transfer. I had such an issue with an older backend server.
SetEnv force-proxy-request-1.0 1
SetEnv proxy-nokeepalive 1
Taken from http://httpd.apache.org/docs/2.4/mod/mod_proxy.html#envsettings
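In context, those would sit next to the ProxyPass directives, something like this (assuming a plain VirtualHost; the ServerName is obviously a placeholder):

```apache
<VirtualHost *:80>
    ServerName example.com

    # Force HTTP/1.0 toward the backend and disable keepalive,
    # to rule out chunked-transfer issues
    SetEnv force-proxy-request-1.0 1
    SetEnv proxy-nokeepalive 1

    ProxyPass /static/ !
    ProxyPass / http://localhost:8088/
    ProxyPassReverse / http://localhost:8088/
</VirtualHost>
```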
AFAIK there was an issue in mod_xml2enc with the EOS around line 403.
Also see http://marc.info/?t=127236917600002&r=1&w=2
Fiddling around with those environment variables didn't help either :(
I just checked and see that the version we have on the server is 2.4.12, which is already kind of old, so first I'll try upgrading to the latest version (2.4.17). If that doesn't fix it, maybe I'll just run nginx until a fix comes out. It's good to see regular updates being released, so hopefully it won't be too long.
Well, this is embarrassing... upgrading to the latest version appears to have fixed it.
Sorry to have wasted your time, guys :-[
Huh, I may have spoken too soon. It definitely seems better, but after doing other stuff for an hour, I came back and suffered through a 15-second wait again. I'll fiddle with the parameters again to see if they work now...
Any updates from your side?
It's strange, because I swear that after I installed the update I was not seeing this issue. I tested it over 15 minutes or so, which normally would have been enough to trigger the problem.
Then the next day I decided to check, and it was back the way it was. No fiddling/tweaking of parameters like KeepAlive seemed to have any impact, but I'll admit that my level of expertise when it comes to configuring servers is almost nil. So sadly, I'm just running nginx for now and will keep an eye out for the next update.
What I don't get is that this issue would seem to be commonplace, given the common use case of reverse proxying to a Tomcat server for a Java application. I wonder if using the reverse proxy to hit a different server, like node.js, would cause the same issue... I might give that a try.
Yep, confirmed: the same issue occurs regardless of what I'm reverse proxying to.