Wednesday, January 03, 2007
Rails realities part 22 (rollover mongrel... play dead)
Every once in a while everyone runs into a bug that is difficult to track down and happens seemingly intermittently or after long periods of time. Today's entry is one such bug. The bug was only occurring on my production system, which is set up to use Apache 2.2/Mongrel. Apache proxies through to 4 mongrel instances.
The bug was that after long periods of inactivity the app would return "500 Internal Server Error" messages on 3 of 4 requests. So basically only 1 of the 4 mongrel instances was still running properly.
Unfortunately there was not a peep in any log file anywhere to give a clue as to WTF was going on. The leading contender was that this was somehow a DB connection timeout, but the biggest problem with that argument was that 1 of the 4 processes still worked.
Well, after a question on the mongrel mailing list, some feedback from Ezra suggested maybe it was log file rolling. As soon as I read that I thought that had to be it, as I remembered I had set up the rails logger to roll daily.
In my environment.rb I had this:
Rails::Initializer.run do |config|
  # Settings in config/environments/* take precedence over those specified here
  config.logger = Logger.new(config.log_path, 'daily')
  # ...
end
This would of course roll the log file every day at midnight. So when I would wake up the next day and hit my app, the log would roll and only one lucky mongrel would still work after the rollover; the other three would play dead.
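As I understand it (and this is a simplified illustration, not the real Logger internals), Ruby's Logger rolls the log by renaming the current file to a dated name and reopening the original path. With four mongrels each holding their own Logger on the same file, whichever process hits the rollover first wins the rename; the rest see the dated file already sitting there and their log writes start blowing up on every request. The little sketch below (hypothetical file names, processes staggered so the outcome is repeatable) fakes that race with plain File calls:

require 'fileutils'

# Hypothetical shared log path and the dated name a daily roll would produce.
LOG  = "shared.log"
AGED = "#{LOG}.20070103"

FileUtils.touch(LOG)

pids = (0...4).map do |i|
  fork do
    sleep(i * 0.2)  # stagger the processes so the race resolves the same way each run
    begin
      # Roughly what a daily shift does: refuse to roll if the dated file
      # already exists, otherwise rename the log and reopen the original path.
      raise "#{AGED} already exists" if File.exist?(AGED)
      File.rename(LOG, AGED)
      File.open(LOG, 'a') { |f| f.puts "mongrel #{i} rolled the log and kept going" }
      puts "mongrel #{i}: still alive"
    rescue => e
      # The losers land here on every write attempt: hello, 500s.
      puts "mongrel #{i}: playing dead (#{e.message})"
    end
  end
end
pids.each { |pid| Process.wait(pid) }

Run it and one process reports still alive while the other three play dead, which lines up with the 1-in-4 behaviour I was seeing.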
After confirming that it was the Ruby logger causing the problem, I removed the configuration to roll the log and things have been fine since.
I used this same configuration in my first rails app, which was a lighty/fcgi environment with 2 fcgi processes. In that environment, when the log rolled daily the app would still work, but the processes didn't log to the log file again until they were restarted.
The solution to all this is to use logrotate instead. I've not yet set it up, but have earmarked this blog entry to refer to when I'm ready to set up my production housecleaning.
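When I do get to it, the usual approach seems to be a stanza like this dropped into /etc/logrotate.d/ (untested on my setup, and the path below is just a placeholder for wherever the rails logs actually live); copytruncate is the important bit, because it copies the log aside and truncates the original in place, so the mongrels keep their open file handles and never notice the rotation:

/var/www/myapp/shared/log/*.log {
  daily
  missingok
  rotate 30
  compress
  delaycompress
  notifempty
  copytruncate
}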