Recently, as traffic to our website has increased, so have our problems with downtime during deployment.
Currently we’re using Deployer, which reloads PHP-FPM after each deployment so that it picks up the new release directory and we avoid strange bugs.
We don’t want to experiment with changing how that works, but whenever the FPM reload runs, our website returns 502 Bad Gateway errors, and the reload also interrupts tasks that report on and modify sensitive data. As you can guess, this leads to broken records and similar problems.
With that in mind, and after browsing various sources, we decided to try `process_control_timeout`. The problem we then ran into comes from our architecture: we have two applications running on the same FPM pool, and those two applications exchange data with each other.
- Step 1 – [Client] -> [Api1]
- Step 2 – [Api1] -> [Api2]
- Step 3 – [Api2] -> [Api1]
- Step 4 – [Api1] -> [Client]
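For reference, this is roughly the setting we enabled in the global FPM configuration (the 10s value is just an example, not a recommendation):

```ini
; php-fpm.conf (global section)
; On reload, wait up to this long for in-flight requests to finish
; before the master forcefully terminates the old workers.
; Default is 0 (no waiting). The value here is illustrative.
process_control_timeout = 10s
```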
Therefore, if an FPM reload starts while someone is at [Step 2], all incoming requests are held and put into a queue (this is how PHP-FPM works). That means [Api1]’s worker is waiting on a request to [Api2] that will never be served until the reload finishes, so the exchange cannot complete within the timeout.
Has anyone been in a situation like this, and is there a way around it?