We are running a Node Socket.io server with Express 3. The server is monitored using Forever. The service is running well, but the CPU grows throughout the day, until it reaches 90%+ and then suddenly drops back down to ~20%, as shown in the graphs below. I believe that the drop is caused by Forever restarting the app.
What I would like to know is:
- What are the typical factors that could cause a Node.js app to behave like this?
- What tools / methods are available to debug memory leaks / cpu hogging in node apps?
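On the tooling question: V8's built-in tick profiler (`node --prof`) and heap-snapshot modules exist for deeper digging, but a useful first step is simply logging `process.memoryUsage()` over time so a steady climb shows up in the logs. A minimal sketch (the interval value is an arbitrary example, not a recommendation):

```javascript
// Periodically log resident set size and V8 heap usage so a leak
// shows up as a steady upward trend in the logs.
function startMemoryLogger(intervalMs) {
  return setInterval(function () {
    var mem = process.memoryUsage();
    console.log(
      new Date().toISOString(),
      'rss=' + (mem.rss / 1048576).toFixed(1) + 'MB',
      'heapUsed=' + (mem.heapUsed / 1048576).toFixed(1) + 'MB'
    );
  }, intervalMs);
}

// var timer = startMemoryLogger(60000); // log once a minute
// clearInterval(timer);                 // stop when done
```

Correlating this log with your CPU graph should tell you whether the CPU growth tracks heap growth (pointing at GC pressure from a leak) or not.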
I think it could be something to do with Socket.io not cleaning up resources once users have disconnected, although the docs say that Socket.io will manage this automatically.
Any help would be greatly appreciated; this issue is making our server very difficult to manage. I posted this question on Server Fault a week ago but did not receive a response, so I think it may be better suited here.
Update: After more research, it appears that the CPU does not directly correlate with the number of connections. Our critical mass seems to be around ~1500 concurrent connections split up like so:
- xhr-polling: 767
- websocket: 692
- jsonp-polling: 80
Sometimes we could be at 100% CPU with only 500 connections; other times it's 1500 connections. I'm aware that the rate of messages sent has a big effect, but that rate is fairly consistent.
I am not that experienced, but the main causes of CPU load like this are usually:
- Code execution time: if you have several loops, look at giving them less data to crunch/extract.
- Concurrent connections: I also had a problem with this one; a person can open 3-4 browser tabs/windows, so try to allow only one concurrent connection per user (NB: you can use sessions for this).
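The "one connection per user" idea above can be sketched as a simple per-session counter. The session-id lookup is hypothetical here; wire it to however you share Express sessions with Socket.io:

```javascript
// Track live connections per session id and refuse extras,
// so one user opening many tabs doesn't multiply server load.
var connectionsPerSession = {};
var MAX_PER_SESSION = 1; // allow one live connection per user

function tryRegister(sessionId) {
  var current = connectionsPerSession[sessionId] || 0;
  if (current >= MAX_PER_SESSION) {
    return false; // already connected elsewhere; refuse or close
  }
  connectionsPerSession[sessionId] = current + 1;
  return true;
}

function unregister(sessionId) {
  if (connectionsPerSession[sessionId] > 1) {
    connectionsPerSession[sessionId] -= 1;
  } else {
    delete connectionsPerSession[sessionId];
  }
}
```

Call `tryRegister` when a socket connects (closing the socket if it returns false) and `unregister` on disconnect.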
I will edit this answer if anything else comes to mind; see the link below for server stress tests on Node.js.
Hope this helps.