By Admin on Tuesday, 15 November 2022
Category: All

Indy Servers - EPOLL Linux (3 / 3)

From sgcWebSockets 2022.9.0 there is a new IOHandler for Linux. Using EPOLL you can avoid the "one-thread-per-client" problem, where performance decreases significantly as the server handles more connections. EPOLL provides a few threads that handle multiple clients; the threads are suspended and don't use CPU cycles until there is something to process.

The EPOLL IOHandler is only available in the sgcWebSockets Enterprise package.

Configuration 

EPOLL for Linux is an API which allows handling thousands of connections with a limited pool of threads, instead of using one thread per connection as Indy does by default.

To enable EPOLL for Indy Servers, go to the IOHandlerOptions property and select iohIEPOLL as the IOHandler Type (see the sketch after the list below).

1. EPOLLThreads is the number of threads used for EPOLL asynchronous requests. By default the value is zero, which means the number of threads is calculated from the number of processors (except for Delphi 7 and 2007, where the number of threads is set to 32 because the CPUCount function is not supported). You can adjust the number of threads manually.

2. WorkOpThreads should only be enabled if you want every connection to always be processed in the same thread. When using EPOLL, requests are processed by a pool of threads, so successive requests for the same connection may be handled by different threads. If you need each connection to be handled always by the same thread, set WorkOpThreads to the number of threads used to handle these requests. This impacts the performance of the server, so set a value greater than zero only if you require this feature.
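As a reference, here is a minimal sketch of this configuration in code, assuming a TsgcWebSocketServer component and the option names described above (the unit name and exact property nesting are assumptions and may differ between sgcWebSockets versions):

program EpollServerSketch;

uses
  SysUtils,
  sgcWebSocket; // assumed unit name; adjust to your installed sgcWebSockets package

var
  Server: TsgcWebSocketServer;
begin
  Server := TsgcWebSocketServer.Create(nil);
  try
    // Select the EPOLL IOHandler instead of the default thread-per-connection model
    Server.IOHandlerOptions.IOHandlerType := iohIEPOLL;

    // EPOLLThreads = 0 (default): the thread count is derived from the processor count.
    // WorkOpThreads = 0 (default): requests may be handled by any thread of the pool.
    // Adjust them through the EPOLL options of IOHandlerOptions only if required.

    Server.Port := 5000;
    Server.Active := True;

    WriteLn('EPOLL server listening on port 5000. Press ENTER to stop.');
    ReadLn;
  finally
    Server.Free;
  end;
end.

With this in place the server is used exactly as with the default IOHandler; only the threading model behind it changes.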

Enabling EPOLL for Linux servers is recommended when you need to handle thousands of connections. If your server handles at most around 100 concurrent connections, you can stay with the default Indy thread model.

Performance Test 

A simple test shows the differences between the Indy thread model and EPOLL. The test connects clients to the server using the WebSocket protocol; the CPU usage and memory consumption are shown in the following table. The CPU percentage is measured while the server is idle.

Num. Connections   Indy IOHandler Default   Indy IOHandler EPOLL
100                1% (1.4MB)               0% (1.4MB)
500                4% (5.6MB)               0% (4.8MB)
1000               10% (10.5MB)             0% (8.9MB)
1500               -- (Failed)              0% (13.3MB)
2000               -- (Failed)              0% (17.4MB)

The default Indy server could not handle more than 1024 concurrent connections, due to the limitation of the select method, which cannot watch more than 1024 descriptors. Comparing the results up to 1000 connections, the default Indy server was using more CPU and more RAM than the EPOLL server. The EPOLL server used no CPU while idle, and its memory consumption was lower too.

The next test measures how much time it takes to connect X clients.

Num. Connections   Indy IOHandler Default   Indy IOHandler EPOLL
1000               16.5 seconds             9.49 seconds
10000              -- (Failed)              1 min 55 seconds

The tests were done on Ubuntu 20.04 with 2 processors and 8GB of RAM, running in a virtual machine.
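The test harness itself is not published; as an illustration, timing the connection of N clients could be sketched roughly as follows, assuming a TsgcWebSocketClient component (names are assumptions, and setting Active only starts the connection, so a real measurement would wait for the OnConnect/handshake events before stopping the timer):

program ConnectTimingSketch;

uses
  SysUtils, DateUtils, Generics.Collections,
  sgcWebSocket; // assumed unit name

const
  NUM_CLIENTS = 1000;
  SERVER_HOST = '127.0.0.1';
  SERVER_PORT = 5000;

var
  Clients: TObjectList<TsgcWebSocketClient>;
  Client: TsgcWebSocketClient;
  i: Integer;
  StartTime: TDateTime;
begin
  Clients := TObjectList<TsgcWebSocketClient>.Create(True); // owns and frees the clients
  try
    StartTime := Now;

    for i := 1 to NUM_CLIENTS do
    begin
      Client := TsgcWebSocketClient.Create(nil);
      Client.Host := SERVER_HOST;
      Client.Port := SERVER_PORT;
      Client.Active := True; // starts the websocket connection
      Clients.Add(Client);
    end;

    WriteLn(Format('Started %d client connections in %d ms',
      [NUM_CLIENTS, MilliSecondsBetween(Now, StartTime)]));
  finally
    Clients.Free;
  end;
end.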

EPOLL Documentation

https://man7.org/linux/man-pages/man7/epoll.7.html
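For reference, the core of the API looks like this from Pascal, with the three calls imported directly from libc (a sketch following the man page prototypes; these declarations are not part of sgcWebSockets and are assumptions about a manual libc import, not a shipped RTL binding):

program EpollApiSketch;

// Linux-only sketch: create an epoll instance, register a descriptor and wait.

uses
  SysUtils,
  Posix.Base; // provides the libc constant on Delphi for Linux; with FPC use external 'c'

const
  EPOLLIN       = $001;
  EPOLL_CTL_ADD = 1;

type
  TEPollData = record
    case Integer of
      0: (Ptr: Pointer);
      1: (Fd: Integer);
      2: (U32: UInt32);
      3: (U64: UInt64);
  end;

  // epoll_event is packed on x86-64 (see sys/epoll.h)
  TEPollEvent = packed record
    Events: UInt32;
    Data: TEPollData;
  end;
  PEPollEvent = ^TEPollEvent;

function epoll_create1(Flags: Integer): Integer; cdecl;
  external libc name 'epoll_create1';
function epoll_ctl(EpFd, Op, Fd: Integer; Event: PEPollEvent): Integer; cdecl;
  external libc name 'epoll_ctl';
function epoll_wait(EpFd: Integer; Events: PEPollEvent; MaxEvents, Timeout: Integer): Integer; cdecl;
  external libc name 'epoll_wait';

var
  EpFd, Ready, i: Integer;
  Event: TEPollEvent;
  Events: array[0..63] of TEPollEvent;
begin
  // A single epoll instance can watch thousands of descriptors from one thread
  EpFd := epoll_create1(0);
  if EpFd < 0 then
    raise Exception.Create('epoll_create1 failed');

  // Register stdin (fd 0) for readability; a server registers its client sockets here
  Event.Events := EPOLLIN;
  Event.Data.Fd := 0;
  if epoll_ctl(EpFd, EPOLL_CTL_ADD, 0, @Event) < 0 then
    raise Exception.Create('epoll_ctl failed');

  // Blocks without consuming CPU until a registered descriptor becomes ready
  Ready := epoll_wait(EpFd, @Events[0], Length(Events), -1);
  for i := 0 to Ready - 1 do
    WriteLn('descriptor ready: ', Events[i].Data.Fd);
end.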

Great Delphi Projects implementing EPOLL

https://github.com/winddriver/Delphi-Cross-Socket
https://github.com/grijjy/GrijjyFoundation
