Indy servers use one thread per client connection: every time a new client connects to the server, a new thread is created to handle that connection, so 100 connections means 100 threads. Additionally, Indy uses blocking sockets, which means that a read or write call does not return until the operation completes.
This model has some advantages: it is easy to code, because the logic runs sequentially. The drawback is that as the number of connections increases, performance degrades more and more due to thread context switching. A context switch is the process of storing the state of a thread so that it can be restored to resume execution at a later point in time. Rapid context switching between threads is expensive in terms of CPU utilization. If a server holds 1000 connections, you will see the CPU working even though no data is being exchanged; that CPU usage is caused by thread context switching.
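As a reference, this thread-per-connection model is what a plain Indy server exposes through the OnExecute event, which runs on the thread Indy creates for each accepted client. The sketch below uses standard Indy classes (TIdTCPServer, TIdContext); the echo logic and port are only illustrative:

```pascal
uses
  IdTCPServer, IdContext;

procedure TMyServer.Setup;
begin
  FServer := TIdTCPServer.Create(nil);
  FServer.DefaultPort := 5000;
  // OnExecute is called in a loop by the thread that Indy
  // creates for each accepted client connection.
  FServer.OnExecute := HandleClient;
  FServer.Active := True;
end;

procedure TMyServer.HandleClient(AContext: TIdContext);
var
  Line: string;
begin
  // Blocking read: this call does not return until a full line
  // arrives (or the connection closes), so the per-connection
  // thread sits idle in the kernel while waiting for data.
  Line := AContext.Connection.IOHandler.ReadLn;
  AContext.Connection.IOHandler.WriteLn('Echo: ' + Line);
end;
```

With 1000 connected clients this server keeps 1000 such threads alive, which is exactly the context-switching cost described above.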
Alternatives to the Indy Model
Instead of using one thread per connection, there are alternatives like IOCP (on Windows) or EPOLL (on Linux), which use a pool of threads and non-blocking sockets to handle the connections. This model is much more efficient when the number of concurrent connections is high and scales much better than the default Indy thread model.
IOCP (Windows)
I/O completion ports provide an efficient threading model for processing multiple asynchronous I/O requests on a multiprocessor system. When a process creates an I/O completion port, the system creates an associated queue object for threads whose sole purpose is to service these requests. Processes that handle many concurrent asynchronous I/O requests can do so more quickly and efficiently by using I/O completion ports in conjunction with a pre-allocated thread pool than by creating threads at the time they receive an I/O request.
The IOCP model uses non-blocking sockets; instead of Select, it makes use of the AcceptEx function and a pool of threads to handle the client connections.
To enable IOCP on an sgcWebSockets Indy server, check the code below.
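The original code listing is not present in this copy of the document. The following is a minimal sketch of the usual configuration, assuming the IOHandlerOptions property, the iohIOCP handler type, and the IOCPThreads setting exposed by TsgcWebSocketHTTPServer; verify these names against your sgcWebSockets version:

```pascal
var
  oServer: TsgcWebSocketHTTPServer;
begin
  oServer := TsgcWebSocketHTTPServer.Create(nil);
  oServer.Port := 80;
  // Switch the Indy IOHandler from the default
  // thread-per-connection model to IOCP (Windows only).
  oServer.IOHandlerOptions.IOHandlerType := iohIOCP;
  // 0 = let the component size the thread pool from the
  // number of CPUs available on the machine.
  oServer.IOHandlerOptions.IOCP.IOCPThreads := 0;
  oServer.Active := True;
end;
```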
A new WebSocket + HTTP server will be created and a pool of threads will handle the connections; the number of threads used depends on the number of CPUs of the machine where the server is running.
EPOLL (Linux)
Epoll is a Linux kernel system call for a scalable I/O event notification mechanism, first introduced in version 2.5.44 of the Linux kernel. Its function is to monitor multiple file descriptors to see whether I/O is possible on any of them. It is meant to replace the older POSIX select and poll system calls, to achieve better performance in more demanding applications, where the number of watched file descriptors is large.
Check the following table, which compares the time (in seconds) needed to perform 100,000 monitoring operations as the number of monitored file descriptors grows:

Num. descriptors | poll (s) | select (s) | epoll (s)
              10 |     0.61 |       0.73 |      0.41
             100 |      2.9 |        3.0 |      0.42
            1000 |       35 |         35 |      0.53
           10000 |      990 |        930 |      0.66
So epoll really is a lot faster once you have more than 10 or so file descriptors to monitor, and, unlike poll and select, its cost stays nearly constant as that number grows.
The EPOLL model uses non-blocking sockets; instead of Select, it makes use of an asynchronous Accept and a pool of threads to handle the client connections.
To enable EPOLL on an sgcWebSockets Indy server, check the code below.
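As with the IOCP example, the original listing is missing here. The sketch below mirrors the IOCP configuration, assuming an iohEPOLL handler type and an EPOLL.EPOLLThreads setting on TsgcWebSocketHTTPServer; these names are an assumption and should be checked against your sgcWebSockets version:

```pascal
var
  oServer: TsgcWebSocketHTTPServer;
begin
  oServer := TsgcWebSocketHTTPServer.Create(nil);
  oServer.Port := 80;
  // Switch the Indy IOHandler to the EPOLL backend (Linux only).
  oServer.IOHandlerOptions.IOHandlerType := iohEPOLL;
  // 0 = size the thread pool from the number of CPUs.
  oServer.IOHandlerOptions.EPOLL.EPOLLThreads := 0;
  oServer.Active := True;
end;
```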