TsgcWebSocketHTTPServer | HTTP/2 Server Threads

The main differences between HTTP/1.1 and HTTP/2 are described below:

 

HTTP 1.1

In traditional HTTP behavior, when making multiple requests over the same connection, the client has to wait for the response of each request before sending the next one. This sequential approach significantly increases the load time of a website's resources. To address this issue, HTTP/1.1 introduced a feature called pipelining, allowing a client to send multiple requests without waiting for the server's responses. The server, in turn, responds to the client in the same order as it received the requests.
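
Purely as an illustration of the pipelining behaviour described above (this is not part of the sgcWebSockets API, and the host and document names are made up), the following sketch uses a raw Indy TCP client to write two requests to the socket before reading any response; the server must then answer them in the order they were sent:

uses
  IdTCPClient;

procedure PipelineTwoRequests;
var
  Client: TIdTCPClient;
  StatusLine: string;
begin
  Client := TIdTCPClient.Create(nil);
  try
    Client.Host := 'www.example.com'; // hypothetical host
    Client.Port := 80;
    Client.Connect;
    // Write both requests back-to-back without waiting for the first response.
    Client.IOHandler.Write('GET /first.css HTTP/1.1'#13#10'Host: www.example.com'#13#10#13#10);
    Client.IOHandler.Write('GET /second.js HTTP/1.1'#13#10'Host: www.example.com'#13#10#13#10);
    // The server must answer in the same order the requests were received,
    // so a slow /first.css response delays /second.js (head-of-line blocking).
    StatusLine := Client.IOHandler.ReadLn; // e.g. 'HTTP/1.1 200 OK' for /first.css
    // ... read the rest of the first response, then the second one, here.
  finally
    Client.Free;
  end;
end;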

 

While pipelining appeared to be a solution, it faced challenges in practice: responses still had to be delivered in exactly the same order as the requests, so a single slow response blocked every response queued behind it (head-of-line blocking), and many servers and intermediate proxies never implemented pipelining correctly, which led most browsers to disable it by default.

To optimize page loading from servers that support HTTP/1.1, web browsers implemented a workaround: they open six to eight parallel connections to the server, enabling the simultaneous transmission of multiple requests. This parallelism mitigates the issues associated with pipelining and improves overall page load times.

 

The choice of six to eight parallel connections per host is a compromise between resource utilization and network efficiency: more connections consume extra sockets and memory on both the client and the server, while fewer connections limit how many resources can be downloaded concurrently.

 

 

HTTP 2.0

In response to the constraints encountered in pipelining, HTTP/2 introduced a feature called multiplexing. Multiplexing allows for more efficient communication between the client and server by enabling the concurrent transmission of multiple requests and responses over a single connection.

 

HTTP/2 utilizes a binary framing mechanism, which means that HTTP messages are broken down into smaller, independent units called frames. These frames can be interleaved and sent over the connection independently of one another. At the receiving end, the frames are reassembled to reconstruct the original HTTP message.
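
To make the framing concrete, the illustrative declaration below (based on RFC 7540, not on any sgcWebSockets type) shows the fixed 9-byte header that precedes every HTTP/2 frame; because every frame carries its own stream identifier, frames belonging to different requests can be interleaved freely on the same connection:

type
  // 9-byte header that precedes every HTTP/2 frame (RFC 7540, section 4.1).
  // The frame payload of up to Length bytes follows immediately after it.
  THTTP2FrameHeader = packed record
    Length: array[0..2] of Byte; // 24-bit payload length, network byte order
    FrameType: Byte;             // 0 = DATA, 1 = HEADERS, 4 = SETTINGS, ...
    Flags: Byte;                 // frame-specific flags, e.g. END_STREAM, END_HEADERS
    StreamId: Cardinal;          // 1 reserved bit + 31-bit stream identifier
  end;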

 

This binary framing mechanism is fundamental to achieving multiplexing in HTTP/2. It enables the browser to send multiple requests over the same connection without encountering blocking issues. As a result, browsers like Chrome utilize the same connection ID for HTTP/2 requests, allowing for efficient and uninterrupted communication between the client and server.

In essence, HTTP/2's multiplexing feature, enabled by the binary framing mechanism, enhances the efficiency and speed of data exchange between clients and servers by facilitating concurrent transmission of multiple requests and responses over a single connection.

 

 

TsgcWebSocketHTTPServer

To improve the performance of the HTTP/2 protocol, requests are dispatched by default to a pool of threads (32 threads by default) every time a new HTTP/2 request is received by the server. This avoids the waits that occur when a single connection sends many concurrent requests, which would otherwise have to be processed sequentially in the context of the connection thread.
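
As a minimal sketch of enabling this behaviour (the property names follow the usual sgcWebSockets examples, so check them against your installed version), a server created at runtime could be configured as follows:

procedure StartHTTP2Server;
var
  Server: TsgcWebSocketHTTPServer;
begin
  Server := TsgcWebSocketHTTPServer.Create(nil);
  Server.Port := 443;
  Server.SSL := True;                  // browsers only negotiate HTTP/2 over TLS
  Server.HTTP2Options.Enabled := True; // accept HTTP/2 connections
  // Every incoming HTTP/2 request is now queued in the pool of threads
  // (32 worker threads by default) instead of being processed sequentially
  // in the connection thread.
  Server.Active := True;
end;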

 

The behaviour of the pool of threads can be configured using the following properties.

 

 

To fine-tune the requests, selecting which ones must be processed in the pool of threads (because they are time consuming) while others can be processed in the connection thread, use the event OnHTTP2BeforeAsyncRequest. This event is raised before the request is queued in the pool of threads; use the Async parameter to set whether the request is threaded or not.

 


procedure OnHTTP2BeforeAsyncRequest(Sender: TObject; Connection: TsgcWSConnection;
  const ARequestInfo: TIdHTTPRequestInfo; var Async: Boolean);
begin
  // Answer /fast-request directly in the connection thread; any other
  // request keeps being queued in the pool of threads.
  if ARequestInfo.Document = '/fast-request' then
    Async := False;
end;
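
With this approach, lightweight endpoints such as the /fast-request document of the example are answered directly in the connection thread, avoiding the overhead of queueing and thread switching, while time-consuming requests continue to be dispatched to the pool of threads so they do not delay the other streams multiplexed over the same connection.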