From sgcWebSockets 2024.2.0 the HTTP/2 server has been improved when receiving HTTP/2 requests. Now, by default, when the server receives a new HTTP/2 request, it is queued and dispatched by one of the threads of the thread pool. This avoids the problem where several requests sent over the same connection are processed sequentially.
See below the differences between HTTP/1.1 and HTTP/2:
In traditional HTTP behavior, when making multiple requests over the same connection, the client has to wait for the response of each request before sending the next one. This sequential approach significantly increases the load time of a website's resources. To address this issue, HTTP/1.1 introduced a feature called pipelining, allowing a client to send multiple requests without waiting for the server's responses. The server, in turn, responds to the client in the same order as it received the requests.
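As an illustration, HTTP/1.1 pipelining can be reproduced with a plain TCP client by writing two requests back-to-back before reading any response; the server must answer them in request order. This is a minimal sketch using Indy's TIdTCPClient (the host and paths are placeholders, not part of any real service):

```pascal
uses
  IdTCPClient;

procedure PipelineTwoRequests;
var
  Client: TIdTCPClient;
begin
  Client := TIdTCPClient.Create(nil);
  try
    Client.Host := 'example.com'; // placeholder host
    Client.Port := 80;
    Client.Connect;
    // pipelining: send both requests before reading any response
    Client.IOHandler.Write('GET /first HTTP/1.1'#13#10'Host: example.com'#13#10#13#10);
    Client.IOHandler.Write('GET /second HTTP/1.1'#13#10'Host: example.com'#13#10#13#10);
    // responses arrive strictly in request order, so a slow /first
    // delays /second (head-of-line blocking)
    // ... read and parse both responses here ...
  finally
    Client.Free;
  end;
end;
```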
While pipelining appeared to be a solution, it faced challenges: responses had to be returned in the same order as the requests, so a single slow response blocked every response queued behind it (head-of-line blocking), and support in servers and intermediaries was unreliable enough that most browsers eventually disabled pipelining by default.
In an effort to optimize page loading from servers supporting HTTP/1.1, web browsers implemented a workaround: they open six to eight parallel connections to the same server, enabling the simultaneous transmission of multiple requests. This parallelism aims to mitigate the issues associated with pipelining and improve overall page load times.
The choice of six to eight parallel connections is an optimization trade-off between resource utilization, network efficiency, and avoiding potential bottlenecks on both the client and the server.
In response to the constraints encountered in pipelining, HTTP/2 introduced a feature called multiplexing. Multiplexing allows for more efficient communication between the client and server by enabling the concurrent transmission of multiple requests and responses over a single connection.
HTTP/2 utilizes a binary framing mechanism, which means that HTTP messages are broken down into smaller, independent units called frames. These frames can be interleaved and sent over the connection independently of one another. At the receiving end, the frames are reassembled to reconstruct the original HTTP message.
This binary framing mechanism is fundamental to achieving multiplexing in HTTP/2. It enables the browser to send multiple requests over the same connection without blocking. As a result, browsers such as Chrome reuse a single connection for all HTTP/2 requests to a host (visible in the DevTools Network panel as a shared connection ID), allowing for efficient and uninterrupted communication between client and server.
In essence, HTTP/2's multiplexing feature, enabled by the binary framing mechanism, enhances the efficiency and speed of data exchange between clients and servers by facilitating concurrent transmission of multiple requests and responses over a single connection.
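The reassembly step described above can be pictured as a demultiplexer that groups incoming frames by their stream identifier. The following is a simplified sketch; the record layout and names are illustrative, not the actual sgcWebSockets internals (real HTTP/2 frames also carry a type and flags):

```pascal
uses
  System.SysUtils, System.Classes, System.Generics.Collections;

type
  // simplified HTTP/2 frame for illustration only
  TFrame = record
    StreamId: Cardinal;   // which request/response this frame belongs to
    EndStream: Boolean;   // marks the last frame of the message
    Payload: TBytes;
  end;

procedure OnFrameReceived(Streams: TDictionary<Cardinal, TBytesStream>;
  const Frame: TFrame);
var
  Buffer: TBytesStream;
begin
  // frames from different streams may arrive interleaved;
  // each one is appended to the buffer of its own stream
  if not Streams.TryGetValue(Frame.StreamId, Buffer) then
  begin
    Buffer := TBytesStream.Create;
    Streams.Add(Frame.StreamId, Buffer);
  end;
  Buffer.Write(Frame.Payload, Length(Frame.Payload));
  if Frame.EndStream then
  begin
    // the complete message for this stream can now be processed
    // ... dispatch Buffer to the HTTP handler ...
  end;
end;
```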
To improve the performance of the HTTP/2 protocol, incoming requests are dispatched by default to a pool of threads (32 by default) every time a new HTTP/2 request is received by the server. This avoids the waits that occur when a single connection sends many concurrent requests, which, in the absence of the pool, would be processed sequentially in the context of the connection thread.
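This dispatching scheme can be sketched as a bounded pool of worker threads consuming from a shared queue. The class and names below are an illustrative sketch, not the library's internals:

```pascal
uses
  System.Classes, System.SyncObjs, System.Generics.Collections;

type
  TRequestProc = reference to procedure;

  TWorker = class(TThread)
  private
    FQueue: TThreadedQueue<TRequestProc>;
  protected
    procedure Execute; override;
  public
    constructor Create(AQueue: TThreadedQueue<TRequestProc>);
  end;

constructor TWorker.Create(AQueue: TThreadedQueue<TRequestProc>);
begin
  FQueue := AQueue;             // assign before the thread starts running
  inherited Create(False);
end;

procedure TWorker.Execute;
var
  Proc: TRequestProc;
begin
  // each worker blocks until a queued request is available, so
  // requests arriving on one connection run concurrently
  while FQueue.PopItem(Proc) = wrSignaled do
    Proc();
end;
```

A server following this scheme would create 32 such workers sharing one queue and push each incoming HTTP/2 request onto it with PushItem instead of handling it inline; a full queue blocks the producer, providing natural back-pressure.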
The behaviour of the PoolOfThreads can be configured with the following properties.
The OnHTTP2BeforeAsyncRequest event allows excluding a specific request from the asynchronous dispatching, so it is processed in the context of the connection thread instead:

procedure OnHTTP2BeforeAsyncRequest(Sender: TObject; Connection: TsgcWSConnection;
  const ARequestInfo: TIdHTTPRequestInfo; var Async: Boolean);
begin
  // handle this document synchronously in the connection thread
  if ARequestInfo.Document = '/time-consuming-request' then
    Async := False;
end;