When buffering is disabled, nginx will not try to read the whole response from the proxied server. The maximum size of the data that nginx can receive from the server at a time is set by the proxy_buffer_size directive.
In the meantime, the rest of the buffers can be used for reading the response and, if needed, buffering part of the response to a temporary file.
The proxy_cache_background_update directive allows starting a background subrequest to update an expired cache item, while a stale cached response is returned to the client.
Buffering can also be enabled or disabled by passing “yes” or “no” in the “X-Accel-Buffering” response header field. The proxy_buffers directive sets the number and size of the buffers used for reading a response from the proxied server, for a single connection.
By default, the buffer size is equal to one memory page. The proxy_busy_buffers_size directive limits the total size of buffers that can be busy sending a response to the client while the response is not yet fully read.
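Taken together, the buffering directives above might be tuned like this (the upstream name and all sizes are illustrative, not recommendations):

```nginx
location / {
    proxy_pass http://backend;  # hypothetical upstream

    proxy_buffering on;
    # Buffer for the first part of the response (typically the headers).
    proxy_buffer_size 4k;
    # Eight 4k buffers for reading the rest of the response.
    proxy_buffers 8 4k;
    # Total size of buffers that may be busy sending to the client
    # while the response is not yet fully read.
    proxy_busy_buffers_size 8k;
}
```

Note that proxy_busy_buffers_size must be at least as large as the bigger of proxy_buffer_size and one proxy_buffers buffer, and smaller than the total of all proxy_buffers minus one buffer, or nginx will reject the configuration.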
The directory for temporary files is set based on the use_temp_path parameter. As part of the commercial subscription, the shared memory zone also stores extended cache information; thus, it is required to specify a larger zone size for the same number of keys.
For example, a one-megabyte zone can store about 4 thousand keys. A minute after the start, the special “cache loader” process is activated.
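As a sketch, a cache zone might be declared as follows (the path, zone name, and sizes are illustrative). Using the figure above, a 10 MB keys_zone would hold roughly 40 thousand keys when the extended cache information is stored:

```nginx
# Hypothetical cache definition; all values are examples only.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                 max_size=1g inactive=60m use_temp_path=off;
```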
I would prefer serving stale content until the cache is updated.
What you're looking for is called stale-while-revalidate (RFC 5861), and in nginx it's implemented by the proxy_cache_background_update directive, used together with proxy_cache_use_stale updating.
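A minimal sketch of that setup (the cache name and upstream are illustrative and assume a proxy_cache_path zone has been defined elsewhere):

```nginx
location / {
    proxy_pass http://backend;   # hypothetical upstream
    proxy_cache my_cache;        # hypothetical zone name

    # Serve a stale entry while a background subrequest refreshes it.
    proxy_cache_use_stale updating;
    proxy_cache_background_update on;

    # Optionally collapse concurrent refreshes of the same item.
    proxy_cache_lock on;
}
```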
The following fields can be ignored: “X-Accel-Redirect”, “X-Accel-Expires”, “X-Accel-Limit-Rate” (1.1.6), “X-Accel-Buffering” (1.1.6), “X-Accel-Charset” (1.1.6), “Expires”, “Cache-Control”, “Set-Cookie” (0.8.44), and “Vary” (1.7.7).
If not disabled, processing of these header fields takes effect as usual. The speed of reading the response from the proxied server can be limited with the proxy_limit_rate directive; the limit is specified in bytes per second and is set per request, so if nginx simultaneously opens two connections to the proxied server, the overall rate will be twice the specified limit.
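A sketch combining header ignoring with upstream rate limiting (the location, upstream, and values are illustrative):

```nginx
location /downloads/ {
    proxy_pass http://backend;  # hypothetical upstream

    # Ignore upstream caching headers when making caching decisions.
    proxy_ignore_headers Cache-Control Expires Set-Cookie;

    # Read from the upstream at most 100 KB/s per request; two upstream
    # connections could therefore consume up to 200 KB/s in total.
    proxy_limit_rate 102400;
}
```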
A cached response is first written to a temporary file, and then the file is renamed.