
aiohttp: Reimplement ChunkedClientResponse to correctly handle chunked responses#1094

Open
rspeed wants to merge 1 commit into micropython:master from rspeed:aiohttp-chunkedresponse

Conversation


@rspeed rspeed commented Mar 10, 2026

A chunk-encoded response can (and often will) require multiple reads from the response stream, each of a length determined by a value encoded in the chunk's first line. aiohttp.ChunkedClientResponse currently treats the first chunk's size as the size of the full data and assumes there is only a single chunk, even though chunked encoding is specifically intended for situations where the full size isn't known when the transfer starts.

This PR reimplements aiohttp.ChunkedClientResponse based on the HTTP/1.1 specification. Chunked responses with more than one chunk will now work correctly. As a side-effect it also fixes #1093.
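To illustrate the framing the description refers to, here is a minimal sketch of HTTP/1.1 chunked transfer decoding (RFC 9112 §7.1). This is not the PR's implementation; `decode_chunked` is a hypothetical helper, and `io.BytesIO` stands in for the socket stream.

```python
import io

def decode_chunked(stream):
    """Read chunks until the zero-length terminator and return the full body."""
    body = b""
    while True:
        # Each chunk starts with its size in hex (optionally followed by
        # extensions after ';'), terminated by CRLF.
        size_line = stream.readline()
        size = int(size_line.split(b";")[0], 16)
        if size == 0:
            # A zero-size chunk marks the end of the body.
            stream.readline()  # consume the final CRLF (trailers ignored here)
            return body
        body += stream.read(size)
        stream.read(2)  # consume the CRLF that terminates each chunk

raw = b"7\r\nMozilla\r\n9\r\nDeveloper\r\n7\r\nNetwork\r\n0\r\n\r\n"
decoded = decode_chunked(io.BytesIO(raw))
# decoded == b"MozillaDeveloperNetwork"
```

The key point is that the loop keeps reading size-prefixed chunks until it sees the zero-size terminator, rather than treating the first size line as the length of the whole body.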

rspeed force-pushed the aiohttp-chunkedresponse branch from 701eeb7 to b123d24 on March 10, 2026 at 18:26
Correctly combines multiple chunks into a single response body.

Signed-off-by: Rob Speed <speed.rob@gmail.com>
rspeed force-pushed the aiohttp-chunkedresponse branch from b123d24 to abcc45d on March 10, 2026 at 18:28
@dpgeorge dpgeorge (Member) commented:

aiohttp.ChunkedClientResponse currently treats the first chunk's size as the size of the full data (chunked encoding is specifically intended for situations where the full size isn't known when the transfer starts) and there is only a single chunk.

I don't think that's correct. The current code does support multiple chunks, using the self.chunk_size variable to keep track of how much remains in the current chunk. When that reaches 0, it reads the next chunk header and starts the process again.

This allows you to read a chunked response in separate blocks whose sizes differ from the chunk sizes on the wire. E.g. the incoming data could be chunked every 256 bytes, while the application reads in 16-byte blocks:

resp = await session.get(...)
while True:
    # Read at most 16 bytes, regardless of the chunk boundaries on the wire.
    data = await resp.read(16)
    if not data:
        # end of stream
        break
    process_data(data)

IMO that's a useful extension to standard aiohttp because most microcontroller applications can't read in megabytes at once.

I think the only issue to solve here is handling -1 to indicate that you want to read everything.
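A read(-1) path could be layered on top of the existing block-based read by draining the stream in fixed-size blocks until it is exhausted. The following is only a sketch of that idea, not the library's code: FakeResponse is a hypothetical stand-in for aiohttp.ChunkedClientResponse.

```python
import asyncio
import io

class FakeResponse:
    """Hypothetical stand-in for a response with a block-based read()."""

    def __init__(self, payload):
        self._buf = io.BytesIO(payload)

    async def read(self, sz=-1):
        if sz == -1:
            # Proposed behavior: -1 means read the remainder of the stream,
            # implemented by draining it in 256-byte blocks.
            parts = []
            while True:
                data = await self.read(256)
                if not data:
                    break
                parts.append(data)
            return b"".join(parts)
        return self._buf.read(sz)

body = asyncio.run(FakeResponse(b"x" * 1000).read(-1))
# body == b"x" * 1000
```

This keeps peak memory bounded by the block size during each iteration of the loop, which matters on microcontrollers, while still letting callers who can afford it collect the whole body at once.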



Development

Successfully merging this pull request may close these issues.

Chunked responses in aiohttp result in an attempt to allocate a 4 gigabyte buffer… and maybe it doesn't actually work anyway?

2 participants