Is there a reason the malicious part of the payload has to be pixels? You could have a 100x100px image stuffed with thousands of 2GB iTXt chunks, no? That would bypass naive header checks that only reject based on canvas size.
It may also work with the article's delivery trick: a 100x100 PNG with lots of 2GB-of-nothing iTXt chunks could itself be gzipped and served with `Content-Encoding: gzip`, so it would pass both the "is a valid PNG" and "isn't a pixel-huge image" checks while still requiring full decompression to view.
Firefox seems to handle this correctly: it decodes the first part of the stream and displays the image, but stops decompressing once it has read the full image data.
Chrome and Safari both crash after exhausting OS memory on the task (Safari crashes sooner and less severely because it enforces a per-page memory limit).