Compressing Images with Neural Networks
171 points by skandium 1 year ago | 69 comments
- StiffFreeze9 1 year ago How badly will its lossy-ness change critical things? In 2013, there were Xerox copiers with aggressive compression that changed numbers: https://www.theregister.com/2013/08/06/xerox_copier_flaw_mea...
- bluedino 1 year ago If I zoom all the way with my iPhone, the camera-assisting intelligence will mess up numbers too
- qrian 1 year ago The mentioned Xerox copier incident was not an OCR failure, but the copier actively changed the numbers in the original image due to its image compression algorithm.
- barfbagginus 1 year ago Here's some of the context: www.dkriesel.com/blog/2013/0810_xerox_investigating_latest_mangling_test_findings
Learn More: https://www.dkriesel.com/start?do=search&id=en%3Aperson&q=Xe...
Brief: Xerox machines used template matching to recycle the scanned images of individual digits that recur in the document. In 2013, Kriesel discovered this procedure was faulty.
Rationale: This method can create smaller PDFs, advantageous for customers that scan and archive numerical documents.
Prior art: https://link.springer.com/chapter/10.1007/3-540-19036-8_22
Tech Problem: Xerox's template matching procedure was not reliable, sometimes "papering over" a digit with the wrong digit!
PR Problem: Xerox press releases initially claimed this issue did not happen in the factory default mode. Kriesel demonstrated this was not true, by replicating the issue in all of the factory default compression modes including the "normal" mode. He gave a 2015 FrOSCon talk, "Lies, damned lies and scans".
Interesting work!
- lifthrasiir 1 year ago Any lossy compressor changes the original image for better compression, at the expense of perfect accuracy.
- thomastjeffery 1 year ago Lossy compression has the same problem it has always had: lossy metadata.
The contextual information surrounding intentional data loss needs to be preserved. Without that context, we become ignorant of the missing data. Worst case, you get replaced numbers. Average case, you get lossy->lossy transcodes, which is why we end up with degraded content.
There are only two places to put that contextual information: metadata and watermarks. Metadata can be written to a file, but there is no guarantee it will be copied with that data. Watermarks fundamentally degrade the content once, and may not be preserved in derivative works.
I wish that the generative model explosion would result in a better culture of metadata preservation. Unfortunately, it looks like the focus is on watermarks instead.
- _kb 1 year ago The suitable lossy-ness (of any compression method) is entirely dependent on context. There is no one-size-fits-all approach for all use cases.
One key item with emerging 'AI compression' techniques is that the information loss is not deterministic, which somewhat complicates assessing suitability.
- fl7305 1 year ago > the information loss is not deterministic
It is technically possible to make it deterministic.
The main reason you don't get deterministic outputs today is that CUDA/GPU optimizations make the calculations run much faster if you let them be non-deterministic.
The internal GPU scheduler will then process things in the order it thinks is fastest.
Since floating-point addition is not associative, you can get different results for (a + (b + c)) and ((a + b) + c).
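A minimal Python illustration of that non-associativity (independent of any particular codec or GPU):

    # Floating-point addition is not associative, so the order in which a GPU
    # scheduler happens to reduce partial sums can change the low bits of the result.
    a, b, c = 0.1, 0.2, 0.3
    print((a + b) + c)                 # 0.6000000000000001
    print(a + (b + c))                 # 0.6
    print((a + b) + c == a + (b + c))  # False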
- _kb 1 year ago The challenge goes beyond rounding errors.
Many core codecs are pretty good at adhering to reference implementations, but are still open to similar issues, so they may not be bit-exact.
With a DCT or wavelet transform, quantisation, chroma subsampling, entropy coding, motion prediction, and the suite of other techniques that go into modern media squishing, it’s possible to mostly reason about what type of error will come out the other end of the system for a yet-to-be-seen input.
When that system is replaced by a non-linear box of mystery, this ability is lost.
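As a toy illustration of why the classical pipeline is analyzable, here is a sketch using SciPy's DCT with a made-up quantizer step; because the transform is orthonormal, the pixel-domain distortion follows directly from the coefficient-domain quantization error:

    import numpy as np
    from scipy.fft import dctn, idctn

    # One 8x8 block, JPEG-style (synthetic data purely for illustration).
    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, size=(8, 8)).astype(float)

    coeffs = dctn(block, norm="ortho")        # forward 2-D DCT
    step = 16.0                               # hypothetical uniform quantizer step
    dequant = np.round(coeffs / step) * step  # quantize + dequantize
    recon = idctn(dequant, norm="ortho")      # back to pixel space

    # The DCT with norm="ortho" is orthonormal, so by Parseval the total squared
    # pixel error equals the total squared coefficient error, and each coefficient
    # error is at most step/2: the distortion is bounded and analyzable up front.
    print(np.allclose(np.sum((recon - block) ** 2), np.sum((dequant - coeffs) ** 2)))  # True

With a learned, non-linear decoder there is no comparable closed-form bound, which is the point being made above.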
- begueradj 1 year ago That was interesting (info in your link)
- lifthrasiir 1 year ago This JBIG2 "myth" is too widespread. It is true that Xerox's algorithm mangled some numbers in its JBIG2 output, but it is not an inherent flaw of JBIG2 to begin with, and Xerox's encoder misbehaved almost exclusively at lower DPIs; 300dpi or more was barely affected. Other artifacts at lower resolution can exhibit similar mangling as well (specifics would of course vary), and no similar incident has been repeated since. So I don't feel it is even a worthy concern at this point.
- thrdbndndn 1 year ago 1. No one, at least not OP, ever said it's an inherent flaw of JBIG2. The fact that it's an implementation error on Xerox's end is a good technical detail to know, but it is irrelevant to the topic.
2. "Lower DPI" is extremely common if your definition for that is 300dpi. At my company, all the text documents are scanned at 200dpi by default, and 150dpi or even lower is perfectly readable if you don't use ridiculous compression ratios.
> Other artifacts at lower resolution can exhibit similar mangling as well (specifics would of course vary)
The majority of traditional compression methods would make text unreadable when compression is too high or the source material is too low-resolution. They don't substitute one number for another in an "unambiguous" way (i.e. clearly showing a wrong number instead of just a blurry blob that could be either).
The "specifics" here are exactly what this whole topic is focused on, so you can't really gloss over them.
- lifthrasiir 1 year ago > 1. No one, at least not OP, ever said it's an inherent flaw of JBIG2. The fact that it's an implementation error on Xerox's end is a good technical detail to know, but it is irrelevant to the topic.
It is relevant only if you assume that lossy compression has no way to control or even know of such critical changes. In reality most lossy compression algorithms use rate-distortion optimization, which is only possible when you have some idea about "distortion" in the first place. Given that the error rarely occurred at higher DPIs, its cause should have been either a miscalculation of distortion or a misconfiguration of the distortion threshold for patching.
In any case, a correct implementation should be able to do the correct thing. It would have been much more problematic if similar cases had been repeated, since it would mean that it is much harder to write a correct implementation than expected, but that didn't happen.
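For readers unfamiliar with rate-distortion optimization, here is a minimal sketch of the selection step (the candidate modes, bit costs, distortion values, and lambda are all hypothetical): the encoder picks, per block, the option minimizing D + lambda*R, so a glyph-substitution patch should only win when its measured distortion is genuinely low, which is why a broken distortion estimate or threshold produces exactly the Xerox failure.

    # Minimal rate-distortion selection sketch. Candidate modes and their costs are
    # hypothetical; a real encoder measures distortion against the source block.
    candidates = [
        {"mode": "reuse matched glyph", "rate_bits": 12,  "distortion": 900.0},
        {"mode": "code residual",       "rate_bits": 80,  "distortion": 40.0},
        {"mode": "code block raw",      "rate_bits": 300, "distortion": 0.0},
    ]
    lam = 2.0  # lambda trades distortion against bits (also hypothetical)

    best = min(candidates, key=lambda c: c["distortion"] + lam * c["rate_bits"])
    print(best["mode"])  # "code residual": reusing the wrong glyph loses despite being cheap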
> The majority of traditional compression methods would make text unreadable when compression is too high or the source material is too low-resolution. They don't substitute one number for another in an "unambiguous" way (i.e. clearly showing a wrong number instead of just a blurry blob that could be either).
Traditional compression methods simply didn't have enough computational power to do so. The "blurry blob" is, by definition, something with only lower-frequency components, and you have only a small number of them, so they were easier to preserve even with limited resources. But if you have and recognize a similar enough pattern, it should be exploited for further compression. Motion compensation in video codecs was already doing a similar thing, and either filtering or intelligent quantization that preserves higher-frequency components would be able to do so too.
----
> 2. "Lower DPI" is extremely common if your definition for that is 300dpi. At my company, all the text document are scanned at 200dpi by default. And 150dpi or even lower is perfectly readable if you don't use ridiculous compression ratios.
I admit I have generalized too much, but the choice of scan resolution is highly specific to contents, font sizes and even writing systems. If you and your company can cope with lower DPIs, that's good for you, but I believe 300 dpi is indeed the safe minimum.
- Dwedit 1 year ago There was an earlier article (Sep 20, 2022) about using the Stable Diffusion VAE to perform image compression. It uses the VAE to map from pixel space to latent space, dithers the latents down to 256 colors, and then de-noises the result when it's time to decompress.
https://pub.towardsai.net/stable-diffusion-based-image-compr...
HN discussion: https://news.ycombinator.com/item?id=32907494
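For anyone who wants to poke at this, here is a rough sketch of the encode/quantize/decode roundtrip with the diffusers AutoencoderKL (the "stabilityai/sd-vae-ft-mse" weights are one common choice; file names are placeholders). The article's dithering to 256 colors and diffusion de-noising are replaced by plain uniform quantization of the latents, so this only shows the skeleton of the approach:

    import numpy as np
    import torch
    from PIL import Image
    from diffusers import AutoencoderKL

    vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

    # Load an image and map it to the [-1, 1] range the VAE expects.
    img = Image.open("input.png").convert("RGB").resize((512, 512))
    x = torch.from_numpy(np.array(img)).float().permute(2, 0, 1)[None] / 127.5 - 1.0

    with torch.no_grad():
        latents = vae.encode(x).latent_dist.mean  # 4 x 64 x 64 latent for a 512x512 input
        coded = torch.round(latents * 8) / 8      # crude stand-in for the dither/palette step
        recon = vae.decode(coded).sample          # back to pixel space

    out = ((recon[0].permute(1, 2, 0).clamp(-1, 1) + 1) * 127.5).byte().numpy()
    Image.fromarray(out).save("roundtrip.png")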
- dheera 1 year ago I've done a bunch of experiments on my own on the Stable Diffusion VAE.
Even when going down to 4-6 bits per latent-space pixel, the results are surprisingly good.
It's also interesting what happens if you ablate individual channels; ablating channel 0 results in faithful color but shitty edges, ablating channel 2 results in shitty color but good edges, etc.
The one thing it fails catastrophically on though is small text in images. The Stable Diffusion VAE is not designed to represent text faithfully. (It's possible to train a VAE that does slightly better at this, though.)
- rottc0dd 1 year ago Something similar by Fabrice Bellard:
- skandium 1 year ago If you look at the winners of the Hutter Prize, or especially the Large Text Compression Benchmark, almost every entry uses some kind of machine learning model for the adaptive probability model, and then either arithmetic coding or rANS to losslessly encode it.
This is intuitive, as the competition organisers say: compression is prediction.
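The link is easy to see numerically: an ideal entropy coder (arithmetic coding or rANS) spends about -log2(p) bits on a symbol the model assigned probability p, so a better predictor directly means a smaller output. Here is a toy adaptive order-0 byte model, nothing like the neural models in the actual entries, just to show the accounting:

    import math
    from collections import Counter

    def ideal_code_length_bits(data: bytes) -> float:
        """Bits an ideal arithmetic coder would need under an adaptive order-0
        model with Laplace (add-one) smoothing, updated symbol by symbol exactly
        as a decoder could reproduce it."""
        counts, total, bits = Counter(), 0, 0.0
        for symbol in data:
            p = (counts[symbol] + 1) / (total + 256)
            bits += -math.log2(p)
            counts[symbol] += 1
            total += 1
        return bits

    text = b"the better the prediction, the fewer the bits the coder has to spend"
    print(ideal_code_length_bits(text) / 8, "ideal bytes vs", len(text), "raw bytes")

A real entry swaps the counting model for a neural predictor and adds the actual arithmetic coder, but the bit accounting is the same.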
- p0w3n3d 1 year ago Some people are fans of Metallica or Taylor Swift. I think Fabrice Bellard should get the same attention!
- p0w3n3d 1 year ago And the same money for performance, of course
- mbtwl 1 year ago A first NN-based image compression standard is currently being developed by JPEG. More information can be found here: https://jpeg.org/jpegai/documentation.html
The best overview is probably the “JPEG AI Overview Slides”.
- jfdi 1 year ago Anyone know of open models useful (and good quality) for going the other way? I.e., the input is an 800x600 JPG and the output is a 4K version.
- davidbarker 1 year ago Magnific.ai (https://magnific.ai) is a paid tool that works well, but it is expensive.
However, this weekend someone released an open-source version which has a similar output. (https://replicate.com/philipp1337x/clarity-upscaler)
I'd recommend trying it. It takes a few tries to get the correct input parameters, and I've noticed anything approaching 4× scale tends to add unwanted hallucinations.
For example, I had a picture of a bear I made with Midjourney. At a scale of 2×, it looked great. At a scale of 4×, it adds bear faces into the fur. It also tends to turn human faces into completely different people if they start too small.
When it works, though, it really works. The detail it adds can be incredibly realistic.
Example bear images:
1. The original from Midjourney: https://i.imgur.com/HNlofCw.jpeg
2. Upscaled 2×: https://i.imgur.com/wvcG6j3.jpeg
3. Upscaled 4×: https://i.imgur.com/Et9Gfgj.jpeg
----------
The same person also released a lower-level version with more parameters to tinker with. (https://replicate.com/philipp1337x/multidiffusion-upscaler)
- aspyct 1 year ago That magnific.ai thingy takes a lot of liberties with the images and denatures them.
Their example with the cake is the most obvious. To me, the original image shows a delicious cake, and the modified one shows a cake that I would rather not eat...
- hug 1 year ago Every single one of their before & after photos looks worse in the after.
The cartoons & illustrations lose all of their gradations in feeling & tone, with every outline turned into a harsh edge. The landscapes lose any sense of lushness and atmosphere, instead taking on a high-clarity HDR look. Faces have blemishes inserted that the original actor never had. Fruit is replaced with wax imitations.
As an artist, I would never run any of my art through anything like this.
- quaintdev 1 year ago Here's a free and open-source alternative that works pretty well
- jasonjmcghee 1 year ago Both of these links to Replicate 404 for me
- davidbarker 1 year ago Ah, the user changed their username.
https://replicate.com/philz1337x/clarity-upscaler
https://replicate.com/philz1337x/multidiffusion-upscaler
- godelski 1 year ago Look for super-resolution. These models will typically come as a GAN, a normalizing flow (or score-based model, NODE), or more recently diffusion (or SNODE), or some combination! The one you want will depend on your computational resources, how lossy you are willing to be, and your image domain (if you're unwilling to tune). Real-time (>60fps) is typically going to be a GAN or a flow.
Make sure to test the models before you deploy. Nothing will be lossless when doing super-resolution, but flows can get you lossless compression.
- sitkack 1 year ago Or else you get Ryan Gosling: https://news.ycombinator.com/item?id=24196650
- hansvm 1 year ago I haven't explored the current SOTA recently, but super-resolution has been pretty good for a lot of tasks for a few years at least. Probably just start with Hugging Face [0] and try a few out, especially diffusion-based models.
[0] https://huggingface.co/docs/diffusers/api/pipelines/stable_d...
- codercowmoo 1 year ago Current open-source SOTA is, I believe, SUPIR (example: https://replicate.com/p/okgiybdbnlcpu23suvqq6lufze), but it needs a lot of VRAM. You can run it through Replicate, or here's the repo: https://github.com/Fanghua-Yu/SUPIR
- lsb 1 year ago You’re looking for what's called upscaling, like with Stable Diffusion: https://huggingface.co/stabilityai/stable-diffusion-x4-upsca...
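A rough sketch of driving that pipeline through diffusers (file names and the prompt are placeholders; as noted elsewhere in the thread, large upscale factors can hallucinate detail):

    import torch
    from PIL import Image
    from diffusers import StableDiffusionUpscalePipeline

    # The x4 upscaler linked above; needs a CUDA GPU with a decent amount of VRAM.
    pipe = StableDiffusionUpscalePipeline.from_pretrained(
        "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
    ).to("cuda")

    low_res = Image.open("photo_800x600.jpg").convert("RGB")
    # The prompt steers what kind of detail gets invented where the input is ambiguous.
    upscaled = pipe(prompt="a sharp, detailed photo", image=low_res).images[0]
    upscaled.save("photo_4x.png")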
- cuuupid 1 year ago There are a bunch of great upscaler models, although they tend to hallucinate a bit. I personally use magic-image-refiner:
- physPop 1 year ago This is called super-resolution (SR). 2x SR is pretty safe and easy (every pixel in becomes 2x2 out; in your example, 800x600->1600x1200). Higher scale factors are a lot harder and prone to hallucination, weird texturing, etc.
- jfdi 1 year ago thank you! will enjoy reviewing each of these
- calebm 1 year ago All learning is compression
- esafak 1 year ago It is not going to take off unless it is significantly better and has browser support. WebP took off thanks to Chrome, while JPEG2000 floundered. Failing native browser support, maybe the codec could be shipped via WASM or something?
The interesting diagram to me is the last one, for computational cost, which shows the 10x penalty of the ML-based codecs.
- geor9e 1 year ago The thing about ML models is that the penalty is a function of parameters and precision. It sounds like the researchers cranked them to the max to try to get the very best compression. Maybe later they will take that same model, flatten layers, and quantize the weights to get it running 100x faster and see how well it still compresses. I feel like neural networks have a lot of potential in compression. Their whole job is finding patterns.
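As a small illustration of the kind of knob being described, here is a hedged sketch using PyTorch's dynamic int8 quantization on a made-up decoder-like module (this is not the paper's model, and the speedup is nowhere near 100x):

    import torch
    import torch.nn as nn

    # Hypothetical stand-in for a learned codec's decoder; dynamic int8 quantization
    # of the Linear layers shrinks the weights and speeds up CPU inference.
    decoder = nn.Sequential(nn.Linear(256, 1024), nn.ReLU(), nn.Linear(1024, 768))
    quantized = torch.ao.quantization.quantize_dynamic(decoder, {nn.Linear}, dtype=torch.qint8)

    x = torch.randn(1, 256)
    print(decoder(x).shape, quantized(x).shape)  # same interface, smaller/faster weights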
- dylan604 1 year ago Did JPEG2000 really flounder? If your concept of it is as a consumer-facing direct replacement for JPEG, then I could see it being unsuccessful in that respect. However, JPEG2000 has found its place on the professional side of things.
- esafak 1 year ago Yes, I do mean broad rather than niche adoption. I myself used J2K to archive film scans.
One problem is that without broad adoption, support even in niche cases is precarious; the ecosystem is smaller. That makes the codec not safe for archiving, only for distribution.
The strongest use case I see for this is streaming video, where the demand for compression is highest.
- userbinator 1 year ago > That makes the codec not safe for archiving, only for distribution.
Could you explain what you mean by "not safe for archiving"? The standard is published and there are multiple implementations, some of which are open-source. There is no danger of it being a proprietary format with no publicly available specification.
- sitkack 1 year ago For archiving, I'd recommend having a wasm decompressor along with some reference output. Could also ship an image viewer as an html file with all the code embedded.
- actionfromafar 1 year ago Huh, one more point for considering J2K for film scan archiving.
- dylan604 1 year ago But that's like saying it's difficult to drive your Formula 1 car to work every day. It's not meant for that, so it's not the car's fault. It's a niche thing built to satisfy the requirements of a niche need. I would suggest this is a "you're holding it wrong" type of situation that isn't laughable.
- dinkumthinkum 1 year ago I think it is an interesting discussion and learning experience (no pun intended). I think this is more of a stop on a research project than a proposal; I could be wrong.
- ufocia 1 year ago Better or cheaper, e.g. AV1?
- holoduke 1 year ago How much VRAM is needed? And computing power? To open a webpage you'll soon need 24 GB and 2 seconds of 1000 watts of energy to decompress images. Bandwidth is reduced from 2 MB to only 20 kB.
- guappa 1 year ago > Bandwidth is reduced from 2 MB to only 20 kB.
Plus the entire model, which comes with incorrect cache headers and must be redownloaded all the time.
- amelius 1 year ago How do we know we don't get hands with 16 fingers?
- ogurechny 1 year ago Valid point. Conventional codecs draw things on screen that are not in the original too, but we are used to low-quality images and videos, and have learned to ignore the block edges and smudges unconsciously. NN models “recover” much more complex and plausible-looking features. It is possible that some future general-purpose image compressor would do the same thing to small numbers that lossy JBIG2 did.
- ufocia 1 year ago How do we know whether it's an image with 16 fingers or it just looks like 16 fingers to us?
I looked at the bear example above, and I could see it either being that the AI thought there was an animal face embedded in the fur, or that we just see a face in the fur. We see all kinds of faces on toast even though neither the bread slicers nor the toasters intend to create them.