Hacker News

SMLL: Using 200MB of Neural Network to Save 400 Bytes

16 points by fcjr ago | 3 comments

f_devd |next [-]

Having worked on compression algos, any NN is just way too slow for (de-)compression. A potential use for them is coarse prior estimation in something like rANS, but even then the overhead cost would need to be carefully weighed against something like Markov chains, since the relative cost is just so large.
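(To make the comparison concrete: the cheap alternative mentioned here, an adaptive Markov-chain prior feeding an entropy coder like rANS, can be sketched in a few lines. This is an illustrative example with hypothetical names, not anything from the post.)

```python
from collections import defaultdict

class MarkovPrior:
    """Adaptive order-1 Markov model: P(next symbol | previous symbol).

    Updates running counts as symbols are seen; the resulting
    probabilities could drive an entropy coder such as rANS. This is
    the kind of near-zero-overhead estimator being contrasted with a
    200MB neural network. (Hypothetical sketch, not SMLL's code.)
    """

    def __init__(self, alphabet_size=256):
        self.alphabet_size = alphabet_size
        # Laplace smoothing: every transition starts with count 1,
        # so no symbol ever gets probability zero.
        self.counts = defaultdict(lambda: [1] * alphabet_size)
        self.totals = defaultdict(lambda: alphabet_size)

    def prob(self, prev, sym):
        """Estimated probability of `sym` following `prev`."""
        return self.counts[prev][sym] / self.totals[prev]

    def update(self, prev, sym):
        """Record one observed transition prev -> sym."""
        self.counts[prev][sym] += 1
        self.totals[prev] += 1
```

The model costs a table lookup and two integer increments per symbol, which is why its overhead is so easy to justify relative to a neural-network forward pass.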

msephton |previous [-]

No mention of decompression speed and validation, or did I miss something?

savalione |root |parent [-]

It's in the post: Benchmarks -> Speed

tl;dr: SMLL is approximately 10,000x slower than Gzip