Friday, January 23, 2015

LZHAM v1.0 is being tested on iOS/OSX/Linux/Win

Currently testing it on a few machines using random codec settings with ~3.5 million files. We also just switched over our title's bundle decompression step from LZMA to LZHAM, so the decompressor will be tested on many iOS devices too.

I've also tested the compression portion of the code on iOS, but I won't be able to get much coverage there before releasing it. Honestly, the decompressor is much more interesting for mobile devices (that's really the whole point of LZHAM).

I'll be porting and testing LZHAM on Android within a week or so - should be easy by this point.


  1. It's great to see the progress you are making with LZHAM. I can't wait to find an excuse to use it in upcoming projects.

If I remember correctly, LZMA can use ungodly amounts of memory for decompression (depending on the compression settings). How does LZHAM compare in this regard?

    1. Thanks Shorty - I'm going to keep working on LZHAM until I can achieve more traction with the codec.

      LZHAM's compressor is a heavyweight beast at the moment. It likes a lot of RAM. LZHAM's research codec roots show through the most in its compressor.

      Its decompressor (which is more refined than the compressor at the moment) uses approximately the same amount of RAM as LZMA. You can massively reduce the amount of RAM used by either LZMA or LZHAM by decreasing the dictionary size. So if you set the dictionary size to 32KB or 64KB, you'll only need that much RAM plus whatever the codec needs to hold its statistical/decompression tables in order to decompress.

      It's totally possible to allow smaller dictionary sizes in LZHAM, for example 4K, 8K, etc., but I just haven't worked on it (it should be very easy to do).
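
      For a sense of scale, here's a quick sketch of the arithmetic. LZMA-style APIs specify the dictionary as a log2 exponent, and once the dictionary is large it dominates the decompressor's footprint. The fixed table-overhead constant below is an illustrative assumption, not a measured LZHAM number:

```python
# Rough sketch: dictionary size drives decompressor RAM.
# The dictionary is given as a log2 exponent (LZMA-style);
# the table overhead figure is an illustrative assumption.

def dict_size_bytes(dict_size_log2):
    """Dictionary size in bytes from its log2 exponent."""
    return 1 << dict_size_log2

def approx_decomp_ram(dict_size_log2, table_overhead=256 * 1024):
    """Very rough decompressor footprint: the dictionary buffer
    plus a fixed allowance for statistical/decompression tables."""
    return dict_size_bytes(dict_size_log2) + table_overhead

# A 64MB dictionary dominates memory use; a 32KB one barely registers.
for log2 in (15, 16, 26):  # 32KB, 64KB, 64MB
    print(log2, dict_size_bytes(log2), approx_decomp_ram(log2))
```

      With a 32KB dictionary the tables are the bulk of the cost; with a 64MB one the dictionary buffer is essentially the whole footprint.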

      I'm going to measure and optimize memory consumption sometime after v1.0.