Jean-loup is the kind of person I love to see us interview here. He's important in the sense that work he's done (positively) affects almost every Linux or Unix user, but the chance of Jean-loup ever getting any "mainstream" media attention is zero. Or possibly less. Without people like Jean-loup there would be no Open Source movement, and I consider the chance to present him as a Slashdot interview guest a *huge* honor. The readers who asked the excellent questions, and the moderators who helped select them, also deserve major kudos. So thanks to all of you for an excellent Q&A session!
1) bzip2 Support
by Aaron M. Renn
When is gzip going to provide (transparent) support for bzip2 files and the Burrows-Wheeler algorithm?
Will BW ever be an algorithm option within the gzip file format itself?
I have worked very closely with Julian Seward, the author of bzip and bzip2. The goal was to integrate a Burrows-Wheeler algorithm inside zlib 2.0 (upon which gzip 2.0 is based). One of the requirements was to avoid the kind of arithmetic coding used in bzip, because of both patent and decoding-speed concerns, so Julian wrote the Huffman coder now used in bzip2. Another requirement was to put the code in library form, and Julian did that too.
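(For readers who haven't met the Burrows-Wheeler transform, here is a minimal, deliberately naive Python sketch of the core idea. It is purely illustrative: bzip2's real implementation works block by block and uses sophisticated suffix sorting rather than materializing every rotation.)

    # Toy Burrows-Wheeler transform: sort all rotations of the input and
    # keep the last column, which groups similar characters together so a
    # later stage (move-to-front + Huffman in bzip2) compresses them well.
    def bwt(s):
        rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
        return "".join(rot[-1] for rot in rotations), rotations.index(s)

    # Invert by repeatedly prepending the last column and re-sorting; after
    # len(s) rounds the table holds all sorted rotations of the original.
    def inverse_bwt(last_column, index):
        table = [""] * len(last_column)
        for _ in range(len(last_column)):
            table = sorted(c + row for c, row in zip(last_column, table))
        return table[index]

    transformed, idx = bwt("banana")
    assert transformed == "nnbaaa" and idx == 3
    assert inverse_bwt(transformed, idx) == "banana"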
Unfortunately, Julian decided to release bzip2 independently instead of staying within the gzip 2.0 project. It was mainly my fault, since I couldn't spend enough time on the other parts of the project, and the project was not advancing fast enough. Since Julian left, the project has progressed even more slowly, and new blood is obviously necessary, because other responsibilities no longer leave me enough time for gzip. If you're an expert in data compression, e-mail me to convince me that you are the most qualified person to turn the zlib/gzip 2.0 project into an overwhelming success :-)
2) The Data Compression Book
I am a happy owner of The Data Compression Book (2nd Ed). With the increasing availability of compression routines within libraries (Java's GZIP streams spring to mind), does this make your book a little unnecessary?
Should software authors continue to write their own compression routines, or simply trust the versions available to them in library form?
I can see some definite advantages to library code, i.e. the ability to upgrade routines, and having standardized algorithms that can be read by any program that uses the library.
The compression routines in The Data Compression Book were written mainly for clarity, not for efficiency. The source code is present to help understand how the compression algorithms work. It is not designed to be used as is in other software packages, although it does work if efficiency is not a concern. Consider the book as teaching material, not as a data compression library distributed in printed form.
This doesn't mean that the book is unnecessary. Good data compression libraries don't appear magically; their authors had to learn compression techniques one day. If the book helps one person to get started in the data compression area and this person later writes a great compression library, the book will have been useful.
Judging by the success of my zlib data compression library, I think the vast majority of software authors prefer using an existing library rather than reinventing the wheel. This is how the open-source model works: building upon the work of others is far more efficient than rewriting everything.
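(As a concrete illustration of the library approach: Python's standard zlib module is a thin binding to this very library, so compressing data takes a couple of calls rather than a reimplementation.)

    # Using the zlib library (here through Python's standard binding)
    # instead of writing a compressor from scratch.
    import zlib

    original = b"the quick brown fox jumps over the lazy dog " * 100
    compressed = zlib.compress(original, 9)      # 9 = best compression
    assert zlib.decompress(compressed) == original
    print(len(original), "->", len(compressed), "bytes")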
3) Compression patents
The compression world has many patents, notably for Lempel-Ziv compression as used in GIF. What is your view on companies patenting non-obvious algorithms for such processes as data compression?
The worst problem is companies patenting obvious algorithms. There are far more patents on obvious ideas than patents on really innovative ideas. In the data compression area, even something as basic as run-length encoding (replace "aaaaa" with a special code indicating repeat "a" 5 times) was patented at a time when the technique had already been well known and widely used for many years.
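(To show just how trivial the patented technique above is, here is run-length encoding in its entirety, sketched in Python:)

    # Run-length encoding: collapse each run of identical characters into a
    # (character, count) pair; decoding expands the pairs back out.
    from itertools import groupby

    def rle_encode(data):
        return [(char, len(list(run))) for char, run in groupby(data)]

    def rle_decode(pairs):
        return "".join(char * count for char, count in pairs)

    encoded = rle_encode("aaaaabbbc")
    assert encoded == [("a", 5), ("b", 3), ("c", 1)]
    assert rle_decode(encoded) == "aaaaabbbc"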
It is distressing to see the U.S. patent office granting such patents, in contradiction of the law, which requires that an idea be both novel and non-obvious to be patentable. Philip Karn has made a good analysis of the problem.
Patents on non-obvious algorithms are a different matter. One view is that algorithms should not be patentable at all, whether obvious or not. This used to be the case, until the US patent office started granting patents on methods that were nothing more than pure algorithms. I'm afraid that a switch back to the original situation is extremely unlikely.
Several reforms are necessary:
- The patent term should be significantly shortened, at least for algorithms. The patent system was designed to benefit society as a whole, ensuring that new ideas would eventually be made public after a limited period of time instead of being kept as trade secrets. But 20 years is incredibly long in the software area. Granting a monopoly for such a long time no longer benefits society.
- The non-obviousness requirement should be applied much more strictly. A little bit of common sense would avoid a lot of patents on trivial ideas.
- Prior art should be checked more thoroughly. Even non-obvious ideas should not be patented if they have been in use for several years already.
4) A question about Mandrake...
by Mr. Penguin
As we all know, at first Mandrake was little more than a repackaged version of Red Hat. That's changed a bit with the newer versions. My question is this: to what degree will Mandrake continue to differ from Red Hat, and will there ever be a "developer" version (i.e. one that is geared toward those who are a bit more technically competent)?
That's changed more than a bit. Our distribution is now completely made by us. Believe me, doing everything ourselves represents a significant amount of work; few people understand how much work is involved in making an independent distribution. We have our own development teams producing things like our graphical installer DrakX, our disk partitioner DiskDrake, management of security levels in msec, hardware detection with Lothar, and so on. Our packages are more recent than those of Red Hat and have more functionality (such as supermount support in the kernel). Red Hat is now even copying packages made by MandrakeSoft (e.g. rpmlint). I hate having to speak like a salesman here, but it is really unfair to say that Mandrake just repackages Red Hat; this is simply not true anymore.
Have you looked at Linux-Mandrake 7.0? It does include a developer version. At install time, select the option "Custom" then "Development". You will get all necessary development tools. We, as developers, use our own distribution :-)
5) Why is Mandrake better than Red Hat?
I guess that you have at least a little something to say about this.
Is the 586 optimization enough to justify Mandrake's position? Are you especially proud of any of the architectural differences between the distributions (from what I have been told, the Apache-PHP layout is quite a bit different).
How do you feel about the steps that Red Hat has taken to change their distribution in reaction to yours?
Mandrake is far more than Red Hat plus 586 optimization. It is an independent distribution. (See the answer to A question about Mandrake above.) We have enhanced some packages (such as the kernel or Apache) to provide additional functionality for users.
It's clear that Mandrake pushes Red Hat to improve its own version, and nowadays Red Hat includes some development from MandrakeSoft. There is coopetition: Red Hat and MandrakeSoft both benefit from the same open-source community, but they compete for the customer. This coopetition is fully beneficial for Linux users, since we both need to constantly improve our versions. We just make sure that Mandrake stays ahead :-)
6) Gzip code in Winzip
I noticed that you allowed the people who make the Winzip product to incorporate code written for gzip. I think it's cool that you did that, because it would be horrible if Winzip couldn't handle the gzip format, but at the same time, what are your thoughts about allowing free software code to be included in closed-source products?
Just out of curiosity, (tell me it's none of my business if you want to and I'll be OK with that) did you receive a licensing fee from the company that makes Winzip for the code?
I started writing compression code simply because my 20 MB hard disk, the biggest size one could get at the time, was always full. I didn't write it for money. Even after I got a bigger hard disk, I continued writing compression code for fun. In particular, I was not interested in writing a Windows interface. This is why I allowed my code to be used in Winzip. I received exactly $0 for this.
The zlib license also allows it to be used in closed-source products. This was an absolute requirement for the success of the PNG image format, which relies on zlib for data compression. If we had used a GPL license, Netscape and Microsoft's Internet Explorer wouldn't support PNG, and the PNG format would be dead by now. I also received $0 for zlib, if you're curious...
Even though I allowed my code to be used in closed-source products, I am a strong supporter of the open-source model. That's also why I work for MandrakeSoft. The open-source model is gaining so much momentum that it will in the end dominate the software industry.
7) What about wavelets?
by Tom Womack
The Data Compression Book was an excellent reference when it came out, but there are some hot topics in compression that it doesn't cover - frequency-domain lossy audio techniques (MP3), video techniques (MPEG2 and especially MPEG4), wavelets (Sorenson video uses these, I believe, and JPEG2000 will), and the Burrows-Wheeler transform from bzip.
Do you have any plans for a new edition of the book, or good Web references for these techniques? BZip is covered well by a Digital research note, but documentation for MPEG2 seems only to exist as source code and I can't find anything concrete about using wavelets for compression. The data is all there on the comp.compression FAQ, but the excellent exposition of the book is sorely lacking.
These are all very worthy topics, and Mark Nelson and I would like to incorporate them into a new edition of the book someday. However, the decision to produce a new edition is made by the publisher, not by us.
Note also that these are all very big topics, and it would be quite easy to write an entire book on each one. I don't think they will fit well in a chapter or two. Covering JPEG in one chapter was difficult, and Mark Nelson has been criticized for not describing the specifics of the standard algorithm.
8) Compression software
It is a "truism" in the Free Software community that code should be released early and released often.
However, much of the software you've written has started gathering a few grey hairs. Gzip, for example, has been at 1.2.4 for many, many moons, and looks about ready to collect its gold watch.
Is compression software in a category that inherently plateaus quickly, so that significant further work simply isn't possible? Or is there some other reason, such as Real Life(tm) intruding and preventing any substantial development?
(I noticed, for example, a patch for >4GB files for gzip, which could have been rolled into the master sources to make a 1.2.5. This hasn't happened.)
I knew this question would come up when I accepted a Slashdot interview. But I had to face it :-(
In short, you are completely right. While working on gzip 2.0, I continued to maintain gzip 1.x, accumulating small patches and answering a lot of e-mail. But I was hoping to be able to release gzip 2.0 directly, without having to make an intermediate 1.x release. See my answer to the bzip2 support question above concerning the state of gzip 2.0 and the interference of Real Life. I'd be glad to hand over all my patches for 1.2.4 to the person who can help me get the gzip 2.0 project up to full speed.
9) Proprietary algorithms
by Tom Womack
The field of compression has been thronged with patents for a long time - but patents at least reveal the algorithm.
What do you think of the expansion of trade-secret algorithms (MP3 quantisation tables, Sorenson, RealAudio and RealVideo, Microsoft Streaming Media) where the format of the data stream is not documented anywhere?
The hardware specifications for some video cards were kept as trade secrets. As a result, the XFree86 project couldn't support these cards. Increasing pressure from users who didn't buy those cards because they couldn't be supported has led the manufacturers to release the hardware specifications, and those cards are now well supported.
Similarly, I think that pressure from the open-source community can become strong enough to force companies to open their formats. We're not completely there yet, but I believe that the open-source model will win in the end. Even a giant like Microsoft starts considering Linux as a real threat.
10) Go and Compression
When I think of a game like go or chess, I think that each player develops their own algorithm to beat their opponent. If you agree, what relationships or similarities do you see between your interest in Go and your interest in compression?
What a nice question!
Even though the rules of go are very simple, the complexity of go is astonishing. The best go programs can be beaten by a human beginner. The search space in go is so large that it is impossible to apply the techniques that are so successful in chess. Professional go players never evaluate all possible moves. They are able to compress an enormous amount of information into a relatively small number of concepts.
Where a human beginner would have to painfully examine many possibilities to realize that a certain group is doomed, and would most likely fail in the process, a go expert can immediately recognize certain shapes and can very quickly determine the status of a group. One gets stronger at the game by reaching higher levels of abstraction, which are in effect better compression ratios. A professional go player can elaborate concepts that an average player would have great difficulty understanding.
Current go programs are overwhelmed by the amount of information present in a game of go. They are unable to understand what is really going on. Since brute force techniques can't work in go, programs will only improve by compressing the available information down to a manageable level.