I've met resistance to using FLAC from those who believe that any compressed audio file format is subject to more severe corruption than a linear PCM (i.e., WAV) file when subjected to bit-read errors or "bit rot." While we know that a perfectly handled FLAC file is bit-for-bit identical to the source file when decoded, what do we know about the severity of problems that might occur if there are missing or corrupted bits in a block? In real life, such errors do occur. Is that not right?
It's said that such errors are inconsequential because they are limited to the single block in which the bit error(s) occurred and thus do not affect preceding or following blocks. Still, errors are errors, and if bit rot occurs, then the decoded file cannot match the source file. Just how badly can a block be affected by a single bit error or a short burst of errors?
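If the damage really is confined to one block, the worst case can at least be bounded in time. As a rough sketch (assuming libFLAC's default block size of 4096 samples and 44.1 kHz CD audio; other settings would change the numbers):

```python
# Rough upper bound on how much audio one corrupted FLAC block can
# affect, assuming the error stays confined to that block.
block_samples = 4096   # libFLAC default block size (assumption)
sample_rate = 44100    # CD audio sample rate

worst_case_ms = block_samples / sample_rate * 1000
print(round(worst_case_ms, 1))  # 92.9
```

So under those assumptions, a single corrupted block would disturb at most about a tenth of a second of audio, however garbled that stretch might sound.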
I doubt that it is possible to predict the consequences of bit errors in any quantified manner. But it seems like it would be good to minimize the consequences of bit rot. For that reason, a linear PCM or WAV implementation with all the other attributes of a normal FLAC file container seems preferable as long as disc space and file transfer times are not of concern. In particular, our application is for a large (100,000+ albums) library in which we want all of the attributes of a FLAC file with its Vorbis comments but the (apparently) better resistance to bit rot.
So my question is this: Why is it not possible for dBpoweramp to create a "pure" FLAC file wherein the actual payload is truly an uncompressed WAV file, even one containing digital zeros or some kinds of noise? By "pure," I mean a payload that is bit-for-bit identical to the WAV file that created it. Clearly the loss of a single bit, or even a short burst of bit-read errors, would only affect the sample(s) within which the errors occurred.
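The claim that errors in uncompressed PCM stay local is easy to demonstrate. A minimal sketch (plain Python, using 16-bit little-endian samples as an illustration, not any particular WAV file):

```python
import struct

# Four 16-bit little-endian PCM samples.
samples = [1000, -2000, 3000, -4000]
raw = bytearray(struct.pack("<4h", *samples))

# Flip a single bit in a byte belonging to the second sample,
# simulating bit rot in uncompressed PCM data.
raw[2] ^= 0x01

decoded = list(struct.unpack("<4h", bytes(raw)))

# Only the sample containing the flipped bit differs; its
# neighbours are untouched.
changed = [i for i, (a, b) in enumerate(zip(samples, decoded)) if a != b]
print(changed)  # [1]
```

With a predictively coded format, by contrast, a flipped bit can in principle garble the rest of that block's output, since later samples in the block are reconstructed from earlier ones.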
"d2b" aka Dennis...
P.S. Perhaps there would be decoding problems?