The reason I need control at the individual disc level is that I want to be able to process batches that contain a mix of audio CDs (which would be ripped with dBpoweramp) and data CD-ROMs and DVDs (disc types that need to be processed differently, using different software; in my case probably IsoBuster). Once Batch Ripper takes control of the ripping process, it will simply reject any non-audio CDs (unless I'm missing something and there's a way to launch IsoBuster from within Batch Ripper, but I don't think so?).
Some context (possibly too much, but here goes): I'm trying to build a workflow that we can use to make disc images of a national library's collection of optical carriers (CD-ROMs, audio CDs, DVDs).
For this I'd like to end up with a batch structure that is more or less like this:
Code:
|   metacarriers.csv
|
+---disc001
|       track01.wav
|       track02.wav
|       ::
|       track12.wav
|       tracks.md5
|
+---disc002
|       image01.iso
|       image01.iso.md5
|
\---disc003
    +---session01
    |       track01.wav
    |       track02.wav
    |       ::
    |       track12.wav
    |       tracks.md5
    |
    \---session02
            image3.iso
            image3.iso.md5
In this example the batch contains three carriers, with each carrier represented by a directory. Here disc001 is an audio CD, disc002 a CD-ROM, and disc003 an 'enhanced' audio CD with a data track in the second session. The file metacarriers.csv contains metadata that allows us to link the discs back to existing records in our library catalogue. For example:
Code:
carrierID,catalogID,filePath
1,121274306,./disc001/
2,121274306,./disc002/
3,236599380,./disc003/
This tells us that disc001 and disc002 both belong to the same catalogue record (e.g. this could be an audio CD and a CD-ROM that are both supplements to the same physical book). After a batch is completed, its contents are reformatted into archival packages, where each package contains the data for all carriers that are part of one catalogue record.
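To illustrate that reformatting step, here's a minimal sketch of how carriers could be grouped by catalogue record from metacarriers.csv. The function name and the inlined sample CSV are my own illustrations, not part of the actual workflow; in practice the CSV would be read from the batch directory.

```python
import csv
import io
from collections import defaultdict

# Hypothetical inline copy of metacarriers.csv, for demonstration only.
SAMPLE_CSV = """carrierID,catalogID,filePath
1,121274306,./disc001/
2,121274306,./disc002/
3,236599380,./disc003/
"""

def group_carriers_by_catalog(csv_file):
    """Group carrier directories by catalogue record, so each archival
    package can bundle all carriers belonging to one record."""
    packages = defaultdict(list)
    for row in csv.DictReader(csv_file):
        packages[row["catalogID"]].append(row["filePath"])
    return dict(packages)

packages = group_carriers_by_catalog(io.StringIO(SAMPLE_CSV))
print(packages)
# {'121274306': ['./disc001/', './disc002/'], '236599380': ['./disc003/']}
```

With this grouping, building the archival package for record 121274306 would mean collecting both the audio-CD rip in disc001 and the CD-ROM image in disc002.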
Of course we could process audio and data CDs using two separate workflows (each with its own dedicated machine + disc robot). However, in many cases that would mean that the information needed to generate one archival package (catalogue record) is spread out across multiple batches (audio batch + data batch). This would make the processing chain a lot more complex and error-prone. It would also complicate the processing of multi-session enhanced discs. For these reasons I'm really keen on avoiding this, if possible.
Does this make sense?