The current workflow is: (1) load the CD, (2) CD Ripper fingerprints the CD, (3) CD Ripper goes to the Internet and grabs metadata (artist, song title, etc.), (4) the user looks through the grabbed metadata and chooses/edits it, (5) the user maybe sends edits to FreeDB, (6) the user starts the ripping.
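To make the ordering concrete, here is a minimal sketch of that sequential flow. The function names (fingerprint_disc, fetch_metadata, rip_track, verify_and_tag) are placeholders rather than CD Ripper's actual API, and the sleeps just stand in for real work; the point is only that nothing is ripped until every earlier step has finished.

```python
import time

TRACKS = range(1, 13)                 # a hypothetical 12-track disc

def fingerprint_disc():               # step 2: compute the disc ID
    time.sleep(0.1); return "disc-id-0001"

def fetch_metadata(disc_id):          # step 3: Internet lookup (artist, titles, ...)
    time.sleep(0.5); return {n: f"Track {n:02d}" for n in TRACKS}

def rip_track(n):                     # step 6: the slow part, reading audio off the disc
    time.sleep(1.0); return f"raw audio {n}"

def verify_and_tag(n, audio, title):  # AccurateRip check + tags/filename
    time.sleep(0.1)

def sequential_rip():
    titles = fetch_metadata(fingerprint_disc())
    # ... steps 4-5: the user reviews/edits metadata, maybe submits to FreeDB ...
    for n in TRACKS:                  # ripping cannot start until all of the above is done
        verify_and_tag(n, rip_track(n), titles[n])

if __name__ == "__main__":
    sequential_rip()
```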
A more pipelined workflow would have CD Ripper start ripping at step 2.
Granted, the ripping would begin without the AccurateRip data in hand and without the files named correctly. But checking accuracy, renaming files/paths, and editing metadata in memory is much faster than ripping the audio itself, so this would speed up a mass ripping workflow considerably: the limiting factor in how quickly a batch of discs can be ripped is the time spent reading them. I don't see why all of the metadata is needed before ripping can start. While track 8 is moving from the drive into computer memory, CD Ripper could be checking the AccurateRip data and updating the metadata for track 2. Granted, CD Ripper might have to go back and re-read track 2 if there is an error, but that is no slower than verifying each track as it is read from the CD.
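Here is a rough sketch of what that overlap could look like, again with made-up stub functions rather than CD Ripper's real internals: one thread reads tracks off the disc as fast as the drive allows, a second thread does the fingerprint/metadata lookup, and a third verifies and tags ripped tracks from memory as they arrive. The drive never waits on the network or on tagging.

```python
import queue
import threading
import time

TRACKS = range(1, 13)

def rip_track(n):          # the slow step: pulling audio off the disc
    time.sleep(1.0); return f"raw audio {n}"

def fetch_metadata():      # fingerprint + Internet lookup, now overlapped with ripping
    time.sleep(0.5); return {n: f"Track {n:02d}" for n in TRACKS}

def verify_and_tag(n, audio, title):   # AccurateRip check + tags/filename, done from memory
    time.sleep(0.1)

def pipelined_rip():
    ripped = queue.Queue()            # tracks flow from the ripper to the finisher
    titles = {}
    titles_ready = threading.Event()

    def lookup():
        titles.update(fetch_metadata())
        titles_ready.set()

    def finisher():
        while True:
            item = ripped.get()
            if item is None:          # sentinel: the whole disc has been read
                break
            n, audio = item
            titles_ready.wait()       # metadata normally arrives before track 1 is even done
            verify_and_tag(n, audio, titles[n])

    workers = [threading.Thread(target=lookup), threading.Thread(target=finisher)]
    for w in workers:
        w.start()

    for n in TRACKS:                  # the drive starts immediately and is never left idle
        ripped.put((n, rip_track(n)))
    ripped.put(None)

    for w in workers:
        w.join()

if __name__ == "__main__":
    pipelined_rip()
```

With the stand-in timings above, the sequential version spends the lookup time plus all of the per-track verification on top of the raw rip time, while the pipelined version finishes essentially as soon as the last track comes off the disc.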