View Full Version : AccurateRip: Developments

11-24-2007, 05:46 AM
This Perl script, developed by Christopher Key, allows external access to AccurateRip, using the database directly:


It could be adapted for use in other programs.

Please note the license requirements for developing access to AccurateRip:
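The script itself is not reproduced in this archive. As a rough stand-in, here is a minimal Python sketch that builds a database lookup URL; the dBAR path layout below is taken from third-party descriptions of the service and should be treated as an assumption, not official documentation:

```python
def accuraterip_url(track_count: int, disc_id1: int, disc_id2: int, cddb_id: int) -> str:
    """Build an AccurateRip database lookup URL.

    The dBAR filename and the three-level directory derived from the low
    nibbles of disc_id1 follow third-party descriptions of the service;
    treat the exact layout as an assumption.
    """
    name = f"dBAR-{track_count:03d}-{disc_id1:08x}-{disc_id2:08x}-{cddb_id:08x}.bin"
    subdirs = f"{disc_id1 & 0xF:x}/{(disc_id1 >> 4) & 0xF:x}/{(disc_id1 >> 8) & 0xF:x}"
    return f"http://www.accuraterip.com/accuraterip/{subdirs}/{name}"

# Example with made-up disc IDs:
print(accuraterip_url(10, 0x0015DEB5, 0x00A1B2C3, 0x8709F30A))
```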


11-24-2007, 05:47 AM
The old submission email addresses for AccurateRip have been removed (they were receiving many tens of thousands of spam messages each month); update your programs to the latest EAC and dBpoweramp and use the email address shown (if using email submissions).

11-24-2007, 05:48 AM
Work has begun on replacing the AccurateRip submission system, to overcome the 32KB submission limit.

12-02-2007, 05:52 AM
As mentioned above, the AccurateRip posting system is being updated. It will be moved onto a new server (from Linux to Windows Server 2003), and the internal posting routine has been replaced with one from ASP.NET, which will have no upload size limit. I will try to test next week, before a full server switch-over (the test accuraterip.com will be left as-is, with submissions directed to another domain).

There will be a new accuraterip.dll with no limits on posting, plus a few other small modifications.

02-23-2009, 04:04 PM
AccurateRip 2

Work has fully begun on AccurateRip 2. Developments are required to make AccurateRip:

Immune to different CD pressings,
More workable in its design, given the huge amount of data submitted.

Different Pressings Immunity

Two databases will run side by side: the existing database is DB1 and the new one is DB2.

[DB2] In addition, two CRC32s should be generated:


So CRC1 covers, say, the first 5 frames and last 5 frames of the track, and CRC2 covers the whole track. These two CRCs could be submitted to a second database, where CRC1 will go into the current offset-finding slot; no changes on the backend (apart from creating the second database)!

Why do this? It would allow a match if your CD is a different pressing and not actually in the database. No rolling CRCs are needed, as the CRC from the existing database that is used to find drive offsets can find the offset of the pressing, and as long as that offset is within +/- 5 frames, the pressing can be verified. It also has the benefit that for track 1 (which currently is only calculated from 5 frames in), any drive with a positive offset would have the correct CRC1, so all of track 1 could be verified in its entirety (not possible for the last track, as the majority of drives cannot over-read).
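The two-checksum scheme described above can be sketched in a few lines. This is illustrative only: zlib's CRC32 is used as a stand-in for whatever checksum AccurateRip actually computes, and the 5-frame edge size follows the "say 5 frames" figure from the post (one CD-DA frame = 1/75 s = 588 stereo 16-bit samples = 2352 bytes):

```python
import zlib

FRAME_SAMPLES = 588                              # 44100 / 75 samples per frame
BYTES_PER_SAMPLE = 4                             # 16-bit stereo PCM
FRAME_BYTES = FRAME_SAMPLES * BYTES_PER_SAMPLE   # 2352 bytes per frame
EDGE_FRAMES = 5

def track_crcs(pcm: bytes) -> tuple:
    """Return (crc1, crc2) for one track's PCM data.

    crc1 covers only the first and last EDGE_FRAMES frames (the part most
    affected by pressing offsets); crc2 covers the whole track.  zlib.crc32
    is a stand-in here, not the actual AccurateRip checksum.
    """
    edge = EDGE_FRAMES * FRAME_BYTES
    crc1 = zlib.crc32(pcm[:edge] + pcm[-edge:]) & 0xFFFFFFFF
    crc2 = zlib.crc32(pcm) & 0xFFFFFFFF
    return crc1, crc2
```

Once a drive's offset against a known pressing is found from the existing database, a rip shifted by under 5 frames only disturbs the edge region covered by crc1, which is why the middle of the track can still be verified.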

When I started AccurateRip, the idea of pressings altering the audio data was not known (to me). If you had 40 different pressings of the same CD (possible with worldwide releases over 10 years), that lowers the 1 in 4 billion of a working 32-bit CRC routine to roughly a 1 in 100 million chance of a CRC clash; adding the second CRC would effectively boost the CRC to 64 bits. Then AccurateRip could return:

A match using the old CRC method,
A partial pressing match (10 frames of the file missing),
A match using the CRC fix method (32-bit); in addition, a CRC32 match (on CRC1 and CRC2, so the whole track).
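The clash figures quoted above can be checked with back-of-the-envelope arithmetic (illustrative only, not part of AccurateRip itself):

```python
# A 32-bit CRC has 2^32 possible values; with 40 stored pressing CRCs for a
# disc, a wrong rip has ~40 chances to collide with one of them.
crc_space = 2 ** 32                  # 4,294,967,296
pressings = 40

clash_odds = crc_space // pressings  # ~1 in 107 million, matching the post
print(f"roughly 1 in {clash_odds:,}")

# A second, independent 32-bit CRC multiplies the space back up:
print(f"combined space: 2^64 = {2 ** 64:,}")
```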

Making the data more workable

Think of a database such as freedb, which might run to gigabytes. Then imagine freedb were populated in the backend by everyone who rips with it (submitted over and over again by every user): the database would be 100 to 1,000x larger. This is effectively what AccurateRip is (though we store less per record than freedb, our database is many times larger than freedb). Our systems are now struggling to accommodate all this data. A specific top-of-the-line 'server grade' system will be acquired for the purpose of working with AccurateRip data. AccurateRip is growing at a rate where we have to design the system to accommodate 10x the current data, to see it able to handle expected submissions for the next 2-3 years.

04-21-2009, 05:45 AM
Making Data More Workable - Completed

We have re-written the backend of AccurateRip from scratch. This was done because:

The database has been improved for speed. The database had grown over the 4 years AccurateRip has been running, and the design was breaking; this was detected in Feb 2009, and we held off updating the online database because of the high chance of corruption. The new routines / hardware are 100x faster (we use an x64 OS and fill the machine with memory to run the database in memory). We also scrapped the old database and re-ran everyone's submissions (going back to 2005) into the new database. The design of the database is custom, as SQL will not 'hack' 40 million submissions at a reasonable speed.

Better administration - the backend tools are improved for managing users / drives.

Better user tracking - we have relaxed the internal rules of submission, as we now take a user's submission as a whole rather than on an individual-track basis. This allows for more submissions in the database: the new database is 3x larger than the old one, so there are 3x more discs in the database.
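The post doesn't describe the custom database design. Purely as a sketch of the general approach (an in-memory aggregate keyed by disc / track / CRC rather than SQL rows, which avoids per-row relational overhead; all names here are hypothetical):

```python
from collections import defaultdict

class SubmissionStore:
    """Hypothetical in-memory submission aggregate.

    Each (disc_id, track, crc) key maps to a confidence count, i.e. how
    many users submitted that exact CRC.  Keeping this as one big hash
    map in RAM is the general idea behind "run the database in memory";
    the real AccurateRip schema is not public.
    """

    def __init__(self):
        self._counts = defaultdict(int)

    def submit(self, disc_id: str, track: int, crc: int) -> None:
        self._counts[(disc_id, track, crc)] += 1

    def confidence(self, disc_id: str, track: int, crc: int) -> int:
        # .get() avoids creating entries for lookups of unseen rips
        return self._counts.get((disc_id, track, crc), 0)

store = SubmissionStore()
store.submit("disc-A", 1, 0xDEADBEEF)
store.submit("disc-A", 1, 0xDEADBEEF)
print(store.confidence("disc-A", 1, 0xDEADBEEF))  # 2
```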

This new database has gone live today.