This is pretty cool, it's like a Python and *nix version of Exact Audio Copy (EAC) for Windows. I enjoy the ritual of perfectly ripping a CD. I can spend an hour or two scanning the artwork, verifying the track names, performing the rip, losslessly compressing (I prefer FLAC), generating .cue files, checksumming everything, then sitting down to experience the best possible digital copy of a physical item I can self-produce.
Years ago, I had most of my music ripped to AAC in iTunes, easily accessible, browsable, and shuffleable.
I now rip each of my CDs to a single FLAC and cue sheet (using XLD on Mac), on the theory that it's the most accurate way to archive a disc. However, I haven't found anything that offers the same accessibility to such a collection as iTunes did. I look through folders, and drag a few cue sheets at a time into Foobar 2000.
I found Swinsian [1] after I ditched iTunes. It works quite well and I'm very happy with it.
While I rip all tracks separately, the FAQ states: "Swinsian also supports albums ripped as a single file together with a cue file, and FLAC, Ogg Vorbis and WavPack files with embedded cue information".
Very promising: supports macOS 10.8 and later, last updated in 2021, and it read my whole CD collection in just a couple of minutes. Thank you for the suggestion.
I wonder if it's still being developed. It's concerning that it's still an Intel-only build instead of a Universal build that can run natively on Apple Silicon hardware.
I recently contacted the developer about this. He replied that the only reason he hasn't released a native Apple Silicon build is that he doesn't have an Apple Silicon Mac yet. I was tempted to fundraise and send him one!
My understanding is one can build a Universal app on either platform. Debugging the Apple Silicon side does require an Apple Silicon Mac but a well-developed app with a constrained feature set might "just work." They could make a Universal build and offer it as a beta to those who wish to try.
I don't know where they're located but a refurbished Mac mini from Apple is $589. AWS EC2 M1 Mac instances are ~$16/day ($0.65/hour, 24-hour minimum).
The developer is regularly putting out beta releases [1] for 3.0. Based on the other comments I found, I assume it's a one-person project, probably developed in spare time, and only on an Intel Mac.
"... on the theory that it's the most accurate way to archive a disc ..."
I won't argue with this but I think there needs to be a qualifier - the most accurate way to archive a disc while using compression ...
Given that an audio CD is encoded as raw PCM and we have a WAV file specification, I think ripping to a WAV file remains the gold standard:
"... digital audio extraction software can be used to read CD-DA audio data and store it in files. Common audio file formats for this purpose include WAV and AIFF, which simply preface the LPCM data with a short header ..." [1]
I like the idea that ripping to a WAV file means I never have to rip that disc again.
FLAC is lossless, so there shouldn't be any difference in accuracy between the WAV and the FLAC. Of course, this is why people often choose FLAC for archival of ripped CDs.
Edit: I only mention this because I can't think why the extra "while using compression" qualifier is relevant except for information loss during compression.
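If anyone wants to convince themselves of this, here's a minimal sketch (filenames hypothetical; assumes the standard flac and ffmpeg CLIs) that hashes the decoded PCM rather than the container, so header differences don't matter:

# encode losslessly, then hash the decoded audio of both files
flac rip.wav -o rip.flac
ffmpeg -i rip.wav -f md5 - 2>/dev/null    # MD5 of the decoded PCM
ffmpeg -i rip.flac -f md5 - 2>/dev/null   # same hash = bit-identical audio

For 16-bit CD audio both decode to the same PCM stream, so the two hashes should match exactly.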
Does FLAC preserve the exact bytestream of the CD tracks? (As an analogy, PNG is "lossless," but it won't natively preserve the metadata of a raw digital camera image.) The audio will be preserved but there might be interesting details on the CD like easter eggs, copyright text, etc. and I'm not sure FLAC would capture those.
> Does FLAC preserve the exact bytestream of the CD tracks? (As an analogy, PNG is "lossless," but it won't natively preserve the metadata of a raw digital camera image.)
No, but if that is the definition being used, neither does WAV store that data either.
> The audio will be preserved but there might be interesting details on the CD like easter eggs, copyright text, etc. and I'm not sure FLAC would capture those.
Flac does allow for storing a lot of metadata in the flac file itself, including: CUE sheet, a picture (image file), and arbitrary tags (key value pairs). So much of those could be stored in the flac file, but obtaining them (Easter eggs and the like) is dependent upon the program ripping the CD, not the file format storing the result of the rip (flac does not do CD ripping itself).
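For instance, a hedged sketch with the reference metaflac tool (filenames made up):

metaflac --import-cuesheet-from=album.cue album.flac            # embed the CUE sheet
metaflac --import-picture-from=cover.jpg album.flac             # embed artwork
metaflac --set-tag="COMMENT=hidden track in pregap" album.flac  # arbitrary key=value tag
metaflac --list album.flac                                      # inspect what's stored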
Audio CDs have less error correction than CD-ROMs. When you rip an audio CD, the drive is doing the error correction, and may give you PCM data different from what was written on it. Software like Exact Audio Copy tries its best, but it's still a best effort. Reading CD-ROM data is different: you can get an ISO image of a data CD, but not of an audio one.
This was done to both increase play time and discourage copying.
So since you’ll never get the raw redbook data off a cd, there is no reason to prefer WAV over FLAC for CD audio.
> This was done to both increase play time and discourage copying.
I doubt copying figured into the decision process (at least not a deliberate "anti-piracy" thought process).
The CD was released in 1982 [1]. This was only one year after IBM had released the original IBM PC, which came with one or two 320KB 5.25 inch floppy drives. The IBM PC did not have an official "hard drive" variant from IBM until the PC XT [3], released in 1983 (one year after the CD), and its hard drive option was a whole 10MB. When the target was replacement of the vinyl LP player and compact cassette deck, and when their target customers might have had 10-20MB of total storage in their personal computers (if they even had a personal computer), it would have seemed unbelievable to Sony and Philips that those customers might even be capable of copying a disc holding roughly 750MB of digital audio data.
And of course in the above I am overlooking that in order to "copy a CD" (as in make a digital copy) one first requires a CD drive that can interface with a PC in a digital way, and such drives did not appear until some years after the CD's release.
So with an amount of data on each disc that exceeds end users' available storage by orders of magnitude, and no "CD drives interfaced digitally to computers" at the same time, it seems unlikely that Sony or Philips ever even considered that end users would be able to digitally copy CDs.
I found that some Easter eggs weren't archived by EAC because they subtly broke the CD spec. For example, some versions of White Ladder by David Gray have a song "before" the first song. You press play, then press "rewind", and the timer goes to -0:01 ... -3:45 or whatever, and a song plays before the regular song at 0:00. It was tricky to hear it with a CD player because the rewind behavior was kind of undefined.
An audio CD can be mastered in a way that a WAV rip will not represent the full data on the disc. If you want an exact copy, you need to make a disc image with empty space preserved, or something similar. A CD can contain both audio tracks and data tracks, and some audio tracks can be hidden from normal play; a ripper might miss those too.
So if you really want to save an exact copy of the CD, you will have to find something better than wav to save the data. There are a few programs that can do an image to ISO format, that might be what you want. I guess it's a bit harder to find a player that can handle the audio tracks from an ISO though.
(Memory is a bit vague on formats and names but this is the general idea.)
You can use foobar to index your music files and assign a hotkey to run a search of your music library. I have found that foobar is faster at locating files than browsing my [excessively large] music collection.
hate to say, but this is my way... EAC to flac and leave it on a NAS, then XLD to apple lossless on my macbook and sync to my iPhone... working great for me so far
Why not use ALAC and individual tracks? It's lossless and compatible with iTunes/Music.app and iOS devices (as well as basically any hardware or software player worth using).
Using whole files rather than individual tracks is just asking for problems with mainstream players in my opinion.
However if you're serious about your music library I recommend Roon. Not cheap but a great solution.
cmus [1] is the closest I found to foobar2000. It is my main music player now, after years of disappointment. It supports FLAC and they claim they support CUE sheets, although I haven't tested your particular scenario. The way I use it is I have all my library in it at once, iTunes style. It has good search & playlists, but no drag&drop, since it's just command line...
For your setup-- if I do original CD -> accrual's rip -> burned CD, does burned CD == original CD for all values of original CD?
I'm mostly thinking of those interstitial lead-in thingies on live and concept albums. E.g., if you let a CD play from track 1 to track 2, there may be 5 seconds of crowd noise leading in to track 2. CD players would show this as track 2 at timing "-5", then count down to zero to get to the actual beginning of track 2.
If the user skipped directly to track two, this interstitial thingy wouldn't be played.
Edit: wording
Also: same question for what.cd ripping process. IIRC they broke the tracks down into different files.
Single file + CUE sheet solves the lead-in thingie problem by storing both timestamps (the time when the player should start showing 'Track 2', and the time which serves as zero for track-relative timestamps).
It pretty much mirrors the actual CD structure: a TOC (containing all necessary timestamps and metadata) and a continuous sequence of frames with audio data.
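As a sketch (timestamps invented), the lead-in lives between INDEX 00 and INDEX 01 of the track in the cue sheet:

FILE "album.flac" WAVE
  TRACK 01 AUDIO
    INDEX 01 00:00:00
  TRACK 02 AUDIO
    INDEX 00 03:45:00
    INDEX 01 03:50:00

Playing through from track 1, the display flips to track 2 at INDEX 00 and counts down to the INDEX 01 "zero"; skipping directly to track 2 jumps straight to INDEX 01, which is why the crowd-noise lead-in is only heard on a play-through.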
Cdrdao preserves all lead-in data for rips in DAO mode. I split tracks into separate files with the lead-in of the following track included at the end. Combined with LAME's gapless encoding support, you can have seamless playback of CDs that blend tracks.
> For your setup-- if I do original CD -> accrual's rip -> burned CD, does burned CD == original CD for all values of original CD?
I was curious about this as well, and the answer was "no". I meticulously followed EAC setup guides for three drives, used the recommended gap settings, had the results completely verified by AccurateRip, and stored the results as a CUE sheet and a single WAVE file; all drives produced the same file. I then used EAC to burn the CUE sheet and WAVE file back to new Verbatim CD-Rs and re-ripped with the same setup in EAC, and the files still didn't match. I've been meaning to dig deeper and compare the files in binary, but haven't gotten around to it.
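For when I get around to it, a hedged starting point (filenames hypothetical): cmp reports the offset of the first mismatch, which at least distinguishes a header/offset discrepancy near the start from audio damage further in:

cmp original-rip.wav reburned-rip.wav             # offset of first differing byte
cmp -l original-rip.wav reburned-rip.wav | head   # list the first differing bytes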
While not all has been lost, I strongly feel that every time the community migrates from one platform to another, the long tail of small-time users gets cut off. Ties are severed, people fall out of reach, etc. I initially joined a few sites that popped up after What, but since I didn't find any of the users there that I used to talk to regularly, I hardly ever visit anymore.
Not only that, it's almost a certainty that not every little rarity, akin to 'bootleg of local ska band from the 80s that never released an album', makes the jump each time.
Of course, those are mostly oddities that virtually no one cares about in a sonic sense, but it’s still a piece of musical history.
Kind of makes me wish whoever sued what.cd offered them a sort of clemency deal: keep the site up until everything has been archived by a legit third party (that wouldn’t share it, but so everything has at least been preserved), then shut it down.
Instead they burned down the musical library of Alexandria.
Woah Oink. I had forgotten what it was called, but I learned so much about digital quality and was exposed to so much new music that I still listen to today.
> This is pretty cool, it's like a Python and * nix version of Exact Audio Copy (EAC) for Windows.
And it's now packaged with many Linux distros. At some point it wasn't packaged with Debian and I simply couldn't get it to build, so for a while I ran Fedora on an old PC only to get whipper! Nowadays whipper ships even on Debian / Devuan, so life is good.
It’s also the only tool that private torrent trackers will accept. It’s been rigorously tested and accepted to be pretty much perfect, complete software.
Orpheus will rate appropriate XLD and whipper logs as 100% fwiw
Also, I could be wrong, but don't most trackers still accept rips that they consider inferior, which can then be trumped by someone else who made an EAC rip?
From my experience an XLD rip can be equal to an EAC rip in terms of rating on the site (RED and OPS). I've always been ripping this way and always had a perfect score. It's not easy though: there are a lot of nuances that influence the score, so you need to do a lot of reading and checking.
I suggest looking, maybe, at ALAC as well, even though it is generally slept on.
It is open source now just like FLAC, and actually has built-in support in Windows and many Linux music players despite being an Apple format, making it somewhat more universally compatible than FLAC (which isn’t supported in iOS or macOS).
In my school days I was able to add FLAC support to an iPod Touch 4th gen with a quick Cydia module(?) download. I'm skeptical of your claim that macOS doesn't support FLAC. I'm sure mpv and other programs can play it fine. Do you just mean support in the default-installed macOS software? That seems like pretty limiting criteria. I wouldn't expect something like Windows Media Player to cover all that much either.
Storage is cheap, hurray! Another solution: you can archive a music CD using cdrdao and play the result using mplayer.
# $1 = output basename; the drive is assumed to be /dev/sr0
cdrdao disk-info
cdrdao read-cd --with-cddb --device /dev/sr0 --read-raw --datafile $1.cdrdao.bin $1.toc
# cdrdao stores samples big-endian; byte-swap to little-endian for playback
dd conv=swab if=$1.cdrdao.bin of=$1.bin
# To play the result: mplayer -demuxer rawaudio $1.bin
I'm not an "audiophile" (perfectionist seeker of equipment) but I enjoy listening to classical and (some) jazz compositions and performances as they were conceived, not as "tracks".
Why MP3? It's simply not a great codec to target. If you have a music server, then why not flac? If space is an issue, then why not Opus (which almost everything supports now) or HE-AAC, which has nearly as much support as MP3? You can target the same bitrate as MP3 and end up with a more transparent encoding, or decrease the bitrate and get the same quality as your MP3 encode in a smaller file.
I honestly don't understand why mp3 is such a sticky codec. It's long been surpassed by others.
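Concretely, a hedged sketch with the reference opusenc (bitrates and filenames illustrative; it accepts WAV or FLAC input directly):

opusenc --bitrate 128 input.flac output.opus   # same budget as a 128k MP3, better quality
opusenc --bitrate 96 input.flac smaller.opus   # or spend fewer bits for similar quality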
Well, I've done a number of ABX listening tests (apt install abx) and to my chagrin simply could not tell the difference.
I challenge everyone who feels strongly about this to actually bust out abx and do some listening tests comparing flac to LAME-encoded extreme MP3s and prove to yourself that flac matters at all on your equipment. And then share your results with us if you want!
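For anyone taking up the challenge, a hedged sketch of preparing the two candidates (standard flac and lame CLIs; filenames invented, and the ABX tool itself is up to you):

flac master.wav -o a.flac               # the lossless contender
lame --preset extreme master.wav b.mp3  # the LAME "extreme" VBR contender
flac -d a.flac -o a.wav                 # decode both back to WAV so the
lame --decode b.mp3 b.wav               # ABX tool compares like with like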
I don't disagree that given a high enough bitrate, MP3 will become transparent. More my point was why not use a more efficient codec (like opus) if you are trying to save bits (128kbps is transparent for stereo music vs the 200+kbps for lame extreme).
If you aren't trying to save bits, then why use a lossy codec at all?
That's more my point. MP3s will be smaller than flac, for sure, but opus or he-aac files can be even smaller (half the size or more).
The commenter suggested not only FLAC but also Opus, which is a lossy codec that compresses better than MP3. So you could save some disk space.
But generally I agree with you, if you're happy with how your setup works and sounds, MP3 is no great sin.
As for what the commenter said, firstly, MP3 was synonymous with digital music in most of our minds for so long; second, even if modern equipment handles all the better codecs, a lot of us still have memories of times it didn't in the past. Third, I think some of the tooling around metadata is not as developed or ubiquitous for, say, an ogg container. There are ogg comments, but ID3 is better supported.
> MP3 was synonymous with digital music in most of our minds for so long
I think this is probably the main reason for mp3's stickiness.
> second, even if modern equipment handles all the better codecs, a lot of us still have memories of times it didn't in the past.
Sure, and your DVD players didn't always handle H.264. Things change :). It's been probably 10 years since new audio hardware had trouble with non-mp3 media.
> Third, I think some of the tooling around metadata is not as developed or ubiquitous for, say, an ogg container. There are ogg comments, but ID3 is better supported.
Granted, I think ogg will have worse support for equipment. However, I'd expect that aac in m4a will end up with the same level of support as mp3s do today simply because it's a lot more common than opus (and older).
> MP3 is no great sin.
It's not, I just don't like seeing generally superior tech getting sidelined because the inferior tech is more familiar. Perhaps that's a sin on my part :). I admit it probably doesn't ultimately matter if your music is 1MB vs 2MB.
I've written code to parse metadata out of an M4A. I'm sure Apple does a good job of it given they've done that in iTunes for 20+ years, but it's a lot less documented and straightforward than ID3v2, as imperfect and hacky as that format is. As a result, I can pretty trivially edit ID3 from a shell script, but dealing with M4A metadata still feels like a black box.
(After typing that, I realize that calling it a "black box" is a pretty good pun on MP4 box formats..)
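To illustrate the ergonomics gap (hedged; id3v2 and ffprobe are just the tools I'd reach for, filenames invented):

# ID3 is a shell one-liner to write...
id3v2 --artist "David Gray" --album "White Ladder" track01.mp3
# ...while for M4A even just reading the tags tends to mean a bigger tool:
ffprobe -v quiet -show_format track01.m4a | grep TAG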
My car's radio supports both MP3 and AAC-in-MPEG4, but with the latter it occasionally has trouble reading the metadata, though I haven't really figured out why exactly that might be.
"Why MP3? It's simply not a great codec to target. If you have a music server, then why not flac?"
I still encode to plain old 128k mp3. Here's why ...
First, I keep the WAV originals and those are what I listen to on my music system, in my music server, etc.
Second, if I am using the mp3s it is because I am going to some unknown place to interface with some unknown tool to play these - let's just make life simple and use something that will work everywhere - even the dumb creative audio bluetooth adapter that was in that airbnb that one time ...
Finally, 128k mp3 is typically a 10:1 compression ratio and makes size and space "budgeting" easy. It's easy to remember.
One other thing:
When I export my ripped CD WAV collection to mp3 I also compress the filenames: I flatten them to plain ASCII and truncate them to 64 characters, etc. LOTS of car audio interfaces just puke when they hit weird Unicode characters or can't display long filenames ... it creates all manner of havoc.
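A hedged sketch of that flattening step (the 64 is arbitrary, $name is a stand-in variable; iconv's TRANSLIT mode does the heavy lifting):

# transliterate to closest ASCII, drop anything unprintable, cap at 64 chars
clean=$(printf '%s' "$name" | iconv -f UTF-8 -t ASCII//TRANSLIT | tr -cd '[:print:]' | cut -c1-64)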
I'm really sorry for dogpiling on your setup here, but why keep WAV files rather than FLAC? FLAC is lossless compression, so you can always get the exact same WAV back out, the files take less disk space, and they can hold metadata. FLAC support is also really common these days.
The only argument I’ve heard that makes me go “OK, fine!” in favor of FLAC is that they forced a standardization of metadata.
FLAC is well-supported. But, PCM RIFF WAV files are triv-i-al. Any CS101 student can write a parser. They don’t need decoding.
2:1 lossless compression is nice. It’s quite a technical feat.
Meanwhile, as an old software engineer, I focus pretty hard on technical simplicity. And the complexity ratio of PCM vs. pretty much everything else is a very, very small number. Very small :p
If enough time or technology has passed that someone has to write a wav parser, how do you expect them to mount your file system, assuming it has survived?
But the whole point of software like whipper or EAC is that they crosscheck your rips with an online DB of rips (and you know, when your rip's checksum matches, that it's 100% correct, because there's no way you and x other people would have produced the same wrong rip, misreading the exact same errors).
As I understand it, cdparanoia does no such check. It's "paranoid" on its own, but you're not verifying that your rip is 100% perfect.
But if you then convert it to mp3 and discard the lossless files anyway, I take it you don't care much about data integrity.
whipper and EAC do serve another purpose than cdparanoia (I think, btw, that whipper uses cdparanoia under the hood), and I'm pretty sure that people who do care about bit-perfect rips do not then go and convert their files to mp3s.
FWIW a .flac file (not a .wav but a .flac: lossless and compressed) is about twice the size of a 320 kbps mp3, I think.
Given that songs are tiny files by modern standards, I'm totally fine playing the .flac files I ripped using whipper.
Discovering music in the pre-gap before track 1 was my childhood equivalent of discovering a magickal incantation. In a world before mass use of the Internet, those who knew many of these secrets were our High Priests and Priestesses. I love the ease with which modern music can spread around the world but I miss the opportunity for surprise and discovery.
Every audio CD has a list of tracks with their respective start times, so you can jump to specific songs. The first song also specifies a start time, which does not have to be zero. If it isn’t, the player will skip some of the audio, but you can still reverse back into it.
It’s funny how a CD is more like a tape with contiguous content and an index, rather than a file system.
I bought hundreds of CDs from 1990 through to about 2005, some of them on that list, and I never had a clue about this technique.
I knew about hidden tracks at the end after a silent gap - e.g. Nevermind had a hidden track after 10 minutes of silence at the end, was fun to schedule it on a CD jukebox in a bar...
It's actually more like an analog record than a tape. It's even laid out in a spiral (unlike the concentric tracks on a floppy or hard disk).
All of them. Audio CDs subdivide tracks with index marks. They are rarely used for general purpose and few hardware or software players expose index navigation now. However, every track has a lead-in at index 0 and the main program at index 1. When you skip tracks you start playback at N.1. You only hear N.0 when playing through from the previous track. There is no limit to what can be in the lead-in and some discs would hide bonus "tracks" in 1.0 which is often skipped over by hardware players but can be manually navigated to with the index buttons.
There is a commercially available digital format/medium that has higher res than CD? Like, there are 96kHz discs being sold out there? And there are labels out there mastering and producing these discs? Sorry this is news to me! I thought 96kHz encodes were upsamples or homemade rips from analog formats.
Most hi-res audio is being sold as digital downloads. I don't think there's a currently-sold physical medium that contains digital data that's higher quality than a CD.
Unlike DVDs, CDs don't have sector information. They're just a long continuous spiral of bits, so there is no easy way to tell exactly where on the spiral the head is pointing, especially as many CD players, when you tell them "go to here", will miss slightly.
To compensate for this, data CDs include intermittent data on the spiral that says "you are here", but audio CDs don't do this. The only way to ensure that you haven't either skipped over a piece, or duplicated a piece, is to rip the CD in overlapping chunks, and then compare the overlaps.
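The crudest version of that idea is easy to try at home with cdparanoia (a hedged sketch; track number and filenames invented). Identical output doesn't prove correctness, but differing output proves the drive isn't giving you stable data:

cdparanoia -z "3" pass1.wav   # rip track 3; -z = never accept a skipped read
cdparanoia -z "3" pass2.wav   # rip it again
md5sum pass1.wav pass2.wav    # differing sums = the two reads disagreed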
The real question is why you'd ever need a bit-perfect copy of an audio CD when CD players never gave bit-perfect playback in the first place and nobody could tell the difference.
Edit: honestly I don't get it. Do you export all raw images from your camera to PNG too? JPG is good enough, and single-pass CD rips are just fine too.
I mean, that's literally what RAW files from cameras are, and there are uses for them. I don't know if a multi pass CD rip is quite as useful as a RAW image, but your analogy is not as strong as you think it is.
So you archive RAWs, not just JPEGs? I get that professional photographers do, but I never would. And I think they do it to preserve byte depth or whatever, but FLAC from a single pass CD rip has all the bit depth of the original, even if it has a few bits wrong you'll never notice it.
I mean I do keep the RAWs but the use case there is a bit different anyways, even the JPEGs aren't a simple matter of "the compressed version" it's also "the preprocessed version". OTOH it takes no more storage to back a CD up correctly vs incorrectly so why be keen to add a generation of loss? You can lower the quality whenever you like but you can never add it back.
> Do you export all raw images from your camera to PNG too? JPG is good enough, and single-pass CD rips are just fine too.
No, I keep them in RAW and develop a few for albums and prints and such. No, JPGs and PNGs are not good enough; the loss is REAL when you try to actually develop the picture.
I must admit I _REALLY_ don't get this "good enough" mentality.. Like.. Here's this way to get a more precise representation of the music you bought, and you'll go "nah, that's not for me, I prefer whatever random data I happen to get on the first go".. for what? why ?
If you really want to have a different product on every playback, I guess vinyl is the way to go :P
> I must admit I _REALLY_ don't get this "good enough" mentality.. Like.. Here's this way to get a more precise representation of the music you bought, and you'll go "nah, that's not for me, I prefer whatever random data I happen to get on the first go".. for what? why ?
The purpose of music is to be perceived. There is no perceptible difference, therefore there is no difference. CD Paranoia/etc are a waste of time, these tools make ripping a CD take several times as long for no tangible benefit.
This is often not the case. We're not talking about subtle interpolation differences. Depending on the drive, and the disc, bad rips can be full of music skips, clicks, and pops. It's really annoying to find this out days later after you ripped something. Better to get it right first time.
Some CDs are very scratched. Some CDs are CDRs with dye that hasn't aged well. Some CDs just don't read well (poor manufacturing tolerance, perhaps). Some drives silently return bad data on error. Some drives return a lot of errors towards the last tracks even when there is no problem. Drives vary on where they think tracks start and end (generally a uniform offset per drive). Sometimes, reading audio before the claimed start of the CD takes extra trickery. There are many more issues, besides. By ripping multiple times to verify and comparing to a global database of checksums (and in some cases doing error correction), you can be sure of getting exactly the intended audio.
Audio CDs and data CD-ROMs use different encoding modes.
Data CDs have an extra layer of error correction. Audio CDs have less error correction because small bit errors are not a big deal for your listening experience. Most CD players quietly interpolate over small errors in a way that you probably don't even notice.
Because early CD drives were intended for audio playback, your drive can lie to the software. It doesn't tell you whether it's reading at the exact beginning of a sector (timing information is multiplexed into the audio stream, and some drives were not byte-exact when interpreting it), and it also doesn't tell you when it silently corrects some errors, because a human listening to the audio won't notice.
But of course it makes a difference if you want to make a high-fidelity copy.
That's why there have always been copy apps (cdparanoia on Linux, EAC on Windows) that do overlapping reads, then detect and compensate for these problems.
I never understood that either. Maybe Red Book doesn't have error correction as strict as data CDs have? That's the only thing I can think of, but it seems weird.
Audio CDs have weaker error correction. An audio player has to stream data off the disc without any retries. When a block of data can't be corrected it's passed along with the knowledge that the corruption is localized and usually not noticeable.
And I guess that's why cdparanoia exists - to put hyper-focus on those edge cases and get them right. I used to use it to rip in the 90s or early 2000s but never looked deeply into how it works.
This type of ripping, comparing to other rips (paranoia) for bit-for-bit error correction integrity, is part of XLD for Mac et al. However, even as a full-time audio person, I consider paranoia overkill, and in some ways backwards: I want my rip to be my unique rip and not precisely anyone else's, though of course, it most likely is anyway.
You do realise, assuming you ripped to a lossless audio format, your CD rips are 8-12x more accurate than anything streamed off Spotify?
> 8-12x more accurate than anything streamed off Spotify
Maybe in theory. In practice, the difference will be very hard to hear (for most people, in most scenarios). Have you ever done an ABX test to determine whether you would be able to tell the difference between lossless and Spotify's quality?[1]
I did, a while back, and while I was able to tell the difference somewhat reliably for music I knew well (and only for that kind of music), the effort and time I had to spend on finding the minute differences, even with high-end equipment, convinced me that for everyday listening, Spotify was completely fine for me.
You make a valid point, except I really did mean, and did type, 'accurate', not sound quality. I do concur with your example of how they can sound the same compared with lossy.
>bit-for-bit error correction integrity is part of XLD for Mac et al
AccurateRip is a pretty important feature imo. Like at least I know someone else in the world made a rip which was exactly the same as mine regardless of the optical drive we used. Overkill? Maybe but there is some kind of "safeness" to it
The 'two factors' are not two factors. Frequency range, bit depth, and whatever lossy algorithm is used all play a role. A lossless rip matches the original audio as stored on the CD. A lossy rip commonly keeps only an eighth to a twelfth of that information: lossy audio, like mp3 and m4a, 'throws away' information that is otherwise maintained in a lossless audio file. Hearing the difference is not being questioned, but the integrity is.
Oh OK, so you're specifically talking about bitrate difference. I wouldn't agree that that 8x higher bitrate means the file is 8x more accurate, but I get your point now.
You continue to misrepresent my responses. Are you trolling? I did not say and was not 'specifically talking about bitrate difference'. That frequency range, bit depth, and whatever lossy algorithm is used all play a role is plain to see.
There's a lengthy explanation in the wiki linked from the GitHub page. But basically, CD drives don't give you a raw stream, and they do stuff like error correction.
The drive itself abstracts away the raw stream, if I understood things correctly. Similar to how a PC floppy controller can't make a low-level disk image, because it does some interpretation of the signal before handing it over to the system.
Well, ddrescue can definitely “read audio tracks” and is perfectly suited for making archival copies of Audio CDs but it sounds like the problem you’re actually trying to solve is interpreting the data as Red Book CD-DA and decoding it into chunks you call files ;)
Why does ddrescue care about "file systems"? Again, I was under the impression that ddrescue does bit by bit copying that I can then mount (and tell it the "filesystem" required).
Is this false?
I have also noticed that, sometimes, ddrescue has trouble ripping DVDs, choking or producing a low-quality rip of a disc that plays perfectly on the same hardware with VLC.
ddrescue works with blocks and bytes. Audio CDs are different: they contain a single physical bit stream interpreted as several interleaved logical bit streams representing e.g. audio, position, metadata, and error correction. They are not a sequence of bytes. CD-ROMs and DVDs put a block-and-byte abstraction on top of this, so ddrescue can do something with them.
Agreed. I mean, it's great that people make alternatives, but this was a solved problem long ago. Also, EAC is a lot more user-friendly than some Python software.
I only discovered Cyanrip somehow quite recently and rate it as highly as Whipper, which I've used since its Morituri days. I appreciate Cyanrip's desire to be suckless but still enjoy Whipper being able to produce TOC/CUE sheets. MKTOC can always prove handy too if one cares about getting offsets right.
Somewhat related question: what's the gold standard for organizing ripped music, ideally with some sort of metadata lookup? Ever since Google Music shut down, I have my exported music in a folder, but it's badly organized and with duplicates. Too daunting a task for me to do manually - especially because some of the id3 tags somehow got messed up.
That will be a contentious subject; there really is no gold standard. I subscribe to The QF Bible personally, and I have configured Picard accordingly (feel free to ask how if you're interested).
https://musichoarders.xyz/reference/bibles/the-qf-bible/
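For anyone curious, the Picard side mostly comes down to a file-naming script set in its options; a hedged sketch of the shape (not the QF Bible's exact scheme, just an illustration using Picard's standard script variables):

$if2(%albumartist%,%artist%)/%album%/$num(%tracknumber%,2) - %title%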
What I appreciated most about 'abcde' was that just prior to the rip it would open up the CDDB output in 'vim' and allow quick and easy edits to what are, sometimes, horrific titling and track entries ...
It's a very nice workflow and avoids a lot of cleanup ...
Very recently decided to try getting off Spotify, so I'm in the CD-ripping workflow lately, after about 10 years off. (Also buying a bunch of direct download albums off Bandcamp.)
I like to think it's a trend?
Been using Asunder to rip, encoding as FLAC with max compression.
Playing back usually with Amarok but it's buggy for me these days so I'm not satisfied. Rhythmbox is good, too, but failed me by not inhibiting sleep during playback. I don't have a really solid go-to music organization / playback app right now but open to suggestions.
Can anyone recommend a (currently market available) USB drive that produces good results?
Back in the good old times I had a Yamaha CRW F1 which many still say was the best for accurate rips, as it supported many advanced modes required to read even scratched CDs better than others.
I have a very old CD collection I hadn’t looked at in years that I recently started ripping. Happened upon a tool called cyanrip and it works perfectly for me. Automatically downloads the art too and everything. Anyway it checks all the boxes for me and I haven’t looked back.
Whipper is great and I use it to backup my audio CDs in FLAC format.
There's a plugin called whipper-plugin-eaclogger which will generate the logs in a format compatible with EAC.
Then just run whipper with: `whipper cd rip -L eac`
Maybe it's just me, but when I see Docker with a project like this I already zone out. This is still just based on cdparanoia/cdrdao [0]. I wish people would just publish single portable binaries instead of starting the whole process with Docker (especially when the source code is already available).
Packages are available for just about any distro in the next heading. Source is available as well (obviously).
Still not a single binary, but as you note, with it being written in Python and based on cdparanoia etc., how would that work?
It's based on python with relatively obscure requirements[0] that also calls out to system binaries. Looking at the Dockerfile[1] it is built with specific revs of component software to work around various issues. Take a look at the build docs and you'll see just how many existing projects (python and otherwise) it takes to deliver the end result.
IMO Docker is one of the "best" and most straightforward ways to package up all of this with the end result (as usual) putting any Linux user two commands away from ripping a disc.
"Why is the JavaScript ecosystem like this?" is a very similar complaint, where containers are again the dead-easy obvious answer (to almost everything, Mysql vs MariaDB difficulty remains).
Making stuff work together isn't always easy. Having a predictable, easy-to-manage unit can be really nice: it offloads the particular ecosystem concerns and gets folks to "just use it".
Yea, OP's complaint is valid, but this was the wrong project to post it on because this particular project actually does come with distro packages and source. Providing a Docker image as an option is great. Providing it as the only option, not so great.
Full disagree. Ripping a CD in 2023 should be no more complicated than hello world.
> The post you're replying to gave several reasons
Let's go through them!
> Packages are available for just about any distro
I use Windows.
> It's based on python with relatively obscure requirements
So include the modules along with the cosmopolitan python
> that also calls out to system binaries
Put that logic into the cosmopolitan binary: if Linux then do this, if Windows then do that etc
> Take a look at the build docs and you'll see just how many existing projects (python and otherwise) it takes to deliver the end result.
Do the same to these extra projects.
> most straightforward way
The lazy way? Yes.
Like, if you can't be bothered to implement basic features, have a billion dependencies. If, as a consequence, your code is unstable, put it into a container and orchestrate.
Here's a simple C equivalent: if you have memory leaks because you know malloc() but don't know about free(), the right solution isn't to kill and respawn when you go above some memory quota, but to learn about free().
It's only for Linux and they make that absolutely clear. If you use Windows there are plenty of other options referenced in this thread and elsewhere.
This project is at least nine years old with over 1,600 commits. There are 105 open issues.
If you bothered to spend a few minutes looking at the source and open issues, you'd realize how complicated and difficult it actually is to enable as many (Linux) users as possible (with cheap, out-of-spec, and shoddy hardware) to make close-to-perfect rips (with metadata, in multiple formats, etc.) of nearly any compact disc produced over the last four decades.
Finally, as someone who has created and contributed to open source projects, calling this project "lazy" is downright offensive and completely unfair. Please feel free to spend your personal time and effort to create something better. I'm sure the people who successfully use whipper every day will anxiously await and rejoice at the release of your perfect implementation.
Then, when it (never) appears, someone like you will be here trashing a design decision or compromise you made. Or, as the saying goes, I'm sure the maintainers of the project would appreciate your pull requests.
Valid criticism and debate is great (and beneficial) but your comment and attitude go way too far.
Please try to put yourself in the shoes of people who donate their time to actually produce something of value and utility (for free) only to have keyboard warriors come out of the woodwork and call them lazy.
As a mere user, the thing I like about Docker is that it centralizes the configuration settings and storage.
For most Docker instances I run I have a simple script (or compose file) which does everything, and by reading it I know exactly which configuration tweaks I did and where the data is stored. No more forgetting that one line in some non-obvious file in /etc that made everything work. Backing it all up is trivial as the volume mapping makes it clear where the important data is stored.
Of course, if the project doesn't have any dependencies and doesn't require tweaking /etc settings to work, then sure a single self-sufficient binary would suffice. However this project relies on Python, so that's probably not gonna work.
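For this project specifically, the shape is roughly the following (hedged; image name and mount points from memory, so check the README for the exact invocation):

# drive passed through; config and rip output mapped to known host paths
docker run -ti --rm --device=/dev/cdrom \
  -v "$HOME/.config/whipper:/home/worker/.config/whipper" \
  -v "$HOME/Music:/output" \
  whipperteam/whipper cd rip

Reading that one command back tells you everything the container touches, which is exactly the property I like.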
Docker is a lifesaver if you're shipping software on Linux. Otherwise you're going to have to deal with endless bug reports from oddball distros. Docker lets you test with a stable set of dependencies and know that's what your users will see.
Containerisation is a clever technology, but I can't help feeling it's a capitulation; an admission that the compatibility problem was just too hard for us to solve.
It's undeniably useful and pragmatic, but its existence should be a source of shame, a constant reminder that we failed.
I think of it exactly the opposite way. Containers show that an extremely high degree of compatibility has been achieved. Think about it, you can run new software, on an unrelated distro, even a newer distro that wasn’t planned when the host distro was created, because they all conform to the Linux ABI.
A thought that just popped up in my head: I wonder how a priest would react if someone came to the confessional to talk about Docker and the Open Container Initiative (:
You could also "just" ship a VM disk image and cut down those variables even more...
As a user/self-administrator, I really do not appreciate a developer throwing a ton of incidental complexity over the wall. Docker is basically a scaling up of the "works on my system" cop out.
I get that there are a lot of oddball distros, but it would seem that a policy of only digging into bug reports on specific well known distros would be more appropriate than basically forgoing the entire concept of a distribution in favor of what are essentially huge static binaries.
It's especially a problem when projects go nuts with this Dockerization anti-pattern, and become actively hostile to distributions shipping plainly administerable versions of their software (looking at you Home Assistant).
Why does it matter what distro you're running on? You shouldn't be relying on anything outside of the tarball you ship, besides maybe glibc. Is there anything which can't be made to run from a self contained directory?
I'm sure Docker's great if you want to guarantee that the thing inside can't access the host system, using all kinds of kernel mechanisms, but that's generally the opposite of what you want for application software.
The main advantage of Docker for application distribution that I see is that it's lazy. You don't have to worry about keeping track of what's required, you don't have to fiddle with paths, you just hack until something works and then ship it. That's fine if you prioritize your time over your user's time.
AppImage is just a handy single-file solution for the very standard "tarball of executable + dependencies" which is good enough for such complex projects as Firefox and Blender. You don't need to integrate with the OS, and you don't need to put everything in a container either. Just include all the binaries and libraries you need, and make sure everything looks in the correct path. That's it.
If you've run an open source project, you will get these bug reports. It's also often not obvious whether it's a bug in your code or in a dependency, so you're going to want to debug it.
As an example alpine has a smaller default stack size. It’s not obvious if that stack overflow is expected or not.
Sure, you might get them. But you have the option to ignore them, preferably with a friendly message back to the reporter, stating that, say, Alpine Linux is currently not supported.
I zone out too, and thought it was just me. I think the problem is more that I don't use Docker frequently enough, so I have to revisit the commands each time and do a refresher, which deters me.
I concur. And it's a standalone program written in Python. I like Python the language, but running someone else's Python code is a mess. Look at the build/dependencies etc. sections on the page. What it should be: "Download and install this binary for Windows, this for Linux, or this for Mac. Run it to use (or install, depending on complexity) the program."
Or, if building from source is desired for whatever reason, it should be: "Clone the repo. Run `cargo build --release`" or whatever.
Funny, I was already pre-sad that it probably wasn't dockerized. I wonder if I can combine this with another Docker project, so this one rips CDs and the other rips DVDs automatically?
It'd be nice if people who insist on distributing software with Docker at least made sure their images worked with rootless Docker. An ecosystem that worked with rootless Docker and that didn't leave large stray image files behind silently eating up disk space wouldn't be that bad.