Hacker News

Great read, and particularly relevant to my weekend spent debugging a Core Audio granular synthesis engine. It was definitely of the 'my code is haunted, that's the only explanation' variety -- audio files that had been discarded were still faintly audible in the background. After reading the article, I sat down at my computer, stepped through my code in the debugger again, and realized that I was setting my audio stream format to 2 channels/interleaved whilst converting to a mono stream. So whenever I filled the buffer with a new audio file, my file-length parameter was wrong and some bytes were never freed. Because the files were all close to the same length, I'd never noticed the issue before; it only surfaced when I parameterized the 'grain duration' in the engine. The irony is that just yesterday a friend asked me about getting started in Core Audio, and the advice I offered was to spend a lot of time learning about AudioStreamBasicDescriptions, because they're the cause of most problems.

