Another Audio Pedal Project (aapp) > phase 0: DSP
April 14, 2026
Why DSP
Ever since I started using my first DAW, I’ve been curious about the computing power needed to run all those effects and plugins on the sound you’re working with. I thought it would be interesting to dig a little deeper and understand the concepts behind those effects and plugins. So, what better project than an audio pipeline that mimics a pedal to see how everything comes together? I still need to decide the direction of the project, but the main idea is to develop the backend first and then maybe add some UI elements.
phase 0: “Let’s just start at zero; Level Zero”.
Getting mic input to come out of the speakers.
what I was trying to do
Phase 0 of my audio effects pedal project (the code is not public yet; I’d like to define a longer-term plan first):
- plug mic in, hear audio out, no effects. Just prove the plumbing works before I start building the actual thing.
what actually happened
Installed PyAudio, wrote some scaffolding with the help of Claude, and I was up and running with the classic callback loop.
Running the Python file, the output blessed my ears with crackling. Not quite what I was expecting, but a start.
def callback(in_data, frame_count, time_info, status):
    samples = np.frombuffer(in_data, dtype=np.float32)
    return (samples.tobytes(), pyaudio.paContinue)
Looked fine to me. Wasn’t fine.
After some fiddling I realized the issue: np.frombuffer returns a read-only array.
The moment a downstream effect tries to write to it: instant xrun (short for a buffer underrun or overrun, i.e. the software falling out of sync with the audio hardware), instant crackle.
The fix is one word:
samples = np.frombuffer(in_data, dtype=np.float32).copy()
.copy(). That’s it.
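A minimal repro of the trap, outside of any audio stream: NumPy marks a view over an immutable bytes object as read-only, and any in-place write raises a ValueError.

```python
import numpy as np

# Simulate the bytes buffer PyAudio hands to the callback.
in_data = np.zeros(4, dtype=np.float32).tobytes()

# np.frombuffer over an immutable bytes object is a read-only view.
view = np.frombuffer(in_data, dtype=np.float32)
print(view.flags.writeable)  # False

try:
    view *= 0.5  # any in-place "effect" blows up
except ValueError as e:
    print("write failed:", e)

# .copy() gives a private, writable buffer instead.
samples = np.frombuffer(in_data, dtype=np.float32).copy()
samples *= 0.5  # fine now
print(samples.flags.writeable)  # True
```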
The second issue was buffer size. I started with CHUNK = 512 because I recalled using that value when exporting audio files from my DAW. Turns out 256 is the sweet spot on my machine: small enough for sub-6 ms latency (my current laptop is not the fastest).
CHUNK = 256  # ~5.8 ms @ 44100 Hz; tune this per machine
RATE = 44100
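The per-callback latency is just CHUNK / RATE, so the tradeoff is easy to eyeball before profiling anything. A quick sketch (pure arithmetic, no audio libraries):

```python
RATE = 44100  # samples per second

# One callback's worth of latency for a few common buffer sizes.
for chunk in (128, 256, 512, 1024):
    latency_ms = chunk / RATE * 1000
    print(f"CHUNK={chunk:>4} -> {latency_ms:.1f} ms")
# CHUNK= 128 -> 2.9 ms
# CHUNK= 256 -> 5.8 ms
# CHUNK= 512 -> 11.6 ms
# CHUNK=1024 -> 23.2 ms
```

Note this is one buffer’s worth; actual round-trip latency is higher, since it includes at least the input and output buffers plus driver overhead.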
what I learned
Two things stuck with me:
- Never touch in_data directly; always .copy() before processing.
- Buffer size is a tradeoff, not a setting: smaller means lower latency but more CPU pressure and more xrun risk. 256 works for me; it might not work for you. Profiling is something I’d like to tackle later on.
what’s next
Phase 1 — biquad EQ. I’ve already downloaded the Audio EQ Cookbook PDF. Looks intimidating :’)
Part of my aapp project.