Python Real-time Audio Frequency Monitor

A new project I’m working on requires real-time analysis of soundcard input data, so I made a minimal example of how to do this in a cross-platform way using Python 3, numpy, and PyQt. Previous posts compared the performance of the matplotlib widget vs. the PyQtGraph PlotWidget, and I’ve been working with PyQtGraph ever since. For static figures matplotlib is wonderful, but for fast, responsive applications I’m leaning toward PyQtGraph. The full source for this project is on its GitHub page, but here’s a summary of the project.

[demo animation]

I made the UI with Qt Designer. The graphs are QGraphicsView widgets promoted to pyqtgraph PlotWidgets; I describe how to do this in my previous post. Here’s the content of the primary program:


from PyQt4 import QtGui,QtCore
import sys
import ui_main
import numpy as np
import pyqtgraph
import SWHear # threaded soundcard capture class (see note below)

class ExampleApp(QtGui.QMainWindow, ui_main.Ui_MainWindow):
    def __init__(self, parent=None):
        pyqtgraph.setConfigOption('background', 'w') #before loading widget
        super(ExampleApp, self).__init__(parent)
        self.setupUi(self)
        self.grFFT.plotItem.showGrid(True, True, 0.7)
        self.grPCM.plotItem.showGrid(True, True, 0.7)
        self.maxFFT=0
        self.maxPCM=0
        self.ear = SWHear.SWHear() # soundcard input via the (gutted) SWHear class
        self.ear.stream_start() # start the threaded audio capture

    def update(self):
        if self.ear.data is not None and self.ear.fft is not None:
            # autoscale the PCM plot to the loudest value seen so far
            pcmMax=np.max(np.abs(self.ear.data))
            if pcmMax>self.maxPCM:
                self.maxPCM=pcmMax
                self.grPCM.plotItem.setRange(yRange=[-pcmMax,pcmMax])
            # autoscale the FFT plot the same way
            if np.max(self.ear.fft)>self.maxFFT:
                self.maxFFT=np.max(np.abs(self.ear.fft))
                self.grFFT.plotItem.setRange(yRange=[0,self.maxFFT])
            self.pbLevel.setValue(1000*pcmMax/self.maxPCM) # level bar (0-1000)
            pen=pyqtgraph.mkPen(color='b')
            self.grPCM.plot(self.ear.datax,self.ear.data,
                            pen=pen,clear=True)
            pen=pyqtgraph.mkPen(color='r')
            self.grFFT.plot(self.ear.fftx[:500],self.ear.fft[:500],
                            pen=pen,clear=True)
        QtCore.QTimer.singleShot(1, self.update) # QUICKLY repeat

if __name__=="__main__":
    app = QtGui.QApplication(sys.argv)
    form = ExampleApp()
    form.show()
    form.update() #start with something
    app.exec_()
    print("DONE")

Note: this project uses a gutted version of the SWHear class which I still haven’t released on GitHub. It will likely get its own project folder, so for now take this project with a grain of salt. The primary advantage of this class is that it makes it easy to use PyAudio to automatically detect input sound cards, channels, and sample rates which are likely to succeed, without requiring the user to enter any information.
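
Until that class is published, here’s a rough sketch of the auto-detection idea (this isn’t the actual SWHear code, and the function name and candidate rates are just placeholders): ask PyAudio which input device / sample rate combinations it reports as supported and take the first one that works.

import pyaudio

def find_input_device(rates=(44100,48000,22050)):
    """Return (device_index, rate) for the first input device and sample
    rate PyAudio reports as supported, or (None, None) if nothing works."""
    p=pyaudio.PyAudio()
    try:
        for i in range(p.get_device_count()):
            info=p.get_device_info_by_index(i)
            if info.get('maxInputChannels',0)<1:
                continue # skip output-only devices
            for rate in rates:
                try:
                    if p.is_format_supported(rate,input_device=i,
                                             input_channels=1,
                                             input_format=pyaudio.paInt16):
                        return i,rate
                except ValueError:
                    pass # this device/rate combination isn't supported
    finally:
        p.terminate()
    return None,None

The returned index can then be handed to SWHear (or straight to PyAudio’s open() as input_device_index).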

All files used for this project are in a GitHub folder

2016-09-05: Okko adapted this project into a cross-platform screenlet which also includes an installer for Windows. The GitHub page is https://github.com/ninlith/audio-visualizer-screenlet and below is a screenshot of me running it on my Windows 10 machine.

[screenshot of the screenlet widget]

12 thoughts on “Python Real-time Audio Frequency Monitor”

  1. Thank you so much Scott for sharing your work with the community. I’m a student of Biological Engineering at Universidad el Bosque (Colombia) and the newest fan of your work!

    I took the liberty of using your partial SWHear library to build part of our semester project (for our Transforms/Transducers subject), which consists of an “acoustic pure-tone canceling device” based on Python/Raspberry Pi. Basically, it gathers dominant ambient tones and detects peak frequencies, then creates and reproduces a “counter-wave” in the opposite phase, cancelling the original audio through destructive interference.
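
    The signal-processing core, in rough numpy terms, looks something like this (a simplified sketch; the sample rate and names are just placeholders):

    import numpy as np

    RATE=44100 # placeholder sample rate

    def antiphase_tone(chunk,rate=RATE):
        # find the dominant frequency in this chunk of samples
        spectrum=np.abs(np.fft.rfft(chunk))
        freqs=np.fft.rfftfreq(len(chunk),1.0/rate)
        peak=freqs[np.argmax(spectrum)]
        # reproduce that tone shifted 180 degrees (i.e. negated)
        t=np.arange(len(chunk))/float(rate)
        return -np.sin(2*np.pi*peak*t)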

    Your threaded acquisition mechanism came in very handy and saved us a lot of work.

    I’ll share the outcome when finished, around Nov 17th.

    Best!
    Pedro

  2. Hi Scott!
    Your work is exactly what I need for a personal project. However, does it only work with external microphones? When I ran your code I got the following output: http://pastebin.com/5C4DRjX8
    I’m doing some research on how to use numpy with FFTs, but it’s kind of difficult to find good materials, and it’s harder for me because I understand the logic but not the mathematics behind it.
    I want to listen to audio in real time and get the frequency of a specific sound, and your work fits that, but I also need it to work with Python 2. With your previous work https://www.swharden.com/wp/2013-05-09-realtime-fft-audio-visualization-with-python/ I didn’t have any luck either, because the Qwt5 module with PyQt4 isn’t found (Arch Linux operating system). However, if I understand it, I may be able to do something. Could you give me some tips? I would be very thankful.

    • Hi Lays,

      It should work fine with external microphones! If it doesn’t, I suspect you (a) don’t have permission to access your microphone or (b) have the incorrect microphone set. The latter seems most likely. Is the microphone you’re trying to access set as the “default” microphone? If not (or you’re unsure) you can manually assign a microphone device to attach to.

      To manually assign the microphone device, be sure that PyAudio is opened using the specific device of interest (usually an integer between 0 and 10). With my most recent code, be sure to initialize “SWHear()” with “device=SomeNumber”. With regard to the current project, this can be done by adding “device=0” (or whatever number) into this line:
      https://github.com/swharden/Python-GUI-examples/blob/master/2016-07-37_qt_audio_monitor/go.py#L18

      If you’re using older code (which it looks like you are, such as that you provided a link for), add “input_device_index=0” in the PyAudio open() function. Something like this:

      self.inStream = self.p.open(format=pyaudio.paInt16,channels=1,
      rate=self.RATE,input=True,frames_per_buffer=self.BUFFERSIZE)

      Would become:

      self.inStream = self.p.open(format=pyaudio.paInt16,channels=1,
      rate=self.RATE,input=True,frames_per_buffer=self.BUFFERSIZE,
      input_device_index=0)

      Then try a bunch of numbers from 0 to 10 and I’ll bet it will work eventually!
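
      If guessing gets tedious, something like this should print the index of every input-capable device PyAudio can see (a quick sketch, untested on your machine):

      import pyaudio
      p=pyaudio.PyAudio()
      for i in range(p.get_device_count()):
          info=p.get_device_info_by_index(i)
          if info['maxInputChannels']>0: # only list devices that can record
              print(i, info['name'])
      p.terminate()
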
      Good luck!
      –Scott

      • Well, I was trying to use the current code on my laptop, so I was trying to use the microphone that comes with my laptop. I will see if I can find the data related to my internal mic and test it.
        The reason I want to use the Python 2 code (only the FFT and real-time parts) is to run it on an Intel Edison; the latest Poky only has support for Python 2.7. I want to get the data returned from the FFT and send it to a server, and on the Edison I plan to use a USB mic.

  3. I successfully implemented Scott’s lib on a Raspberry Pi with a C-Media USB audio card ($2 US on eBay). It should work on your laptop.

    The error you see is most likely the thread taking a single audio sample without re-spawning.

    Do you see a specific error message (a halt)? Or does the loop keep running without getting any more samples?
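
    For reference, the acquisition should keep pulling samples in a loop, roughly like this (a simplified sketch, not the actual SWHear code; the chunk size and rate are placeholders):

    import threading
    import numpy as np
    import pyaudio

    CHUNK=1024 # placeholder buffer size
    RATE=44100 # placeholder sample rate

    def capture_loop(stream):
        # keep reading chunks forever instead of returning after one read
        while True:
            raw=stream.read(CHUNK)
            data=np.frombuffer(raw,dtype=np.int16)
            print(len(data)) # replace with real processing/plotting

    p=pyaudio.PyAudio()
    stream=p.open(format=pyaudio.paInt16,channels=1,rate=RATE,
                  input=True,frames_per_buffer=CHUNK)
    t=threading.Thread(target=capture_loop,args=(stream,))
    t.daemon=True
    t.start()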

    Best
    Pedro