Realtime Audio Visualization in Python

Python’s “batteries included” nature makes it easy to interact with just about anything… except speakers and a microphone! As of this moment, there are still no standard libraries which allow cross-platform interfacing with audio devices. There are some pretty convenient third-party modules, but I hope a standard solution will eventually be distributed with Python. I appreciate the differences between Linux audio architectures such as ALSA and OSS, but toss Windows and macOS into the mix and it gets to be a huge mess. For Linux, would I even need anything fancy? I can run “cat file.wav > /dev/dsp” from a command prompt to play audio. There are some standard libraries for operating-system-specific sound (i.e., winsound), but I want something more versatile. The official audio wiki page on the subject lists a small collection of third-party platform-independent libraries. After excluding those which don’t support microphone access (the ultimate goal of all my poking around in this subject), I dove a little deeper into sounddevice and PyAudio. Both of these I installed with pip (i.e., pip install pyaudio).

For a more modern, cleaner, and more complete GUI-based viewer of realtime audio data (and the FFT frequency data), check out my Python Real-time Audio Frequency Monitor project.

I really like the structure and documentation of sounddevice, but I decided to keep developing with PyAudio for now. Sounddevice seemed to take more system resources than PyAudio (in my limited test conditions: Windows 10 with very fast, modern hardware and Python 3), and it would audibly “glitch” music that was being played every time it attached or detached from the microphone stream. I tried streaming, but after about an hour I couldn’t get clean live access to the microphone without glitching audio playback. Furthermore, every few times I ran this script it crashed my Python kernel! I very rarely see that happen. IPython complained: “It seems the kernel died unexpectedly. Use ‘Restart kernel’ to continue using this console” and I eventually moved back to PyAudio. For a less “realtime” application, sounddevice might be a great solution. Here’s the minimal-case sounddevice script I tested with (that crashed sometimes). If you have a better way to do live high-speed audio capture, let me know!

import sounddevice #pip install sounddevice

for i in range(30): #30 updates in 1 second
    rec = sounddevice.rec(int(44100/30), samplerate=44100, channels=1)
    sounddevice.wait()
    print(rec.shape)
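If I revisit sounddevice, its callback-based InputStream (rather than calling rec() in a loop) is probably the fairer test. Here’s an untested sketch of that approach; whether it avoids the glitching I saw is something I haven’t verified:

import sounddevice # pip install sounddevice

def audio_callback(indata, frames, time, status):
    if status:
        print(status) # report overflows/underflows
    print(indata.shape) # one chunk of samples as a numpy array

with sounddevice.InputStream(samplerate=44100, channels=1,
                             callback=audio_callback):
    sounddevice.sleep(1000) # stream for about one second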

Here’s a simple demo to show how I get realtime microphone audio into numpy arrays using PyAudio. This isn’t really that special. It’s a good starting point though. Note that rather than have the user define a microphone source in the python script (I had a fancy menu system handling this for a while), I allow PyAudio to just look at the operating system’s default input device. This seems like a realistic expectation, and saves time as long as you don’t expect your user to be recording from two different devices at the same time. This script gets some audio from the microphone and shows the values in the console (ten times).

import pyaudio
import numpy as np

CHUNK = 4096 # number of data points to read at a time
RATE = 44100 # time resolution of the recording device (Hz)

p=pyaudio.PyAudio() # start the PyAudio class
stream=p.open(format=pyaudio.paInt16,channels=1,rate=RATE,input=True,
              frames_per_buffer=CHUNK) #uses default input device

# create a numpy array holding a single read of audio data
for i in range(10): #do it a few times just to see
    data = np.frombuffer(stream.read(CHUNK),dtype=np.int16)
    print(data)

# close the stream gracefully
stream.stop_stream()
stream.close()
p.terminate()
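One note: if whatever you do with each chunk (printing, plotting) takes longer than the chunk itself, PyAudio’s input buffer can overflow and stream.read() will raise an IOError. Recent PyAudio versions accept an extra argument to ignore this, so a hedged drop-in replacement for the read line above is:

data = np.frombuffer(stream.read(CHUNK, exception_on_overflow=False), dtype=np.int16)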


I tried to push the limit a little bit and see how much useful data I could get from this console window. It turns out that it’s pretty responsive! Here’s a slight modification of the code, made to turn the console window into an impromptu VU meter.

import pyaudio
import numpy as np

CHUNK = 2**11
RATE = 44100

p=pyaudio.PyAudio()
stream=p.open(format=pyaudio.paInt16,channels=1,rate=RATE,input=True,
              frames_per_buffer=CHUNK)

for i in range(int(10*RATE/CHUNK)): #go for about ten seconds
    data = np.frombuffer(stream.read(CHUNK),dtype=np.int16)
    peak=np.average(np.abs(data))*2
    bars="#"*int(50*peak/2**16)
    print("%04d %05d %s"%(i,peak,bars))

stream.stop_stream()
stream.close()
p.terminate()

The results are pretty good! The advantage here is that no libraries are required except PyAudio. For people interested in doing simple math (peak detection, frequency detection, etc.) this is a perfect starting point. Here’s a quick cellphone video:
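As an example of the kind of “simple math” mentioned above, here’s a self-contained sketch of dominant-frequency detection for a single chunk using numpy’s FFT. A synthesized 1 kHz tone stands in for a chunk read from the microphone, so it runs without any audio hardware:

import numpy as np

RATE = 44100
CHUNK = 4096
t = np.arange(CHUNK)/RATE
data = (np.sin(2*np.pi*1000*t)*2**14).astype(np.int16) # fake microphone chunk

fft = np.abs(np.fft.rfft(data)) # magnitude spectrum (positive frequencies only)
freqs = np.fft.rfftfreq(CHUNK, 1/RATE) # frequency (Hz) of each FFT bin
print("dominant frequency: %.1f Hz"%freqs[np.argmax(fft)])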

I’ve made realtime audio visualization (realtime FFT) scripts with Python before, but 80% of that code was creating a GUI. I want to see data in real time while I’m developing this code, but I really don’t want to mess with GUI programming. I then had a crazy idea. Everyone has a web browser, which is a pretty good GUI… with a Python script to analyze audio and save graphs (a lot of them, quickly) and some JavaScript running in a browser to keep refreshing those graphs, I could get an idea of what the audio stream is doing in something kind of like real time. It was intended to be a hack, but I never expected it to work so well! Check this out…

Here’s the python script to listen to the microphone and generate graphs:

import pyaudio
import numpy as np
import pylab
import time

RATE = 44100
CHUNK = int(RATE/20) # RATE / number of updates per second

def soundplot(stream):
    t1=time.time()
    data = np.frombuffer(stream.read(CHUNK),dtype=np.int16)
    pylab.plot(data)
    pylab.title(i)
    pylab.grid()
    pylab.axis([0,len(data),-2**16/2,2**16/2])
    pylab.savefig("03.png",dpi=50)
    pylab.close('all')
    print("took %.02f ms"%((time.time()-t1)*1000))

if __name__=="__main__":
    p=pyaudio.PyAudio()
    stream=p.open(format=pyaudio.paInt16,channels=1,rate=RATE,input=True,
                  frames_per_buffer=CHUNK)
    for i in range(int(20*RATE/CHUNK)): #this records about 20 seconds of audio
        soundplot(stream)
    stream.stop_stream()
    stream.close()
    p.terminate()
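Since these figures are only ever saved to disk, matplotlib’s non-interactive Agg backend may speed things up by skipping GUI windows entirely; an untested tweak would be to put this at the very top of the script:

import matplotlib
matplotlib.use('Agg') # render to image files only, no GUI window
import pylab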

Here’s the HTML file with JavaScript to keep reloading the image… 

<html>
<script language="javascript">
function RefreshImage(){
document.pic0.src="03.png?a=" + String(Math.random()*99999999);
setTimeout('RefreshImage()',50);
}
</script>
<body onload="RefreshImage()">
<img name="pic0" src="03.png">
</body>
</html>
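Note that the HTML file and 03.png need to sit in the same folder as the Python script. If your browser is reluctant to keep re-reading a local file, serving the folder over HTTP works too (then browse to http://localhost:8000/):

python -m http.server 8000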

Here’s the result! I couldn’t believe my eyes. It’s not elegant, but it’s kind of functional!

Why stop there? I went ahead and wrote a microphone listening and processing class which makes this stuff easier. My ultimate goal hasn’t been revealed yet, but I’m sure it’ll be clear in a few weeks. Let’s just say there’s a lot of use in me visualizing streams of continuous data. Anyway, this class’s name is a truly terrible attempt at a word pun, merging “SWH”, “ear”, and “Hear” into the official title “SWHear”, which seems to be unique on Google. The class is a minimal case, but it can easily be modified to implement threaded recording (so the rest of the program won’t hang while waiting for audio) as well as mathematical manipulation of data, such as an FFT (a sketch of the threaded approach follows the class below). With the same HTML file as used above, here’s the new Python script and some video of the output:

import pyaudio
import time
import pylab
import numpy as np

class SWHear(object):
    """
    The SWHear class is made to provide access to continuously recorded
    (and mathematically processed) microphone data.
    """

    def __init__(self,device=None,startStreaming=True):
        """fire up the SWHear class."""
        print(" -- initializing SWHear")

        self.chunk = 4096 # number of data points to read at a time
        self.rate = 44100 # time resolution of the recording device (Hz)

        # for tape recording (continuous "tape" of recent audio)
        self.tapeLength=2 #seconds
        self.tape=np.empty(self.rate*self.tapeLength)*np.nan

        self.p=pyaudio.PyAudio() # start the PyAudio class
        if startStreaming:
            self.stream_start()

    ### LOWEST LEVEL AUDIO ACCESS
    # pure access to microphone and stream operations
    # keep math, plotting, FFT, etc out of here.

    def stream_read(self):
        """return values for a single chunk"""
        data = np.frombuffer(self.stream.read(self.chunk),dtype=np.int16)
        #print(data)
        return data

    def stream_start(self):
        """connect to the audio device and start a stream"""
        print(" -- stream started")
        self.stream=self.p.open(format=pyaudio.paInt16,channels=1,
                                rate=self.rate,input=True,
                                frames_per_buffer=self.chunk)

    def stream_stop(self):
        """close the stream but keep the PyAudio instance alive."""
        if hasattr(self,'stream'):
            self.stream.stop_stream()
            self.stream.close()
        print(" -- stream CLOSED")

    def close(self):
        """gently detach from things."""
        self.stream_stop()
        self.p.terminate()

    ### TAPE METHODS
    # the tape is like a circular ribbon of magnetic tape that's continuously
    # recorded and recorded over in a loop. self.tape contains this data.
    # the newest data is always at the end. Don't modify data on the tape;
    # rather, do math on it (like FFT) as you read from it.

    def tape_add(self):
        """add a single chunk to the tape."""
        self.tape[:-self.chunk]=self.tape[self.chunk:]
        self.tape[-self.chunk:]=self.stream_read()

    def tape_flush(self):
        """completely fill tape with new data."""
        readsInTape=int(self.rate*self.tapeLength/self.chunk)
        print(" -- flushing %d s tape with %dx%.2f ms reads"%\
                  (self.tapeLength,readsInTape,self.chunk/self.rate))
        for i in range(readsInTape):
            self.tape_add()

    def tape_forever(self,plotSec=.25):
        t1=0
        try:
            while True:
                self.tape_add()
                if (time.time()-t1)>plotSec:
                    t1=time.time()
                    self.tape_plot()
        except KeyboardInterrupt:
            print(" ~~ stopped by keyboard interrupt")
            return

    def tape_plot(self,saveAs="03.png"):
        """plot what's in the tape."""
        pylab.plot(np.arange(len(self.tape))/self.rate,self.tape)
        pylab.axis([0,self.tapeLength,-2**16/2,2**16/2])
        if saveAs:
            t1=time.time()
            pylab.savefig(saveAs,dpi=50)
            print("plotting saving took %.02f ms"%((time.time()-t1)*1000))
        else:
            pylab.show()
            print() #good for IPython
        pylab.close('all')

if __name__=="__main__":
    ear=SWHear()
    ear.tape_forever()
    ear.close()
    print("DONE")

I don’t really intend anyone to actually do this, but it’s a cool alternative to recording a small portion of audio, plotting it in a pop-up matplotlib window, and waiting for the user to close it to record a new fraction. I had a lot more text in here demonstrating real-time FFT, but I’d rather consolidate everything FFT related into a single post. For now, I’m happy pursuing microphone-related python projects with PyAudio.

UPDATE: Displaying a single frequency

Use Numpy’s FFT() and FFTFREQ() to turn the linear data into frequency. Set that target and grab the FFT value corresponding to that frequency. I haven’t tested this to be sure it’s working, but it should at least be close…

import pyaudio
import numpy as np
np.set_printoptions(suppress=True) # don't use scientific notation

CHUNK = 4096 # number of data points to read at a time
RATE = 44100 # time resolution of the recording device (Hz)
TARGET = 2100 # show only this one frequency

p=pyaudio.PyAudio() # start the PyAudio class
stream=p.open(format=pyaudio.paInt16,channels=1,rate=RATE,input=True,
              frames_per_buffer=CHUNK) #uses default input device

# create a numpy array holding a single read of audio data
for i in range(10): #do it a few times just to see
    data = np.frombuffer(stream.read(CHUNK),dtype=np.int16)
    fft = np.abs(np.fft.fft(data)) # magnitude of the complex FFT
    fft = fft[:int(len(fft)/2)] # keep only first half
    freq = np.fft.fftfreq(CHUNK,1/RATE)
    freq = freq[:int(len(freq)/2)] # keep only first half
    assert freq[-1]>TARGET, "ERROR: increase chunk size"
    val = fft[np.where(freq>TARGET)[0][0]]
    print(val)

# close the stream gracefully
stream.stop_stream()
stream.close()
p.terminate()
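As a side note, numpy’s np.fft.rfft() and np.fft.rfftfreq() return only the positive-frequency half directly, so the manual “keep only first half” slicing above could be replaced with:

fft = np.abs(np.fft.rfft(data))
freq = np.fft.rfftfreq(CHUNK, 1/RATE)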


Calculate QRSS Transmission Time with Python

How long does a particular bit of Morse code take to transmit at a certain speed? This is a simple question, but when sitting down trying to design schemes for 10-minute-window QRSS, it doesn’t always have a quick and simple answer. Yeah, you could sit down and draw the pattern on paper and add up the dots and dashes, but why do on paper what you can do in code? The following speaks for itself. I made the top line say my call sign in Morse code (AJ4VD), and the program does the rest. I now see that it takes 570 seconds to transmit AJ4VD at QRSS 10 speed (ten-second dots), giving me 30 seconds of free time to kill.

Output of the following script, displaying info about “AJ4VD” (my call sign).

Here’s the Python code I whipped-up to generate the results:

xmit=" .- .--- ....- ...- -..  " #callsign
dot,dash,space,seq="_-","_---","_",""
for c in xmit:
    if c==" ": seq+=space
    elif c==".": seq+=dot
    elif c=="-": seq+=dash
print "QRSS sequence:n",seq,"n"
for sec in [1,3,5,10,20,30,60]:
    tot=len(seq)*sec
    print "QRSS %02d: %d sec (%.01f min)"%(sec,tot,tot/60.0)

How ready am I to implement this in the microchip? Pretty darn close. I’ve got a surprisingly stable software-based timekeeping solution continuously executing a “tick()” function thanks to hardware interrupts. It was made easy thanks to Frank Zhao’s AVR Timer Calculator. I could get it more exact by using a /1 prescaler instead of a /64, but this is well within the range of acceptability, so I’m calling it quits!

Output frequency is 1.0000210 Hz. That’ll drift 2.59 sec/day. I’m cool with that.

Crystal Oven Testing

To maintain high frequency stability, RF oscillator circuits are sometimes “ovenized”: their temperature is raised slightly above ambient room temperature and held precisely at one temperature. Sometimes just the crystal is heated (with a “crystal oven”), and other times the entire oscillator circuit is heated. The advantage of heating the whole circuit is that other components (especially metal-core inductors) are also temperature sensitive. Googling for the phrase “crystal oven”, you’ll find no shortage of recommended circuits. Although a more complicated PID (proportional-integral-derivative) controller may seem enticing for these situations, the fact that the enclosure is so well insulated and drifts so little over long periods of time suggests that it might not be the best application of a PID controller. One of my favorite write-ups is from M0AYF’s site, which describes how to build a crystal oven for QRSS purposes. He demonstrates his MK1 design and then its successor, the MK2 crystal oven controller. Here are his circuits:

Briefly, the desired temperature is set with a potentiometer. An operational amplifier (op-amp) compares the target temperature with the measured temperature (using a thermistor, a resistor whose resistance varies with temperature). If the measured temperature is below the target, the op-amp output goes high and current flows through heating resistors. There are a few differences between the two circuits, but one of the things that struck me as different was the use of negative feedback with the operational amplifier. This means that rather than being fully on or off (like the air conditioning in your house), the heater can be on a little bit. I wondered if this would greatly affect frequency stability. In the original circuit, he mentions:

The oven then cycles on and off roughly every thirty or forty seconds and hovers around 40 degrees-C thereafter to within better than one degree-C.

I wondered how much this on/off heater cycle affected temperature. Is it negligible, or could it affect the frequency of an oscillator circuit? His application heats an entire enclosure, so small variations get averaged out by the large thermal mass. However, in crystal oven designs where only the crystal is heated, such as the one described by Bill (W4HBK), I’ll bet the effect is much greater. Compare the thermal mass of these two concepts.
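To get a feel for it before building anything, here’s a rough toy simulation (my own invention, with made-up thermal constants and a 2-second lag between the heater and the sensor) contrasting comparator-style on/off control with low-gain proportional control. The on/off loop never stops cycling, while the proportional loop settles just below the target and stays there:

import numpy as np

def simulate(proportional_gain=None, seconds=1200, dt=0.1,
             ambient=25.0, target=40.0, lag_seconds=2.0):
    """Toy thermal model: a heater warms a mass, the mass leaks heat to ambient,
    and the controller only sees the temperature from lag_seconds ago."""
    temp = ambient
    lag = [ambient]*int(lag_seconds/dt) # delayed sensor readings
    temps = []
    for _ in range(int(seconds/dt)):
        error = target - lag[0] # controller acts on the delayed reading
        if proportional_gain is None:
            power = 1.0 if error > 0 else 0.0 # comparator-style on/off control
        else:
            power = min(max(proportional_gain*error, 0.0), 1.0) # throttled heater drive
        temp += (2.0*power - 0.1*(temp - ambient))*dt # heat in minus loss to ambient
        lag.append(temp)
        lag.pop(0)
        temps.append(temp)
    return np.array(temps)

onoff = simulate() # full on/off: overshoots the target and keeps cycling
proportional = simulate(proportional_gain=0.2) # settles smoothly, a bit below target
print("on/off swing over the last 5 minutes:       %.2f C"%np.ptp(onoff[-3000:]))
print("proportional swing over the last 5 minutes: %.2f C"%np.ptp(proportional[-3000:]))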

How does the amount of thermal mass relate to how well temperature can be controlled? How important is negative feedback for partial-on heater operation? Can simple ON/OFF heater regulation adequately stabilize a crystal or enclosure? I’d like to design my own heater, pulling the best elements from the others I see on the internet. My goals are:

  1. use inexpensive thermistors instead of linear temperature sensors (like LM335)
  2. use inexpensive quarter-watt resistors as heaters instead of power resistors
  3. be able to set temperature with a knob
  4. be able to monitor temperature of the heater
  5. be able to monitor power delivered to the heater
  6. maximum long-term temperature stability

Right off the bat, I realized that this requires a PC interface. Even if it’s not used to adjust temperature (an ultimate goal), it will be used to log temperature and power for analysis. I won’t go into the details about how I did it, other than to say that I’m using an Atmel ATMega8 AVR microcontroller, and ten times a second I sample the voltage on each of its six 10-bit ADC pins (PC0-PC5) and send that data to the computer over USART using an eBay-special serial/USB adapter based on FTDI. They’re <$7 (shipped) and come with the USB cable. Obviously in a consumer application I’d etch boards and use the SMT-only FTDI chips, but for messing around at home I bought a few of these little adapters. They’re convenient as heck because I can just add a header to my prototype boards, and the adapter even supplies power and ground. Convenient, right? Power is messier than it could be because it’s being supplied by the PC, but for now it gets the job done. On the software side, Python with PySerial listens to the serial port and copies data to a large numpy array, saving it every once in a while. Occasionally a bit is sent wrong and a number is received incorrectly (maybe once an hour), but the error is recognized and eliminated by the checksum (just the sum of all transmitted numbers). Plotting is done with numpy and matplotlib. Code for all of that is at the bottom of this post.

That’s the data logger circuit I came up with. Reading six channels ten times a second, it’s more than sufficient for voltage measurement. I went ahead and added an op-amp to the board too, since I knew I’d be using one. I dedicated one of the channels to ambient temperature measurement. See the little red thermistor by the blue resistor? I also dedicated another channel to the output of the op-amp. This way I can measure drive to whatever temperature controller circuitry I choose to use down the road. For my first test, I’m using a small thermal mass like one would in a crystal oven. Here’s how I made that:

I then built the temperature controller part of the circuit. It’s pretty similar to those previously published: it uses a thermistor in a voltage divider configuration to sense temperature, and a trimmer potentiometer to set the target temperature. An LED indicator light gives some indication of on/off, but keep in mind that a fraction of a volt will turn the Darlington transistor (TIP122) on slightly even though it doesn’t reach a level high enough to drive the LED. The amplifier is set to high gain (55x) by default, but can be greatly lowered (negative gain, actually) with a jumper. This lets me test how important gain is for the circuitry.
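For reference, turning that thermistor divider reading into an actual temperature takes one more step beyond computing resistance. Here’s a hedged sketch using the common Beta-parameter approximation; the 10k-at-25°C and B=3950 values are generic NTC datasheet numbers assumed for illustration, not values measured from my part:

import math

def adc2C(adc, Vi=5.0, R2=10000.0, R0=10000.0, T0=298.15, B=3950.0):
    # same divider math as the adc2R() function later in this post,
    # followed by the Beta-parameter approximation (R0 and B are assumed values)
    Vo = adc*Vi/1024.0
    R1 = R2*(Vi - Vo)/Vo # thermistor resistance
    kelvin = 1.0/(1.0/T0 + math.log(R1/R0)/B)
    return kelvin - 273.15

print(adc2C(512)) # mid-scale reading means thermistor equals R2, so about 25 C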


When using a crystal oven configuration, I concluded that high gain (cycling the heater on/off) is a BAD idea. While the average temperature is held about the same, the crystal’s temperature oscillates. This is what is occurring above when M0AYF indicates his MK1 heater turns on and off every 40 seconds. While you might be able to get away with it while heating a chassis or something, I think it’s easy to see it’s not a good option for crystal heaters. Instead, look at the low gain (negative gain) configuration. It reaches temperature surprisingly quickly and locks to it steadily. Excellent.

High gain configuration tends to oscillate every 30 seconds.

Low gain / negative gain configuration is extremely stable (fairly high temperature).

Here’s a similar experiment with a lower target temperature. Noise is due to the unregulated USB power supply / voltage reference. Undeniably, this circuit does not oscillate much if any.

Clearly low (or negative) gain is best for crystal heaters. What about chassis / enclosure heaters? Let’s give that a shot. I made an enclosure heater with the same two resistors. Again, I’m staying away from expensive components, and that includes power resistors. I used epoxy (Gorilla Glue) to cement them to the wall of one side of the enclosure.

I put a “heater sensor” thermistor near the resistors on the case so I could get an idea of the heat of the resistors, and a “case sensor” on the opposite side of the case. This will let me know how long it takes the case to reach temperature, and let me compare differences between using near vs. far sensors (with respect to the heating element) to control temperature. I ran the same experiments and this is what I came up with!

CLOSE SENSOR CONTROL, LOW/HIGH GAIN: TOP: heater temperature (blue) and enclosure temperature (green) with low gain (first 20 minutes), then high gain (after) operation. The high-gain sensor/feedback loop is sufficient to induce oscillation, even with the large thermal mass of the enclosure. BOTTOM: power to the heater (voltage off the op-amp output going into the base of the Darlington transistor). Although I didn’t give the low-gain configuration time to equilibrate, I doubt it would have oscillated on a time scale I am patient enough to see. Future, days-long experimentation will be required to determine if it oscillates significantly.

FAR SENSOR CONTROL, HIGH GAIN: Even with the far sensor (opposite side of the enclosure as the heater) driving the operational amplifier in high gain mode, oscillations occur. Blue is the far sensor temperature; green is the sensor near the heater. Due to the larger thermal mass and the increased distance the heat must travel to be sensed, the oscillations take much longer to occur, making them slower and larger than those seen earlier when the heater was very close to the sensor.

Right off the bat, we observe that even with the increased thermal mass of the entire enclosure (being heated with two dinky 100 ohm 1/4 watt resistors), the system is prone to temperature oscillation if gain is set too high. For me, this is the final nail in the coffin: I will never use a comparator-type high-gain sensor/regulation loop to control heater current. With that settled, the only thing left to compare is which is better: placing the sensor near the heating element, or far from it. In reality, with a well-insulated device like I seem to have, it doesn’t make much of a difference! The idea is that by placing it near the heater, the loop can stabilize quickly, while placing it far from the heater gives it the best sense of the “load” temperature. Anywhere in between should be fine. As long as the sensor is somewhat thermally coupled to the enclosure, enclosure temperature will pull it slightly away from heater temperature regardless of location. Therefore, I conclude it’s not that critical where the sensor is placed, as long as it has good contact with the enclosure. Perhaps with long-term study (on the order of hours to days) slow oscillations may emerge, but I’ll have to build it in a more permanent configuration to test that. Luckily, that’s exactly what I plan to do, so check back a few days from now!

Since the data speaks for itself, I’ll be concise with my conclusions:

  • two 1/4 watt 100 Ohm resistors in parallel (50 ohms) are suitable to heat an insulated enclosure with 12V
  • two 1/4 watt 100 Ohm resistors in parallel (50 ohms) are suitable to heat a crystal with 5V
  • low gain or negative gain is preferred to prevent oscillating temperatures
  • Sensor location on an enclosure is not critical as long as it’s well-coupled to the enclosure and the entire enclosure is well-insulated.

I feel satisfied with today’s work. Next step is to build this device on a larger scale and fix it in a more permanent configuration, then leave it to run for a few weeks and see how it does. On to making the oscillator! If you have any questions or comments, feel free to email me. If you recreate this project, email me! I’d love to hear about it.

Here’s the code that went on the ATMega8 AVR (it continuously transmits voltage measurements on 6 channels).

#define F_CPU 8000000UL
#include <avr/io.h>
#include <util/delay.h>
#include <avr/interrupt.h>

/*
8MHZ: 300,600,1200,2400,4800,9600,14400,19200,38400
1MHZ: 300,600,1200,2400,4800
*/
#define USART_BAUDRATE 38400
#define BAUD_PRESCALE (((F_CPU / (USART_BAUDRATE * 16UL))) - 1)

/*
ISR(ADC_vect)
{
    PORTD^=255;
}
*/

void USART_Init(void){
	UBRRL = BAUD_PRESCALE;
	UBRRH = (BAUD_PRESCALE >> 8);
	UCSRB = (1<<TXEN);
	UCSRC = (1<<URSEL)|(1<<UCSZ1)|(1<<UCSZ0); // 8N1 (8 data bits, no parity, 1 stop bit)
}

void USART_Transmit( unsigned char data ){
	while ( !( UCSRA & (1<<UDRE)) );
	UDR = data;
}

void sendNum(long unsigned int byte){
	// transmit the number as ASCII digits, least-significant digit first
	// (the PC-side script reverses each number with [::-1])
	if (byte==0){
		USART_Transmit(48);
	}
	while (byte){
		USART_Transmit(byte%10+48);
		byte-=byte%10;
		byte/=10;
	}
}

int readADC(char adcn){
	ADMUX = 0b0100000+adcn;
	ADCSRA |= (1<<ADSC); // start a single conversion
	while (ADCSRA & (1<<ADSC)) {}; // wait for measurement
	return ADC>>6;
}

int sendADC(char adcn){
	int val;
	val=readADC(adcn);
	sendNum(val);
	USART_Transmit(',');
	return val;
}

int main(void){
	ADCSRA = (1<<ADEN)  | 0b111;
	DDRB=255;
	USART_Init();
	int checksum;

	for(;;){
		PORTB=255;
		checksum=0;
		checksum+=sendADC(0);
		checksum+=sendADC(1);
		checksum+=sendADC(2);
		checksum+=sendADC(3);
		checksum+=sendADC(4);
		checksum+=sendADC(5);
		sendNum(checksum);
		USART_Transmit('\n');
		PORTB=0;
		_delay_ms(200);
	}
}

Here’s the command I used to compile the code, set the AVR fuse bits, and load it to the AVR.

del *.elf
del *.hex
avr-gcc -mmcu=atmega8 -Wall -Os -o main.elf main.c -w
pause
cls
avr-objcopy -j .text -j .data -O ihex main.elf main.hex
avrdude -c usbtiny -p m8 -F -U flash:w:"main.hex":a -U lfuse:w:0xe4:m -U hfuse:w:0xd9:m

Here’s the code that runs on the PC to listen to the microchip, match the data to the checksum, and log it occasionally. 

import serial, time
import numpy
ser = serial.Serial("COM16", 38400, timeout=100)

line=ser.readline()[:-1]
t1=time.time()
lines=0

data=[]

def adc2R(adc):
    Vo=adc*5.0/1024.0
    Vi=5.0
    R2=10000.0
    R1=R2*(Vi-Vo)/Vo
    return R1

while True:
    line=ser.readline()[:-1]
    lines+=1
    if "," in line:
        line=line.split(",")
        for i in range(len(line)):
            line[i]=int(line[i][::-1]) # digits arrive least-significant first

        if line[-1]==sum(line[:-1]): # last value is the checksum
            line=[time.time()]+line[:-1]
            print lines, line
            data.append(line)
        else:
            print lines, line, "<-- FAIL"

    if lines%50==49:
        numpy.save("data.npy",data)
        print "nSAVINGn%d lines in %.02f sec (%.02f vals/sec)n"%(lines,
            time.time()-t1,lines/(time.time()-t1))

Here’s the code that runs on the PC to graph data.

import matplotlib
matplotlib.use('TkAgg') # <-- THIS MAKES IT FAST!
import numpy
import pylab
import datetime
import time

def adc2F(adc):
    Vo=adc*5.0/1024.0
    K=Vo*100
    C=K-273
    F=C*(9.0/5)+32
    return F

def adc2R(adc):
    Vo=adc*5.0/1024.0
    Vi=5.0
    R2=10000.0
    R1=R2*(Vi-Vo)/Vo
    return R1

def adc2V(adc):
    Vo=adc*5.0/1024.0
    return Vo

if True:
    print "LOADING DATA"
    data=numpy.load("data.npy")
    data=data
    print "LOADED"

    fig=pylab.figure()
    xs=data[:,0]
    tempAmbient=data[:,1]
    tempPower=data[:,2]
    tempHeater=data[:,3]
    tempCase=data[:,4]
    dates=(xs-xs[0])/60.0
    #dates=[]
    #for dt in xs: dates.append(datetime.datetime.fromtimestamp(dt))

    ax1=pylab.subplot(211)
    pylab.title("Temperature Controller - Low Gain")
    pylab.ylabel('Heater (ADC)')
    pylab.plot(dates,tempHeater,'b-')
    pylab.plot(dates,tempCase,'g-')
    #pylab.axhline(115.5,color="k",ls=":")

    #ax2=pylab.subplot(312,sharex=ax1)
    #pylab.ylabel('Case (ADC)')
    #pylab.plot(dates,tempCase,'r-')
    #pylab.plot(dates,tempAmbient,'g-')
    #pylab.axhline(0,color="k",ls=":")

    ax2=pylab.subplot(212,sharex=ax1)
    pylab.ylabel('Heater Power')
    pylab.plot(dates,tempPower)

    #fig.autofmt_xdate()
    pylab.xlabel('Elapsed Time (min)')

    pylab.show()

print "DONE"

     

AJ4VD (QRSS VD) MEPT

This page documents the progress of my MEPT (manned experimental propagation transmitter) endeavors. If you have questions, feel free to E-mail me! My contact information can be found by clicking the link on the right navigation menu.

THE TRANSMITTER

Here it is:

THE SIGNAL


THE REPORTS

Florida – 288.3 miles away (W4HBK) May 22, 2010

Massachusetts – 1,075.5 miles away (W1BW) May 27, 2010

Belgium – 4,496.3 miles away (ON5EX) May 27, 2010

Germany – 4,869.2 miles away (DL4MGM) May 28, 2010

Essex – 4,356.4 miles away (G6AVK) May 28, 2010

New Zealand – 8,077.6 miles away (ZL2IK) May 29, 2010