P18102: RIT Launch Initiative Hybrid Rocket

Test Computer


= Test Stand Computing =

The test stand required computer instrumentation so that it could be operated remotely and the test observed. We wanted a few video feeds from the stand to our testing table, as well as full valve control and sensor readouts. This is a brief overview of how we achieved that.

Network Layout

I put 3 Raspberry Pi computers, an 8-port gigabit switch, 4 PS3 Eye cameras, and an Arduino into the bunker. (Truth be told, two of the "Pis" were actually Odroid C2s, but they're essentially the same thing.) I wired the Pis to the switch with static IPs and used SSH for primary communication with them during the actual test (along with custom applications written in Python).

Onboard Sensors and Valve Control

To control the valves and sensors connected to our Teensy microcontroller, I wrote a few different applications that communicated over USB with a serial protocol. That is, the Teensy was literally plugged in over USB.

I wrote a Python script to communicate with the Teensy, another to send those readings over the network, and a third to receive the data and display it on a remote laptop.




This script is what actually communicates with the various input serial devices. It's currently set up to talk to a Teensy, an Arduino, and an OpenScale board.

It works by scanning all serial ports on the computer and, if a port's description matches one of the devices above, opening a communication line with it. All devices are stored in an array, and another script can call check_data() on the communicator object to check for new data, or read_data() to get any new data. All serial devices must be outputting their data on some sort of time interval (i.e. the script doesn't ask them for data, it just reads whatever they spit out). All serial devices must output data in the following format:


The communicator will initialize all of the label fields and echo them to the terminal. This script is not run by itself; it just holds the communicator class.

If no serial devices can be initialized, or you just want to test the plotter, it will generate "random" serial data for you.
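The line parsing and the random-data fallback can be sketched in a few lines. This is a minimal illustration, not the actual communicator code; the "thrust" and "pressure" labels are placeholders, not the real sensor labels:

```python
import random

def parse_line(line):
    """Parse one 'time,label,value,label,value,...' serial line
    into (time, {label: value})."""
    fields = line.strip().split(",")
    t = float(fields[0])
    return t, {label: float(value)
               for label, value in zip(fields[1::2], fields[2::2])}

def random_line(t, labels=("thrust", "pressure")):
    """Fallback when no serial devices initialize: fabricate a line in
    the same format so the plotter can be tested without hardware."""
    pairs = ",".join("%s,%.2f" % (l, random.uniform(0, 100)) for l in labels)
    return "%.3f,%s" % (t, pairs)
```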



This script opens a communicator object, which initializes connections to all attached serial devices. Then it starts a TCP socket server and listens for connections. When it has a connection, it loops, calling check_data() on the communicator every 10 (?) ms and trying to read data from the client on the other end. The client must send a command to the networker to receive any data. The commands are as follows:

The last command allows the computer to send ASCII characters to the Teensy, which is what the Teensy used for valve control.
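The command list itself didn't survive on this page, so here is a minimal sketch of the networker's loop with hypothetical "DATA" and "SEND:" commands standing in for the real ones (the port number is also a placeholder):

```python
import socket

def handle_command(cmd, communicator, teensy):
    """Dispatch one client command. 'DATA' and 'SEND:<chars>' are
    hypothetical stand-ins: 'DATA' returns any buffered readings, and
    'SEND:' forwards ASCII characters to the Teensy for valve control."""
    cmd = cmd.strip()
    if cmd == "DATA":
        return communicator.read_data()
    if cmd.startswith("SEND:"):
        teensy.write(cmd[len("SEND:"):].encode("ascii"))
        return "OK"
    return "ERR unknown command"

def serve(communicator, teensy, host="0.0.0.0", port=5000):
    """Accept one client, then loop: poll the communicator roughly
    every 10 ms and answer any command the client sent."""
    srv = socket.socket()
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.settimeout(0.010)
    while True:
        communicator.check_data()   # pull new serial data into the buffer
        try:
            cmd = conn.recv(1024).decode("ascii")
        except socket.timeout:
            continue
        if not cmd:
            break                   # client hung up
        conn.sendall(handle_command(cmd, communicator, teensy).encode("ascii"))
```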



This class mimics a local communicator but actually receives its data over the network from a networker. This is where you change the IP of the computer running networker.
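A minimal sketch of what that client side might look like. The host, port, and "DATA" command here are assumptions matching the networker sketch above, not the actual values from the project:

```python
import socket

class NetCommunicator:
    """Drop-in stand-in for a local communicator: same read_data()
    interface, but the values arrive over TCP from a networker."""
    HOST, PORT = "192.168.1.10", 5000   # change to the computer running networker

    def __init__(self, host=None, port=None):
        self.sock = socket.socket()
        self.sock.connect((host or self.HOST, port or self.PORT))

    def read_data(self):
        # Hypothetical command name; the real protocol's commands were
        # not preserved on this page.
        self.sock.sendall(b"DATA")
        return self.sock.recv(4096).decode("ascii")
```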



This is a middleware class between the plotter and the communicator (whether that's a netcommunicator or a local communicator). The initializer lets the user choose between networked and local operation. If local is desired, QThreader initializes a communicator, which opens serial communications locally. Otherwise, it opens a netcommunicator, which opens a socket to a networker running on a networked computer.

This class polls the communicator every 40 ms looking for new data. If it finds new data, it polls again 20 ms later in case there's more in the buffer. When new data arrives, it emits a Qt signal that tells the plotter to plot it.
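The two-interval polling scheme can be sketched without Qt. Here on_data stands in for the Qt signal, and the class name is mine, not QThreader's actual code:

```python
import threading
import time

class Poller:
    """Check for data every 40 ms; if something arrived, check again
    20 ms later in case more is sitting in the buffer."""
    def __init__(self, communicator, on_data, slow=0.040, fast=0.020):
        self.comm, self.on_data = communicator, on_data
        self.slow, self.fast = slow, fast
        self._stop = threading.Event()

    def run(self):
        while not self._stop.is_set():
            data = self.comm.read_data()
            if data:
                self.on_data(data)      # "emit" to the plotter
                delay = self.fast       # drain the buffer sooner
            else:
                delay = self.slow
            time.sleep(delay)

    def stop(self):
        self._stop.set()
```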

QThreader also handles writing the data out to a file. Currently it writes it in object form, "time,(label,value),(label,value),...", which is almost but not quite CSV. To make it a CSV, either fix the code or just find and delete all of the parentheses.
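The delete-the-parentheses fix is a one-liner; here's a hypothetical post-processing converter (function names are mine):

```python
def strip_parens(line):
    """Turn one "time,(label,value),(label,value)" log line into a
    plain CSV row by deleting the parentheses, as suggested above."""
    return line.replace("(", "").replace(")", "")

def log_to_csv(in_path, out_path):
    """Apply the fix to a whole recorded log file."""
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            dst.write(strip_parens(line))
```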



This is the meat and potatoes of the plotter application. It takes one parameter: "net". That is, to run a local plotter (where the Teensy/Arduino/OpenScale are all hooked up locally), just run "python3 plotter.py" (assuming the requisite software is installed: PyQt4 and pyqtgraph). It will initialize a local QThreader and communicator and begin plotting.

If you want to run over the network, change the IP/port in netcommunicator.py and then run "python3 plotter.py net". It will connect to the networker on the networked computer and start getting data from there. The networker must already be running on the other computer ("python3 networker.py"). If you just want to test this functionality, you can run both locally and the communicator will generate random values. To do this, set the IP to "localhost" or "".

The plotter takes the parameters received from the initialized communicator and sets up a graph and label for each one. The plotter plots one point for every X samples received (currently 5; change DATA_COUNT) and averages the X in between. This running average helps with (but doesn't eliminate) jitter. Although not every data point is plotted, they are all recorded.
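The averaging scheme works out to simple block averaging; a minimal sketch (the decimate name is mine, not necessarily the plotter's):

```python
DATA_COUNT = 5  # plot one averaged point per 5 received samples

def decimate(samples, n=DATA_COUNT):
    """Average every n consecutive samples into one plotted point,
    mirroring the jitter-smoothing described above. Leftover samples
    (fewer than n) are not plotted, though they would still be
    recorded to the log file."""
    points = []
    for i in range(0, len(samples) - n + 1, n):
        chunk = samples[i:i + n]
        points.append(sum(chunk) / n)
    return points
```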

External Sensors

Thrust Load Cell

We didn't use many external sensors for this project, although we could have done with a few more. I had one large load cell to measure thrust, and four smaller load cells to try to measure mass flow rate through the system. The large load cell was hooked up to an OpenScale board with custom firmware loaded. The custom firmware sent the scale measurement at 10 Hz over serial in the following format:


just like the rest of the sensors did (i.e. the "time,label,value,..." format). In this way it interfaced immediately with the software I'd already written. I could have pushed the speed higher (> 10 Hz), but I hadn't tested that before test day and wanted to make sure our data collection worked.


Mass flow reading

The other four load cells were FC2231 load cells. These are different from the previous one because they have a built-in ASIC to read the strain gauge and amplify the signal to something readable. I hooked all four up to an Arduino, with +5 V power from the board and the signal wire from each going into A0-A3. I summed these values and calibrated by subtracting the initial reading to zero the scale, then multiplying by a constant (don't worry, the code link will follow). We found the constant by setting a gallon of water on the scale.
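The zero-then-scale calibration works out to this (a sketch in Python rather than the Arduino code; the function names and the 8.34 lb per US gallon figure are mine):

```python
GALLON_OF_WATER_LB = 8.34   # weight of the known calibration load

def calibrate(zero_readings, gallon_readings):
    """Derive a tare offset and scale constant from readings taken with
    nothing on the scale and with a gallon of water on it, as described
    above. Raw values are the sum of the four A0-A3 channels. Returns
    (offset, constant) such that weight = (raw - offset) * constant."""
    offset = sum(zero_readings) / len(zero_readings)
    loaded = sum(gallon_readings) / len(gallon_readings)
    constant = GALLON_OF_WATER_LB / (loaded - offset)
    return offset, constant

def to_weight(raw, offset, constant):
    """Convert a raw summed reading to weight in pounds."""
    return (raw - offset) * constant
```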


Camera reading

We also wanted visuals from inside the bunker. To this end, I decided to use PS3 Eye cameras, because I had a bunch sitting around (at the time they were ~$5 on Amazon). The USB interface meant I couldn't hook them directly up to the router, but it saved a couple hundred dollars. I plugged them into some Raspberry Pis I had sitting around and used mjpg-streamer to stream the video from them. I probably should have put only one camera per board, but two worked well enough. I compiled mjpg-streamer from its GitHub repo, which has handy instructions, and then ran the cameras off two separate ports.

Then, to actually view the cameras, there are a number of solutions. Mjpg-streamer automagically serves a local website you can use, and there are any number of DIY home-security solutions. I've used OpenCV before and had it installed, so I wrote another custom application using that.

It connected to each computer using a list of IP/ports, then displayed each camera in a separate window. By clicking the windows I could start and stop recording. The biggest issue with this solution is recording speed. Recording four camera feeds at the same time definitely requires an SSD, and should perhaps be done on separate computers. My laptop couldn't process the incoming feeds quickly enough and started lagging after running for a few minutes. This was somewhat fixed by not recording, but it's a bug that needs addressing.

Python Script For Recording

The script requires OpenCV/NumPy. There's a built-in list of IP addresses and ports that will need to be changed depending on where your cameras are located (if you plan on using this).
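A sketch of how such a viewer might be wired up. The addresses below are placeholders (two streams per Pi on two ports, as described above), and the "?action=stream" path is mjpg-streamer's default stream endpoint; this is not the project's actual script:

```python
try:
    import cv2   # only needed to actually display frames
except ImportError:
    cv2 = None

# Placeholder addresses: change these to wherever your cameras live.
CAMERAS = [("192.168.1.11", 8080), ("192.168.1.11", 8081),
           ("192.168.1.12", 8080), ("192.168.1.12", 8081)]

def stream_url(host, port):
    """Build the MJPEG URL that mjpg-streamer serves by default."""
    return "http://%s:%d/?action=stream" % (host, port)

def view(cameras=CAMERAS):
    """Open one OpenCV window per camera; press 'q' to quit."""
    caps = [cv2.VideoCapture(stream_url(h, p)) for h, p in cameras]
    while True:
        for i, cap in enumerate(caps):
            ok, frame = cap.read()
            if ok:
                cv2.imshow("camera %d" % i, frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    for cap in caps:
        cap.release()
    cv2.destroyAllWindows()
```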

Things that could be done better