Team Vision for Detailed Design Phase
Our Plan
Our main goals for this phase were to choose a clock sync method, determine model sensitivity, pick an antenna, and design the electrical housing.
 Model Antenna
 Create RVCM
 Choose Preamp and Bandpass filter
 Choose SDR
 Test Antennas on Satellites
 Apply for IEEE Grant
 Research Errors in OD
 Choose Lightning Arrester
 Choose TDoA Distance
 Choose Clock Sync Method
 Choose Antenna Design
 Define TDoA Sensitivity for Multiple Distances
 Proof of Concept on Determining Time Differences
 Choose SDR Software
 Design Electronics Housing
 Define Network Architecture & Hardware Requirements
 Refine Test Plan
 Create Orekit OD for azimuth and elevation, and error inputs
 Consider Failure Modes
 Test OD Software with Simulated Data
 Refine BOM
 Heat Transfer Analysis of Electronics
 Electrostatic Discharge Design Considerations
 Order Parts
Our Accomplishments
The following describes what we actually accomplished during the Detailed Design Phase:
 Ordered some parts, and plan to order long lead time items on Friday, December 13th
 Refined BOM
 Designed Electronics Housing
 Defined Network Architecture & Hardware Requirements
 Refined Test Plan
 Created Orekit OD for azimuth and elevation, and error inputs
 Defined TDoA Sensitivity for Multiple Distances with circular orbit
 Proof of Concept on Determining Time Differences
 Chose Clock Sync Method
 Chose Lightning Arrester
 Modeled Antenna with GNEC
 Created RVCM
 Chose Preamp and Bandpass filter
 Chose SDR
 Ran more tests on Antennas on Satellites
Prototyping, Engineering Analysis, Simulation
Signal Acquisition
Prototype Antenna Testing
 After finishing our prototyping for our QFH and Double Turnstile antennas, we attempted to acquire satellite signals with them
 Satellite used for testing was PRISM
 Testing was done over two different passes
 Each test had roughly the same max elevation
Prototype Antenna Testing Results
 Results turned out very well
 Both antennas yielded roughly the same strength signal, with the QFH having slightly stronger results
 We decided that, of the two designs, the QFH is preferable because it is much easier to construct
 With a low noise amplifier, signal acquisition could be much stronger
Digital Signal Processing
Cross Correlation
Cross correlation is a viable method for extracting time differences from recorded signal data. An analysis was done looking into the tradeoffs associated with this method: simulated transmission of both simulated and real data was carried out over known distances.
 Realistic signal attenuation was added.
 Random noise was added.
There are four different types of cross correlation. Based on a study done here: http://www.panoradiosdr.de/correlationfortimedelayanalysis/ we are going to look into complex cross correlation and amplitude cross correlation. The following observations were made about cross correlation:
 The larger bandwidth a signal has relative to the sample size, the more accurate the cross correlation will be as noise is increased.
 The lower the SNR (Signal-to-Noise Ratio), the less accurate the cross correlation will be.
 Cross correlation is never 100% accurate, but error is minimized as SNR and Bandwidth are maximized.
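As an illustration of the technique (a hedged sketch, not the team's actual processing code), the snippet below estimates a time delay by locating the peak of the cross correlation between a wideband signal and a noisy, delayed copy. The 3.2 MSPS sample rate and 3-sample offset are borrowed from the FM capture discussed in this section; the white-noise test signal is an assumption.

```python
import numpy as np

def estimate_delay(sig_a, sig_b):
    """Estimate the sample lag of sig_b relative to sig_a via cross correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    # In "full" mode, index len(sig_a)-1 corresponds to zero lag.
    return int(np.argmax(np.abs(corr)) - (len(sig_a) - 1))

rng = np.random.default_rng(0)
fs = 3.2e6                                   # sample rate in samples/s
true_delay = 3                               # known offset in samples
wideband = rng.standard_normal(100_000)      # wideband (white) test signal
noisy = np.roll(wideband, true_delay) + 5.0 * rng.standard_normal(100_000)

lag = estimate_delay(wideband, noisy)
print(lag, lag / fs * 1e6)                   # 3 samples -> 0.9375 microseconds
```

Consistent with the observations above, shrinking the signal bandwidth (e.g. low-pass filtering the test signal) or raising the noise level makes the correlation peak harder to distinguish.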
The analysis is detailed in the Detailed Design Review Presentation. Below are a few notable images from the simulation.
Cross Correlation Result from the simulated transmission of actual recorded I/Q data of an FM signal with significant noise added.
For the FM I/Q data used above, the sample rate was 3.2 MSPS and the cross correlation resulted in an error of 3 samples, or 0.9375 µs. The added noise reduced the SNR to approximately 0.004. This accuracy with so much noise added was achieved due to the large bandwidth of the signal.
FFT of the raw FM I/Q data with noise added, where increasing the noise level begins to degrade the cross correlation accuracy.
A similar test was conducted on a simulated transmission of a much smaller bandwidth FM signal, and the accuracy of the cross correlation degraded with much less noise added. The large bandwidth FM signal was also recorded at two different locations, and cross correlation was carried out on the actual signals to measure a real delay.
The accuracy of the cross correlation of the real transmission was not measured because there is currently no way to synchronize the recordings. However, the cross correlation plot of the real transmission resembles the plot of the simulated transmission with added noise. This analysis shows that cross correlation of real data will be feasible for extracting time differences, given a large enough bandwidth or low enough noise, in conjunction with synchronized recording and a significant amount of recorded data.
Unique Symbol Search
Unique symbol search is a method for extracting time differences among stations that requires lower bandwidth than cross correlation, but initial prototyping indicates it is sensitive to noise.
To determine the time difference, take the SDR output signals from two stations (A and B) and use Amplitude Shift Keying (ASK) demodulation to quantize the signals. Next, find a list of unique symbols that occurred in the digital signal for station A. Search the digital signal of station B for the unique symbols from station A, then find which sample these unique symbols occurred on. From the sample difference, the time difference between signals A and B can be found.
The analog signal from the SDR can be quantized into a multi-level signal. With more levels, a unique symbol is more likely to appear; with fewer levels, the chance of noise impacting the results decreases. In addition, a longer symbol length increases the likelihood that a unique symbol will be found and decreases the chance of false matches between signals A and B. Conversely, a shorter symbol requires less computational power but increases sensitivity to noise. If the unique symbol cannot be found in signal B due to noise differences, no time difference can be extracted.
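A minimal Python sketch of this search (the team's implementation is in MATLAB; the helper names and random-walk test waveform here are invented for illustration): quantize both signals, collect the symbols that occur exactly once in signal A, then locate them in signal B and take the most common implied offset.

```python
import numpy as np

def quantize(signal, bits=2):
    """Uniformly quantize an analog signal into 2**bits discrete levels."""
    levels = 2 ** bits
    lo, hi = float(signal.min()), float(signal.max())
    q = np.floor((signal - lo) / (hi - lo) * levels).astype(np.uint8)
    return np.clip(q, 0, levels - 1)

def unique_symbol_offset(sig_a, sig_b, sym_len=64, bits=2):
    """Estimate the sample offset of sig_b relative to sig_a via unique symbols."""
    qa, qb = quantize(sig_a, bits), quantize(sig_b, bits)
    # Collect every length-sym_len symbol in A and the indices where it starts.
    seen = {}
    for i in range(len(qa) - sym_len + 1):
        seen.setdefault(bytes(qa[i:i + sym_len]), []).append(i)
    # Keep only symbols that occur exactly once in A (the "unique symbols").
    unique = {s: pos[0] for s, pos in seen.items() if len(pos) == 1}
    # Search B for those symbols and record the implied sample offsets.
    offsets = [j - unique[sym]
               for j in range(len(qb) - sym_len + 1)
               if (sym := bytes(qb[j:j + sym_len])) in unique]
    # The most common offset is the estimated sample difference.
    values, counts = np.unique(offsets, return_counts=True)
    return int(values[np.argmax(counts)])

rng = np.random.default_rng(1)
a = np.cumsum(rng.standard_normal(5000))   # smooth test waveform (random walk)
b = np.roll(a, 762)                        # ideal copy offset by 762 samples
print(unique_symbol_offset(a, b))          # estimated sample offset
```

The 762-sample offset matches the ideal test case described in this section; in this noise-free sketch the offset is recovered exactly, whereas the tradeoffs above (levels, symbol length, noise) govern how well it survives on real data.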
The unique symbol search method was implemented in MATLAB and tested using simulated and real data. The simulated, ideal input was created by copying a signal and adding an offset of 762 samples. The original signal and the offset copy were set as the two input signals. The correct sample offset was found.
With a simulated input and a signal-to-noise ratio of 45, the output started showing variations from the expected sample difference of 762. This is a high quality signal, so such variation was not expected.
Finally, tests were run on real data collected simultaneously from a weather transmitter at two separate locations.
Next, these signals were ASK demodulated. This portion of the test was attempted by quantizing the signals into 2 bit (4 level) digital arrays and 4 bit (8 level) digital arrays. The 2 bit digital arrays are shown below.
This data did not yield useful results despite changing the symbol length. When the symbol length was 64, mainly false matches were found. Instead of a nearly flat line, which would indicate the likely sample difference, the calculated sample difference oscillated inconsistently.
When the symbol length was set to 128 bits, no symbol matches were found.
To investigate and improve this method of time difference detection, the following steps will be taken:
 Meet with an advisor to get feedback on decreasing noise sensitivity
 Analyze the signals and noise in the frequency domain
 Test method using IQ data instead of audio data
 Filter input signals with an FIR filter
Time Difference of Arrival Algorithm
This phase we refined the TDoA algorithm further and used it in two sensitivity analyses. The liberal one-at-a-time approach gives us insight into how the solution changes with uncertainty in the input parameters. The conservative Monte Carlo gives us an upper bound for the uncertainties we can expect. We ran each analysis for 10 triangles around the greater Rochester area. Each simulation takes 6-12 hours, so since Thanksgiving we have used 120-240 hours of computing time.
Since the preliminary detailed design review, we further improved the TDoA algorithm by using WGS84 instead of a spherical model for the Earth. We solve the 3D TDoA problem with the 50 km, 400 km, and 1200 km planes. The results from a ground track (the same ground track as in the PDDR) are below. We halve the error seen in the PDDR, from 1.25 degrees to 0.6 degrees.
Maximum error over this ground track is 0.6 degrees in Elevation. For higher elevations, the error drops to 0.4 degrees.
We plan on refining this further with 1 of 3 methods.
 Use cones instead of hyperboloids
 Solve hyperboloids on very far away planes
 Fit a hyperbola instead of a line to the plane output.
It is unknown right now which of these methods is least sensitive to noise. We will also be trying a least squares approach since the hyperbolas do not always intersect on a plane.
Sensitivity
The one-at-a-time analysis numerically estimates the partial derivative of the output (azimuth and elevation in this case) with respect to each input parameter.
For our TDoA setup, our inputs are the x, y, and z components and clocks of receivers 1, 2, and 3, for 12 parameters in total. Based on our expected GPS measurement and time difference calculation errors, we use an uncertainty of 9 m for location and uncertainties of 100 ns and 5 µs for the time difference. The former time difference uncertainty is liberal, the latter conservative.
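The numerical estimate can be sketched with central differences (a generic illustration; `toy_elevation` is a made-up stand-in for the actual TDoA solver, which takes 9 receiver coordinates plus 3 clock parameters):

```python
import numpy as np

def one_at_a_time(f, params, steps):
    """Estimate the partial derivative of f with respect to each parameter,
    perturbing one parameter at a time (central differences)."""
    params = np.asarray(params, dtype=float)
    sens = np.empty(params.size)
    for i, h in enumerate(steps):
        up, dn = params.copy(), params.copy()
        up[i] += h
        dn[i] -= h
        sens[i] = (f(up) - f(dn)) / (2.0 * h)
    return sens

# Made-up stand-in for the solver output (e.g. elevation) as a function of
# two input parameters; the real analysis has 12 inputs per triangle.
def toy_elevation(p):
    return p[0] ** 2 + 3.0 * p[1]

sens = one_at_a_time(toy_elevation, [2.0, 1.0], steps=[1e-4, 1e-4])
print(sens)   # approximately [4.0, 3.0]
```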
The output of the one-at-a-time code is contour plots of each sensitivity parameter. An example is below:
Using these contours, we can determine the azimuth, elevation, and magnitude of the highest sensitivity for each value. These are tabulated below:
Table showing the maximum sensitivity values for Mees-Brockport-Webster. The time difference between Brockport and Webster is especially sensitive.
To better understand why certain parameters are important, we can look back at the map.
Mees-Brockport-Webster triangle. The shortest side of the triangle is associated with the highest sensitivity. The opposite point on the triangle, Mees, has the highest location sensitivity.
We are still trying to figure out why the highest sensitivity occurs at the azimuth where it is calculated. 80 degrees is close to being along the line between Brockport and Webster, but ends up being around 15-20 degrees off.
We can obtain uncertainty from sensitivity by multiplying each sensitivity by its associated input uncertainty (keeping units in mind) and taking the root mean square. This results in an uncertainty plot.
Uncertainty plot showing where the algorithm will struggle calculating TDoA. Azimuth error dominates at high elevation and Elevation error at low elevation.
Time difference plot. As elevation increases, all time differences approach zero. Along an azimuth, the time differences follow a sinusoid.
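The sensitivity-to-uncertainty conversion can be sketched as follows (the sensitivity magnitudes are invented placeholders; only the 9 m location and 100 ns timing uncertainties come from this section, and the combination in quadrature is an assumed reading of the root-mean-square step):

```python
import numpy as np

# Hypothetical sensitivities for the 12 inputs: 9 receiver position
# components (deg per m) and 3 time differences (deg per microsecond).
sens = np.array([0.02] * 9 + [0.5] * 3)
u_in = np.array([9.0] * 9 + [0.1] * 3)   # 9 m position, 0.1 us (100 ns) timing

# Multiply each sensitivity by its input uncertainty (units cancel to
# degrees), then combine the contributions in quadrature.
u_out = np.sqrt(np.sum((sens * u_in) ** 2))
print(round(u_out, 3))   # 0.547 degrees for these made-up numbers
```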
After running the one-at-a-time analysis for all 10 triangles, we compile the results based on the following criteria:

 TDoA coverage: what percentage of sky has a time difference above our uncertainty in clocks?
 Uncertainty coverage: what percentage of sky has an uncertainty less than 1 degree for Azimuth and Elevation?
 What is the median and IQR for Azimuth and Elevation?
 What is the minimum elevation where more than 50% of the Azimuths have uncertainty below 1 degree?
The triangles are ordered from smallest to largest mean distance. Notice that for the same mean distance, having angles closer to 60 degrees is optimal, as can be seen between triangles 1 and 2. Having a larger mean distance is more important than an optimal angle, as can be seen between triangles 6 and 2. The best triangle is 6.
While a powerful tool, the oneatatime analysis breaks down with large uncertainties. Consider the following:
The TDoA problem is nonlinear. If the uncertainty is too large, a linear approximation has large error.
One-at-a-time assumes the partial derivative is constant; it uses a linear approximation to estimate uncertainty. This is valid over small ranges for TDoA, but even out to 6 m it starts breaking down. We see similar results for uncertainties out to 100 ns. So when we consider 5 µs, we have to use Monte Carlo.
Our Monte Carlo code uses the same number of test cases around the sky, 650. It solves each point 30 times and uses the mean plus two times the standard deviation of the error across the trials as the uncertainty. We include the mean because the TDoA algorithm has inherent error. We chose 30 trials to satisfy the central limit theorem.
Histogram showing the error in Azimuth and Elevation for 30 trials. We approximate these distributions as normal.
Using these uncertainties instead, we can construct a new uncertainty plot. These plots are much more noisy due to statistical variation.
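One plausible reading of the per-point Monte Carlo procedure, as a sketch (the solver here is a made-up stand-in with a small bias and spread, not the real TDoA code):

```python
import numpy as np

def mc_uncertainty(solve_once, truth, n_trials=30, seed=None):
    """Monte Carlo uncertainty for one sky point: run the solver n_trials
    times and report |mean error| + 2*std, capturing bias plus spread."""
    rng = np.random.default_rng(seed)
    errors = np.array([solve_once(rng) - truth for _ in range(n_trials)])
    return abs(errors.mean()) + 2.0 * errors.std(ddof=1)

# Made-up stand-in solver: azimuth output with a 0.1 deg bias and
# 0.5 deg noise around a true azimuth of 80 deg.
true_az = 80.0
unc = mc_uncertainty(lambda rng: true_az + 0.1 + 0.5 * rng.standard_normal(),
                     true_az, n_trials=30, seed=42)
print(round(unc, 2))
```

Including the bias term is what lets the inherent error of the algorithm show up in the uncertainty, on top of the statistical spread.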
We used 13 computers and 600 hours of computing time to estimate these uncertainties. The results are tabulated below:
Monte Carlo Results. Triangles are ordered from smallest to largest mean distance. The optimal median error is 3 degrees.
The TDoA algorithm, dubbed the Symbolic Solver, is slow because it uses the MATLAB symbolic solver. We are currently experimenting with least squares instead. Least squares can reduce runtime by a factor of 60, but it's an open question whether it maintains similar accuracy.
For a comprehensive overview of the TDoA Algorithm, see the TDoA Presentation.
Orekit Max Elevation and Orbit Determination Analysis
Maximum Elevation Simulations
The action I set out to accomplish from the PDDR phase was to determine how probable satellite passes with a maximum elevation of 0 to 40 degrees are. To do this, I used Orekit to simulate 10 different satellites with inclinations that allow a pass over Rochester. I used Two Line Elements (TLEs) to define the satellites and then propagated them over one year, recording the maximum elevation of each pass. The program flowchart is as follows:
After running the simulation, histograms of the number of instances of each max elevation were plotted for each satellite.
Here you can see that the majority of satellite passes occur in this 0-40 [deg] range. However, the results are not bad for us, because there are still a decent number of passes above this range. Additionally, Luca's real-life testing is promising, as passes were frequent enough that we wouldn't have to wait too long to capture satellite information.
Based on the Orekit simulation results, the plot above shows what percentage of satellite passes we can capture based on the minimum elevation viewing angle of the antenna.
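The plot's underlying computation reduces to a simple percentage over the simulated passes. A sketch (the pass elevations below are invented; the real values come from the Orekit histograms):

```python
import numpy as np

def coverage_percent(max_elevations, min_view_elevation):
    """Percent of passes whose maximum elevation clears the antenna's
    minimum viewing elevation."""
    max_elevations = np.asarray(max_elevations, dtype=float)
    return 100.0 * np.mean(max_elevations >= min_view_elevation)

# Invented maximum elevations (degrees) for ten simulated passes.
passes = [5, 12, 18, 22, 31, 38, 44, 52, 61, 75]
for cutoff in (0, 20, 40):
    print(cutoff, coverage_percent(passes, cutoff))
```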
Orbit Determination First Steps
This phase, research was done on how to use Orekit to perform orbit determination. Right now, we can perform orbit determination on simulated azimuth and elevation data with error perturbations artificially added to each measurement. The simple flowchart below describes the process:
To perform my analysis, I used a very simple orbit type to generate simulated data and to solve for orbital parameters. This orbit does not take into account any perturbations such as atmospheric drag or solar radiation. This analysis was done to set up a framework for performing orbit determination using Orekit.
To test the robustness of Orekit's orbit determination, I first determined the maximum error perturbation (in degrees) that could be added before the OD solver failed. I tried this when data for 1, 2, and 3 ground tracks was provided to the solver:
Here you can see that the solver is very robust when 3 sets of ground track data are provided. When one ground track set is provided, only a very small error perturbation can be added before Orekit cannot find a solution.
Then we wanted to see how far the solutions were from the actual orbital parameters used to generate the data, so the solver was run at these maximum errors:
Of course, with 26 degrees of uncertainty in the azimuth and elevation data in the three ground track case, the results are extremely far off from what is expected. On the other hand, with 0.31 degrees of uncertainty and one ground track provided, the results are relatively close to nominal. However, it is unlikely that we can achieve such small measurement uncertainty given our constraints.
Lastly, we wanted to see the sensitivity of these orbital parameters using a test case with ±2.5 degrees of error in our measurements. This error is based on Anthony's TDoA simulations. A Monte Carlo simulation with 50 iterations was run, and two standard deviations were taken as the uncertainty in the orbital parameter solution.
This analysis gives us initial insight into how sensitive the orbital parameters are. However, the next step is to perform more realistic orbit determination using a more accurate orbit type.
Drawings, Schematics, Flow Charts, Simulations
Serviceability
We want to be able to service our electronics easily, including assembly and disassembly. We also needed a way to easily work on the electronics while they are fixed inside. Our solution was to create a 3D printed tray with standoffs on the bottom. These standoffs attach to the bottom of the housing, and the electronics fit snugly in predetermined slots on the tray. This way, we only need to remove the tray from the housing to work on the electronics. The first figure below shows the preliminary idea for the 3D printed tray, and the following figure shows the electronics assembled on it.
Assembly
We needed to consider ways to cool the electrical components inside the housing. We have added vents to the plastic housing, as well as a fan to blow air over the components. This housing was designed to be placed indoors, as we believe we won't need any electrical housing outside anymore.
Changes From DDR
After going through our Detailed Design Review, our guide and customer advised us not to use any plastic for the housing due to electrostatic discharge; they advised us to use metal instead. A metal housing would also help reduce noise for the SDR. We are considering making the electrical housing weatherproof once again, to ensure that our project is scalable for universities around the country. If they duplicate our design, they may not have access to indoor facilities for their stations. Because of this, we will modify our current design to include weatherproofing.
Bill of Materials (BOM)
During the Detailed Design Phase, the risk of not securing necessary funding was decreased, since L3Harris officially offered to support Team LASSO with a donation of $5,000.
Budget Overview:
 $1,500: Original MSD Budget
 $5,000: L3Harris Donation
 $114: Prototyping Materials (spent)
This brings the current budget to $6,386.
Team LASSO will build 4 stations (3 for TDoA and 1 for redundancy). Based on the BOM, Team LASSO expects to spend $4,418. Increasing this by 20% to account for unexpected expenses or issues gives a more realistic spending estimate of $5,302.
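The arithmetic behind those figures, as a quick check:

```python
# Budget: original MSD budget plus the L3Harris donation, minus
# prototyping materials already purchased.
budget = 1500 + 5000 - 114
print(budget)            # 6386

# Expected spend from the BOM, with a 20% contingency margin.
expected = 4418
with_margin = round(expected * 1.2)
print(with_margin)       # 5302
```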
The BOM as of the end of the Detailed Design Phase is shown below.
Test Plans
A detailed write up of all of the test plans and procedures, including failure modes, is linked here:
Requirements Verification and Compliance Matrix
The RVCM is a way of defining which engineering requirements are related to each customer requirement. In this respect, it is similar to the House of Quality. However, it also describes how these customer requirements will be met and how they will be tested. There is also a section defining what makes each test successful.
Design and Flowcharts
Software Flowchart
A high-level software flowchart was created as a reference for the development phase next semester.
If symbol detection is used to extract the time differences among stations, the symbol detection software flowchart will be used.
The customer will use the Graphical User Interface (GUI), which interfaces with the main program. The main program communicates with the Raspberry Pis (one at each ground station) through the Server Command Interface. The main program on the server will also find the time offsets to synchronize the clocks on the 3 ground stations, run orbit determination, and run TDoA. Once orbit determination is completed, the TLE data will be saved without awaiting instruction from the server's main program.
At the ground stations, the main program will interface with the SDR Software, GPS Interface, and run GetSymbolIndex. GetSymbolIndex will respond to the server's requests for timestamps of when a unique symbol occurred.
If cross correlation is used to extract time differences, the cross correlation software flowchart will be used. This is similar to the symbol search flowchart, but does not require a GetSymbolIndex program on the Raspberry Pis.
Risk Assessment
Design Review Materials
Plans for next phase
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
https://creativecommons.org/licenses/byncsa/4.0/