P20151: Satellite Localization

Detailed Design



Team Vision for Detailed Design Phase

Our Plan

Our main goals for this phase were to choose a clock synchronization method, determine model sensitivity, select an antenna, and design the electrical housing.

Our Accomplishments

The sections below describe what we accomplished during the Detailed Design Phase.

Progress Report


Prototyping, Engineering Analysis, Simulation

Signal Acquisition

Prototype Antenna Testing

Prototype Antenna Testing Results

Signal results using QFH Antenna

Signal results using Double Turnstile Antenna

Digital Signal Processing

Cross Correlation

Cross correlation is a viable method for extracting time differences from recorded signal data. We analyzed the trade-offs associated with this method.

There are four main types of cross correlation. Based on a study published at http://www.panoradio-sdr.de/correlation-for-time-delay-analysis/, we are investigating complex cross correlation and amplitude cross correlation. The resulting observations are detailed in the Detailed Design Review presentation. Below are a few notable images from the simulation.

Cross Correlation Result from the simulated transmission of actual recorded I/Q data of an FM signal with significant noise added.

For the FM I/Q data used above, the sample rate was 3.2 MSPS, and the cross correlation resulted in an error of 3 samples, or 0.9375 us. The added noise reduced the SNR to approximately 0.004. This accuracy was achievable at such a low SNR because of the signal's large bandwidth.
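
The sample-delay estimate above can be reproduced with a minimal brute-force cross correlation. The sketch below is plain Python rather than the team's MATLAB, using a made-up noise-free signal; it recovers a known 3-sample delay and converts it to time at 3.2 MSPS.

```python
import random

def cross_correlate_delay(a, b):
    """Estimate how many samples signal a lags signal b (brute force)."""
    n = len(a)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-n + 1, n):
        # Dot product of the overlapping region for this candidate lag.
        val = sum(a[i] * b[i - lag] for i in range(n) if 0 <= i - lag < n)
        if val > best_val:
            best_val, best_lag = val, lag
    return best_lag

random.seed(0)
sig = [random.gauss(0, 1) for _ in range(200)]   # stand-in recording
delayed = [0.0] * 3 + sig[:-3]                   # same signal, 3 samples late
lag = cross_correlate_delay(delayed, sig)
sample_rate = 3.2e6
print(lag, lag / sample_rate)                    # 3 samples -> 0.9375 us
```

With noise added to one copy, the correlation peak degrades in the same way the SNR discussion above describes.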

FFT of the raw FM I/Q data with no noise added, resulting in perfect cross correlation accuracy.

FFT of the raw FM I/Q data with noise added, where increasing the noise level begins to degrade the cross correlation accuracy.

A similar test was conducted on a simulated FM transmission with much smaller bandwidth, and the accuracy of the cross correlation degraded with far less added noise. The large-bandwidth FM signal was also recorded at two different locations, and cross correlation was carried out on the real signals to measure an actual delay.

Real transmission cross correlation for an FM broadcast signal.

The accuracy of the cross correlation of the real transmission could not be measured because there is currently no way to synchronize the recordings. However, the cross correlation plot of the real transmission resembles the plot of the simulated transmission with added noise. This analysis shows that cross correlation of real data will be feasible for extracting time differences, given a large enough bandwidth or low enough noise, in conjunction with synchronized recording and a significant amount of recorded data.

Unique Symbol Search

Unique symbol search is a method for extracting time differences among stations that requires lower bandwidth than cross correlation, but initial prototyping indicates it is sensitive to noise.

To determine the time difference, take the SDR output signals from two stations (A and B) and use Amplitude Shift Keying (ASK) demodulation to quantize each signal. Next, find a list of symbols that occur only once in the digital signal from station A. Search the digital signal from station B for these unique symbols and record the sample at which each occurs. From the sample difference, the time difference between signals A and B can be found.
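
The procedure just described can be sketched in a few lines. This is a simplified stand-in for the team's MATLAB implementation, using an invented 2-bit test stream; the returned value is the index of the symbol in B minus its index in A.

```python
import random

def quantize(signal, levels=4):
    """ASK-style uniform quantization of samples in [-1, 1) to integer levels."""
    return [min(levels - 1, max(0, int((s + 1) / 2 * levels))) for s in signal]

def find_offset(a, b, symbol_len=8):
    """Find a symbol that occurs exactly once in a, locate it in b,
    and return the sample offset (index in b minus index in a)."""
    positions = {}
    for i in range(len(a) - symbol_len + 1):
        positions.setdefault(tuple(a[i:i + symbol_len]), []).append(i)
    for sym, where in positions.items():
        if len(where) != 1:
            continue  # not unique in a
        for j in range(len(b) - symbol_len + 1):
            if tuple(b[j:j + symbol_len]) == sym:
                return j - where[0]
    return None  # noise can prevent any match, as seen in the real data

random.seed(1)
stream = [random.randint(0, 3) for _ in range(300)]  # 2-bit symbol stream
a = stream[5:205]   # station A: started recording 5 samples late
b = stream[:200]    # station B
print(find_offset(a, b))
```

On this clean stream the offset of 5 samples is recovered; with independently noisy quantizations of A and B, the inner search can fail entirely, which matches the real-data behavior described below.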

Unique Symbol Search method of extracting time differences between two stations.

The analog signal from the SDR can be quantized into a multi-level signal. With more levels, a unique symbol is more likely to appear; with fewer levels, the chance of noise corrupting the result decreases. Similarly, a longer symbol increases the likelihood that a unique symbol will be found and decreases the chance of false matches between signals A and B, while a shorter symbol requires less computational power but increases sensitivity to noise. If a unique symbol cannot be found in signal B due to noise, no time difference can be extracted.

The unique symbol search method was implemented in MATLAB and tested using simulated and real data. The simulated, ideal input was created by copying a signal and offsetting the copy by 762 samples. The original signal and the offset copy were used as the two input signals, and the correct sample offset was recovered.

Resulting sample difference matches expected, ideal output of unique symbol search.

With a simulated input and a signal-to-noise ratio of 45, the output started to vary from the expected sample difference of 762. This is a high-quality signal for such variation, so this result was unexpected.

Small variations in expected result despite high SNR.

Finally, tests were run on real data collected simultaneously from a weather transmitter at two separate locations.

Input Signal from Nathaniel Rochester Hall (NRH).

Input Signal from Mission Control.

Next, these signals were ASK demodulated. This portion of the test was attempted by quantizing the signals into 2 bit (4 level) and 4 bit (8 level) digital arrays. The 2 bit digital arrays are shown below.

Quantized Signal from NRH.

Quantized Signal from Mission Control.

This data did not yield useful results, despite varying the symbol length. With a symbol length of 64, mainly false matches were found: instead of a nearly flat line, which would indicate a consistent sample difference, the calculated sample difference oscillates inconsistently.

False symbol matches between NRH and Mission Control signal with a symbol length of 64.

When the symbol length was set to 128 bits, no symbol matches were found.

No symbol matches between NRH and Mission Control signal with a symbol length of 128.

To investigate and improve this method of time difference detection, further steps are planned for next phase.

Time Difference of Arrival Algorithm

This phase, we refined the TDoA algorithm further and used it in two sensitivity analyses. The liberal one-at-a-time approach gives insight into how the solution changes with uncertainty in the input parameters; the conservative Monte Carlo analysis gives an upper bound on the uncertainties we can expect.

We ran each analysis for 10 triangles around the greater Rochester area. Each simulation takes 6 to 12 hours, so since Thanksgiving we have used 120 to 240 hours of computing time.

Since the preliminary detailed design review, we further improved the TDoA algorithm by using the WGS84 ellipsoid instead of a spherical Earth model. We solve the 3D TDoA problem on the 50 km, 400 km, and 1200 km planes. The results for a ground track (the same ground track as in the PDDR) are below. We halved the error seen in the PDDR, from 1.25 degrees to 0.6 degrees.

Maximum error over this ground track is 0.6 degrees in Elevation. For higher elevations, the error drops to 0.4 degrees.

We plan on refining this further with one of three methods:

  1. Use cones instead of hyperboloids
  2. Solve hyperboloids on very far away planes
  3. Fit a hyperbola instead of a line to the plane output.

It is currently unknown which of these methods is least sensitive to noise. We will also try a least squares approach, since the hyperbolas do not always intersect on a plane.
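
As a sketch of the least squares idea (not the team's solver), the sample below sets up squared TDoA residuals for a 2D toy geometry with hypothetical station coordinates and minimizes them by brute-force grid search; a real implementation would use Gauss-Newton or similar in 3D.

```python
import math

C = 3e8  # speed of light, m/s

def tdoa_residual(x, y, receivers, tdoas):
    """Sum of squared TDoA residuals for a candidate source at (x, y)."""
    d = [math.hypot(x - rx, y - ry) for rx, ry in receivers]
    return sum(((d[i] - d[j]) / C - t) ** 2 for (i, j), t in tdoas.items())

def grid_least_squares(receivers, tdoas, span=10000.0, step=200.0):
    """Pick the grid point minimizing the residual (a stand-in for a
    proper nonlinear least squares solver)."""
    best = None
    x = -span
    while x <= span:
        y = -span
        while y <= span:
            r = tdoa_residual(x, y, receivers, tdoas)
            if best is None or r < best[0]:
                best = (r, x, y)
            y += step
        x += step
    return best[1], best[2]

# Hypothetical triangle of stations and a known source, for a round-trip test.
rec = [(0.0, 0.0), (10000.0, 0.0), (5000.0, 8000.0)]
src = (4000.0, 3000.0)
d = [math.hypot(src[0] - rx, src[1] - ry) for rx, ry in rec]
tdoas = {(0, 1): (d[0] - d[1]) / C, (0, 2): (d[0] - d[2]) / C}
print(grid_least_squares(rec, tdoas))  # recovers (4000.0, 3000.0)
```

Because the residual is minimized rather than intersected, this formulation still returns an answer when the hyperbolas do not intersect exactly.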

Sensitivity

The one-at-a-time analysis numerically estimates the partial derivative of the output (azimuth and elevation, in this case) with respect to each input parameter.

For our TDoA setup, the inputs are the x, y, z components and clocks of receivers 1, 2, and 3, for a total of 12 parameters. Based on our expected GPS measurements and time difference calculation error, we use an uncertainty of 9 m for each location and uncertainties of 100 ns and 5 us for the time difference error; the former is liberal, the latter conservative.
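
The core of such an analysis is a finite-difference partial derivative per parameter. A minimal sketch follows, with a toy quadratic standing in for the TDoA solver and the 9 m perturbation mentioned above.

```python
def one_at_a_time(f, params, deltas):
    """Central-difference estimate of df/dp_i, perturbing one parameter at a time."""
    sens = []
    for i, d in enumerate(deltas):
        hi = list(params); hi[i] += d
        lo = list(params); lo[i] -= d
        sens.append((f(hi) - f(lo)) / (2 * d))
    return sens

# Toy stand-in for the solver: 12 parameters in, one scalar output out.
f = lambda p: sum(x * x for x in p)
sens = one_at_a_time(f, [1.0] * 12, [9.0] * 12)
print(sens[0])  # exactly 2.0: central differences are exact for quadratics
```

The same loop applies unchanged to the clock parameters, with a time-scale perturbation in place of 9 m.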

 Table of possible triangle locations.

Table of possible triangle locations.

 Triangles to scale on a map. The triangle around RIT is not shown.

Triangles to scale on a map. The triangle around RIT is not shown.

The output of the one-at-a-time code is contour plots of each sensitivity parameter. An example is below:

Contour plot showing locations of high sensitivity for the z uncertainty of receiver 1.

Using these contours, we can determine the azimuth, elevation, and magnitude of the highest sensitivity for each parameter. These are tabulated below:

Table showing the maximum sensitivity values for Mees-Brockport-Webster. The time difference between Brockport and Webster is especially sensitive.

To better understand why certain parameters are important, we can look back at the map.

Mees-Brockport-Webster triangle. The shortest side of the triangle is associated with the highest sensitivity. The opposite point of the triangle, Mees, has the highest location sensitivity.

We are still trying to determine why the most sensitive value occurs at the azimuth where it is calculated. 80 degrees is close to the line between Brockport and Webster, but ends up being around 15 to 20 degrees off.

We can obtain uncertainty from sensitivity by multiplying each sensitivity by its associated input uncertainty (keeping units in mind) and taking the root mean square. This results in an uncertainty plot.
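
In the standard error-propagation form, this combination is u_out = sqrt(sum_i (S_i * u_i)^2), the root sum of squares. A minimal sketch with made-up sensitivity values:

```python
import math

def combined_uncertainty(sensitivities, input_uncertainties):
    """Root-sum-square of each sensitivity times its input uncertainty."""
    return math.sqrt(sum((s * u) ** 2
                         for s, u in zip(sensitivities, input_uncertainties)))

# Hypothetical sensitivities in deg/m for three position parameters,
# each with the 9 m GPS uncertainty quoted above.
print(combined_uncertainty([0.01, 0.02, 0.02], [9.0, 9.0, 9.0]))  # 0.27 deg
```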

Uncertainty plot showing where the algorithm will struggle calculating TDoA. Azimuth error dominates at high elevation and elevation error at low elevation.

Time difference plot. As elevation increases, all time differences approach zero. Along an azimuth, the time differences follow a sinusoid.

After running the one-at-a-time for all 10 triangles, we compile the results based on the following criteria:

The triangles are ordered from smallest to largest mean distance. Notice that for the same mean distance, having angles closer to 60 degrees is optimal, as seen between triangles 1 and 2. A larger mean distance matters more than an optimal angle, as seen between triangles 6 and 2. The best triangle is 6.

Boxplots mapping the median and IQR for azimuth and elevation.

While a powerful tool, the one-at-a-time analysis breaks down with large uncertainties. Consider the following:

The TDoA problem is nonlinear. If the uncertainty is too large, a linear approximation has large error.

One-at-a-time assumes the partial derivative is constant: it uses a linear approximation to estimate uncertainty. This is valid over small ranges for TDoA, but even out to 6 m it starts breaking down. We see similar results for uncertainties out to 100 ns, so when we consider 5 us we have to use Monte Carlo.

Our Monte Carlo code uses the same 650 test cases around the sky and solves each point 30 times. It uses the mean plus two times the standard deviation of the error across the trials as the uncertainty; we include the mean because the TDoA algorithm has inherent error. We chose 30 trials to satisfy the central limit theorem.
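
The per-point procedure can be sketched as follows, with a trivial stand-in solver; the mean-plus-two-sigma rule from the text is applied to the 30 trial errors.

```python
import random
import statistics

def point_uncertainty(solver, truth, n_trials=30, noise=1.0):
    """Solve one sky point n_trials times with noisy inputs and report
    mean error + 2 standard deviations (mean included because the
    algorithm has inherent bias)."""
    errors = [abs(solver(random.gauss(0.0, noise)) - truth)
              for _ in range(n_trials)]
    return statistics.mean(errors) + 2 * statistics.stdev(errors)

random.seed(42)
# Toy solver: returns the truth plus whatever noise it was fed.
u = point_uncertainty(lambda eps: 10.0 + eps, truth=10.0)
print(round(u, 3))
```

Repeating this for all 650 sky points per triangle is what drives the long runtimes noted below.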

Histogram showing the error in Azimuth and Elevation for 30 trials. We approximate these distributions as normal.

Using these uncertainties instead, we can construct a new uncertainty plot. These plots are much noisier due to statistical variation.

Monte Carlo derived uncertainties. For several data points, the algorithm did not converge.

We used 13 computers and 600 hours of computing time to estimate these uncertainties. The results are tabulated below:

Monte Carlo Results. Triangles are ordered from smallest to largest mean distance. The optimal median error is 3 degrees.

The TDoA algorithm, dubbed the Symbolic Solver, is slow because it uses the MATLAB symbolic solver. We are currently experimenting with least squares instead; least squares can reduce runtime by a factor of 60, but it is an open question whether it maintains similar accuracy.

For a comprehensive overview of the TDoA Algorithm, see the TDoA Presentation.

Orekit Max Elevation and Orbit Determination Analysis

Maximum Elevation Simulations

The action item I set out to accomplish from the PDDR phase was to determine how probable satellite passes with a maximum elevation of 0 to 40 degrees are. To do this, I used Orekit to simulate 10 different satellites with inclinations that allow a pass over Rochester. I used Two-Line Elements (TLEs) to define the satellites and then propagated them over one year, recording the maximum elevation of each pass. The program flowchart is as follows:

Orekit Max Elevation Simulation Flowchart

After running the simulation, histograms of the number of instances of each max elevation were plotted for each satellite.

Orekit Max Elevation Simulation Histograms

Here you can see that the majority of satellite passes occur in this 0-40 degree range. However, the results are not bad for us, because there are still a decent number of passes above this range. Additionally, Luca's real-world testing is promising: passes were frequent enough that we would not have to wait long to capture satellite information.

Orekit Percent View of Sky based on Antenna Minimum Viewing Angle

Based on the Orekit simulation results, the plot above shows what percent view of satellite passes we can capture based on the minimum elevation viewing angle of the antenna.
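
The percent-view computation itself is straightforward; a sketch with invented per-pass maximum elevations (the real values come from the Orekit propagation):

```python
def percent_visible(max_elevations, min_viewing_angle):
    """Percent of passes whose maximum elevation clears the antenna's
    minimum viewing angle."""
    seen = sum(1 for e in max_elevations if e >= min_viewing_angle)
    return 100.0 * seen / len(max_elevations)

# Hypothetical per-pass maximum elevations (degrees) from a propagation run.
passes = [5, 12, 18, 25, 33, 41, 52, 67, 74, 88]
print(percent_visible(passes, 40))  # 50.0: half the passes clear a 40 deg mask
```

Sweeping the mask angle over this function produces the kind of curve shown in the plot above.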

Orbit Determination First Steps

This phase, research was done on how to use Orekit to perform orbit determination. Right now, we can perform orbit determination on simulated azimuth and elevation data with error perturbations artificially added to each measurement. The simple flowchart below describes the process:

Orekit Orbit Determination Flowchart

To perform my analysis, I used a very simple orbit type to generate simulated data and to solve for orbital parameters. This orbit does not take into account perturbations such as atmospheric drag or solar radiation pressure. This analysis was done to set up a framework for performing orbit determination using Orekit.

To test the robustness of Orekit's orbit determination, I started by finding the maximum error perturbation (in degrees) that could be added before the OD solver failed. I tried this with data from 1, 2, and 3 ground tracks provided to the solver:

Orekit Orbit Determination Maximum Degree Error before Failure

Here you can see that the solver is very robust when three sets of ground track data are provided. When one ground track is provided, only a very small degree of error perturbation can be added before Orekit cannot find a solution.

We then wanted to see how far the solutions deviated from the actual orbital parameters used to generate the data, so the solver was run at these maximum errors:

Orekit Orbit Determination Maximum Degree Error Parameter Results

Of course, with 26 degrees of uncertainty in the azimuth and elevation data in the three-ground-track case, the results are extremely far from expected. On the other hand, with 0.31 degrees of uncertainty and one ground track provided, the results are relatively close to nominal. However, it is unlikely that we can achieve such small measurement uncertainty within our constraints.

Lastly, we wanted to see the sensitivity of these orbital parameters using a test case of ±2.5 degrees of error in our measurements. This error is based on Anthony's TDoA simulations. A Monte Carlo simulation with 50 iterations was run, and two standard deviations were taken as the uncertainty in the orbital parameter solution.

Orekit Orbit Determination Monte Carlo Sensitivity Results

This analysis gives us some initial insight into how sensitive the orbital parameters are. The next step is to perform more realistic orbit determination using a more accurate orbit type.

Drawings, Schematics, Flow Charts, Simulations

Serviceability

We want to be able to service our electronics easily, including assembly and disassembly, and to work on them while they are fixed inside the housing. Our solution is a 3-D printed tray with standoffs on the bottom. The standoffs attach to the bottom of the housing, and the electronics fit snugly into predetermined slots on the tray. This way, we only need to remove the tray from the housing to work on the electronics. The first figure below shows the preliminary idea for the 3-D printed tray, and the following figure shows the electronics assembled on it.
3-D Printed Removable Electronics Tray

Electronics Tray Assembled

Assembly

We needed to consider ways to cool the electrical components inside the housing. We added vents to the plastic housing, as well as a fan to blow air over the components. This housing was designed to be placed indoors, as we believe we won't need any electrical housing outside anymore.
Housing for Electronics

Changes From DDR

After our Detailed Design Review, our guide and customer advised us not to use plastic for the housing due to electrical discharge, and to use metal instead. A metal housing would also help shield the SDR from noise. We are considering making the electrical housing weatherproof once again to ensure that our project is scalable for universities around the country: if they duplicate our design, they may not have access to indoor facilities for their stations. Because of this, we will modify our current design to include weatherproofing.

Bill of Material (BOM)

During the Detailed Design Phase, the risk of insufficient funding decreased, since L3Harris officially offered to support Team LASSO with a donation of $5,000.

Budget Overview:

This brings the current budget to $6,386.

Team LASSO will build 4 stations (3 for TDoA and 1 for redundancy). Based on the BOM, Team LASSO expects to spend $4,418. Increasing this by 20% to account for unexpected expenses or issues gives a more realistic spending estimate of $5,302.

The BOM as of the end of the Detailed Design Phase is shown below.

Bill of Materials during the Detailed Design Phase

Test Plans

A detailed write up of all of the test plans and procedures, including failure modes, is linked here:

Test Plan

Requirements Verifications and Compliance Matrix

The RVCM defines which engineering requirements relate to each customer requirement; in this respect, it is similar to the House of Quality. It also describes how these requirements will be met and tested, and what constitutes a successful test.
Requirements Verification and Compliance Matrix


Design and Flowcharts

Software Flowchart

A high-level software flowchart was created as a reference for the development phase next semester.

If symbol detection is used to extract the time differences among the ground stations, the symbol detection software flowchart will be used.

Software Flowchart (Symbol Detection)

The customer will use the Graphical User Interface (GUI), which interfaces with the main program. The main program communicates with the Raspberry Pis (one at each ground station) through the Server Command Interface. The main program on the server will also find the time offsets to synchronize the clocks on the 3 ground stations, run orbit determination, and run TDoA. Once orbit determination is completed, the TLE data will be saved without awaiting instruction from the server's main program.

At the ground stations, the main program will interface with the SDR Software, GPS Interface, and run GetSymbolIndex. GetSymbolIndex will respond to the server's requests for timestamps of when a unique symbol occurred.

If cross correlation is used to extract time differences, the cross correlation software flowchart will be used. This is similar to the symbol search flowchart, but does not require a GetSymbolIndex program on the Raspberry Pis.

Software Flowchart (Cross Correlation)

Risk Assessment

Design Review Materials


Plans for next phase

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

https://creativecommons.org/licenses/by-nc-sa/4.0/

