P18542: Real Time Terrain Mapping

Detailed Design

Team Vision for Detailed Design Phase

This phase focused on testing some of the key components, such as wireless communication, aerial imagery, and object orientation. As these components were tested, several other components required a redesign. The simulation environment was redesigned, which made it necessary to redesign the robot, and the robot redesign in turn altered the PCB layout. The team was able to complete these tasks. Future phases will require integrating these components into one system.

Progress Report

The plan was to keep moving forward with testing and to start integrating the systems together. Since a design rarely makes it to completion without a redesign at some point, we believe we are at a good place to move forward with purchases and construction of the project. Milestones accomplished this semester include color detection, object recognition, object image orientation, sensor prototyping, aerial imagery, and considerable research into path-finding algorithms. The team decided to move away from the STM32 development board to the Teensy 3.2 due to the complexity of adapting the STM32 to our project. The Teensy with the Arduino IDE allows us to use pre-built libraries on a platform familiar to a majority of the team. For the same reason of simplicity, the team moved away from WiFi toward the more familiar ZigBee platform, using two XBee modules. The plan for future reviews is to begin integrating these systems together and eventually into one final project.

Prototyping, Engineering Analysis, Simulation

Included in this section are the engineering analysis and the prototyping and simulation documentation from the Preliminary Detailed Design and Detailed Design phases.

Engineering Requirements

Engineering Analysis

Link to live document: Engineering Requirements Live Document.

Flame Sensor

Due to the project scope and budget constraints, the thermal imagery is going to be obtained with a flame sensor module. This will simulate a heat signature and relay information back to the processing hub, signaling that the victim has been found. The victim in our project is going to be a tea-light candle. The flame sensor module has a 60-degree viewing angle and a range of about 3.2 feet, which works perfectly for simulation purposes. The flame sensor we will be using is the Flame Sensor Module from RobotLinking. The following images show the wiring of the flame sensor on an Arduino, as well as a flame being detected using the sensor. When a flame is not detected, the sensor returns a high analog value and no LED is lit. When a flame is detected, it returns lower analog values and lights an LED. An example output on the Serial Monitor is shown with the analog values.
Flame Not Detected

Flame Detected

Flame Sensor Output on Serial Monitor

The experimental values are plotted in the graph below.

Flame Sensor Output Over Varying Ranges

Graphical User Interface

The Graphical User Interface (GUI) will be used to view the simulation, with an image overlay of the path that was and will be taken. It also accepts user input, including start, pause, and stop buttons as well as danger, speed, and environment settings. The final element displays the time elapsed and/or the estimated time remaining until the simulation completes. Below is an example of the GUI that will be used.
Graphical User Interface
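
For illustration, a minimal sketch of the control layout is shown below, assuming the interface is built with Python's Tkinter; the widget names, value ranges, and layout are placeholders rather than the team's actual GUI code.

  # Rough Tkinter sketch of the GUI controls (layout and ranges are assumptions).
  import tkinter as tk

  root = tk.Tk()
  root.title("ASAR Simulation")

  controls = tk.Frame(root)
  controls.pack(side=tk.TOP, fill=tk.X)
  for label in ("Start", "Pause", "Stop"):
      tk.Button(controls, text=label).pack(side=tk.LEFT, padx=4, pady=4)

  settings = tk.Frame(root)
  settings.pack(side=tk.TOP, fill=tk.X)
  tk.Label(settings, text="Speed").pack(side=tk.LEFT)
  tk.Scale(settings, from_=1, to=10, orient=tk.HORIZONTAL).pack(side=tk.LEFT)
  tk.Label(settings, text="Danger").pack(side=tk.LEFT)
  tk.Scale(settings, from_=1, to=10, orient=tk.HORIZONTAL).pack(side=tk.LEFT)

  # Placeholder for elapsed / estimated remaining time.
  status = tk.Label(root, text="Elapsed: 0:00   Remaining (est.): --:--")
  status.pack(side=tk.BOTTOM, fill=tk.X)

  root.mainloop()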

Serial and Wireless Communication

Our project will be using two radio frequency modules called XBees. These devices are capable of sending and receiving information. On their own they only transfer raw data, but with the Raspberry Pi and Teensy development boards that data can be read serially and converted into commands useful to the project. This requires a two-part process: one part is serial communication and the other is wireless communication. For wireless communication, the two XBee modules are placed on the same network and use the same radio frequency; any module using these parameters can communicate. The specific module used in testing is the XBee-Pro 900HP, which has a range of up to 28 miles (direct line of sight), far more than sufficient for our project. Serial communication takes place between the XBee and what Digi calls "intelligent devices" through the serial interface; in our case, these are the Raspberry Pi image processing unit and the Teensy on the robot. This is a two-way communication in which the Raspberry Pi is the Coordinator (main hub) and the robot is the Router. A network can have only one Coordinator and a virtually unlimited number of Routers. There is one more node type, called an End Device, but it does not allow for two-way communication and will not be used. The following image shows one message sent by the Coordinator and another sent from the Router. The image shows the console view from the Coordinator; notice how the Router message shows up in red. The next image shows the Serial Monitor on the Teensy, where the only text that appears is from the Coordinator.
Two-Way Communication Between XBee Modules

Information Received on Development Board

The next step was to send information from the Coordinator to the Router, have the Teensy read the serial data, and turn it into a useful command. For simulation purposes, when the letter h was sent, it lit an LED; when the letter l was sent, it turned the LED off. This is shown in the following images.

Sending High Command via XBee

LED sent High via XBee

Sending Low Command via XBee

LED sent Low via XBee

A script was written and hardware requirements investigated in order to achieve wireless communication using ZigBee radios between the Aerial Platform's Raspberry Pi and a microcontroller. This script demonstrates the ability to transmit a single byte each way. Pi Script: main.py Microcontroller Firmware: wireless-proto.ino

Test of Two-Way Communication from Python on Pi to Arduino via XBee
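
For reference, a minimal sketch of the Pi-side exchange using the pyserial library is shown below. The serial port, baud rate, and command bytes are assumptions and may differ from what main.py actually uses.

  # Minimal Pi-side sketch of the XBee serial exchange (port, baud, and bytes are assumptions).
  import time
  import serial

  PORT = "/dev/ttyUSB0"   # assumed device node for the Coordinator XBee
  BAUD = 9600             # assumed baud rate; must match the XBee and Teensy configuration

  with serial.Serial(PORT, BAUD, timeout=1) as xbee:
      xbee.write(b"h")            # ask the Router/Teensy to turn the LED on
      time.sleep(2)
      xbee.write(b"l")            # ask the Router/Teensy to turn the LED off

      reply = xbee.read(1)        # read a single byte sent back by the Router
      if reply:
          print("Received from Router:", reply.decode(errors="replace"))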

Ultrasonic Sensor

Distance sensing was not an initial need for the project, but it was added to give the robot a three-dimensional awareness and to support future project goals. The robot will travel based on the directions it receives from the camera and processing unit, but in the larger scope it should be able to traverse on its own. Our customer stated that many of the operations performed in New York do not allow for aerial imagery. The ultrasonic sensor provides information about what lies ahead of the robot and will also be used to project three-dimensional obstacles onto the two-dimensional grid being processed. The robot will notify the processing unit of an obstruction, and the processing unit will find a new path. The distance sensor will be an HC-SR04 ultrasonic sensor. It has a trigger time of 10 microseconds, sends signals at 40 kHz (not audible to the human ear), runs off of a 5V power supply, and has a range of 2-500 cm with a resolution of 0.3 cm. The following images show the wiring and example outputs on the Serial Monitor.
HC-SR04 Wiring

Ultrasonic Sensor Example Output

Motor and Shield

The robot will travel using two DC motors controlled via pulse width modulation (PWM) signals to give it speed variation. These different speeds will simulate the different terrains the robot is traveling through; it will move slower in mud than on gravel, for example. The motors must also move in both directions, forward and reverse, to navigate quickly and efficiently. We will be using two 12V Pololu motors; the exact specs have not been determined other than that they must achieve a wide range of speeds. A low-torque, high-speed motor will accomplish this, since the robot will be rather light. For a geared motor, it has been decided that a gear ratio between 25:1 and 75:1 will provide sufficient torque with wide speed variability for the project's purpose. The gear ratio does not impact the cost.
Motor and H-Bridge Shield

Servo Sweep

The ultrasonic and flame sensor modules are limited in their viewing capacity unless they can sweep back and forth quickly while the robot moves along its path. A servo motor is therefore necessary to survey a larger area from a given position. The servo will rotate 45 degrees in each direction, giving a total 90-degree sweep. Both sensors have about a 60-degree viewing angle, meaning the sensors will cover about a 150-degree viewing angle while being swept back and forth. The servo that will be used is the SG90 micro servo motor. It has a stall torque of 1 kg·cm and an operating range of 3-7V. Since the only function of the servo is to sweep the flame and ultrasonic sensors, this will be more than sufficient. The picture below shows the wiring and servo that will be used.
Servo Wiring

Audible Feedback

This is not strictly necessary, but it can be useful for debugging or just for fun. The plan is to have the buzzer make a sound when a victim is found, or to play happy or sad tones based on the environment or situation. The audio feedback will be provided by a piezo buzzer.

Future Prototyping

Computer Vision Simulations

Because our project involves tracking objects that represent different terrain to navigate, OpenCV is being used to track the objects by color. To prove that this is possible, multiple objects were tracked and outlined by their contours in the color identified in the video. This runs in real time, so when objects in the terrain are moved, their tracked positions update immediately.

Object Tracking Via HSV Color
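
A minimal sketch of this color tracking is shown below. The HSV bounds and camera index are placeholders, not the values used in the team's script, and the contour call assumes the OpenCV 4 return signature.

  # Minimal OpenCV color-tracking sketch (HSV bounds and camera index are assumptions).
  import cv2
  import numpy as np

  LOWER = np.array([40, 70, 70])    # example lower HSV bound (greenish object)
  UPPER = np.array([80, 255, 255])  # example upper HSV bound

  cap = cv2.VideoCapture(0)         # assumed camera index
  while True:
      ok, frame = cap.read()
      if not ok:
          break
      hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
      mask = cv2.inRange(hsv, LOWER, UPPER)                  # keep only the target color
      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      cv2.drawContours(frame, contours, -1, (0, 255, 0), 2)  # outline each tracked object
      cv2.imshow("tracking", frame)
      if cv2.waitKey(1) & 0xFF == ord("q"):
          break
  cap.release()
  cv2.destroyAllWindows()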

The boundaries of the terrain need to be known for two reasons: first, so the autonomous vehicle will not travel off the terrain, and second, to construct the graph that will be used by the search algorithms. To do this, Canny edge detection will be used to identify the edges of the terrain along with an outline of each tile. As a demonstration of how this will be done, the edges of the features in the original fruit basket image were detected and outlined. The result is shown in the second figure below.

Original Image of Fruit Basket Used

Result of Canny Edge Detection on Fruit Basket
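
A minimal sketch of the Canny step is shown below; the file names and hysteresis thresholds are placeholders.

  # Minimal Canny edge-detection sketch (file names and thresholds are assumptions).
  import cv2

  img = cv2.imread("fruit_basket.jpg")            # placeholder input image
  gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
  blurred = cv2.GaussianBlur(gray, (5, 5), 0)     # smooth to suppress spurious edges
  edges = cv2.Canny(blurred, 50, 150)             # low/high hysteresis thresholds
  cv2.imwrite("fruit_basket_edges.jpg", edges)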

In order to track the autonomous vehicle, it needs to be singled out from the terrain surrounding it. This can be done in a few different ways: either a mask can be used to filter out all other HSV colors in the video, allowing only the robot to be seen, or background subtraction can be applied to accomplish the same task.

HSV Color Masking Applied

Original Image From Video

Result of Background Subtraction Model Built and Applied
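
A minimal sketch of the background subtraction approach is shown below; the camera index and model parameters are assumptions.

  # Minimal background-subtraction sketch (camera index and parameters are assumptions).
  import cv2

  cap = cv2.VideoCapture(0)
  subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

  while True:
      ok, frame = cap.read()
      if not ok:
          break
      foreground = subtractor.apply(frame)   # moving robot stays white, static terrain goes black
      cv2.imshow("foreground", foreground)
      if cv2.waitKey(1) & 0xFF == ord("q"):
          break
  cap.release()
  cv2.destroyAllWindows()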

Because the camera will not be able to take a picture from directly over top of the terrain, there will be a perspective difference between the image taken and what we want to process. Therefore, a perspective transformation will need to be performed on the original image. This will be automated by placing a yellow circle in each corner that will be detected using OpenCV; the detection will use both the color and shape of the objects in the corners. Once the corners of the terrain are identified, the transform can be completed, resizing the image without distorting it.

Shape Detection

Original Image Taken at an Angle

After Perspective Transform is Completed
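
A minimal sketch of the warp itself is shown below, assuming the four corner points have already been detected; the corner coordinates and output size are placeholders.

  # Minimal perspective-transform sketch (corner coordinates and sizes are assumptions).
  import cv2
  import numpy as np

  img = cv2.imread("terrain_angle.jpg")      # placeholder image taken at an angle

  # Detected corner points (e.g., the yellow circles) in the source image,
  # ordered top-left, top-right, bottom-right, bottom-left.
  src = np.float32([[120, 80], [520, 95], [560, 400], [90, 410]])

  # Where those corners should land in the corrected, top-down image.
  width, height = 600, 450
  dst = np.float32([[0, 0], [width, 0], [width, height], [0, height]])

  matrix = cv2.getPerspectiveTransform(src, dst)
  top_down = cv2.warpPerspective(img, matrix, (width, height))
  cv2.imwrite("terrain_top_down.jpg", top_down)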

After the perspective transform is done, the pixel dimensions of the image are known. Using the HSV color scale, a baseline value will be determined for each color used in the terrain. Using the pixel coordinates of the center of each hexagon, an average color will be determined and its absolute distance from each baseline value calculated; the baseline with the smallest distance gives the color of the tile, and a "node" of that color will be placed at the center. This will later be used for encoding the JSON file. Sample JSON File

Nodes Determined Using HSV Color Scale
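
A minimal sketch of the nearest-baseline classification is shown below; the baseline HSV values, hexagon centers, and file name are placeholders.

  # Minimal nearest-baseline tile classification sketch (baselines and centers are assumptions).
  import cv2
  import numpy as np

  BASELINES = {                        # placeholder baseline HSV values per terrain color
      "grass": np.array([60.0, 180.0, 180.0]),
      "mud":   np.array([20.0, 150.0, 120.0]),
      "water": np.array([110.0, 200.0, 200.0]),
  }

  def classify_tile(hsv_img, center, radius=5):
      """Average the HSV color around a tile center and return the nearest baseline name."""
      x, y = center
      patch = hsv_img[y - radius:y + radius, x - radius:x + radius].reshape(-1, 3)
      avg = patch.mean(axis=0)
      return min(BASELINES, key=lambda name: np.linalg.norm(avg - BASELINES[name]))

  img = cv2.imread("terrain_top_down.jpg")         # placeholder corrected image
  hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
  centers = [(100, 100), (160, 100), (220, 100)]   # placeholder hexagon center coordinates
  nodes = {center: classify_tile(hsv, center) for center in centers}
  print(nodes)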

Aerial Processing

A small script was written to test working with the PiCamera API in Python. This script commands the camera to capture an image and saves that image to the filesystem. This demonstrated the ability to take an image with a Python command, which will be crucial for the asar-camera-engine. Script: take-picture.py Image: Prototyping Image
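
For reference, a minimal capture sketch using the picamera library is shown below; the resolution and output path are assumptions and may differ from take-picture.py.

  # Minimal PiCamera capture sketch (resolution and output path are assumptions).
  from time import sleep
  from picamera import PiCamera

  camera = PiCamera()
  camera.resolution = (1024, 768)
  camera.start_preview()
  sleep(2)                              # give the sensor time to adjust exposure
  camera.capture("/home/pi/terrain.jpg")
  camera.stop_preview()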

Search Algorithm

Multiple methods of path-finding were considered and compared for the specific application of this project. These are described in Search Algorithm Research.

The search algorithm produces an output file consisting of movement commands for the robot: A* Output.
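
As an illustration of how such commands could be generated, a minimal A* sketch over a grid of terrain nodes is shown below; the terrain costs, grid, and output format are illustrative assumptions rather than the team's actual implementation. The resulting list of cells would then be translated into the movement commands written to the output file.

  # Minimal A* sketch over a terrain grid (costs, grid, and encoding are assumptions).
  import heapq

  TERRAIN_COST = {"gravel": 1, "grass": 2, "mud": 5}   # placeholder traversal weights

  def a_star(grid, start, goal):
      """Return a list of (row, col) cells from start to goal, or None if unreachable."""
      def h(cell):                                      # Manhattan-distance heuristic
          return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

      frontier = [(h(start), 0, start, [start])]
      best = {start: 0}
      while frontier:
          _, cost, cell, path = heapq.heappop(frontier)
          if cell == goal:
              return path
          r, c = cell
          for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
              nr, nc = nxt
              if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]):
                  new_cost = cost + TERRAIN_COST[grid[nr][nc]]
                  if new_cost < best.get(nxt, float("inf")):
                      best[nxt] = new_cost
                      heapq.heappush(frontier, (new_cost + h(nxt), new_cost, nxt, path + [nxt]))

  grid = [["gravel", "grass",  "mud"],
          ["grass",  "mud",    "grass"],
          ["gravel", "gravel", "gravel"]]
  print(a_star(grid, (0, 0), (2, 2)))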

Simulation Environment

The Simulation Environment was designed to be used with multiple Disaster Scenarios based on our Subject Matter Expert Discussion.

Aerial Camera Mount

Preliminary discussion of the design of the Aerial Camera Mount Concepts was completed.

Bill of Materials (BOM)

An updated Bill of Materials is shown below. Although many items have "No Progress", the team has pooled together a few of the major components for prototyping and to verify success before a purchase through MSD.
Bill of Materials

Test Plans

Computer Vision Test Plan

The vision system is tasked with identifying and tracking objects located in the terrain map based on their HSV color, identifying and tracking the autonomous vehicle, outlining the terrain map and each object to overlay their locations on the map, and creating a graph of the terrain for the search algorithm. Preliminary testing is being conducted to verify that the individual functions work; they will be translated to our specific application as time goes on.

Terrain Boundary/Graph for Search Algorithm

  1. Determine the outer edge along with the edges of each tile in the terrain map using Canny edge detection.
  2. Interpolate nodes of the graph from the edges that were detected. This graph will be used for the search algorithm.

Color Detection

  1. Determine the HSV color of the objects within the terrain map.
  2. A mask will be applied for each color range to determine where that color is present on the map. An outline of the contour of each object will be drawn on the image.
  3. Assign parameters to each color that is present in the map to be used in the path finding algorithm.
  4. Store the images in a matrix to be referenced in the future.

Contour

  1. Outline each object in the terrain map encompassing the entire area of that boundary.
  2. Overlay the outlined objects on the terrain map to obtain the location of each object.

Background Subtraction

  1. Build a background subtraction model that will apply to our terrain, leaving only the robot present in the image.
  2. Apply this filter to the images being taken such that the position of the robot will always be known.

Image Transformation

  1. Because the camera will not be perpendicular to the terrain, an image transformation will be done to make the image appear perpendicular for processing.
  2. To test this, images will be taken at angles; after the transformation, they should appear perpendicular to the camera.

ASAR Applications Test Strategy Document

File: ASAR Linux Applications Test Strategy

Design and Flowcharts

This section contains the designed components from the last two phases.

Autonomous Search and Rescue Robot Design

Although this is not the 100% final design for the robot, it gives a general idea of the robot's design. The robot will be smaller than 9x9 inches to allow for travel through the simulation environment. Ideally, it will contain four 12V motors. The robot's shell will be made out of sheet metal. It is shown in the figure below along with the unfolded model. There will be a few extra cuts to allow the acrylic top to be held in place, but the general idea is pictured.
Unfolded Sheet Metal Shell

Robot Sheet Metal Shell

The top will be laser cut out of acrylic to allow a view of the internal components. The robot's complete general design is shown below.

Robot Concept Model

The rear wheels will have motors and encoders attached and will be driven using differential drive techniques. The front wheels will be controlled using belts connected to the back wheels.

The PCB and battery will sit in the bottom of the robot box. The top will be covered with a removable piece of acrylic, allowing visibility and easy access for troubleshooting.

The ultrasonic and flame sensors will sit on a post at the front of the robot, connected to a servo that sweeps continuously.

ASAR Robot Software Flowchart

Robot Software Flowchart

Robot Firmware

ASAR Robot Interface PCB

ROBOT PCB

ROBOT PCB SCHEMATIC

ROBOT PCB Bill of Materials

PCB Assembly (Start)

  1. Linux Environment Sequence Diagram: Sequence Diagram
  2. Obstacle Detection Sequence Diagram: Obstacle Detection
  3. Robot Firmware Flowchart: Robot Firmware

Risk Assessment

Below is the updated Risk Assessment evaluation for this phase. Each risk has an accompanying method to reduce it in the upcoming semester.
Risk Assessment

Link to live document: Risk Assessment Live Document.

Plans for next phase

