Team Vision for Detailed Design Phase

This phase focused on testing key components such as wireless communication, aerial imagery, and object orientation. Testing these components prompted a redesign of several others: the simulation environment was redesigned, which necessitated a redesign of the robot, and the robot redesign in turn altered the PCB layout. The team was able to complete these tasks. Future phases will require integrating these components into one system.
Progress Report

The plan was to keep moving forward with testing and to begin integrating the systems together. Since no design survives from start to finish without some revision, we believe we are in a good position to move forward with purchases and construction of the project. Milestones accomplished during this semester include color detection, object recognition, object image orientation, sensor prototyping, imagery, and considerable research into path-finding algorithms. The team decided to move away from the STM32 development board to the Teensy 3.2 due to the complexity of adapting the STM32 to our project. The Teensy with the Arduino IDE allows us to use pre-built libraries on a platform familiar to the majority of the team. For the same reason of simplicity, the team moved away from WiFi toward the more familiar ZigBee platform using two XBee modules. The plan for future reviews is to begin integrating these systems together and eventually into one final project.
Prototyping, Engineering Analysis, Simulation

Included in this section is the engineering analysis along with any prototyping and simulation documentation conducted in the Preliminary Detailed and Detailed Design phases.
Engineering Requirements

Engineering Requirements Live Document.
Flame Sensor

Due to the project scope and budget constraints, thermal imagery will be obtained with a flame sensor module. This will simulate a heat signature and relay information back to the processing hub, signaling that the victim has been found. The victim in our project will be a tea-light candle. The flame sensor module has a 60-degree viewing angle and a range of about 3.2 feet, which works perfectly for simulation purposes. The sensor we will be using is the Flame Sensor Module from RobotLinking. The following images show the wiring of the flame sensor on an Arduino, as well as a flame being detected using the sensor. When no flame is detected, the module returns a high analog value and no LED is lit. When a flame is detected, it returns lower analog values and lights an LED. An example output of the Serial Monitor is shown with the analog values.
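The detection logic described above can be sketched as a simple threshold test. This is a minimal sketch, not the Arduino firmware itself; the threshold value is a hypothetical placeholder that would be calibrated against the Serial Monitor readings.

```python
# Hypothetical calibration value; tune against real Serial Monitor output.
FLAME_THRESHOLD = 600

def flame_detected(analog_value: int) -> bool:
    """The module reads high with no flame and lower when a flame is
    present, so a reading below the threshold counts as a detection."""
    return analog_value < FLAME_THRESHOLD
```

On the Arduino side the same comparison would gate the LED output.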
The experimental values recorded in graphical form can be found in the graph below.
Graphical User Interface

The Graphical User Interface (GUI) will be used to view the simulation as it occurs, with an image overlay of the path that was and will be taken. It also accepts user input, including start, pause, and stop buttons and danger, speed, and environment settings. The final piece displays the time elapsed and/or the estimated time remaining until simulation completion. Below is an example of the GUI that will be used.
Serial and Wireless Communication

Our project will use two radio frequency modules called XBees. These devices are capable of sending and receiving information. They can only transfer data, but with the Raspberry Pi and Teensy development boards, that data can be read serially and converted into commands useful to the project. This is a two-part process: one part is serial communication and the other is wireless communication. For wireless communication, the two XBee modules are placed on the same network and use the same radio frequency; any module using these parameters can communicate. The specific module used in testing is the XBee-Pro 900HP, which has a range of up to 28 miles (direct line of sight), more than sufficient for our project. Serial communication takes place between the XBee and what Digi calls "intelligent devices" through the serial interface. In our case, these are the Raspberry Pi image processing unit and the Teensy on the robot. This is a two-way communication in which the Raspberry Pi is the Coordinator (main hub) and the robot is the Router. A network can have only one Coordinator but any number of Routers. There is one more node type, the End Device, but it does not allow for two-way communication and will not be used. The following image shows one message sent by the Coordinator and another sent from the Router. The image shows the console view from the Coordinator; notice how the Router message shows up in red. The next image shows the Serial Monitor on the Teensy, where the only text that appears is from the Coordinator.
The next step was to send information from the Coordinator to the Router, have the Teensy read the serial data, and turn it into a useful command. For simulation purposes, when the letter h was sent, it would light an LED; when the letter l was sent, it would turn off the LED. This is shown in the following images.
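The byte-to-command mapping used in this test can be sketched as follows. This is a minimal sketch of the prototype's logic, not the actual Teensy firmware; the function name is illustrative.

```python
def handle_command(byte: bytes, led_state: bool) -> bool:
    """Map a received serial byte to an LED state, as in the prototype:
    b'h' turns the LED on, b'l' turns it off, anything else leaves it
    unchanged. Returns the new LED state."""
    if byte == b'h':
        return True
    if byte == b'l':
        return False
    return led_state
```

The firmware would read one byte from the XBee's serial port per loop iteration and drive the LED pin with the returned state.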
A script was written and hardware requirements investigated in order to achieve wireless communication using ZigBee radios between the Aerial Platform's Raspberry Pi and a microcontroller. This script demonstrates the ability to transmit a single byte each way. Pi Script: main.py Microcontroller Firmware: wireless-proto.ino
Ultrasonic Sensor

Distance sensing was not an initial need for the project, but we added it to give the robot a three-dimensional aspect and a future project goal. The robot will travel based on the directions it receives from the camera and processing unit, but in the larger scope it should be able to traverse on its own. Our customer stated that many of the operations performed in New York do not allow for aerial imagery. The ultrasonic sensor provides information about what lies ahead of the robot. It will also be used to project three-dimensional objects onto the two-dimensional grid being processed. The robot will notify the processing unit of an obstruction, and the processing unit will find a new path. The distance sensor will be an HC-SR04 Ultrasonic Sensor. It has a trigger time of 10 microseconds, sends signals at 40 kHz (not audible to the human ear), runs off a 5V power supply, and has a range of 2-500 cm with a resolution of 0.3 cm. The following images show the wiring and example outputs on the Serial Monitor.
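The conversion from the sensor's echo pulse to a distance is straightforward: the echo time covers the round trip of a sound pulse, so the one-way distance is half the time multiplied by the speed of sound. A sketch of that calculation:

```python
# Speed of sound in air at roughly room temperature, in cm per microsecond.
SPEED_OF_SOUND_CM_PER_US = 0.0343

def echo_to_distance_cm(echo_us: float) -> float:
    """Convert an HC-SR04 echo pulse width (microseconds) to distance (cm).
    The pulse covers the round trip, so the time is halved."""
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2
```

For example, a 1000-microsecond echo corresponds to roughly 17 cm.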
Motor and Shield

The robot will travel using two DC motors controlled via pulse width modulation signals to give it speed variation. These different speeds will simulate the different terrains the robot is traveling through; it will move more slowly in mud than on gravel, for example. The motors must also move in both directions, forward and reverse, to navigate quickly and efficiently. We will be using two 12V Pololu motors, but the exact specifications have not been determined beyond the need for a wide range of achievable speeds. A low-torque, high-speed motor will accomplish this, since the robot will be rather light. For a geared motor, it has been decided that anything below a 75:1 and above a 25:1 gear ratio will provide a sufficient amount of torque with wide speed variability for the project's purpose. The gear ratio does not impact the cost.
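The terrain-dependent speed scaling described above can be sketched as a lookup from terrain type to PWM duty cycle. The terrain names and scale factors here are hypothetical placeholders; the real values would come from testing.

```python
# Hypothetical terrain speed factors (fraction of full speed).
TERRAIN_SPEED = {"gravel": 1.0, "grass": 0.7, "mud": 0.4}

def pwm_duty(terrain: str, max_duty: int = 255) -> int:
    """Scale an 8-bit PWM duty cycle by the terrain factor, so the
    robot moves more slowly on difficult terrain."""
    return round(max_duty * TERRAIN_SPEED[terrain])
```

The sign of the duty (or a separate direction pin on the motor shield) would select forward versus reverse.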
Servo Sweep

The ultrasonic and flame sensor modules are limited in their viewing capacity unless they can sweep back and forth quickly while the robot moves along its path. A servo motor is necessary to survey a larger area from one given position. The servo will rotate 45 degrees in each direction, giving a total 90-degree sweep. Both sensors have about a 60-degree viewing angle, so each sensor covers roughly 150 degrees while being swept back and forth. The servo that will be used is the SG90 Micro Servo Motor. It has a stall torque of 1 kg·cm and an operating range of 3-7V. Since the only function of the servo is to sweep the flame and ultrasonic sensors, this will be more than sufficient. The picture below shows the wiring and the servo that will be used.
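The 150-degree figure above follows from simple geometry: the swept coverage is the full sweep range plus the sensor's own field of view. A one-line sketch of that arithmetic:

```python
def swept_coverage_deg(sweep_each_way: float, sensor_fov: float) -> float:
    """Total angle surveyed when a sensor with the given field of view
    is swept the given number of degrees to each side of center."""
    return 2 * sweep_each_way + sensor_fov
```

With a 45-degree sweep each way and a 60-degree field of view, this gives the 150-degree coverage stated above.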
Audible Feedback

This is not strictly necessary, but it can be useful for debugging or just for fun. The plan is to have a buzzer sound when a victim is found, or play happy or sad tones based on the environment or situation. The audio feedback will be provided by a piezo buzzer.
- Encoders - In order for the robot to travel the correct speed and distance, the use of encoders is essential. When commanded to move a given distance, the robot will be able to calculate how far it has traveled based on its wheels' rotation. Encoders will provide the necessary information for the robot's feedback system. These will come packaged with the Pololu motors; the exact specifications are unknown at this time.
- Visual Feedback - Similar to the audio feedback, different colored LEDs will also be used to provide visual feedback to the user for different states that the robot is in.
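The encoder-based distance calculation mentioned above can be sketched as follows. Since the exact encoder specifications are not yet known, the counts-per-revolution and wheel diameter are hypothetical parameters.

```python
import math

def distance_traveled_cm(counts: int, counts_per_rev: int,
                         wheel_diameter_cm: float) -> float:
    """Distance from encoder counts: wheel revolutions multiplied by
    the wheel circumference (pi * diameter)."""
    revolutions = counts / counts_per_rev
    return revolutions * math.pi * wheel_diameter_cm
```

The robot's feedback loop would compare this computed distance against the commanded distance to decide when to stop.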
Computer Vision Simulations
Because our project involves tracking objects representing different terrain to navigate, OpenCV is being used to track the objects by color. To prove that this is possible, multiple objects were tracked and outlined by their contours in the color identified in the video. This happens in real time, so when objects in the terrain are moved, the tracking updates immediately.
The boundaries of the terrain need to be known for two reasons: first, so the autonomous vehicle will not travel off the terrain, and second, to construct the graph that will be used by the search algorithms. To do this, Canny edge detection will be used to identify the edges of the terrain along with an outline of each tile. As a demonstration, the edges of the features in the original image were detected and outlined; the result is shown in the figure below.
In order to track the autonomous vehicle, it needs to be singled out from the surrounding terrain. This can be done in a few different ways: either a mask can be used to filter out all other HSV colors in the video, allowing only the robot to be seen, or a background subtraction can be performed to accomplish the same task.
Because the camera will not be able to take a picture from directly over top of the terrain, there will be a perspective difference between the image taken and what we want to process. Therefore, a perspective transformation will need to be performed on the original image. This will be automated by placing a yellow circle in each corner to be detected using OpenCV. The detection will use both the color and the shape of the objects in the corners. Once the corners of the terrain are identified, the transform can be completed, resizing the image without distorting it.
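Before the transform can be computed, the four detected corner circles must be put in a consistent order so they can be paired with the corners of the target rectangle. One common sketch of that step, assuming image coordinates with y increasing downward (the function name is illustrative):

```python
def order_corners(points):
    """Order four (x, y) corner detections as
    [top-left, top-right, bottom-right, bottom-left].
    Top-left has the smallest x+y, bottom-right the largest;
    top-right has the largest x-y, bottom-left the smallest."""
    tl = min(points, key=lambda p: p[0] + p[1])
    br = max(points, key=lambda p: p[0] + p[1])
    tr = max(points, key=lambda p: p[0] - p[1])
    bl = min(points, key=lambda p: p[0] - p[1])
    return [tl, tr, br, bl]
```

The ordered points would then be passed to OpenCV's perspective-transform routines to warp the image into a top-down view.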
After the perspective transform is done, the pixel dimensions of the image are known. Using the HSV color scale, a baseline value will be determined for each color used in the terrain. Using the pixel coordinates of the center of each hexagon, an average color will be sampled, and its absolute distance from each of the baseline values will be calculated; the baseline with the smallest distance determines the color of the tile, and a "node" of that color is placed at its center. This will later be used when encoding the JSON file. Sample JSON File
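The nearest-baseline classification described above can be sketched as a minimum-distance lookup. The baseline values here are hypothetical examples on OpenCV's 0-179 hue scale, and this sketch ignores hue wrap-around for simplicity.

```python
def classify_tile(avg_hsv, baselines):
    """Return the name of the baseline color whose HSV value is closest
    (squared Euclidean distance) to the tile's average HSV sample."""
    def dist_sq(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(baselines, key=lambda name: dist_sq(avg_hsv, baselines[name]))

# Hypothetical baseline HSV values for the terrain colors.
BASELINES = {"green": (60, 200, 200), "blue": (120, 200, 200), "red": (0, 200, 200)}
```

Each classified tile would then become a node of that color in the graph handed to the search algorithm.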
- Raspberry Pi Camera
A small script was written to test working with the PiCamera API in Python. This script commands the camera to capture an image and saves that image to the filesystem. This demonstrated the ability to take an image with a Python command, which will be crucial for the asar-camera-engine. Script: take-picture.py Image: Prototyping Image
Search Algorithm

Multiple methods of path-finding were considered and compared for the specific application of this project. These are described in Search Algorithm Research.
The search algorithm produces an output file consisting of movement commands for the robot. A* Output.
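A minimal sketch of the A* search over a terrain grid, plus the translation of the resulting path into movement commands, is shown below. The 4-connected grid, terrain-weight dictionary, and command names are illustrative assumptions; the project's actual graph is built from the detected tiles.

```python
import heapq

def a_star(grid_cost, start, goal):
    """A* over a 4-connected grid. grid_cost maps (row, col) -> traversal
    cost (terrain weight); cells absent from the dict are impassable.
    Returns the list of cells from start to goal, or None if unreachable."""
    def h(a, b):  # Manhattan-distance heuristic, admissible for unit+ costs
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    frontier = [(h(start, goal), 0, start, [start])]
    best = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nxt not in grid_cost:
                continue
            ng = g + grid_cost[nxt]
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt, goal), ng, nxt, path + [nxt]))
    return None

def path_to_commands(path):
    """Translate successive cells into movement commands for the robot's
    output file (hypothetical command names)."""
    moves = {(1, 0): "SOUTH", (-1, 0): "NORTH", (0, 1): "EAST", (0, -1): "WEST"}
    return [moves[(b[0] - a[0], b[1] - a[1])] for a, b in zip(path, path[1:])]
```

Terrain difficulty enters through the cost dictionary: a mud tile with cost 3 is avoided in favor of a longer route over cost-1 gravel when the detour is cheaper overall.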
Simulation Environment

The Simulation Environment was designed to be used with multiple Disaster Scenarios that were based on our Subject Matter Expert Discussion.
Aerial Camera Mount

Preliminary discussion of the design of the Aerial Camera Mount Concepts was completed.
Bill of Materials (BOM)

An updated Bill of Materials is shown below. Although many items show "No Progress", the team has pooled together a few of the major components for prototyping, to verify success before a purchase through MSD.
Computer Vision Test Plan

The vision system is tasked with identifying and tracking objects located in the terrain map by their HSV color, identifying and tracking the autonomous vehicle, outlining the terrain map and each object to overlay their locations on the map, and creating a graph of the terrain for the search algorithm. Preliminary testing is being conducted to verify that the individual functions work; they will be translated to our specific application as time goes on.
Terrain Boundary/Graph for Search Algorithm
- Determine the outer edge along with the edges of each tile in the terrain map using Canny edge detection.
- Interpolate nodes of the graph from the edges that were detected. This graph will be used for the search algorithm.
- Determine the HSV color of the objects within the terrain map.
- A mask will be applied for each color range to determine where that color is present on the map. An outline of the contour of each object will be drawn on the image.
- Assign parameters to each color that is present in the map to be used in the path finding algorithm.
- Store the images in a matrix to be referenced in the future.
- Outline each object in the terrain map encompassing the entire area of that boundary.
- Overlay the outlined objects on the terrain map to obtain the location of each object.
- Build a background subtraction model that applies to our terrain, leaving only the robot present in the image.
- Apply this filter to the images being taken such that the position of the robot will always be known.
- Because the camera will not be perpendicular to the terrain, an image transformation will be done to make the image appear perpendicular for processing.
- To test this, images will be taken at angles and, given the transformations, should appear perpendicular to the camera.
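The background subtraction step in the plan above can be sketched with a simple per-pixel difference against a background model. This is a conceptual sketch using grayscale frames as nested lists in place of real images; OpenCV's built-in background subtractors would be used in practice, and the threshold value is a hypothetical placeholder.

```python
def subtract_background(background, frame, threshold=30):
    """Per-pixel absolute difference against a background model.
    Pixels that changed by more than the threshold are marked as
    foreground (1); everything else is background (0)."""
    return [[1 if abs(f - b) > threshold else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
```

Applied to the terrain, the only large foreground blob remaining should be the robot, giving its position in every frame.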
ASAR Applications Test Strategy Document

File: ASAR Linux Applications Test Strategy
Design and Flowcharts

This section contains the designed components from the last two phases.
Autonomous Search and Rescue Robot Design

Although this is not the final design for the robot, it gives a general idea of the robot's design. The robot will be smaller than 9x9 inches to allow travel through the simulation environment. Ideally, it will contain four 12V motors. The robot's shell will be made of sheet metal. It is shown in the figure below along with the unfolded model. There will be a few extra cuts to allow the acrylic top to be held in place, but the general idea is pictured.
The top will be laser cut from acrylic and placed on top to allow a view of the internal components. The robot's complete general design is shown below.
The rear wheels will have motors and encoders attached and will be driven using differential drive techniques. The front wheels will be driven by belts connected to the rear wheels.
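Differential drive kinematics can be sketched as follows: a commanded linear velocity and turn rate map directly to left and right wheel speeds. The units and the track-width parameter are illustrative.

```python
def wheel_speeds(v, omega, track_width):
    """Differential-drive kinematics: linear velocity v (cm/s) and
    angular velocity omega (rad/s, positive counterclockwise) mapped
    to (left, right) wheel speeds for wheels track_width cm apart."""
    left = v - omega * track_width / 2
    right = v + omega * track_width / 2
    return left, right
```

Driving straight means equal speeds; turning in place means equal and opposite speeds, which is how the robot would execute the rotation commands from the path-finding output.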
The PCB and battery will sit in the bottom of the robot box. The top will be covered with a removable piece of acrylic, allowing visibility and easy access for troubleshooting.
The ultrasonic and flame sensors will sit on a post at the front of the robot, connected to a servo that sweeps continuously.
ASAR Robot Software Flowchart
ASAR Robot Interface PCB
- Linux Environment Sequence Diagram: Sequence Diagram
- Obstacle Detection Sequence Diagram: Obstacle Detection
- Robot Firmware Flowchart: Robot Firmware
Risk Assessment

Below is the updated Risk Assessment evaluation for this phase. All risks have an accompanying method to reduce the risk in the upcoming semester.
Link to live document: Risk Assessment Live Document.
Plans for next phase
- As a team, we have decided that most items can be handled after the semester is over and at the beginning of next semester. However, we would like to proactively start the construction of the robot and simulation environment.
- Before getting ready for MSDII, our most pressing matter is purchasing materials and managing the money involved with those purchases.