
Source: rwbclasses.groups.et.byu.net/lib/exe/fetch.php?media=rob…

Final Report

ECEn 490 – Robot Soccer

Team 4: MAN-D

Andrew Wendt
Nick Ortega
Dave Bogle
Lesley Wu
Mike McLind


Table of Contents

Executive Summary
Introduction
    System Overview
Mechanical
    Intro
    Original Design
    Design Changes
    Bugs
    Final Results
Motion Control
    Intro
    Original Design
    Design Changes
    Bugs
    Final Results
Vision
    Intro
    Original Design
    Design Changes
    Bugs
    Final Results
Artificial Intelligence
    Intro
    Original Design
    Design Changes
    Bugs
    Final Results
Appendix A
Appendix B
Appendix C


Executive Summary

The robot soccer competition has been around for many years at BYU and has proven to be a favorite among both professors and students. It blends high-level software for vision processing with low-level code that interfaces directly with hardware components, making it a great way to tie together many of the concepts learned by computer and electrical engineers.

The project this year consisted of building a single robot that was tracked by an overhead IP camera. The robot had an ODROID to handle motor control, three wheels driven by two RoboClaw motor controllers, and two large batteries for power. We also added a solenoid kicker to give the ball a quick, strong push. The robot wore a "jersey" of a specific color that the camera could track, and the ball was likewise a distinct color.

Rules had been set forth for how the game should be played. They mimicked many rules present in soccer, such as no rough contact with another player, creating an even playing field on which to compete. The field measured 5 x 10 feet with a home and an away goal. Matches lasted approximately 60 seconds, with each team allowed one technical timeout.

Throughout the project we encountered some problems. About two weeks before the final competition we realized that both the vision code and the motor controller code had to be ported to C++ to achieve the desired performance. We had originally written the code in Python, but we found the C++ code performed significantly better. This was a setback given the little time remaining, but we successfully ported both the vision and the motor control code to C++ and saw a significant improvement in the robot's performance. Unfortunately, at the final competition we ran into serious roadblocks: the ODROID's SD card had been corrupted, and we were also unable to connect to the camera, leaving us with no vision. Through the combined efforts of the team members and the help of the TAs, we rebuilt the ODROID and got the robot running again. However, we were never able to access the camera, and therefore did not compete.

Despite this setback, we are confident that, given more time, we could have found the problem with the camera connection and fielded a robot that would have been a serious competitor against our peers.


Introduction

System Overview

Figure 2.1 gives a basic overview of our project. On the left side, the camera sends data to our computer, which processes the state of the field. The computer runs an algorithm to find the positions of our robot, the opponent robot, and the ball, and maps them to world coordinates. It then broadcasts that data to the AI, which decides what to do in a given situation. The AI sends commands through a serial connection to the RoboClaws, which move the motors accordingly and receive feedback on how fast the motors are moving.

Figure 2.1


Mechanical

Intro

The mechanical system covers everything physical about the robot, from head to toe and everything in between. Most of the mechanical details concern how the parts are laid out and attached to the robot, as well as the integration between the parts.

Original Design

The main parts that needed to be integrated on the robot were three motors, three wheels, two motor controllers, two batteries, a voltage regulator, and a small processor. The original design was an aluminum hexagon measuring eight inches at its widest point. Three alternating sides had a side panel where we would mount the wheel motors; the other three sides were open to the air to allow access to the components as needed. The batteries sat on the bottom, held in by the motors and two metal standoffs. Each side panel also had an overhang toward the center to which we would attach a top plate carrying the rest of the hardware. One of the open areas was supposed to be cut away in a curve to give us some ball control, with the idea of eventually mounting a dribbling mechanism. The original schematic is shown in Figure 3.1.

Figure 3.1


Design Changes

After we started building the robot, a bug with the motors (discussed in the Bugs section) caused us to rethink our design. We learned that the ECEN Shop had plastic 3D-printable motor mounts that we could use, and we adopted them since they were already dimensioned correctly. This meant we needed to use the three sides without walls to mount the motors. Rather than starting over, we decided to reuse the same base. We also decided the top would be rather heavy, which was undesirable. Moving the motors off the side walls opened up space for the other parts. We still wanted an area for ball control, which left us with two walls. We decided to mount the motor controllers directly over the motors and mount the processor and voltage regulator on the sides. The batteries moved to the underside of the lid, which we attached with a hinge to allow easy access to the inside of the robot. Figure 3.2 is a picture of the actual robot with these changes.

Figure 3.2

One of the functional design specifications is that the entire outside of the robot must be black. At one point we tried construction paper, but it didn't work very well. We also wanted some protection for the wheels, since they could be hit by other robots. This caused us to rethink our design again. We decided that the basic idea of a metal can could provide everything we wanted. We also thought to incorporate a power switch and an external power jack so we would not have to dig around inside the robot to connect the batteries. The design was to cut off the side walls of the bottom and leave the motors mounted where they were. We would attach metal ducting to the main plate using square brackets. The batteries moved to another plate connected to the motor mounts, and the remaining parts could be attached to the walls at whatever spacing was needed. For ball control, a small area was


cut out of the side of the ducting, which allowed the ball to sit recessed from the edge. Figure 3.3 shows the outcome of this design, which was the final design used, with a removable cardboard lid.

Figure 3.3

Bugs

There were four bugs in the mechanical systems over the course of the project. The first, as mentioned above, dealt with the motors. Early in the project, after we started building the base and drilling holes, it became apparent that some of the measurements for the holes and mounting areas were not correct. The most problematic spot was the motor mounts: it was very hard to get the holes in exactly the right positions, which left the wheels tilted and the motors insecurely fastened. As mentioned in the Design Changes section, we then learned of the 3D-printed mounts; these parts and a design change fixed this bug.

The second bug we encountered was that we destroyed a protection diode on the power regulator board. This was a quick fix: we removed the destroyed part and soldered in a new one.

The third bug occurred right before a competition: one of the Linux libraries we were using on our processor became corrupted. We fixed it by downloading the correct version of the library from the web and overwriting the corrupted one, but this took long enough that we missed that competition.

The fourth bug was related to the third. The entire memory system on the processor became corrupted, and we lost everything on the card that had not been backed up. We were eventually able to flash a new image to the memory and download all the needed parts to get the robot working again.


Final Results

The final result of the mechanical system was an assembled robot ready to be controlled, as shown in Figure 3.3. It met all competition rules, protected the components, and allowed easy access to all of them.


Motion Control

Intro

Robot soccer is all about controlling the ball and directing it into the goal, which can only be achieved by controlling the robot on the field. The AI calculates how the robot needs to move in order to manipulate the ball. Motion control, then, is the link between the AI and the mechanical systems of the robot, allowing the robot to move as desired.

Original Design

We were initially supplied with an ODROID microprocessor, two RoboClaw motor controllers, and three motors with built-in encoders, and we had to get all of these communicating. We chose to write code on the ODROID in Python to take world-velocity commands from the AI over a ROS publisher and interpret them into individual motor commands.

The main processing involved matrix and vector algebra. To translate from the world frame to the robot's frame, we multiplied by a rotation matrix. We then multiplied by a conversion matrix that took into account the geometry of the robot (the position and direction of each motor).
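The two-matrix transformation described above can be sketched as follows. This is a minimal illustration, not our actual code: the wheel angles and robot radius here are placeholder values for a generic three-wheel omni-drive, not our robot's measured geometry.

```python
# Sketch: world-frame velocity command -> individual wheel speeds
# for a three-wheel omni-directional robot.
import numpy as np

WHEEL_ANGLES = np.radians([150.0, 270.0, 30.0])  # assumed wheel directions
ROBOT_RADIUS = 0.08                               # meters, assumed

def world_to_wheel_speeds(vx, vy, omega, theta):
    """vx, vy: world-frame linear velocity; omega: angular velocity;
    theta: robot heading reported by the vision system."""
    # Rotation matrix: world frame -> robot body frame.
    rot = np.array([[ np.cos(theta), np.sin(theta), 0.0],
                    [-np.sin(theta), np.cos(theta), 0.0],
                    [ 0.0,           0.0,           1.0]])
    body = rot @ np.array([vx, vy, omega])

    # Conversion matrix: each row projects the body velocity onto one wheel
    # and adds the rotational contribution at the robot's radius.
    m = np.array([[-np.sin(a), np.cos(a), ROBOT_RADIUS] for a in WHEEL_ANGLES])
    return m @ body
```

A pure rotation command, for example, yields the same speed on all three wheels, which is a quick sanity check of the geometry matrix.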

To drive the motors, commands were sent to the RoboClaw motor controllers, which accepted serial commands and output voltages to each motor. Multiple command modes are available; originally we used the simple drive command, which sends a number that is translated directly into an output voltage. This actually controlled our motion quite well for straight forward/backward commands, and it could run quite fast. However, it did not turn well while in motion, and it did not adjust for differences in motor strength.


Design Changes

Given the issues we were seeing with direct voltage control, we decided to use the RoboClaw's built-in PID control to command the motors more accurately. The final system is depicted below.

We also ported the interpretation layer to C++ on the ODROID. This allowed the AI (which was written in C++) to call the output functions directly. While this took a bit of extra time, the change significantly improved performance.

In addition to changing the commands given, the RoboClaw controllers needed information from the encoders to achieve the desired angular velocities. This was simple to wire, but also required an accurate count of ticks per revolution. Each wheel was timed under a constant command, and the results were used to scale the outputs given to each motor.
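The per-wheel scaling above amounts to a simple conversion from a commanded wheel speed to the ticks-per-second value a speed-controlled motor driver expects. A minimal sketch, with made-up tick counts rather than our measured ones:

```python
# Sketch: convert a desired wheel speed (rev/s) into encoder ticks/s,
# using a separately measured ticks-per-revolution value for each wheel
# so that differences between motors are compensated.
MEASURED_TICKS_PER_REV = [1955, 1920, 1988]  # hypothetical measurements

def wheel_speed_to_ticks(motor_index, revs_per_second):
    """Scale one wheel's commanded speed into encoder ticks per second."""
    return int(round(MEASURED_TICKS_PER_REV[motor_index] * revs_per_second))
```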

Bugs


The first major bug we encountered was that the motors could not be accurately commanded with voltages alone; we needed the encoder information to match each motor's speed exactly. Implementing this was difficult, as we ended up needing to scale the outputs based on encoder differences, and we were only able to solve the problem with help from the TAs.

After correcting the output to each motor, we found that using ROS to communicate between the AI and the motion control introduced significant lag between AI outputs and the realized action. We never found out why, but this led us to change from Python to C++, which improved accuracy as well.

One bug we never addressed was the inability of our ODROID to check the encoder outputs; we would have needed to implement voltage stepping for that, and we did not feel it was necessary. Additionally, we may have had a bug causing RoboClaw errors at high inputs, but we only saw those errors once and never found their cause, and our final tests did not reproduce the problem. We also found that motion became less accurate at higher speeds.

Final Results

Our most exciting result came when the final motion control was assembled: the robot moved very accurately given outputs from the AI software. We tested this using a PlayStation controller, with the AI simply passing the controller inputs through to the robot. We could move in straight lines forward, sideways, and diagonally, shifting speeds quite swiftly.


Vision

Intro

The vision subsystem is the foundation of robot soccer. The robots can make decisions, control their movements, and achieve the goal only if they perceive the surrounding environment through the vision system.

In recent years, with the development of artificial intelligence, neural networks and statistical theory, studies of machine vision for target detection, recognition and tracking have made significant progress and have been put into widespread use in industry and everyday life. Target tracking plays an essential role in the robot soccer project. It is defined as finding the area of a target in a sequence of images that closely matches a template, by defining characteristics of the target and applying the similarity principle. Characteristics of targets include visual characteristics (color, texture, shape and outline), statistical characteristics (histogram and moments), algebraic characteristics (singular value decomposition of the image matrix) and so on. Common measures of similarity include weighted distance, Hausdorff distance, Euclidean distance, chessboard distance and so on. In order to avoid the large quantity of redundant information brought by a global search, it is helpful to adopt tracking algorithms that estimate the target's position so as to narrow the search.

Currently, there are four main kinds of algorithms for tracking moving objects: tracking based on characteristics, on area, on active outline, and on a model. Classic estimation algorithms include the Kalman Filter, the extended Kalman Filter, the Mean-Shift algorithm, the Cam-Shift algorithm, the Particle Filter, etc.

Original Design

The robot soccer vision system involves camera calibration, color calibration, object recognition and object positioning. The flowchart in Figure 5.1 below shows the process of vision control.


Figure 5.1

Camera calibration is about eliminating the fish-eye effect of the overhead camera. A fish-eye lens is a kind of ultra-wide-angle lens able to capture a view spanning 180 degrees or more at once. However, the picture taken through a fish-eye lens is not an exact reflection of the actual scene; it contains severe distortions. Camera calibration is therefore needed to reconstruct objects in space from the pictures captured by the camera. It is assumed that there is a simple linear relation between images taken by the camera and objects in three-dimensional space, [image] = M [object], where M is regarded as a geometric model of camera imaging. Generally, these parameters are acquired through experiments and calculations, and the process of finding them is called camera calibration.

The targets that the robot soccer vision system looks for are patches of different colors and various sizes. However, the color shown in the images varies with changes in the environment and illumination. As a result, in order to segment color accurately, the first step is color calibration, which means extracting the color characteristics of the target.

Object recognition is the next stage after color calibration. In this stage, filters for different colors are set according to the color characteristics extracted during color calibration. The image is filtered using a filtering algorithm, and the outline of the result is traced after performing a few morphological operations.

In the object positioning stage, we find the coordinates of the center of the contour traced by the contour tracing algorithm and convert them into the real position. Meanwhile, the angle the robot is facing is obtained and transmitted to the decision-making subsystem together with the real position.

Camera Calibration

Camera distortions are mainly divided into two kinds: radial distortion and tangential distortion. Radial distortion is usually classified as barrel distortion; it is the variation of the endpoints of vectors with distance from the optical axis. It appears at mid-range focal lengths and is worst at the lens's widest end. Similarly, tangential distortion is the variation of the endpoints of vectors with distance in the tangential direction, caused by the lens being


not absolutely parallel to the imaging plane. This kind of distortion results in some points in the image being closer than they should be. Aside from the distortion coefficients, other information is required to accomplish camera calibration, namely the intrinsic and extrinsic parameters of the camera. The process of determining the distortion coefficients, intrinsic parameters and extrinsic parameters is known as calibration. Basically, we provide a sequence of snapshots of a known pattern, like a chessboard, so that we can match positions in the image against the actual positions of special points, such as corners, and solve for the parameters through basic geometric equations. For good results we need at least eight good snapshots of the chessboard taken in different positions. The result of camera calibration is shown in Figure 5.2.

Figure 5.2

Color Calibration

Color calibration is essential since a robot soccer competition depends heavily on object recognition based on color segmentation to distinguish team members, opponents and the ball. Every robot is characterized by a color tag according to its role. The color tags can be designed in different shapes, such as rectangular, triangular and round color patches. Although they may vary in shape, every color tag must be in the team's identifying color, such as blue or red, to differentiate team members from opponents and avoid confusion. In this circumstance, the color space plays a significant role in color segmentation and object recognition.

In our case, we call the cv2.setMouseCallback() function in OpenCV. When the image is clicked, the color information of the clicked pixel, including its R, G, B, H, S and V values, is sampled, and the range of the color filter is set according to that information. For better results, after sampling, several sliders are created whose positions set the range of the color filter, and the mask output by the filter is shown as a reference for adjustment.

Object Recognition

Object recognition refers to the task of identifying the objects that have been extracted from the image according to color information. There are several ways to identify objects, including color-based, pattern-based, environment-based and behavior-based recognition. In the robot soccer project, owing to its simplicity, most teams use color-based recognition. Then, according to the size of the objects we find, we set an attribute to


denote its role: the center of the robot, the front of the robot, or the ball. Before that, however, it is best to process the mask output by color-based recognition to obtain clean objects. The flowchart in Figure 5.3 shows the process of object recognition:

Figure 5.3

Each team in the robot soccer competition uses color patches on top of their robot to distinguish the robot's characteristics, so it is simple to recognize the different robots and the ball by color. The first step is to convert the image into the HSV color model, in which a specific color is easier to denote.

After the mask has been filtered, we use morphological operations to process it. In the robot soccer project, the ideal output of morphological processing is two clear, isolated areas with no stray connections or noise. The open operation eliminates the isolated dots and glitches around the mask (a close operation can additionally bridge gaps between nearby regions). At the same time it does not alter the position of features in the image, and is therefore ideal for our application.

After optimizing the mask (the output of color-based recognition), a few more steps are required to process the connected regions and find their sizes and centers of mass.

Object Positioning

What is worth transmitting to the decision-making system is the actual position of the robot's center in the world coordinate system, rather than pixel coordinates. As a result, we needed to work out the relation between the world and image coordinate systems, along with formulas to calculate the position of the robot's center from the shape of the jersey on top, and the angle the robot is facing. During this process, it is difficult to find the coordinates of the bottom-left corner of the field and its width and height. Initially, we planned to use the Harris corner detection algorithm to find the four corners of the field; the height would then be the pixel distance in the y direction between the top-left and bottom-left corners, and the width the pixel distance in the x direction between the top-left and top-right corners.

Design Changes

On account of the limitations of our original design, we changed our approach in the following respects.


Camera Calibration

The camera sees the whole field, but the undistorted image does not fit in a 640x480 frame. We fixed this by undistorting the image into a larger Mat. Instead of calling undistort, we use initUndistortRectifyMap to produce two map matrices, then feed these into the remap function to do the undistorting; this is what the undistort function does internally.

To make the destination Mat larger, we provide the destination size as the Size parameter of initUndistortRectifyMap. The output image from remap is then larger, but the top and left edges may still be cropped off. To shift the image, we adjust the new camera matrix and pass this adjusted matrix as the new camera matrix for initUndistortRectifyMap.

Find the Corner

In practice, there are several problems with the Harris corner detection algorithm that make it imperfect here. First, the corners of the field are curved rather than right angles; besides, in addition to the corners of the outer border, it also detects the crosses inside the field, which slows the algorithm down. As a result, we decided to find the corners manually. This means drawing a rectangle on the field and adjusting its size and position; when the rectangle overlaps the field border, the coordinates of its corners give the four corners of the field.

Bugs

There were a number of bugs we encountered throughout the development process. One of the first issues appeared when we tried to draw the coordinates on the screen to check whether the robot appeared to be in the right position. We were drawing the world coordinates instead of the coordinates relative to the screen, which caused the text to be drawn in the wrong spot.

Another bug, and by far the most major, was that we were using Python for our vision processing. This seems like it shouldn't be a problem, but for some reason the Python code introduced a 2-3 second lag on the video feed, rendering our AI virtually useless because it did not have a real-time position to process. To fix this, we profiled our code to see whether a long loop was causing the problem. After some testing and optimization we found that our total processing time per frame was approximately 0.03 seconds, which should be plenty for about 20 fps. Despite these optimizations, we saw virtually no change in the latency of the camera. We did some


tests with a C++ version of our code and noticed that it was significantly faster. Ultimately we did a full port of our code to C++, which reduced the latency to about half a second.

Final Results

After all the steps described above, the result of the vision system is shown in Figure 5.4. First, the image has been undistorted. The system then tracks the robots and the ball reliably, reporting correct positions in the world coordinate system.

Figure 5.4


Artificial Intelligence

Intro

This portion of the project covers how we control the robot and the logic involved in doing so. The artificial intelligence software takes position data for the ball and robots from the vision software and processes that data to intelligently command movements of the robot. The artificial intelligence software is what takes the robot from a remote-controlled car to an autonomous, soccer-playing vehicle.

Original Design

The original design for the artificial intelligence software used ROS to receive data from the vision processing software. This data consisted of position information for our robot, the opponent robot, and the ball. The AI software used this information to compute where the robot should move so that it could avoid the opponent, get the ball, and score, while also holding position in key strategic parts of the field. From these computed routes the AI derived the velocity vector commands needed to move the robot along the desired route at the desired rate. This velocity information was then transmitted to the motion control software, again via ROS.

The AI and motion control software each ran on the microprocessor (Odroid U3) on the robot. The vision software ran on a separate computer and communicated with the robot over a wireless 802.11 network. This separate computer also ran ROS software to transmit keyboard and video game controller input to the AI. This input was used to control the state of the AI, such as starting and stopping autonomous play and executing debug and test modes. The video game controller was also used for manual control of the robot during testing. Figure 6.1 shows a flowchart of the system described in this section.


Design Changes

The main design change was in how the output of the AI software is communicated to the motion control software, which also runs on the Odroid. The AI also changed considerably in implementation: it was heavily refactored to improve speed, readability, and flexibility. The code structure of the AI now allows much quicker and easier changes and expansions, which matters because time restrictions late in a project can be quite costly.

The change in how information is transmitted to the motion control software was especially important in reducing the response time to changes in the vision output. This change is shown in Figure 6.2.

Another change was the addition of more powerful testing and debugging tools, allowing us to find and solve problems faster.


Bugs

Most of the big bugs we saw in the AI involved not accounting sufficiently for the lag between movement and changes in the vision data. Failing to account for this lag caused the AI to overshoot in its movement commands, so the robot moved imprecisely and sometimes erratically. After removing as much lag as possible from the vision software, we fixed this by periodically commanding the robot to stop so the vision data could catch up.

Other bugs arose from the refactoring of the AI code and the change in how data is output to the motion control software. The main one was that multiple ports were being opened because C++ passes arguments by value (making a copy) by default in a function call. This was not an issue before the refactor and was overlooked during the changes. The copying caused the same port to be opened repeatedly; the code did not crash, but it produced run-time errors that were hard to pin down. Once we understood what was going on, the fix was easy.

Final Results

The final results of the AI were not where I wanted them to be, but they were satisfactory for fulfilling the requirements. Early on, the AI could only be tested in simulation because the robot hardware was not ready. The AI software tested quite well in simulation, but continuing delays in testing on hardware left too little time to improve it as far as I wanted.

In the end, the final implementation of the AI performed its goals as required and was able to control the robot.


Appendix A

Functional Specification Document


Functional Specification Document

ECEn 490 – Robot Soccer

Designed by:Andrew Wendt

Nick OrtegaDave BogleLesley Wu

Mike McLind


Table of Contents

Introduction...........................................................................................................................23
Background..........................................................................................................................23
Customer Requirements.......................................................................................................24
Metrics..................................................................................................................................25
Linking Project Specs and Needs.........................................................................................27
Summary..............................................................................................................................28


Introduction

Background

Robot soccer is a long-standing BYU engineering competition. The rules mostly follow those of FIRA, the Federation of International Robot-soccer Association. The game is 1v1, and the robots play autonomously against each other. The field is 5'x10', and an overhead camera is responsible for providing position data to the robot. The video data is sent from the camera to a base station, which processes the images and transmits the results to the robot via wifi. Each robot wears a jersey of a specified color and dimension (see Figure 1) to aid the visual processing part of the system. The winner is the team with the highest score at the end of the match.

Figure 1

High Level Design

Our system consists of one or more overhead cameras taking images of the soccer field. Each image is processed by a computer to gather position information for all the robots and the ball. The position information is then transmitted to the robots wirelessly over wifi. Finally, the robots use that information, communicating with each other, to compete against the other team in a game of soccer.

The diagram to the right (Figure 2) shows a flowchart of the system described above.


Figure 2


Customer Requirements

Our customer is rather nebulous, as our main goal is to put on a show for an audience rather than to sell a product. However, we can derive customer needs from the rules of play and the audience's desire for exciting play. The needs derived from these two sources are tabulated below.

# | Priority | Customer Need | Source
1 | 1 | Robots must be of a standard size | Competition Rules
2 | 1 | Robots must not come in contact with other robots | Competition Rules
3 | 1 | Robots must have four LEDs in a triangle on top for image processing | Competition Rules
4 | 1 | Image processing will happen offboard, while all other processing happens onboard | Competition Rules
5 | 3 | Processing must be fast enough for robots to respond in real time | Audience Expectation
6 | 2 | Play should be competitive (we want to win) | Audience Expectation
7 | 4 | Play should be exciting | Audience Expectation
8 | 3 | Low cost | Investor


Metrics

Metric # | Need # | Metric Description | Units | Min | Ideal
1 | 8 | Cost per robot | $USD | 200 | 150
2 | 6 | Supply Voltage | Volts | 14 | 14
3 | 1 | Diameter of robot | in | 7 | 8
4 | 1 | Height of robot | in | 5 | 7
5 | 2,5 | Camera speed | fps | 24 | 60
6 | 4,5 | Wifi Protocol | -- | 802.11 | 802.11ac
7 | 6,7 | Battery Capacity (per robot) | mAh | 1750 | 2000
8 | 6,7 | Max speed | m/s | .5 | 1
9 | 6,7 | Rotational Velocity | rpm | 30 | 60
10 | 6,7 | # of Wheels (per robot) | -- | 3 | 3
11 | 4,5 | Processor | -- | ODROID | ODROID
12 | 6 | Percentage of win | % | 75 | 100
13 | 6,7 | # of Strategies | -- | 5 | 7

Metric 1: The cost of the robot is important because we need to build two robots from the same parts and we are operating on a student budget.
Metric 2: The robot has different components operating at different voltages, so we need to know the supply voltage from the batteries.
Metric 3: The diameter is specified in the rules.
Metric 4: The height is specified in the rules.
Metric 5: The camera speed affects how close to real time we can get in computing the positions of the robots and the ball on the field.
Metric 6: The wifi protocol is specified so that we know how the base computer and the robot will communicate.
Metric 7: Battery capacity tells us how long our robot can operate.
Metric 8: The faster the robot is, the bigger the advantage we have on the field.


Metric 9: We need to turn quickly to capture and redirect the ball.
Metric 10: The algorithms for three-wheel motion are simple yet powerful. Three wheels provide a stable base for the robot to support itself and allow it to move in any direction at any rate.
Metric 11: Our on-board processors have been selected for us. We will use them to reduce personal expenditures, given no other financial backing.
Metric 12: Ideally we will win the final competition, but we will be satisfied if our team is competitive enough to give a good show for our audience.
Metric 13: We want at least two defensive strategies, two offensive strategies, and one strategy to entertain the audience.


Linking Project Specs and Needs

Needs vs. Metrics

Robots must be of a standard size: Diameter of robot, Height of robot
Robots must not come in contact with other robots: Camera speed
Robots must have four LEDs in a triangle on top for image processing: (no linked metric)
Image processing will happen offboard, while all other processing happens onboard: Wifi Protocol, Processor
Processing must be fast enough for robots to respond in real time: Camera speed, Wifi Protocol, Processor
Play should be competitive (we want to win): Supply Voltage, Battery Capacity, Max speed, Rotational Velocity, # of Wheels, Win percentage, # of Strategies
Play should be exciting: Battery Capacity, Max speed, Rotational Velocity, # of Wheels, # of Strategies
Low cost: Cost per robot


Summary

Based on the customers' requirements, we feel that the metrics specified in this document will be sufficient. We have a structured plan to implement the various parts of the design and will therefore be able to meet the deadlines we have set. The product, when built to spec, will meet the customers' needs and achieve a success rate of at least 75%.


Appendix B

Concept Generation and Selection


CONCEPT GENERATION AND SELECTION

ECEn 490 – Robot Soccer

Team 4 MAN-D

Andrew Wendt

Nick Ortega

Dave Bogle

Lesley Wu

Mike McLind


● Introduction

Overview

Robot soccer is a long-standing BYU engineering competition. The rules mostly follow those of FIRA, the Federation of International Robot-soccer Association. Thousands of people from different countries participate in this competition every year. This competition is 1 vs 1, and the robots play autonomously against each other on a 5'x10' field. Our system consists of one or more overhead cameras taking images of the soccer field. Each image is processed by a computer to gather position and orientation information for all the robots and the ball. The position and orientation information is then transmitted to the robots wirelessly over wifi.

Process

Because each team member has his own specialization, we decided to have each team member present on possible options for his category. We discussed together what needs to be done in each category, and what metrics we could use to show how well each option fulfilled our needs. We then ranked each option based on what the team specialist explained about it. Our matrices in the "Concept Selection and Scoring" section show the output of this effort.

● Design Basis

Body of Facts

The main basis for our design decisions is the specified rules of play, the provided materials, and the need for competitiveness. The rules specify robot size, prohibit disruptive play, and define the actual objectives of play. We are provided with an Odroid processor board, Roboclaw motor controllers, and a common camera for obtaining state information; we must base our design on these tools. Finally, within these constraints, we want a competitive robot that can compete with other teams and provide an entertaining experience for those who come to watch the games.

Key Design Problems

In this document, we record decisions made to address four key problems: motion control, vision, system architecture, and the mechanical system. Each of these is essential to overall functionality. Below is a block diagram of how these concepts will be integrated to create the whole product.

● Concept Selection and Scoring


Motion Control

As per project requirements, our motion control will be done through Roboclaw motor controllers connected to the Odroid processor. The Roboclaw is a versatile controller with many options for use. We consider each option for control using the Roboclaw.

Concept Definitions

Analog Control - The Roboclaw may be set up to accept an analog input and pass it to the motors. The Roboclaw scales the input based on maximum output voltages.
Simple Serial - The Roboclaw accepts a single serial input command and interprets it as outputs to each motor.
Packet Serial: Direct - Sends packet commands; the Roboclaw can return information and change control constants in addition to combined motor control. For our first test, we used packet serial input, giving voltage commands to the motors.
Packet Serial: PID - Takes advantage of the built-in Roboclaw PID controller and encoders to interpret a speed command and ensure that the motor spins at the specified rate.

Design Evaluation Metrics and Weighting

Ease of Implementation - We want to be able to debug quickly and continue working on other aspects of our project while having a reliable motion control scheme. However, the amount of time we have for the project allows for larger complexity, so the weighting for this metric is low (5%).
Cost - We have a very limited budget and thus want to spend very little. Fortunately, all that any of the options requires is extra wires, so costs will be low for all. We weight it at 10%, but it will cause little variance in the results.
Accuracy - This is the most important metric, as the competitiveness of our robot will depend largely on how well it can carry out our commands. These must be precise, as points are deducted for accidental collisions and the like. As accuracy is essential, we weight this at 50%.
Speed - Our motors must respond to requests quickly to keep up with the AI and image processing, or they will operate on stale data. This is less important than accuracy, but we still weight it at 35%.

Design Cost Function Matrix


Metrics | Weight | Analog | Simple Serial | Packet Serial: Direct | Packet Serial: PID
Ease | 5% | .8 | .5 | .6 | .3
Cost | 10% | .4 | .5 | .6 | .3
Accuracy | 50% | .4 | .5 | .5 | .9
Speed | 35% | .7 | .5 | .4 | .4
Weighted Total | 100% | 52.5 | 50 | 48 | 63.5

Analysis

For this ranking, we used simple serial as the basis to rate the rest against; as we had already implemented it, it seemed a good starting point, so we gave it .5 in each category.

The analog implementation requires only a single voltage control, so it is rather easy to use; we ranked it .8 for ease. It requires a few more wires than the simple serial implementation, but that cost is low, so we gave it a .4. Because analog voltages can give different outputs depending on the motor, and because they are harder to control, its accuracy is .4. Analog requires little computing, so it has a .7 speed ranking.

Direct packet serial is easier than simple serial because the packets organize the information, so its ease is .6. It requires even less wiring than simple serial, so its cost is .6. It is just as accurate as simple serial (.5), but it requires a little extra computation and processing (.4).

Packet serial using PID takes the most work, so we give it an ease of .3. It also requires the most wiring, so its cost is .3. It is the only option that actually controls the speed of the motors, so it has the highest accuracy rank (.9). It requires a little extra computation time, but this is small, so it gets a .4 speed rank.

Results

As we discussed our options, it was clear that using the Roboclaw's PID control was the only option that would produce reasonable results. Any other option, while easier to implement, would not truly control the motors as we want; we would end up with unreliable guesses at proper motor inputs. The PID will take a bit more time for us to set up, but it will pay off in reliability: what the artificial intelligence wants the robot to do will then be what actually happens.


Vision Control

Concept Definitions

Kalman Filter - An algorithm that estimates unknown variables such as the velocity and position of an object using a series of measurements observed over time that contain noise (random variations) and other inaccuracies.
Using contours - A contour is "an outline, especially one representing or bounding a shape." OpenCV has a function that finds the contours in an image. An algorithm using contours converts an image from the video camera from RGB to HSV and then filters out all colors except the one we are looking for. From that filtered image we find the contours of the shapes, compute the centroid of each shape, and track it as it moves.
Cascades - Cascades are commonly used in applications such as facial recognition. A cascade is generated by feeding similar images to an algorithm that finds patterns in them. Once these patterns are found, OpenCV can quickly and easily find the corresponding objects in the video feed.

Design Evaluation Metrics and Weighting

Ease of programming - We want the algorithm to be easy to implement so that we do not spend too much time debugging, and its parameters can be provided by existing code. Because we had initial deadlines to meet and vision was required for them, ease of programming was fairly important. (20%)
Accuracy - This is the most important metric. For the robot to go to the correct place, our commands must contain accurate position and orientation information, which is provided by vision processing. (40%)
Speed - Because we need real-time output of the video feed, we need an algorithm that is fast and can quickly find and track multiple objects at the same time. Since we operate in real time, speed is crucial. (30%)
Time - Since we have deadlines to meet, the algorithm we implement must not require too much time or be too complex. (10%)

Design Cost Function Matrix

Metrics | Weight | Kalman Filter | Contours | Cascades
Ease | 20% | .3 | .7 | .4
Accuracy | 40% | .8 | .8 | .9
Speed | 30% | .6 | .6 | .7
Time | 10% | .3 | .6 | .2
Weighted Total | 100% | 59 | 70 | 67

Analysis

Initial research was done before writing any code to find out which algorithm would be best for tracking multiple robots and the ball simultaneously. The first metric evaluated was ease. Both Kalman filters and cascades require a base understanding of the theory in order to implement them properly, whereas contours are a much more straightforward approach; hence the respective ratings of .3, .7, and .4.

Accuracy was our most important metric for vision. The Kalman filter uses sophisticated math to predict where an object will be after a given amount of time; properly implemented, we believe this would yield high accuracy, so we assigned it a .8. Cascades use hundreds of sample images of the object you want to track and, given enough sample data, become very powerful at pattern recognition; we awarded this a .9 as likely the most accurate. Contours filter an HSV image to leave only the desired color; this can be fine-tuned, so we gave it a .8.

Speed is roughly the same across the board. Kalman filters and contours are both relatively fast, so we gave them a .6. Cascades, while they take a while to set up, can be very fast once running, and therefore earned a .7.

Finally, time. Kalman filters and cascades are both harder to implement and require a deeper understanding of the theory behind them, so we awarded them a .3 and a .2 respectively. Contours are very straightforward, as mentioned before, and can be implemented quickly with little knowledge of robotic vision theory, hence a .6.

Results

All three options were reasonable choices; OpenCV offers many functions that give a wide range of options for vision processing. We chose finding contours because it was very easy to set up and provided the accuracy and speed we needed for our system. Both cascades and Kalman filters are more complicated and take more time upfront to code. Because we had to develop an effective object-tracking algorithm and then connect it to the robots, we decided that finding contours was accurate and fast enough while also allowing us to meet our deadlines.

System Architecture

Concept Definitions

ROS - Robot Operating System, contrary to its name, is not an operating system but a software package that provides powerful tools for simplifying complex networking and software-integration problems. ROS is designed to allow multiple pieces of software running on many machines to communicate in a powerful and simple way; this sort of situation is a common hurdle in robotics, and it is exactly what ROS targets.
Client/Server - This architecture is most familiar from networking and the internet: a computer known as a server provides services (computation, or access to and transmission of data) to client computers. In our system, the computer that handles vision processing would run a server providing vision data to the client robots; this server would also have to provide services for the robots to communicate data back and forth.
P2P - Peer-to-peer networking is defined by clients communicating directly without a server. It is useful for sending data quickly, but it is more complicated to set up and gets messier as the number of clients increases.

Design Evaluation Metrics and Weighting

Portability - A measure of how easily our software and network architecture could move from the current hardware to new hardware. This lets us weigh how easy it would be to change hardware down the road. It was given a weight of 10% because we do not plan on changing our hardware much, as most of it is standardized between teams.
Reliability - How stable our overall system is, and how well we can depend on it to work consistently. This was given a weight of 20%, as it is rather important.
Effectiveness - How well our system meets our goals. This was given a weight of 40% because it is very important that we meet our goal of competing successfully.
Ease - How difficult the architecture will be to construct and maintain. This was not weighed above 10% because it should not require greater effort than that.
Time - How time-effective the design will be. Time is a limiting factor, so it was given a weight of 20%.

Design Cost Function Matrix

Metrics | Weight | ROS | Client/Server | P2P
Portability | 10% | .8 | .6 | .4
Reliability | 20% | .7 | .6 | .4
Effectiveness | 40% | .8 | .7 | .8
Ease | 10% | .9 | .5 | .2
Time | 20% | .9 | .7 | .4
Weighted Total | 100% | 81 | 65 | 54

Analysis

ROS was given a portability of .8 because it is already designed to work on Linux and macOS, with some support for Windows. The client/server design is a basic model and could be very portable, but because we would have to design it ourselves it gets a lower mark. Peer-to-peer (P2P) communication would depend heavily on our architecture, so it was only given .4. For reliability, ROS is very dependable and was given .7; a client/server system would be nearly as reliable and quite similar, so it was scored .6; P2P reliability would be worse than the other options due to the complexity of communication, so it was scored .4. For effectiveness, all three options would meet our goals: ROS was given .8 because it was designed for our specific communications task; P2P was also given .8 because, even though it would be harder to implement, it would provide very fast communication; client/server was given a lower .7 because it does not offer as many benefits as the other two. ROS is a finished system for communication in an environment such as ours, so it was given a very high ease mark of .9. The client/server and P2P models would need to be programmed by us, giving them lower ease scores, with P2P being much more difficult than client/server and warranting a low .2. For time, ROS was given a high score of .9 because, after the initial time required to learn how to implement it, it is very quick and easy to use. As mentioned, the client/server and P2P models would be more difficult to implement and require considerably more time, hence the low scores.

Results

All three options listed above are viable, but ROS is the clear winner based on the features it offers and its ease of use once set up. After the initial hurdle of integrating ROS into our software, it will greatly reduce the time involved in integrating the different parts of the system. ROS will also give our system greater reliability and functionality.

Mechanical

Concept Definitions

Battery bottom - This design puts the batteries on the bottom layer of the robot. The wheels have to be moved out of a centered location, and a middle plate holds the other components.
One tier - Only a single bottom plate holds everything; all components sit on standoffs or on the walls of the robot.
Battery middle - The battery sits on the middle layer of the robot, which allows the motors to be spaced equally on the bottom.
Coffee can - The wheels and bottom plate are surrounded with metal ducting, and the components are placed on the walls around the ducting. The batteries sit on a second plate attached to the top of the motor mounts.

Design Evaluation Metrics and WeightingEase of assembly- We want the robot to be easy to open and assemble so that it can be maintained without taking too long to be able to get to the parts. This was given a weight of 35%. It is pretty important to be able to get to the robot parts.Center of mass- the robot should have a low center of mass. This is important so that we can have a greater rotational velocity without veering too much off the intended path. The weight decided on for this is 40%. This is the most important because we need to be able to move quickly and reliably.Movement calculations- The mathematical computations required for reliable movement depend on the placement of the wheels. Since most of the calculations are the near the same no matter where the wheels are we gave this a weight of 5%.

Adaptability - This is a metric for how easy it will be to add components to the robot later. If we want to add or change something, we need spare room. We gave this a weight of 20%, because we cannot always foresee what we will want in the future.

Design Cost Function Matrix

Metrics          Weight   Bottom   One    Middle   Can
Ease             35%      .5       .8     .4       .9
Mass             40%      .7       .7     .5       .65
Movement         5%       .6       .6     1        1
Adaptability     20%      .8       .6     .7       .9
Weighted Total   100%     64.5     71     53       80.5
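The weighted totals in the matrix can be reproduced with a short script. This is just an illustrative sketch: the weights and scores are copied from the table above, and the `weighted_total` helper is ours, not part of any project code.

```python
# Sketch: recompute the weighted totals from the design cost function matrix.
weights = {"Ease": 0.35, "Mass": 0.40, "Movement": 0.05, "Adaptability": 0.20}

scores = {
    "Bottom": {"Ease": 0.5, "Mass": 0.7,  "Movement": 0.6, "Adaptability": 0.8},
    "One":    {"Ease": 0.8, "Mass": 0.7,  "Movement": 0.6, "Adaptability": 0.6},
    "Middle": {"Ease": 0.4, "Mass": 0.5,  "Movement": 1.0, "Adaptability": 0.7},
    "Can":    {"Ease": 0.9, "Mass": 0.65, "Movement": 1.0, "Adaptability": 0.9},
}

def weighted_total(design):
    """Weighted sum of metric scores, scaled to a 0-100 total as in the table."""
    return round(100 * sum(weights[m] * scores[design][m] for m in weights), 1)

for design in scores:
    print(design, weighted_total(design))
```

Running this reproduces the bottom row of the matrix, with the coffee can design scoring highest at 80.5.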

Analysis
The battery bottom approach scored a .5 for ease. Everything is separated but not especially easy to reach; the drawback is that the middle layer must be removed to get to the bottom. The mass was given a .7: the batteries and motors weigh the most, and in this design they sit on the bottom, mostly centered. The movement calculations rated a .6 because two of the motors are shifted slightly forward out of perfect alignment, which makes the calculations a little harder. The adaptability is .8, since there is plenty of room to move things around to different areas if we need to.

The one-tier approach scored a .8 on ease because there is no middle layer, so everything is accessible once the top is removed. The main drawback is that some components must be attached to the top, which could make it hard to remove. The mass metric was rated .7 for the same reason as the bottom approach: the batteries and motors sit on the bottom of the robot. It also scored a .6 on movement, since the wheels are in the same alignment as in the battery bottom approach. Adaptability scored a .6: nearly all the space in the robot is accounted for, so it would be hard to add more subsystems later.

The battery middle design scored a .4 on ease because most of the components sit on the bottom and the middle layer must be removed to reach them. A score of .5

was given to the mass metric: the batteries and the wheels could both be centered, but most of the mass (the batteries) would sit higher. The movement was given a perfect score of 1 because the wheels would all be in perfect alignment and no extra calculations would be needed. The adaptability was given a .7: the middle layer gives more room for components, but the batteries take up a lot of space and working around them might be difficult.

The coffee can version scored a .9 on ease because most of the parts are spread out along the edge and can be reached without removing anything, although the wheels and motors are still covered by the batteries. It scored a .65 on mass because the heavy batteries sit just above the motors, close to the bottom. Movement scored a 1 for the same reason as the middle design: the wheels can be centered and in perfect alignment. Adaptability was given a .9, since the ducting leaves plenty of room to add components around the inside edge as we need them.

Results
Our first mechanical design used the battery bottom method with a middle tier. After the robot was built we realized that the metric values were wrong and had to revise them. The coffee can design is the best choice: its low center of mass and easy access to the working parts make it the best option in the long run.

Conclusion
We decided to use an aluminum-bodied coffee can design for our robot. This will provide a low center of mass and easy accessibility when working on parts.

We decided to use an algorithm based around finding contours for our vision processing. We believe this algorithm will give us the speed and flexibility necessary to meet our design requirements.

For our motor control we will use a PID controller to drive our motors accurately. It will outperform the other designs we considered and is simple enough to implement that we can make it work in a short amount of time.
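As a rough illustration of the control law we have in mind, a discrete PID update can be sketched as follows. The class, gains, and example values here are hypothetical placeholders for illustration, not our tuned controller:

```python
class PID:
    """Minimal discrete PID controller sketch (illustrative gains only)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        """Return the control output for the current error and timestep dt."""
        self.integral += error * dt
        # No derivative term on the very first sample, since there is no history.
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: command a wheel velocity toward a setpoint (values are made up).
pid = PID(kp=1.0, ki=0.1, kd=0.05)
command = pid.update(error=2.0, dt=0.02)  # error = setpoint - measured velocity
```

In practice the gains would be tuned on the robot, and the integral term would typically be clamped to avoid windup.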

Finally, we chose ROS as our overall architecture. With ROS we can quickly set up a network in which vision data is communicated to our robots and the robots can communicate with each other.
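The publish/subscribe pattern that makes ROS attractive here can be sketched in plain Python. This is a toy illustration of the topic model only, not the actual ROS API, and the topic name and message fields are made up:

```python
from collections import defaultdict

class TopicBus:
    """Toy stand-in for the ROS-style named-topic publish/subscribe model."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to run on every message published to a topic."""
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message to every subscriber of the topic."""
        for callback in self.subscribers[topic]:
            callback(message)

bus = TopicBus()
received = []
# A robot node subscribes to a (hypothetical) vision topic...
bus.subscribe("/vision/ball_position", received.append)
# ...and the vision node publishes ball coordinates to all subscribers.
bus.publish("/vision/ball_position", {"x": 1.2, "y": -0.4})
```

The decoupling shown here is the point: the vision node does not need to know which robots are listening, which is what makes integrating the subsystems fast.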


Appendix C

Project Schedule
