MSE110 – Report IV by Siddhant Modi
Final Project – Maze
Introduction
The aim of this project was straightforward – to build a robot that could solve a maze made up
of blue lines on a 5x7 grid and then find the most efficient way back to the starting point. Many
factors had to be considered while working on this project, the key ones being the mechanical
design of the robot, i.e. choosing between a fixed light sensor, a light sensor attached to a
crank slider mechanism or a light sensor moving in a circular arc, and the software corresponding to
each type of design. Designing a dynamic system that would keep track of the robot's position and
orientation was another challenge of this project.
Overview
The Hardware
There weren’t many constraints placed on the mechanical aspect of this project. There were simple
requirements – the robot must keep track of the blue line efficiently as well as move along the maze
and make turns with precision. Another design aspect that we kept in mind was a very small
turning radius, so that the robot would turn on the spot and hence eliminate the chance of its
position changing while it turned. The most important part of the mechanical structure was the light
sensor and the mechanism responsible for its movement. Table 1 briefly explains the options we
had to choose from:
Mechanism – Description
Fixed Light Sensor – Just like the name says, the light sensor is fixed in one position and is not moved through the course of the maze.
Crank Slider Mechanism – A crank slider mechanism converts the rotational motion of the servos to linear motion; the light sensor oscillates in a horizontal line.
Rotational Motion – The light sensor is directly connected to the motor via beams and the sensor then oscillates in a circular arc.
Table 1 – Summary of the three different mechanisms available
Table 2 summarises the considerations that were made before making a decision. We abandoned the
idea of a fixed light sensor as it would have been very inefficient and time consuming to have the
robot check the sides at every vertex. We chose the crank slider over rotational motion as
there would be less error associated with it and it worked perfectly with the software ideas
we had in mind.
The precision aspect was taken care of by using gears to step down the default gear ratio of the
servos and thus reduce the error associated with the motor encoder values.
Mechanism Type

Fixed Light Sensor
The bad – Not very efficient: the robot needs to turn right and left at every node to check for the availability of turns.
The good – No need for a third motor and no need to keep track of encoder values.

Crank Slider Mechanism
The bad – The position of the light sensor needs to be tracked; it is impossible to use the collected data without knowing where the light sensor got the information.
The good – Efficient: the robot scans the availability of turns at each junction without having to turn left or right.

Rotational Motion
The bad – Similar problem to the crank slider: the position needs to be tracked. Since the encoder needs to be reset often, error piles up, and the circular motion means the robot might miss corners and hence miss a turn.
The good – Again, the robot does not have to turn at each node to check the options it has on the sides.

Table 2 – Pros and Cons associated with each mechanism
The Software
The software was the larger chunk of work for this project. The following list outlines the objectives that the software aimed to accomplish:
1. Keep track of the position and orientation of the robot.
2. Keep track of the position of the light sensor.
3. Store the data collected by the light sensor and analyse it.
4. Stay on the blue line using the information collected.
5. Navigate through the maze using the Right Wall Following algorithm.
6. Store the details relating to each coordinate the robot visits, such as the availability of turns, in a structure.
7. Get rid of dead-end paths.
8. Come back to the start via the shortest possible path.
Each of these objectives was assigned to an individual function. The functions were all interrelated and often one function called another. Switch statements were used extensively, as many decisions made by the robot depended on its orientation, i.e. whether it faced North, East, South or West. For example, if the robot was facing North it would check the East direction for a right turn, but if the robot was facing East it would have to check the South direction for the availability of a right turn. Switch statements help work through such situations efficiently. Since RobotC does not support dynamic variables, global variables had to be used in some places. The encoder values for the motors were used frequently to calculate how much the robot had moved and where the crank slider was.
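As a rough illustration of this orientation bookkeeping (written here in plain C rather than RobotC, with hypothetical names – the report does not give the actual identifiers), a switch can map the current facing to the compass direction that must be checked for a right turn:

```c
#include <assert.h>

/* Hypothetical sketch: 0-N, 1-E, 2-S, 3-W, matching the report's
   orientation convention. */
enum { NORTH = 0, EAST = 1, SOUTH = 2, WEST = 3 };

/* Returns the compass direction the robot must check for a right
   turn, given the direction it is currently facing. */
int rightOf(int orientation)
{
    switch (orientation) {
    case NORTH: return EAST;
    case EAST:  return SOUTH;
    case SOUTH: return WEST;
    case WEST:  return NORTH;
    }
    return -1; /* invalid orientation */
}
```

The same mapping can be written arithmetically as `(orientation + 1) % 4`, but a switch keeps each case explicit, which matches the style of decision-making the report describes.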
Image 2 – Top view of the crank slider
HARDWARE
The wheels and gear ratios
Image 1 – Stepping down of the gears and the centre of rotation
A gear ratio of 1:1.6 was used, which stepped the output down to 62.5% of the motor rotation
(1/1.6 = 0.625). This helped reduce the effect of errors in the encoder values. The red ellipses
in Image 1 show the arrangement of the gears. The wheel base was also minimized, and the
horizontal blue lines in Image 1 show the width of this wheel base.
The white circle shows the centre of rotation of the robot. In comparison with the size of the
entire robot, it can be seen that the robot would indeed turn on the spot with a very small
turning radius. In practice, this design backfired on us a little: due to the inaccuracy of the
encoder values, if the robot turned more or less than the specified amount, its deviation from the
required position would be relatively high. By the time we realised this was a flaw in our design,
it was too late to change it. A wider wheel base would have reduced this error.
The Crank Slider Mechanism
This structure was initially developed as a separate part and then tweaked in order to fit the robot. The robot was also modified at the same time in order to accommodate this structure. Image 2 shows the top view of the crank slider part of the robot.
Image 3 – Side view of the crank slider
Image 4 – Bottom view of the robot
The red circle shows the light sensor and the orange one shows the motor. The blue line runs along
the LT Steering gear – this part connects the servo and the light sensor and helps convert rotational
motion to linear motion. The green line shows the range through which the light sensor moves. The
beams that surround the central mechanism give strength to the structure and help support the
light sensor while not causing any obstruction to its movement at the same time. Image 3 shows a
side view of this structure.
The biggest advantage of this design was its range. Due to the large range, the light sensor could easily cover both sides of the blue line and thus give accurate information relating to the availability of turns. The flaw that came with this design was that the constant motion of the light sensor caused slight jerks to the entire robot, causing it to drift off the path a little.
The structure as a whole
The main chassis was designed around the NXT Brick and two motors on its sides. From there, we were able to build a mount for the crank slider structure. We did all we could to make the robot structurally strong, and using a network of different beams allowed us to do so. Image 4 shows the bottom view of the robot. Note that the beams running between the wheels ensure that the wheels cannot have any sort of vertical motion. Angled beams were used in abundance to help strengthen the structure.
Image 5 – Angled views of the robot
Image 6 – Top view of the robot
Image 7 – Side view of the robot
The robot could not always move along a perfectly straight line over a given cell block but,
besides that, there were no noticeable faults.
Overall, the structure served its purpose. It could move along the maze well and get light
sensor data efficiently from both sides of the line. Good software to accompany this design would
ensure that it solved the maze. The following images show the robot from several different angles.
Fig 1. Flowchart for task main function
SOFTWARE
Global variables
Table 3 displays all the global variables that were used in the software.
Variable Name – Data Type – Description
orientation – int – 0-N, 1-E, 2-S, 3-W; keeps track of the direction the robot is facing
xPos – int – current x-coordinate of the robot w.r.t. the origin
yPos – int – current y-coordinate of the robot w.r.t. the origin
targetX – int – target x-coordinate of the robot w.r.t. the origin
targetY – int – target y-coordinate of the robot w.r.t. the origin
originX – int – x-coordinate of the starting point
originY – int – y-coordinate of the starting point
leftPower – int – speed for the left motor
rightPower – int – speed for the right motor
adjustPower – int – speed added to the motor speeds for adjustment
crankPower – int – speed for the crank-slider motor
maze – 6x8 array of mazeVertex structures – stores information about each vertex
Table 3 – List of the global variables used by the program
The mazeVertex structure data type
A new variable type was defined in order to assist with keeping track of the details relating to each vertex that was a part of the maze. The structure consisted of 6 member variables – four of them stored information relating to the availabilities of turns in each of the 4 directions. The Boolean variable visited helped the robot remember whether it had been to a particular vertex previously and the variable goodDirection stored the direction facing which the robot first entered that vertex. These last two variables helped in finding the most efficient way back to the start.
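A minimal sketch of what the mazeVertex structure might look like, written in plain C; the member names are guesses based on the description above, not the authors' exact identifiers:

```c
#include <stdbool.h>
#include <assert.h>

/* Sketch of the mazeVertex structure described in the report. The
   member names are hypothetical; the report only describes their roles. */
typedef struct {
    bool north, east, south, west; /* is a turn available in each direction? */
    bool visited;                  /* has the robot entered this vertex before? */
    int  goodDirection;            /* facing on first entry: 0-N, 1-E, 2-S, 3-W */
} mazeVertex;
```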
The Functions
task main
This function is responsible for the execution of the entire program. It is a simple function that calls certain functions, waits for the robot to reach the end and then calls another function to make sure that the robot goes back to the
start. Figure 1 (previous page) shows a flowchart outlining this function.
This function uses the xPos, yPos, targetX and targetY global variables to check whether the robot has reached the end of the maze. The functions it calls make use of other global variables. The task main function does not take any arguments and does not return any value.
readSensor
The purpose of this function is to take readings from the light sensor at 3 different points – the middle, the left and the right of the blue line – and then store that information in three separate arrays. Following the analysis, it calls another function which returns an integer. readSensor then returns that int.
One key thing to note about this function is that it depends on the light sensor being in the same position every time the function is called. Hence resetCrank is called at the beginning.
A light sensor value greater than 35 indicates a white surface and one less than 33 indicates the blue tape. Note that the three ranges in the conditional statements specify the three positions of the crank – the middle, the left and the right.
If more than two thirds of an array suggests the presence of the tape, the tape is assumed to be present. This function does not take any arguments. Fig. 2 shows the flowchart for readSensor.
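The thresholding and two-thirds vote could be sketched as follows in plain C, with hypothetical names; treating the 33–35 band as "not tape" here is an assumption, as the report leaves that range inconclusive:

```c
#include <stdbool.h>
#include <assert.h>

/* Thresholds from the report: readings above 35 mean white, below 33
   mean blue tape. Values in the 33-35 band are counted as "not tape"
   here, which is an assumption. */
bool isTape(int lightValue)
{
    return lightValue < 33;
}

/* Tape is assumed present when more than two thirds of the samples in
   an array classify as tape. Integer arithmetic avoids floating point:
   hits/n > 2/3 is equivalent to 3*hits > 2*n. */
bool tapePresent(const int samples[], int n)
{
    int hits = 0;
    for (int i = 0; i < n; i++)
        if (isTape(samples[i]))
            hits++;
    return 3 * hits > 2 * n;
}
```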
Fig 2. Flowchart for the readSensor function
resetCrank
This function was designed in order to help the readSensor function. resetCrank merely takes the light sensor back to the starting position. This function takes an argument that is the current position of the light sensor based on which it calculates the required movement. Fig. 3 shows the flowchart for the resetCrank function. The only factor affected on a global scale is the change in speeds of the crank.
combination
This function serves the purpose of identifying the combination of colours the light sensor sees at the left, middle and right positions. For example, White Blue White or Blue Blue White. The function takes three boolean values as arguments and returns an integer based on the combination. The integers each correspond to a particular combination. The flowchart for this function is shown in Fig 4. This function does not interact with global variables.
Fig 3. Flowchart for the resetCrank function
Fig 4. Flowchart for the combination function
storeSideData
The function storeSideData does exactly what its name suggests. Based on the orientation, the sides of the robot can be either East/West or North/South. This function checks that and then stores the information in the maze array, in the structure corresponding to the current xPos and yPos. storeSideData takes an integer (a combination) as its argument and it makes changes to the global variable maze. Fig 5. shows the flowchart for this function.
storeForwardData
This function is very similar to the storeSideData function. What is different is the fact that it stores data for the forward/backward directions. Again, the function takes an integer (a combination) as the argument and it causes changes to elements of the global variable maze. Fig 6. (next page) shows the flowchart for the storeForwardData function.
Fig 5. Flowchart for the storeSideData function
Fig 6. Flowchart for the storeForwardData function
Fig 7. Flowchart for the visitedAndGoodDirection function
visitedAndGoodDirection
This function updates the visited status for every vertex the robot enters and saves the direction which the robot was facing the first time it entered the vertex. visitedAndGoodDirection does not take any arguments and it makes changes to elements of the global variable maze. Figure 7 shows the flowchart for this function.
deleteDeadEnds
If the robot enters a vertex that it has been to before, it means that the robot encountered a dead end. So, the deleteDeadEnds function changes the direction from which the robot entered the cell into a wall. This function does not take any arguments and alters elements of the global variable maze. The flowchart for this function is shown in Fig 8. (next page).
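Assuming the turn availabilities for a vertex are stored as one boolean per direction (true = blocked), the pruning step might look like this sketch; both the representation and the names are hypothetical:

```c
#include <stdbool.h>
#include <assert.h>

/* Sketch of the dead-end pruning described above. walls[d] is true
   when direction d (0-N, 1-E, 2-S, 3-W) is blocked. When the robot
   re-enters a visited vertex while facing `orientation`, it came in
   through the opposite side, so that opening is sealed off. */
void deleteDeadEnd(bool walls[4], int orientation)
{
    int cameFrom = (orientation + 2) % 4;
    walls[cameFrom] = true;
}
```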
directionPriorities
This function is essentially based on the right wall following algorithm. It prioritizes turning in the order of right, straight, left and turning around. This function gives commands to the robot to move around the maze. Hence, the global variables xPos, yPos and orientation are indirectly altered by directionPriorities. The function does not take any arguments. The flowchart for this function is shown in Fig 9. (next page).
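The right-straight-left-back priority can be sketched in plain C as follows; the walls[] representation (true = blocked) and the function name are assumptions, not the report's actual code:

```c
#include <stdbool.h>
#include <assert.h>

/* Sketch of the right-wall-following priority described above: try
   right, then straight, then left, then turn around. walls[d] is true
   when direction d (0-N, 1-E, 2-S, 3-W) is blocked. Returns the
   absolute direction the robot should take next. */
int nextDirection(int orientation, const bool walls[4])
{
    int right    = (orientation + 1) % 4;
    int straight = orientation;
    int left     = (orientation + 3) % 4;

    if (!walls[right])    return right;
    if (!walls[straight]) return straight;
    if (!walls[left])     return left;
    return (orientation + 2) % 4; /* dead end: turn around */
}
```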
Fig 8. Flowchart for the deleteDeadEnds function
Fig 9. Flowchart for the directionPriorities function
Fig 10. Flowchart for the goBackToStart function
goBackToStart
goBackToStart takes the robot back to the start point of the maze. It does this by making use of the goodDirection stored for each vertex the robot visited. When run in a loop, this function traces back to the beginning of the maze. The global variables targetX and targetY are changed to originX and originY and the orientation, xPos and yPos are updated as the robot moves along the maze. Fig 10. shows the flowchart for this function.
adjustment
This function is used within the goFwd function to try and keep the robot on the blue line when it moves straight. On a global scale, this function affects the speeds of the left and right motors and thus also affects the corresponding encoders. Fig 11 (next page) shows the flowchart for this function. This function accepts an integer as its argument – the integer represents the current encoder value of the crank.
Fig 12. Flowchart for the goFwd function
Fig 11. Flowchart for the adjustment function
goFwd
This version of goFwd is used while going from the start of the maze to the end. The function makes the robot move one cell block, i.e. from one vertex to another. The robot stops twice on the way: once to get a reading for the sideways options at the next vertex, and once more to get the forward option before coming to a final stop at the vertex. Note that the numbers 235, 110 and 365 add up to 710 – the encoder value measured for each cell block. Based on the orientation, it also updates the xPos and yPos of the robot. Other factors affected on a global scale are the encoder values for the crank and right motors. goFwd does not take any arguments. The flowchart for this function is shown in Fig 12.
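The orientation-dependent position update could look like this sketch in plain C (hypothetical names; the axis convention – North increments yPos, East increments xPos – is an assumption the report does not state):

```c
#include <assert.h>

/* Sketch of the position update performed after each one-cell move.
   Orientation convention from the report: 0-N, 1-E, 2-S, 3-W. */
void updatePosition(int orientation, int *xPos, int *yPos)
{
    switch (orientation) {
    case 0: (*yPos)++; break; /* North */
    case 1: (*xPos)++; break; /* East  */
    case 2: (*yPos)--; break; /* South */
    case 3: (*xPos)--; break; /* West  */
    }
}
```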
Fig 13. Flowchart for the goFwd2 function
Fig 14. Flowchart for the turnLeft function
goFwd2
This version of the goFwd function is used while returning to the starting position. Unlike the first one, this function does not make any additional stops besides at the vertex itself. It affects the same variables as goFwd on a global scale – xPos and yPos along with motor encoders. A flowchart is shown in Fig 13. for this function.
turnLeft
The function does what its name says. At the same time, it updates the orientation. It does not affect any other global variables. Figure 14 shows the flowchart for this function.
Fig 15. Flowchart for the turnRight function
turnRight
This function works in much the same way as the turnLeft function. The only difference is in the way the orientation is updated. Figure 15 shows the flowchart for this function.
Observations, explanation for failure and conclusion
The robot was unable to navigate through the maze on the day of the competition. All the robot did was make a couple of turns, go forward, make a wrong turn and then give a stack-overflow error. We believe this was due to the readSensor and resetCrank functions not working as we had expected them to. These functions seemed to be malfunctioning during testing as well, but we were unable to diagnose the exact cause of the semantic errors. The problem comes from the fact that readSensor depends on the light sensor being at the fixed starting position every time the function is called; if the sensor is at another position, the stored light readings will be incorrect, causing the robot to not "see" the lines correctly and hence causing problems in navigation. The resetCrank function was designed to assist with this, but there seemed to be a logical error in it.
There was also a semantic error that we could not locate in the adjustment function, as the robot would not realign itself if it moved off the line. Hence, this had to be done manually during the competition.
Keeping track of the orientation and the x and y positions was essential as many other functions depended on those values.
Logically, all the functions other than these three would have functioned correctly, and the robot would have been able to make it to the end and back had it been able to navigate correctly, as it had the mechanical and computing abilities to do so.
(LEGO Digital Designer was used to obtain the images of the robot; http://www.LucidChart.com was used for the flowcharts.)