Difference: GoogleCSR (1 vs. 16)

Revision 16 2021-04-22 - LabTech

Line: 1 to 1
 
META TOPICPARENT name="FordhamRoboticsAndComputerVisionLaboratory"

GoogleCSR Autonomous Robotics Project Spring 2021

Autonomous Robot Exploration

Line: 225 to 225
 **By 4/21 Each team will transmit a traditional poster, a set of slides and a recorded presentation 6-7 minutes long. Workshop 4/23 -- Poster/Video from each team showing results**
Added:
>
>
 Suggested slide sequence
  • Title slide: title, names of team members, schools, pictures?
  • Objective - what problem are you solving? Make sure to use images and visuals as well as words

Revision 15 2021-04-21 - LabTech

Line: 1 to 1
 
META TOPICPARENT name="FordhamRoboticsAndComputerVisionLaboratory"

GoogleCSR Autonomous Robotics Project Spring 2021

Autonomous Robot Exploration

Line: 225 to 225
 **By 4/21 Each team will transmit a traditional poster, a set of slides and a recorded presentation 6-7 minutes long. Workshop 4/23 -- Poster/Video from each team showing results**
Added:
>
>
Suggested slide sequence
  • Title slide: title, names of team members, schools, pictures?
  • Objective - what problem are you solving? Make sure to use images and visuals as well as words
  • What did you need to learn to address this: ROS, the Gazebo simulator, the ROS gmapping stack, the Turtlebot3 robot
  • Why does wander not do a good job? Show examples
  • What strategy do you propose to address this, and what data structure will you need (e.g., 2D pose histogram or wall histogram, etc.)
  • Describe how you get the data structures and what they look like
  • Present some results of running these and comment on effectiveness and how you measure it

Have a short video sequence of your best strategy run

The video is 6-7 minutes; each team member talks to 2 or 3 of the slides. It's not really a video: just add narration to your slides, e.g., in PowerPoint or whatever, and save the result as an mp4

 WORKSHOP April 23rd!!

Revision 14 2021-04-21 - LabTech

Line: 1 to 1
 
META TOPICPARENT name="FordhamRoboticsAndComputerVisionLaboratory"

GoogleCSR Autonomous Robotics Project Spring 2021

Autonomous Robot Exploration

Line: 222 to 222
  * W8 4/19: *
Changed:
<
<
By 4/21 Each team will transmit a traditional poster, a set of slides and a recorded presentation 6-7 minutes long. Workshop 4/23 -- Poster/Video from each team showing results
>
>
**By 4/21 Each team will transmit a traditional poster, a set of slides and a recorded presentation 6-7 minutes long. Workshop 4/23 -- Poster/Video from each team showing results**
  WORKSHOP April 23rd!!

Revision 13 2021-04-15 - LabTech

Line: 1 to 1
 
META TOPICPARENT name="FordhamRoboticsAndComputerVisionLaboratory"

GoogleCSR Autonomous Robotics Project Spring 2021

Autonomous Robot Exploration

Line: 175 to 175
  Next meeting: 4/7. We need to have the code running by then.
Changed:
<
<
W6 4/5:
>
>
*W6 4/5: *
  Remember that when calculating the transformation from the sensor values (angle, range) for the laser, you need to 1) add the angle of the robot to the angle of the sensor (and make sure you have everything in radians) and 2) add the position of the robot to the x and y values you get from your polar-to-cartesian mapping.
Line: 204 to 204
 Next meeting 4/14
Changed:
<
<
W7 4/12:
>
>
W7 4/12:
 
Changed:
<
<
W8 4/19:
>
>
The rubber hits the road this week! Develop and test your algorithms for improved mapping. Remember! You have to have evidence that it works, and you can only get this by conducting a principled parameter study of your code running. You need a number of gMapping maps generated just by wander, to use as your 'base' case. Then you need to generate the same number for each solution you have. Ideally you would show not just that your algorithm is better, but you would show the behavior of your algorithm as you modify parameters. E.g., if you are directing the robot to regions of empty space to improve mapping, then detail how well it works for different target sizes of empty space (numbers of 'pixels' / value of 'pixels' on your internal histogram data structure).
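
One simple way to quantify 'better', for example, is to count how many cells each saved map actually covers. The sketch below is only an assumed metric, not part of the project code: it assumes the default map_server PGM convention (unknown cells mid-grey around 205, free cells near white, occupied cells near black) and hypothetical filenames.

# Rough coverage metric: fraction of cells gmapping has marked as known
# (free or occupied) in a map saved by map_saver.
from PIL import Image
import numpy as np

def known_fraction(pgm_file):
    img = np.array(Image.open(pgm_file))   # 2D array of grey levels
    known = (img > 250) | (img < 50)       # free or occupied cells
    return known.sum() / float(img.size)

# Compare a 'base' wander map against one of your strategy's maps
# (filenames here are hypothetical).
print(known_fraction("wander_base_1.pgm"))
print(known_fraction("strategy_run_1.pgm"))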

Since we are almost done, you need to start writing up your results now as well: Make some slides on how your method works and what your objective was. Collect your gMapping map outputs and put these on slides to present.

You can send me copies of these if you want my feedback, but do that by Sunday at the latest!

Discuss and agree on who will be saying what in your short video presentation.

Next meeting is 4/21 and that is the LAST meeting - so we'll review all your materials then, which you will then submit.

* W8 4/19: *

  By 4/21 Each team will transmit a traditional poster, a set of slides and a recorded presentation 6-7 minutes long. Workshop 4/23 -- Poster/Video from each team showing results
Added:
>
>
WORKSHOP April 23rd!!
 

Permissions

Revision 12 2021-04-08 - LabTech

Line: 1 to 1
 
META TOPICPARENT name="FordhamRoboticsAndComputerVisionLaboratory"

GoogleCSR Autonomous Robotics Project Spring 2021

Autonomous Robot Exploration

Line: 177 to 177
  W6 4/5:
Added:
>
>
Remember that when calculating the transformation from the sensor values (angle, range) for the laser, you need to 1) add the angle of the robot to the angle of the sensor (and make sure you have everything in radians) and 2) add the position of the robot to the x and y values you get from your polar-to-cartesian mapping.
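
A minimal sketch of that transformation (gLoc = [x, y, theta] from the pose callback and the ranges index i are assumed names here, not necessarily the actual wanderT3.py variables):

import math

def laser_point_world(gLoc, i, r):
    # gLoc = [x, y, theta]; i = index into msg.ranges; r = msg.ranges[i]
    alpha = gLoc[2] + math.radians(i)    # robot angle + sensor angle, in radians
    wx = gLoc[0] + r * math.cos(alpha)   # add the robot position to the
    wy = gLoc[1] + r * math.sin(alpha)   # polar-to-cartesian result
    return wx, wy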

You want the map to be big enough to represent the entire house area at the resolution you want. A 1 meter resolution is probably too big - maybe between 10 and 50 centimeters would be a good range to experiment in.

You can use your 2D pose histogram data very easily with wander -- make the velocity in the else clause not a random number, but directly proportional to the value in the histogram at the robot's location. Thus: faster where it has been before and slower otherwise. Make sure to 'cap' the value, however, or you may get overly fast speeds.
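
As a sketch only (poseHist, toIndex, gLoc and the speed constants are assumptions, not the actual wanderT3.py names; the 0.22 m/s cap is roughly the Turtlebot3's maximum linear speed):

def explored_speed(gLoc, poseHist, toIndex, v_min=0.05, v_max=0.22, k=0.05):
    ix, iy = toIndex(gLoc[0]), toIndex(gLoc[1])   # robot position -> histogram indices
    v = k * poseHist[ix, iy]                      # faster where the robot has been before
    return min(max(v, v_min), v_max)              # cap the value to avoid overly fast speeds

Inside wander's else clause you would then set something like move_cmd.linear.x = explored_speed(...) instead of the random value.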

You can use the wall histogram array, which is pretty close to a 2D floor map, in several ways: E.g., 1) find 'breaks' in the wall area enclosing the current position of the robot and move towards the center of the break, or 2) find the closest wall, move towards it, and follow it around.

These are just some ideas - you can also invent new ones.

However, you have to be objective about your evaluation of a strategy. Let's say that our strategy is to run the program for 5 minutes and then ask the map server to store the map. You should do this 3 or 4 times now for the basic wander program, so you have some 'base' examples to compare your ideas to. For consistency, always move the robot to the same start location and the same angle before you run a test.
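
To store each map you can run the map server's saver node in another terminal, e.g., "rosrun map_server map_saver -f wander_base_1", which writes a .pgm image and a .yaml metadata file (the filename here is just an example).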

Implement the simplest of the ideas for improving coverage, e.g., the pose histogram one. Debug thoroughly of course to make sure it is working before you test it. Then run it 3 or 4 times and ask map_server to store the maps. Visually compare each of the wander maps with each of the new maps: are they all more complete, are some better, are they different in some other way?

Perhaps break up the task, so you can test a few simple ideas like this and report back next week. Let's discuss how the ideas worked, and how you did your map comparisons.

For your presentation you will need to show some videos and some images. Even though you are still working on your main result, you should spend some time (perhaps delegate team members) to make 1) 10-20 second recordings of wander in the Gazebo house model, and other models if you want, 2) same-length screen recordings of RViz showing the laser output, and 3) same-length recordings of gmapping while the robot is wandering.

I suggest starting now, since this is a standalone task that someone could do to help the team and become proficient at using the screen recording function to generate short, visually attractive video clips. It's tougher to do, but you can also use the Gazebo interface to perform panning or zooming during the video to get a more 'live TV' effect.

Next meeting 4/14

 

W7 4/12:

W8 4/19:

Changed:
<
<
Workshop 4/23 -- Poster/Video from each team showing rsults
>
>
By 4/21 Each team will transmit a traditional poster, a set of slides and a recorded presentation 6-7 minutes long. Workshop 4/23 -- Poster/Video from each team showing results
 

Permissions

Persons/group who can view/change the page:

Revision 11 2021-04-07 - LabTech

Line: 1 to 1
 
META TOPICPARENT name="FordhamRoboticsAndComputerVisionLaboratory"

GoogleCSR Autonomous Robotics Project Spring 2021

Autonomous Robot Exploration

Revision 10 2021-04-01 - LabTech

Line: 1 to 1
 
META TOPICPARENT name="FordhamRoboticsAndComputerVisionLaboratory"

GoogleCSR Autonomous Robotics Project Spring 2021

Autonomous Robot Exploration

Line: 88 to 88
  Using "roslaunch turtlebot3_gazebo turtlebot3_house.launch" start the simulated house world. In a separate window, start the wanderT3 python program with "python wanderT3.py". Observe the behavior of the robot - have you any ideas for improvement so that it explores the house better and quicker?
Deleted:
<
<
  The demonstration of SLAM did not go as expected during our meeting. I had forgotten to type "source ~/catkin_ws/devel/setup.bash" before I started it, and that's why I got some errors when it started!

You need to have started the house simulation first, and then start a new terminal window.

Added:
>
>
image
 When you type "roslaunch turtlebot3_slam turtlebot3_slam.launch slam_method:=gmapping" what you should see is this
Changed:
<
<
>
>
image
  This shows the 'progress' of the robot in using the 360 degrees of laser range finding data to make a map. If you start another new terminal shell and let wanderT3.py run, you will see the map grow. You can also use the teleoperation command from last week to 'drive' the robot around, and that is a much quicker way to make the map!
Line: 149 to 150
 
Changed:
<
<
W5 3/29 Begin Phase 2
>
>
W5 3/29 Begin Phase 2

  Design & testing of exploration algorithms
Added:
>
>
We discussed adding two items of state information to the Wander robot program. The first was to use a numpy array as a 'map' of the room. The robot pose information - as generated in a pose callback (see goto.py) - can be used to generate the x,y location of the robot on a grid. This needs to be transformed to array indices.

You can use a linear transformation: index = scale * coordinate + offset. Here, the coordinate is either the x or the y coordinate of the position. The scale magnifies the coordinates from meters to centimeters or millimeters - whatever resolution you want. The offset displaces the coordinates to the middle of the array, so that negative values will appear on the left or top and positive values on the right or bottom, and 0,0 will be right in the middle. Remember you must make the index an integer as the last step, in order to use it to access the numpy array. The attached code (plotting_maps.txt) shows an example of setting up the numpy array.

The easiest way to use this map is to just add 1 to the element at the index for every location that the robot occupies. That way you get a histogram in 2D of the robot's location - maybe it can avoid places it has visited too frequently, etc.
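
A minimal sketch of this setup (the 10 m x 10 m extent, the 0.25 m resolution and the callback details are assumptions; see the attached plotting_maps.txt for the lab's own array setup):

import numpy as np

RES    = 0.25                  # meters per cell (resolution)
SIZE   = int(10.0 / RES)       # cells per side for an assumed 10 m x 10 m area
SCALE  = 1.0 / RES             # meters -> cells
OFFSET = SIZE // 2             # puts (0,0) in the middle of the array

poseHist = np.zeros((SIZE, SIZE))

def toIndex(coord):
    return int(SCALE * coord + OFFSET)      # index = scale * coordinate + offset

def poseCallback(msg):
    x = msg.pose.pose.position.x            # assumes an Odometry message, as in goto.py
    y = msg.pose.pose.position.y
    poseHist[toIndex(x), toIndex(y)] += 1   # add 1 where the robot currently is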

The second idea we discussed was to use another numpy array to store the position of the walls, etc., as detected by the laser range sensor. The laser range sensor produces a list of 360 values, the distances to the walls around the robot starting at angle 0 facing front, then angle 1 degree to the left, 2 degrees, etc. If the robot is at angle theta (that's gLoc[2] in the pose callback that gives the angle of the robot), then the angle of the laser range ray is theta + the index of the laser range list (msg.ranges in the laser callback) converted to radians. gLoc[2] is already in radians.

If we look at one specific laser range ray, say the one with index i, then the angle is alpha = theta + math.radians(i). The coordinates of the end point of the ray (where the wall is) can be found by adding the coordinates of the position of the robot (x, y) to (r cos alpha, r sin alpha), where r = msg.ranges[i], the range value at index i.

Once you get the coordinates of the end point of the ray, you can use the same linear transformation as used for the robot position to map it into the numpy array you make for walls. You can also use the histogram approach - adding one to the location every time you get a laser reflection from that location. Thus, places with high values are more likely walls.
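
Continuing the sketch above (wallHist, SIZE and toIndex come from that sketch; gLoc = [x, y, theta] is assumed to be kept up to date by the pose callback), the laser callback could fill the wall 'histogram' like this:

import math

wallHist = np.zeros((SIZE, SIZE))

def laserCallback(msg):
    for i, r in enumerate(msg.ranges):        # 360 range readings
        if math.isinf(r) or math.isnan(r):
            continue                          # skip rays with no return
        alpha = gLoc[2] + math.radians(i)     # robot angle + ray angle
        wx = gLoc[0] + r * math.cos(alpha)    # end point of the ray
        wy = gLoc[1] + r * math.sin(alpha)    # in world coordinates
        ix, iy = toIndex(wx), toIndex(wy)
        if 0 <= ix < SIZE and 0 <= iy < SIZE:
            wallHist[ix, iy] += 1             # likely a wall cell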

You can use this information for example to determine if you are finished mapping a room, or if you can see doorways in the room and want to go into them. Note that the map generated by this method is very much inferior to the one generated by gMapping!

For next week: Modify Wander so that it includes the pose callback and pose subscriber; make your two numpy arrays by adding the code to build the position histogram at the end of the pose callback and the wall 'histogram' at the end of the laser callback. Use the plotting code to show the two maps. Put the plotting code in the shutdown callback, so that it will display the maps when you stop the program.
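
A sketch of the shutdown plotting (the attached plotting_maps.txt is the lab's own version; this one just assumes matplotlib and the two arrays from the sketches above):

import matplotlib.pyplot as plt

def showMaps():
    fig, (ax1, ax2) = plt.subplots(1, 2)
    ax1.imshow(poseHist)                  # where the robot has been
    ax1.set_title("pose histogram")
    ax2.imshow(wallHist)                  # where laser returns (walls) were seen
    ax2.set_title("wall histogram")
    plt.show()

# registered in the node's setup, e.g. rospy.on_shutdown(showMaps)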

Next meeting: 4/7. We need to have the code running by then.

 W6 4/5:

W7 4/12:

Revision 9 2021-03-31 - LabTech

Line: 1 to 1
 
META TOPICPARENT name="FordhamRoboticsAndComputerVisionLaboratory"

GoogleCSR Autonomous Robotics Project Spring 2021

Autonomous Robot Exploration

Line: 173 to 173
 
Added:
>
>
 
META FILEATTACHMENT attachment="moveSquareT3.py.txt" attr="" comment="Simple ROS Node" date="1615411134" name="moveSquareT3.py.txt" path="moveSquareT3.py.txt" size="1753" user="DamianLyons" version="1"
META FILEATTACHMENT attachment="WanderT3.py.txt" attr="" comment="WanderT3.py" date="1616030867" name="WanderT3.py.txt" path="WanderT3.py.txt" size="3264" user="DamianLyons" version="1"
META FILEATTACHMENT attachment="goto.py.txt" attr="" comment="Goto node" date="1616680185" name="goto.py.txt" path="goto.py.txt" size="3028" user="DamianLyons" version="1"
Added:
>
>
META FILEATTACHMENT attachment="plotting_maps.txt" attr="" comment="Instructions for platting maps" date="1617229558" name="plotting_maps.txt" path="plotting_maps.txt" size="779" user="LabTech" version="1"

Revision 8 2021-03-25 - DamianLyons

Line: 1 to 1
 
META TOPICPARENT name="FordhamRoboticsAndComputerVisionLaboratory"

GoogleCSR Autonomous Robotics Project Spring 2021

Autonomous Robot Exploration

Line: 43 to 43
  feizza01ATgmail.com, dengconnie0723ATgmail.com, silviadb48ATgmail.com
Added:
>
>

 

Schedule

W1 3/1: Begin Phase 1 - Ubuntu in place, ROS installed, start tutorials

Line: 84 to 86
  Review this program and discuss it in your group meeting. If your group has questions, please email me, but copy both teams so everybody can benefit from the answer.
Changed:
<
<
Using "roslaunch turtlebot3_gazebo turtlebot3_house.launch" start the simulated house world. In a separate window, start the wanderT3 python program with "python3 wantderT3.py". Observe the behavior of the robot - have you any ideas for improvement so that it explors the house better and quicker?
>
>
Using "roslaunch turtlebot3_gazebo turtlebot3_house.launch" start the simulated house world. In a separate window, start the wanderT3 python program with "python wanderT3.py". Observe the behavior of the robot - have you any ideas for improvement so that it explores the house better and quicker?