Saturday, March 13, 2004
Status Board - DARPA Grand Challenge: Too bad all the bots from the Grand Challenge have died.
It looks like no one made it past 10 miles. From what I understand, the Red Team drove through a few fence posts (which, as you can imagine, are hard to detect because they are thin), causing mechanical damage.
I watched the extremely boring and poorly produced satellite feed at CMU. Shame on whoever was responsible for that mishap, which may well lead to less interest in robotics, the exact opposite of the goal.
Actually, the goal is autonomous off-road driving. It is a very, very hard problem that wasn't solved this year, as many people predicted.
The highlight for me was watching the bad bots break in the first 200m. The motorcycle, which couldn't even run autonomously and was driven by remote control as a demo, fell over within two seconds.
What needs to be done to solve the problem? My opinion: add vision as a classification tool. Currently, vision is only used to add more points to the dense 3D map, basically aiding the laser range finders. This is fine; stereo is proven. But a bump in a map could be a bush or a rock, and you need vision to tell the difference.
Segmentation is a problem in computer vision where you try to break an image up into meaningful parts. Drivable road, non-drivable road, unknown, and sky are the four categories that pop out to me. Hopefully we'll see this used next year.
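To make the idea concrete, here is a toy sketch of per-pixel classification into those four categories. The color thresholds are invented for illustration only; a real system would learn them from labeled training data and use far richer features than raw RGB.

```python
import numpy as np

# Labels for the four categories mentioned above.
DRIVABLE, NON_DRIVABLE, UNKNOWN, SKY = 0, 1, 2, 3

def classify_pixels(image):
    """image: H x W x 3 uint8 RGB array -> H x W array of labels.

    Thresholds are hypothetical, purely for illustration.
    """
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    labels = np.full(image.shape[:2], UNKNOWN, dtype=np.uint8)
    # Sky: bright and strongly blue-dominant.
    labels[(b > 180) & (b > r + 30) & (b > g + 30)] = SKY
    # Drivable road: mid-brightness, low-saturation gray (R ~ G ~ B).
    gray = (abs(r - g) < 20) & (abs(g - b) < 20) & (r > 60) & (r < 180)
    labels[gray] = DRIVABLE
    # Non-drivable: green-dominant vegetation.
    labels[(g > r + 20) & (g > b + 20)] = NON_DRIVABLE
    return labels
```

Given the segmentation, the planner could then treat a "bump" from the laser map that lands on NON_DRIVABLE vegetation pixels very differently from one that lands on UNKNOWN pixels.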
For those who are disappointed by the result, just keep in mind: this is a VERY, VERY hard problem. It is basically THE problem to solve: understanding your environment well enough to intelligently interact with it.