Navlab

Navlab is a series of autonomous and semi-autonomous vehicles developed by teams from The Robotics Institute at the School of Computer Science, Carnegie Mellon University. Later models were produced under a new department created specifically for the research called "The Carnegie Mellon University Navigation Laboratory".[1] Navlab 5 notably steered itself almost all the way from Pittsburgh to San Diego.

History


Research on computer controlled vehicles began at Carnegie Mellon in 1984[1] as part of the DARPA Strategic Computing Initiative[2] and production of the first vehicle, Navlab 1, began in 1986.[3]

Applications


The vehicles in the Navlab series have been designed for varying purposes, "... off-road scouting; automated highways; run-off-road collision prevention; and driver assistance for maneuvering in crowded city environments. Our current work involves pedestrian detection, surround sensing, and short range sensing for vehicle control."[4]

Several types of vehicles have been developed, including "... robot cars, vans, SUVs, and buses."[1]

Vehicles


The institute has made vehicles with the designations Navlab 1 through 11.[4] The vehicles were mainly semi-autonomous, though some were fully autonomous and required no human input.[4]

Navlab 1 was built in 1986 using a Chevrolet panel van.[3] The van had 5 racks of computer hardware, including 3 Sun workstations, video hardware, a GPS receiver, and a Warp supercomputer.[3] The Warp delivered 100 MFLOPS, was the size of a refrigerator, and was powered by a portable 5 kW generator.[5] The vehicle suffered from software limitations and was not fully functional until the late 1980s, when it achieved its top speed of 20 mph (32 km/h).[3]

Navlab 2 was built in 1990 using a US Army HMMWV.[3] Computer power was uprated for this new vehicle with three Sparc 10 computers, "for high level data processing", and two 68000-based computers "used for low level control".[3] The Hummer was capable of driving both on- and off-road. When driving over rough terrain its speed was limited to a maximum of 6 mph (9.7 km/h); on-road, Navlab 2 could reach speeds as high as 70 mph (110 km/h).[3]

Navlab 1 and 2 were semi-autonomous and used "... steering wheel and drive shaft encoders and an expensive inertial navigation system for position estimation."[3]

Navlab 5 used a 1990 Pontiac Trans Sport minivan. In July 1995, the team took it from Pittsburgh to San Diego on a proof-of-concept trip, dubbed "No Hands Across America", with the system navigating for all but 50 of the 2850 miles, averaging over 60 MPH.[6][7][8] In 2007, Navlab 5 was added to the Class of 2008 inductees of the Robot Hall of Fame.[9]

Navlabs 6 and 7 were both built with Pontiac Bonnevilles. Navlab 8 was built with an Oldsmobile Silhouette van. Navlabs 9 and 10 were both built out of Houston transit buses.[10]

ALVINN


ALVINN (An Autonomous Land Vehicle in a Neural Network) was developed in 1988.[11][12][13] Detailed information can be found in Dean A. Pomerleau's PhD thesis (1992).[14] It was an early demonstration of representation learning, sensor fusion, and data augmentation.

Architecture


ALVINN was a 3-layer fully connected feedforward network trained by backpropagation, with 1217-29-46 neurons. It had 3 types of inputs:

  • A 30x32 grid representing grayscale values from the blue channel of a video camera pointing forward.
  • An 8x32 grid containing depth information from a laser rangefinder (30 by 80 degree field of view).
  • 1 feedback unit, directly connected to the feedback unit in the output layer with a one-step delay, in the style of a Jordan network. It was designed to provide rudimentary temporal processing.

The output layer consisted of 46 units:

  • 45 units represent a linear range of steering angles. The most activated unit within this range determined the vehicle's steering direction.
  • 1 feedback unit.

By inspecting the network weights, Pomerleau noticed that the feedback unit learned to measure the relative lightness of the road areas vs the non-road areas.
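The 1217-29-46 architecture described above can be sketched as a single NumPy forward pass. The randomly initialized weights and the sigmoid activation are stand-ins for illustration (the source does not specify the activation function here); only the layer sizes and the argmax readout of the 45 steering units come from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Input sizes from the text: 30x32 video grid, 8x32 rangefinder grid, 1 feedback unit.
N_VIDEO, N_RANGE, N_FEEDBACK = 30 * 32, 8 * 32, 1
N_IN = N_VIDEO + N_RANGE + N_FEEDBACK   # 960 + 256 + 1 = 1217
N_HIDDEN, N_OUT = 29, 46                # 45 steering units + 1 feedback unit

# Random weights stand in for the trained network (illustrative only).
W1 = rng.normal(0.0, 0.1, (N_HIDDEN, N_IN))
W2 = rng.normal(0.0, 0.1, (N_OUT, N_HIDDEN))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(video, rangefinder, feedback):
    """One fully connected forward pass: 1217 -> 29 -> 46."""
    x = np.concatenate([video.ravel(), rangefinder.ravel(), [feedback]])
    h = sigmoid(W1 @ x)
    out = sigmoid(W2 @ h)
    steering, new_feedback = out[:45], out[45]
    # The steering direction is read off as the most activated of the 45 units;
    # the 46th unit is fed back into the input on the next step.
    return int(np.argmax(steering)), new_feedback

angle_unit, fb = forward(rng.random((30, 32)), rng.random((8, 32)), 0.5)
```

The single feedback connection is what gives the otherwise feedforward network its one step of memory.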

Training


ALVINN was trained by supervised learning on a dataset of 1200 simulated road images paired with corresponding range finder data. These images encompassed diverse road curvatures, retinal orientations, lighting conditions, and noise levels. Generating the simulated images took 6 hours of Sun-4 CPU time.

The network was trained for 40 epochs using backpropagation on Warp (taking 45 minutes). The desired output for each training example was a Gaussian distribution of activation across the steering output units, centered on the unit representing the correct steering angle.
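A Gaussian target distribution over the steering units can be sketched as follows. The width parameter `sigma` is an assumption for illustration; the exact shape Pomerleau used is not given in this text:

```python
import numpy as np

def gaussian_target(correct_unit, n_units=45, sigma=2.0):
    """Desired output activations: a Gaussian bump over the steering units,
    centred on the unit representing the correct steering angle."""
    units = np.arange(n_units)
    return np.exp(-((units - correct_unit) ** 2) / (2.0 * sigma ** 2))

# Target for a training example whose correct steering angle is unit 22:
t = gaussian_target(22)
```

Spreading the target over neighbouring units, rather than using a one-hot label, rewards the network for answers that are nearly correct and yields smoother steering.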

At the end of training, the network achieved 90% accuracy in predicting the correct steering angle to within two units of the true value on unseen simulated road images.

In live experiments, ALVINN ran on Navlab 1, equipped with a video camera and a laser rangefinder. It drove the vehicle at 0.5 m/s along a 400-meter wooded path in a variety of weather conditions: snowy, rainy, sunny, and cloudy. This was competitive with traditional computer-vision-based algorithms at the time.

Later, they applied on-line imitation learning with real data gathered while a person drove Navlab 1. They noticed that because a human driver never strays far from the path, the network would never be trained on what action to take if it ever found itself far off the path. To deal with this problem, they applied data augmentation: each real image was shifted to the left by 5 different amounts and to the right by 5 different amounts, with the recorded human steering angle adjusted accordingly. In this way, each example was augmented into 11 examples.
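This shift-and-relabel scheme can be sketched as below. The pixel shift amounts, the angle correction per pixel, and the use of a plain horizontal roll are illustrative assumptions; ALVINN actually generated shifted views and corrected steering angles geometrically from the camera model:

```python
import numpy as np

def augment(image, steering_angle,
            shifts_px=(2, 4, 6, 8, 10),  # hypothetical shift amounts
            angle_per_px=0.5):           # hypothetical degrees per pixel
    """Return the original example plus 10 shifted copies (5 left, 5 right),
    each with the steering label adjusted to compensate for the shift."""
    examples = [(image, steering_angle)]
    for px in shifts_px:
        for sign in (-1, +1):
            # np.roll wraps pixels around the edge; a crude stand-in for the
            # perspective reprojection the real system used.
            shifted = np.roll(image, sign * px, axis=1)
            examples.append((shifted, steering_angle + sign * px * angle_per_px))
    return examples

batch = augment(np.zeros((30, 32)), 0.0)  # 1 original + 10 shifted = 11 examples
```

The effect is to show the network off-center views of the road together with the corrective steering it should apply, covering states a careful human driver never visits.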


References

  1. ^ a b c "Navlab: The Carnegie Mellon University Navigation Laboratory". The Robotics Institute. Retrieved 14 July 2011.
  2. ^ "Robotics History: Narratives and Networks Oral Histories: Chuck Thorpe". IEEE.tv. 17 April 2015. Retrieved 2018-06-07.
  3. ^ a b c d e f g h Todd Jochem; Dean Pomerleau; Bala Kumar & Jeremy Armstrong (1995). "PANS: A Portable Navigation Platform". The Robotics Institute. Retrieved 14 July 2011.
  4. ^ a b c "Overview". NavLab. The Robotics Institute. Archived from the original on 8 August 2011. Retrieved 14 July 2011.
  5. ^ Hawkins, Andrew J. (2016-11-27). "Meet ALVINN, the self-driving car from 1989". The Verge. Retrieved 2024-08-07.
  6. ^ "Look, Ma, No Hands". Carnegie Mellon University. 31 December 2017. Retrieved 31 December 2017.
  7. ^ Freeman, Mike (3 April 2017). "Connected Cars: The long road to autonomous vehicles". Center for Wireless Communications. Archived from the original on 1 January 2018. Retrieved 31 December 2017.
  8. ^ Jochem, Todd (3 April 2015). "Back to the Future: Autonomous Driving in 1995 - Robotics Trends". www.roboticstrends.com. Retrieved 31 December 2017.
  9. ^ "THE 2008 INDUCTEES". The Robot Institute. Archived from the original on 26 September 2011. Retrieved 14 July 2011.
  10. ^ Shirai, Yoshiaki; Hirose, Shigeo (2012). Attention and Custom for Safe Behavior. Springer Science & Business Media. p. 249. ISBN 978-1447115809.
  11. ^ Pomerleau, Dean A. (1988). "ALVINN: An Autonomous Land Vehicle in a Neural Network". Advances in Neural Information Processing Systems. 1. Morgan-Kaufmann.
  12. ^ Pomerleau, Dean (1990). "Rapidly Adapting Artificial Neural Networks for Autonomous Navigation". Advances in Neural Information Processing Systems. 3. Morgan-Kaufmann.
  13. ^ Pomerleau, Dean A. (1990), "Neural Network Based Autonomous Navigation", The Kluwer International Series in Engineering and Computer Science, Boston, MA: Springer US, pp. 83–93, ISBN 978-1-4612-8822-0, retrieved 2024-08-07
  14. ^ Pomerleau, Dean A. (1993). Neural Network Perception for Mobile Robot Guidance. Boston, MA: Springer US. doi:10.1007/978-1-4615-3192-0. ISBN 978-1-4613-6400-9.