Automated driving is a key component of Ford’s Blueprint for Mobility — a plan that outlines what transportation will look like in the year 2025 and beyond, along with the technologies, business models, and partnerships needed to get there. Working toward that goal, The Blue Oval recently launched an automated Fusion Hybrid research vehicle to explore potential solutions for any issues, whether societal, legislative, or technological, presented by a future of fully automated driving. Today, the automaker is announcing new projects with the Massachusetts Institute of Technology and Stanford University that aim to “research and develop solutions to some of the technical challenges surrounding automated driving.”
“To deliver on our vision for the future of mobility, we need to work with many new partners across the public and private sectors, and we need to start today,” said Paul Mascarenas, chief technical officer and vice president, Ford Research and Innovation. “Working with university partners like MIT and Stanford enables us to address some of the longer-term challenges surrounding automated driving while exploring more near-term solutions for delivering an even safer and more efficient driving experience.”
Notably, the Fusion Hybrid research vehicle uses the same technology already found in Ford vehicles available for purchase today. It then adds four LiDAR sensors (those goofy-looking things on the roof) to generate a real-time 3D map of the vehicle’s surrounding environment.
And although the LiDAR sensors give the research vehicle the ability to sense objects around it, Ford’s partnership with MIT will focus on using advanced algorithms to help the vehicle learn to predict where moving vehicles and pedestrians could be in the future. According to Ford, “this scenario planning provides the vehicle with a better sense of the surrounding risks, enabling it to plan a path that will safely avoid pedestrians, vehicles and other moving objects.”
Meanwhile, Ford’s work with Stanford will focus on exploring ways sensors could see around obstacles, much as a driver whose view is blocked by an obstacle like a big truck will maneuver within the lane to take a peek around it and see what’s ahead. Stanford’s research would enable the sensors on an autonomous Ford vehicle to “take a peek ahead” and make evasive maneuvers if needed. One example presented by Ford involves a truck that has slammed on its brakes: with the “peek ahead” functionality, the autonomous vehicle would know whether the area around it is clear to safely change lanes.
Effectively, Ford’s goal “is to provide the vehicle with common sense,” according to Greg Stevens, global manager for driver assistance and active safety, Ford Research and Innovation. “Drivers are good at using the cues around them to predict what will happen next, and they know that what you can’t see is often as important as what you can see. Our goal in working with MIT and Stanford is to bring a similar type of intuition to the vehicle,” he concluded.