Accurate vision-processing software is critical to the development of AVs, and at the core of this technology is strong deep-learning capability in the camera software. One breakthrough comes from StradVision's lean SVNet software, which is said to run on automotive chipsets at significantly lower cost. The company aims to be the first deep-learning-based software provider fully compatible with Automotive Safety Integrity Level B (ASIL B) for functional safety.

StradVision is also optimizing its sensor fusion technology, which combines cameras and LiDAR sensors to generate much richer data about objects on the road, another critical step toward bringing fully autonomous vehicles to consumers. In addition, it is pursuing skeleton detection, which supplies the data needed to predict pedestrian behavior.

SVNet software provides real-time feedback, detects obstacles in blind spots, and alerts drivers to potential accidents. It also helps prevent collisions by detecting lanes, abrupt lane changes, and vehicle speeds, even in poor lighting and weather conditions. The External product enables vehicles to execute ADAS and self-driving functions, while the Internal software monitors both driver and passengers to ensure a safe driving experience. The Tools product improves operational efficiency by guaranteeing data independence; its functions include an auto-labeling system requiring minimal human intervention, a data training suite, and a platform optimization suite.

By 2021, StradVision expects 6.8 million vehicles on the road to be using SVNet, which already complies with strict standards such as Euro NCAP, China's Guobiao, and ASPICE Level 2.
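To make the camera-LiDAR fusion idea concrete, the sketch below shows one common approach: projecting 3D LiDAR points into the camera image with a pinhole model and attaching a depth estimate to a 2D camera detection. This is a minimal illustration of the general technique, not StradVision's actual SVNet implementation; the function names, camera parameters, and coordinate conventions are all hypothetical.

```python
# Minimal sketch of camera-LiDAR fusion via point projection.
# Illustrative only: names and parameters are hypothetical and
# do not reflect StradVision's SVNet internals.

def project_point(point_3d, fx, fy, cx, cy):
    """Project a 3D point (camera frame, z forward) onto the image
    plane using a pinhole camera model with focal lengths fx, fy
    and principal point (cx, cy)."""
    x, y, z = point_3d
    if z <= 0:  # point behind the camera: not visible
        return None
    u = fx * x / z + cx
    v = fy * y / z + cy
    return (u, v)

def enrich_detection(box_2d, lidar_points, fx, fy, cx, cy):
    """Attach depth to a 2D camera detection: average the range of
    LiDAR points that project inside the bounding box.
    box_2d is (u_min, v_min, u_max, v_max) in pixels."""
    u_min, v_min, u_max, v_max = box_2d
    depths = []
    for p in lidar_points:
        uv = project_point(p, fx, fy, cx, cy)
        if uv and u_min <= uv[0] <= u_max and v_min <= uv[1] <= v_max:
            depths.append(p[2])  # use z as the range estimate
    if not depths:
        return None  # no LiDAR support for this detection
    return sum(depths) / len(depths)

# Example: two LiDAR points fall inside a detection box, one outside.
box = (300, 200, 340, 260)
points = [(0.0, 0.0, 10.0), (0.1, 0.1, 10.2), (5.0, 0.0, 3.0)]
depth = enrich_detection(box, points, fx=600, fy=600, cx=320, cy=240)
```

In this toy setup the first two points project near the image center and land inside the box, so the detection is enriched with their average range; the third point projects far outside and is ignored. Real systems refine this with calibrated extrinsics, distortion models, and temporal alignment between sensors.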