UK Startup Achieves AI Breakthrough for Safer Self-Driving Cars and Robots

Oxford-based startup Aligned AI claims to have achieved a significant breakthrough in AI safety that could enhance the reliability of self-driving cars, robots, and other AI-based products. The company, founded just one year ago, has developed a novel algorithm called the “Algorithm for Concept Extraction” (ACE), which enables AI systems to form more sophisticated associations, akin to human concepts. This addresses a common problem in current AI systems: they often latch onto spurious correlations in their training data, which can lead to catastrophic consequences in real-world applications.

The Challenge of Spurious Correlations

One of the critical challenges in AI safety is preventing AI systems from learning spurious correlations, a failure known as “misgeneralization.” Such incorrect associations can lead to unexpected and undesirable behavior, as exemplified by the tragic incident in 2018 when an Uber self-driving car failed to recognize a pedestrian crossing the road, leading to a fatal accident. According to the article’s account, the car’s AI software had learned to identify pedestrians only at crosswalks, a failure to generalize its knowledge to pedestrians elsewhere.

Aligned AI’s Solution: The ACE Algorithm

Aligned AI’s ACE algorithm addresses misgeneralization by allowing AI systems to recognize differences between their training data and new data. When presented with new data that deviates from its training examples, ACE formulates two hypotheses about the AI’s true objective based on these differences. It then tests these hypotheses to determine which one better fits the new data, and the process repeats until the system identifies the objective that aligns with what it is now observing.
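ACE itself is proprietary and unpublished, but the loop described above — propose candidate objectives, score each against new observations, keep the best fit — can be sketched in miniature. Everything here (the candidate names, the fit function, the toy data) is illustrative and assumed, not Aligned AI’s actual API:

```python
def fits(objective, example):
    """Score how well one candidate objective explains a single observation.

    A real system would compare predicted behavior against observed behavior;
    this toy version just checks which goal the episode's outcome matches.
    """
    return 1.0 if example["outcome"] == objective else 0.0

def disambiguate(candidates, new_data):
    """Pick the candidate objective that best explains the new data."""
    scores = {c: sum(fits(c, ex) for ex in new_data) for c in candidates}
    return max(scores, key=scores.get)

# Toy observations: in the new levels, successful episodes end with the
# coin being collected, not merely with the agent reaching the exit.
observations = [
    {"outcome": "coin"},
    {"outcome": "coin"},
    {"outcome": "exit"},
]

best = disambiguate(["exit", "coin"], observations)
print(best)  # -> coin
```

The key idea this sketch captures is that the two hypotheses (“go to the exit” vs. “collect the coin”) are indistinguishable on the training data, where the coin always sits at the exit, and only the new, shifted data can tell them apart.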

Demonstrating ACE’s Capabilities

To showcase ACE’s capabilities, Aligned AI conducted tests using the CoinRun video game, a challenging benchmark used to evaluate an AI model’s ability to avoid spurious correlations. In this game, AI agents navigate levels filled with obstacles, hazards, and monsters while searching for a gold coin to advance to the next level.

Historically, AI agents would frequently misgeneralize in CoinRun, always heading to the lower right corner of the screen, where the exit was located, instead of seeking out the coin. Previous AI systems achieved a coin retrieval rate of only 59%, barely better than random chance.

In contrast, AI agents trained using ACE achieved a coin retrieval rate of 72%. These ACE-trained agents demonstrated the ability to adapt their strategies based on the game’s changing scenarios, understanding when to prioritize coin collection and when to evade approaching threats.

Future Goals and Applications

Aligned AI aims to further enhance ACE’s capabilities, ultimately achieving “zero-shot” learning, where AI systems can discern the correct objective when encountering entirely new data. Such advancements could lead to safer self-driving cars, robots capable of handling diverse scenarios, and more reliable AI systems for various applications.

Rebecca Gorman, CEO of Aligned AI, envisions potential applications of ACE in areas like robotics, content moderation on social media, and internet forums. The ability to ensure safe AI behavior without continuous human oversight could revolutionize industries and improve the reliability of AI-powered technologies.



Aligned AI’s breakthrough with the ACE algorithm represents a significant step forward in AI safety. By addressing the challenge of misgeneralization, the company aims to enhance the reliability and safety of AI systems in various domains, including self-driving cars, robotics, and content moderation. As Aligned AI seeks funding and patents for ACE, the technology holds the potential to make AI systems more interpretable and better at understanding their objectives, marking a crucial advancement in the field of artificial intelligence.
