Learn more about Vicarious technology
Dive into our Resource Library, which has everything from articles to scientific publications about intelligent automation in action.
Vicarious was built on the premise that differentiated AI is needed to solve real-world manipulation tasks: our approach augments “classic” machine learning techniques with proprietary neuroscience-based AI in order to create machines that, like a child’s brain, learn fast and develop a model of the world along the way. This brings huge advantages over companies relying on “classic” machine learning alone: our robots need drastically less training data, generalize better, and are much more resilient to real-life perturbations.
|  | Training requirement | Model of the world, causes & effects | Generalization power |
| --- | --- | --- | --- |
| Traditional system integrators | Hard coded | Hard coded | Hard coded |
| Other AI robotics companies (machine learning-based) | Large amount of data needed | No | Low |
| Vicarious | 6000x less than traditional machine learning ¹ | Yes ² | High, thus faster learning & more resilience ³ |
Around this core AI, we have created a complete platform for general purpose robotic manipulation, with all the required components fully integrated into a closed loop of perception and action. We built our stack at the right abstraction level, in a containerized and modular way, to ensure any future task can easily be coded and rapidly deployed.
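To make the “closed loop of perception and action” concrete, here is a minimal, hypothetical sketch (not Vicarious code): three swappable stages — perceive, plan, act — composed into a loop that re-observes the bin after every pick. All names and data structures are illustrative assumptions.

```python
def perceive(bin_state):
    """Toy perception stage: return the most visible item in the bin."""
    return max(bin_state, key=lambda item: item["visibility"])

def plan(item):
    """Toy planning stage: derive a grasp action from the item's centroid."""
    return {"grasp_at": item["centroid"], "item": item["name"]}

def act(bin_state, action):
    """Toy actuation stage: execute the grasp and return the new bin state."""
    return [i for i in bin_state if i["name"] != action["item"]]

def closed_loop(bin_state):
    """Run perceive -> plan -> act until the bin is empty.

    Re-perceiving after every action is what makes the loop 'closed':
    each decision is based on the current scene, not a stale plan.
    """
    picks = []
    while bin_state:
        item = perceive(bin_state)
        action = plan(item)
        bin_state = act(bin_state, action)
        picks.append(action["item"])
    return picks
```

Because each stage only depends on the interface of the one before it, any stage can be replaced independently — the property the modular, containerized design described above is meant to guarantee.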
We can handle objects of nearly any geometry, weight, and texture, with no fixturing required. More precisely:
Our robotic vision system ensures full comprehension of objects: finding their exact boundaries, reconstituting their geometry, and deriving a high-quality grasping point. It does so in complex environments: cluttered bins with objects loosely dumped on top of each other, moving bins, changing light conditions, and more. Furthermore, it can perform “object-agnostic grasping”, i.e. grasping at first sight, without prior knowledge of the object — just as a human would.
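One simple way to illustrate “object-agnostic grasping” — this is a textbook sketch under stated assumptions, not the actual Vicarious algorithm — is to derive a grasp from a segmentation mask alone: grasp at the centroid, closing the gripper across the object’s minor principal axis. The mask format and function name are made up for the example.

```python
import math

def grasp_from_mask(pixels):
    """Derive a grasp from a binary segmentation mask (list of (x, y)
    pixel coordinates), with no prior model of the object.

    Returns the centroid as the grasp point, and the gripper closing
    direction: perpendicular to the major principal axis of the mask,
    i.e. across the object's narrow dimension.
    """
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    # 2x2 covariance of the mask pixels
    sxx = sum((x - cx) ** 2 for x, _ in pixels) / n
    syy = sum((y - cy) ** 2 for _, y in pixels) / n
    sxy = sum((x - cx) * (y - cy) for x, y in pixels) / n
    # Orientation of the major axis from the covariance terms
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    # Close the gripper perpendicular to the major axis
    return (cx, cy), theta + math.pi / 2
```

For a thin horizontal bar of pixels, this yields a grasp at the bar’s center with the gripper closing vertically, across the bar — the grasp a person would choose at first sight.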
Our robots don’t simply drop objects and see where they fall. Our novel camera calibration routines find the best mapping between the virtual image and the physical scene. Combined with optimized visual servoing techniques, they allow very fine operations: insertion into a fixed or moving target, with precise location (1 mm tolerance) and a specified orientation, applying torque if necessary (e.g. clipping into a plastic blister), and more.
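The essence of visual servoing can be sketched in a few lines — a minimal proportional-control loop, assuming a 2D position task and a made-up function name, not the production controller: at every iteration the error between tool and target is re-observed from the camera, and the robot moves a fraction of that error, converging inside the stated 1 mm tolerance.

```python
def servo_to_target(position, target, gain=0.5, tol_mm=1.0, max_steps=100):
    """Toy visual servoing loop (proportional control).

    Each iteration 're-observes' the tool-to-target error and commands
    a motion proportional to it, so calibration residue and disturbances
    are corrected continuously rather than planned away once.
    Returns the final position and the number of steps taken.
    """
    x, y = position
    tx, ty = target
    for step in range(max_steps):
        ex, ey = tx - x, ty - y
        if (ex * ex + ey * ey) ** 0.5 <= tol_mm:
            return (x, y), step
        x += gain * ex
        y += gain * ey
    return (x, y), max_steps
```

Because the loop closes on what the camera sees rather than on a pre-computed trajectory, the same scheme tracks a moving target: the error term simply keeps updating as the target moves.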
In today’s operations, production changeovers are increasingly frequent, driven by customers’ desire for shorter product cycles and more personalization. Our solutions are purpose-built to enable these high-mix operations: each robot can handle hundreds of SKU types at any given time, we provide proprietary smart replenishment systems (flowracks, bins) so that each robot can access a large number of different SKUs, and our technology integrates into our customers’ production workflows (e.g. batch processing, license-plate number (LPN) scanning, etc.).