We protect computer vision systems from adversarial data poisoning and model evasion attacks.
First, we empirically score a model's robustness to long-tailed edge cases, both naturally occurring and adversarial, which account for more than half of the inputs a deployed model encounters. These robustness metrics provide insight beyond traditional accuracy metrics, which often fail to translate from the lab to the real world.
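As one illustration of what empirical robustness scoring can look like (a minimal sketch, not our actual methodology: the toy model, the Gaussian-corruption proxy for edge cases, and the severity levels are all assumptions made for this example), clean accuracy can be compared against accuracy under increasingly severe input corruptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in model: classifies 2-D points by which side of x + y = 0 they fall on.
def model(x):
    return (x.sum(axis=1) > 0).astype(int)

# Synthetic evaluation set: two well-separated clusters with known labels.
X = np.vstack([
    rng.normal(0, 1, size=(1000, 2)) + np.array([1.0, 1.0]),
    rng.normal(0, 1, size=(1000, 2)) - np.array([1.0, 1.0]),
])
y = np.concatenate([np.ones(1000, dtype=int), np.zeros(1000, dtype=int)])

def accuracy(model, X, y):
    return float((model(X) == y).mean())

def robustness_score(model, X, y, severities=(0.5, 1.0, 2.0)):
    """Mean accuracy under increasingly severe Gaussian corruptions,
    a crude proxy for naturally occurring edge cases."""
    accs = []
    for s in severities:
        Xc = X + rng.normal(0, s, size=X.shape)  # corrupted copy of the eval set
        accs.append(accuracy(model, Xc, y))
    return float(np.mean(accs))

clean = accuracy(model, X, y)
robust = robustness_score(model, X, y)
print(f"clean accuracy:  {clean:.3f}")
print(f"robust accuracy: {robust:.3f}")  # degrades as corruption severity grows
```

The gap between the two numbers, rather than the clean accuracy alone, is what signals how the model will behave on the long tail.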
Second, our audit tool inspects training data and highlights potential embedded Trojan attacks. This second-generation approach is attack-agnostic: it flags suspicious data for both known and unknown attacks.
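To make the attack-agnostic idea concrete, here is a deliberately simplified sketch (not our audit tool's actual algorithm; the synthetic data, the centroid-distance score, and the z-score threshold are assumptions for illustration): poisoned samples often sit far from the bulk of their labeled class, so per-class statistical outlier detection can flag them without knowing the attack in advance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "training set": two classes of feature vectors, with a few
# planted outliers in class 0 standing in for poisoned/Trojaned samples.
clean0 = rng.normal(0.0, 1.0, size=(200, 8))
clean1 = rng.normal(5.0, 1.0, size=(200, 8))
poison = rng.normal(12.0, 1.0, size=(5, 8))   # far from the class-0 mode
X = np.vstack([clean0, poison, clean1])
y = np.array([0] * 205 + [1] * 200)

def flag_suspicious(X, y, z_thresh=3.0):
    """Attack-agnostic screen: within each class, score every sample by its
    distance to the class centroid and flag statistical outliers."""
    flags = np.zeros(len(X), dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        d = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
        z = (d - d.mean()) / d.std()   # standardized distance within the class
        flags[idx] = z > z_thresh
    return flags

flags = flag_suspicious(X, y)
print("flagged indices:", np.where(flags)[0])
```

Because the score is a generic statistical anomaly measure rather than a signature of one known trigger, the same screen applies to attacks that have never been seen before.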
Third, we have an AI firewall that inspects data flowing into deployed models from the real world. We also offer several value-adds around data quality and improving both the robustness and accuracy of models.
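The firewall concept can be sketched as a gate in front of the model (again a toy illustration under stated assumptions, not our product's implementation: the per-feature z-score check and the `z_limit` threshold are hypothetical): incoming samples that fall far outside the training distribution are rejected before they ever reach inference:

```python
import numpy as np

rng = np.random.default_rng(2)

# Fit simple per-feature statistics on representative training data.
train = rng.normal(0.0, 1.0, size=(5000, 4))
mu, sigma = train.mean(axis=0), train.std(axis=0)

def firewall(x, z_limit=4.0):
    """Gate a single incoming sample: pass it through only if every feature
    lies within z_limit standard deviations of the training distribution;
    otherwise reject it before it reaches the model."""
    z = np.abs((x - mu) / sigma)
    return bool((z <= z_limit).all())

in_dist = np.zeros(4)                      # typical input
anomaly = np.array([0.0, 9.0, 0.0, 0.0])   # one wildly out-of-range feature
print(firewall(in_dist))   # expected True: passes through to the model
print(firewall(anomaly))   # expected False: blocked at the firewall
```

A production firewall would use far richer detectors, but the placement is the point: the check runs at deployment time, on live traffic, independently of the model it protects.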