AI-governed systems are prone to critical errors and to inexplicable “hallucinations,” in which a model confidently produces fabricated output, with potentially catastrophic consequences.