ARLINGTON, Virginia — Long before generative AI’s boom, a Silicon Valley firm contracted to collect and analyze unclassified data on illicit Chinese fentanyl trafficking made a compelling case for U.S. intelligence agencies to embrace the technology.
The operation’s results far exceeded human-only analysis, finding twice as many companies and 400% more people engaged in illegal or suspicious commerce in the deadly opioid.
Excited U.S. intelligence officials touted the results publicly — the AI made connections based mostly on internet and dark-web data — and shared them with Beijing authorities, urging a crackdown.
One important aspect of the 2019 operation, called Sable Spear, that has not previously been reported: The firm used generative AI to provide U.S. agencies — three years ahead of the release of OpenAI’s groundbreaking ChatGPT product — with evidence summaries for potential criminal cases, saving countless work hours.
“You wouldn’t be able to do that without artificial intelligence,” said Brian Drake, the Defense Intelligence Agency’s then-director of AI and the project coordinator.
The contractor, Rhombus Power, would later use generative AI to predict Russia’s full-scale invasion of Ukraine with 80% certainty four months in advance, for a different U.S. government client. Rhombus says it also alerts government customers, whom it declines to name, to imminent North Korean missile launches and Chinese space…