PrimerAI introduces ‘near-zero hallucination’ update to AI platform

An artificial intelligence company that contracts with the U.S. government says its AI system’s analysis of large data sets can produce nearly flawless results.

PrimerAI announced on Oct. 14 that an update to its AI platform can achieve a near-zero hallucination rate, a result that the company believes has broader implications for the Defense Department and defense industry as a whole.

The term “hallucinations” in AI parlance refers to models confidently producing incorrect or fabricated information.

“In high-stakes environments where precision and timelines are crucial, Primer’s enhanced platform emerges as a game-changing solution,” the company’s press release said.

PrimerAI CEO Sean Moriarty explained what’s important about the change to the system in a phone interview with Military Times.

While many AI platforms experience a hallucination rate of 10%, Moriarty said, PrimerAI had whittled it down to 0.3%.

A screenshot of the PrimerAI interface that demonstrates the retrieval-augmented generation verification process. (Cindy Ma/PrimerAI)

The biggest boon of the update is the system’s new ability to fact-check its own results.

This proprietary system, which the company says captures over 99% of errors before they reach users, is called the retrieval-augmented generation verification system. Large language models, or LLMs, the AI systems that understand and process human language, are commonly paired with retrieval-augmented generation, a technique that grounds a model’s responses in source documents retrieved for a given prompt. ChatGPT is a prime example.
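
For readers unfamiliar with the underlying technique, the sketch below shows retrieval-augmented generation at its simplest: retrieve the source passages most relevant to a question, then hand them to the model alongside the prompt. The toy corpus, word-overlap retriever and prompt format are illustrative assumptions, not PrimerAI’s implementation; production systems use trained embedding models and vector search.

```python
# A minimal retrieval-augmented generation (RAG) sketch.
# The corpus, retriever and prompt format below are illustrative
# assumptions, not PrimerAI's implementation.

DOCUMENTS = [
    "Fleet report: the adversary operates 3 aircraft carriers.",
    "Logistics summary: 12 supply ships support the carrier fleet.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (a toy stand-in
    for the vector search a production system would use)."""
    q_words = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the model's answer in the retrieved source text."""
    sources = "\n".join(f"- {c}" for c in context)
    return f"Answer using only these sources:\n{sources}\n\nQuestion: {query}"

if __name__ == "__main__":
    question = "How many aircraft carriers does the adversary have?"
    context = retrieve(question, DOCUMENTS)
    print(build_prompt(question, context))  # This prompt goes to the LLM.
```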

What makes PrimerAI’s system novel is that once it generates a response or summary, it extracts claims from that summary and corroborates each claim against the source data, according to Cindy Ma, senior product manager at PrimerAI.

This extra layer of review dramatically reduces mistakes, said Ma, who provided an in-person demonstration of the system at the Association of the United States Army’s annual conference in Washington, D.C., on Oct. 14.
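
Based on Ma’s description, one plausible sketch of that verification pass follows: split the generated summary into claims, then check each claim against the source documents. The sentence-level claim splitting and word-overlap support test are simplified assumptions for illustration; PrimerAI’s actual verifier is proprietary.

```python
# A simplified sketch of post-generation claim verification: extract
# claims from the summary, then corroborate each against source data.
# The overlap heuristic is an assumption; PrimerAI's verifier is proprietary.

def extract_claims(summary: str) -> list[str]:
    """Treat each sentence as one checkable claim (a real system would
    use an LLM or parser to extract structured claims)."""
    return [s.strip() for s in summary.split(".") if s.strip()]

def is_supported(claim: str, sources: list[str], threshold: float = 0.5) -> bool:
    """Call a claim supported if enough of its words appear together
    in at least one source document."""
    words = set(claim.lower().split())
    return any(
        len(words & set(src.lower().split())) / len(words) >= threshold
        for src in sources
    )

def verify(summary: str, sources: list[str]) -> list[tuple[str, bool]]:
    """Pair each extracted claim with whether the sources corroborate it."""
    return [(c, is_supported(c, sources)) for c in extract_claims(summary)]

if __name__ == "__main__":
    sources = ["Fleet report: the adversary operates 3 aircraft carriers."]
    summary = "The adversary operates 3 aircraft carriers. It has 15 submarines."
    for claim, ok in verify(summary, sources):
        print(("SUPPORTED:   " if ok else "UNSUPPORTED: ") + claim)
```

In a pipeline like the one described, claims flagged as unsupported would be revised or withheld before reaching the user, which is how a verification layer can catch errors before delivery.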

PrimerAI believes that for the Defense Department, providing a large buffer against inaccuracy is paramount because small hallucinations can trigger dramatic responses.

“Imagine a world where an LLM is saying an adversary has five times as many aircraft carriers as they actually have,” Moriarty said.

Moriarty acknowledged that there’s always room for error despite the aim for flawlessness. Even if high-quality reference data is the foundation of an AI system’s analysis, there are still pieces of the puzzle that get warped. Data itself can be tainted by human judgment.

“Our present world is not a zero-defect world, although we strive for zero defects,” Moriarty said.

Riley Ceder is an editorial fellow at Military Times, where he covers breaking news, criminal justice and human interest stories. He previously worked as an investigative practicum student at The Washington Post, where he contributed to the ongoing Abused by the Badge investigation.
