One of Boston’s most prominent artificial intelligence startups, LiquidAI, on Monday disclosed how its software works for the first time — and how it might be able to outperform popular AI apps like ChatGPT. The company is also making the software accessible to anyone over the internet for further testing.
In a blog post and white paper released on Monday, LiquidAI claimed that its software could generate answers as quickly as competitors while requiring less memory and computing power, thus saving money and consuming less energy.
The company dubbed its technology “liquid foundation models” to contrast with the “large language models” underlying ChatGPT from OpenAI, Google’s Gemini, and Meta’s Llama, among many others. On one popular benchmark test, LiquidAI said its largest and most capable model could equal the performance of Meta’s Llama 3.1 70B model released in July while needing much less computer memory, since it is only about one-sixth the size.
“We’ve markedly improved quality, cost-effectiveness, and power efficiency compared to current GPT models,” LiquidAI chief executive and cofounder Ramin Hasani said in the blog post. “Our innovative architecture allows us to outperform larger traditional models, providing powerful solutions that are both cost-effective and scalable.”
LiquidAI was founded in March 2023 by Hasani, chief technical officer Mathias Lechner, and chief scientific officer Alexander Amini, with MIT professor Daniela Rus, who runs the university’s Computer Science and Artificial Intelligence Laboratory.
Hasani and Lechner studied the brain of a tiny roundworm as inspiration for creating a new kind of computer neural network. Instead of modeling the network on the human brain, which has about 86 billion neurons, they used the roundworm’s brain, which has just 300 neurons.
Essentially, the digital “neurons” in LiquidAI’s model operate in a more flexible way than the “neurons” in large language models. The model can generate text or video or be used for other AI tasks, the company said.
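The “flexible” behavior traces back to Hasani and Lechner’s published research on liquid time-constant networks, in which each neuron’s response speed changes with the input it receives rather than staying fixed. A minimal sketch of a single such neuron follows, using illustrative parameter values and a simple Euler integration step; it is not drawn from LiquidAI’s actual models.

```python
import math

def ltc_step(x, inp, dt=0.01, tau=1.0, A=1.0, w=1.0, b=0.0):
    """One Euler step of a liquid time-constant (LTC) neuron.

    The gate f depends on the input, so the neuron's effective
    time constant (1/tau + f) shifts as the input changes. This
    input-dependent timing is what makes the dynamics "liquid".
    All parameter values here are illustrative placeholders.
    """
    f = 1.0 / (1.0 + math.exp(-(w * inp + b)))  # sigmoid input gate
    dx = -(1.0 / tau + f) * x + f * A           # LTC differential equation
    return x + dt * dx

# Drive the neuron with a constant input and let it settle.
x = 0.0
for _ in range(1000):
    x = ltc_step(x, inp=2.0)
```

With a steady input, the neuron relaxes toward a fixed point; a stronger input both raises that fixed point and speeds up the approach, which is the behavior a conventional fixed-time-constant neuron cannot show.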
In the blog post, LiquidAI said it had developed models of varying size and capability, with some able to run on smartphones and others requiring larger servers in a data center. The models were tested against a handful of comparable rivals using a variety of industry benchmarks, where they met or exceeded the performance of competing large language models, the company said. Those claims were not independently verified by the Globe. (LiquidAI did not test its models against OpenAI’s GPT models, which are vastly larger and required hundreds of millions of dollars or more to create.)
Starting on Monday, people can access the models through LiquidAI’s chatbot or connect their own software through the company’s website and several other cloud services. The company also said customers would be able to run its software on a variety of hardware, including chips from Apple, Nvidia, Advanced Micro Devices, and Qualcomm.
So far, LiquidAI has shown success using its software in a few ways, but it will need to prove the approach can handle a variety of large-scale challenges, particularly generating text and solving more complicated math or coding problems, said Usama Fayyad, executive director of the Institute for Experiential Artificial Intelligence at Northeastern.
“They’ve done the early breakthrough work and it’s a promising direction,” said Fayyad, who worked on AI and data science in Silicon Valley for decades before joining the university. “Now the question is … can they show it applying to many tasks, reliably under many conditions.”
Last December, LiquidAI raised $37.5 million in seed funding from high-profile investors led by Boston Celtics co-owner Stephen Pagliuca and San Francisco venture capital firm OSS Capital. The company is rumored to be raising considerably more for its next funding round.
The company is planning a public demonstration of its software on Oct. 23 at MIT’s Kresge Auditorium.
Aaron Pressman can be reached at [email protected]. Follow him @ampressman.
