50x faster, 50x thriftier: UK AI startup delivers stunning gains in performance and power consumption using a cheap $30 system board
Literal Labs wants to make GPU-based training obsolete with its Tsetlin Machine
Back in March 2024, we reported how British AI startup Literal Labs was working to make GPU-based training obsolete with its Tsetlin Machine, a machine learning model that uses logic-based learning to classify data.
It operates through Tsetlin automata, which establish logical connections between features in input data and classification rules. Based on whether decisions are correct or incorrect, the machine adjusts these connections using rewards or penalties.
Developed by Soviet mathematician Mikhail Tsetlin in the 1960s, this approach contrasts with neural networks by focusing on learning automata, rather than modeling biological neurons, to perform tasks like classification and pattern recognition.
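The reward/penalty mechanism described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of a single two-action Tsetlin automaton (not Literal Labs' implementation): states in the lower half of the chain select one action ("exclude" a feature from a rule), states in the upper half select the other ("include" it), a reward pushes the state deeper into its current half, and a penalty pushes it toward the opposite half.

```python
import random

class TsetlinAutomaton:
    """Minimal two-action Tsetlin automaton (illustrative sketch only).

    States 1..n select the action "exclude"; states n+1..2n select
    "include". Rewards reinforce the current decision by moving the
    state deeper into its half; penalties move it toward, and
    eventually across, the boundary.
    """

    def __init__(self, n=3):
        self.n = n
        # Start at the boundary, so the automaton is initially undecided.
        self.state = random.choice([n, n + 1])

    def action(self):
        return "include" if self.state > self.n else "exclude"

    def reward(self):
        # Move deeper into the current half, clamped at the chain ends.
        if self.state > self.n:
            self.state = min(self.state + 1, 2 * self.n)
        else:
            self.state = max(self.state - 1, 1)

    def penalize(self):
        # Move one step toward the other half.
        if self.state > self.n:
            self.state -= 1
        else:
            self.state += 1
```

A full Tsetlin Machine coordinates many such automata, one per feature literal in each clause, so that repeated rewards lock in useful logical conditions while penalties weed out unhelpful ones.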
Energy-efficient design
Now, Literal Labs has developed a model using Tsetlin Machines that, despite its compact size of just 7.29KB, delivers high accuracy and dramatically improves performance on anomaly detection tasks for edge AI and IoT deployments.
The model was benchmarked by Literal Labs using the MLPerf Inference: Tiny suite and tested on a $30 NUCLEO-H7A3ZI-Q development board, which features a 280MHz ARM Cortex-M7 processor and doesn't include an AI accelerator. The results show Literal Labs' model achieves inference speeds that are 54 times faster than traditional neural networks while consuming 52 times less energy.
Compared to the best-performing models in the industry, Literal Labs’ model demonstrates both latency improvements and an energy-efficient design, making it suitable for low-power devices like sensors. Its performance makes it viable for applications in industrial IoT, predictive maintenance, and health diagnostics, where detecting anomalies quickly and accurately is crucial.
The use of such a compact and low-energy model could help scale AI deployment across various sectors, reducing costs and increasing accessibility to AI technology.
Literal Labs says, “Smaller models are particularly advantageous in such deployments as they require less memory and processing power, allowing them to run on more affordable, lower-specification hardware. This not only reduces costs but also broadens the range of devices capable of supporting advanced AI functionality, making it feasible to deploy AI solutions at scale in resource-constrained settings.”
Wayne Williams is a freelancer writing news for TechRadar Pro. He has been writing about computers, technology, and the web for 30 years. In that time he wrote for most of the UK’s PC magazines, and launched, edited and published a number of them too.