
Microsoft’s “1‑bit” AI model runs on a CPU only, while matching larger systems
Does size matter?
Memory requirements are the most obvious advantage of reducing the complexity of a model’s internal weights. The BitNet b1.58 model can run using just 0.4GB of memory, compared to anywhere from 2 to 5GB for other open-weight models of roughly the same parameter size.
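A quick back-of-the-envelope check shows where that figure comes from. A ternary weight drawn from {-1, 0, +1} carries log2(3) ≈ 1.58 bits of information; the ~2 billion parameter count below is an assumption based on the model's rough size class, not a figure from this article:

```python
import math

# Assumed parameter count (~2B, matching the model's rough size class).
params = 2_000_000_000
bits_per_ternary = math.log2(3)  # ~1.585 bits for a {-1, 0, +1} weight

ternary_gb = params * bits_per_ternary / 8 / 1e9
fp16_gb = params * 16 / 8 / 1e9  # same parameters at 16-bit precision

print(f"ternary: {ternary_gb:.2f} GB")  # ~0.40 GB
print(f"fp16:    {fp16_gb:.2f} GB")     # 4.00 GB
```

The ternary estimate lands almost exactly on the reported 0.4GB, roughly a tenth of what the same parameters would need at 16-bit precision.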
But the simplified weighting system also leads to more efficient operation at inference time, with internal operations that rely much more on simple addition instructions and far less on computationally costly multiplication instructions. Those efficiency improvements mean BitNet b1.58 uses anywhere from 85 to 96 percent less energy than similar full-precision models, the researchers estimate.
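A minimal sketch of why ternary weights eliminate multiplications: when every weight is -1, 0, or +1, each term of a dot product reduces to an add, a subtract, or a skip. The function below is illustrative, not code from the BitNet implementation:

```python
def ternary_dot(weights, activations):
    """Dot product with ternary {-1, 0, +1} weights: no multiplications."""
    acc = 0.0
    for w, x in zip(weights, activations):
        if w == 1:
            acc += x   # +1 weight: just add the activation
        elif w == -1:
            acc -= x   # -1 weight: just subtract it
        # w == 0: contributes nothing, skip entirely
    return acc

# Matches an ordinary multiply-accumulate on the same inputs:
w = [1, -1, 0, 1]
x = [0.5, 2.0, 3.0, -1.0]
print(ternary_dot(w, x))                     # -2.5
print(sum(wi * xi for wi, xi in zip(w, x)))  # -2.5
```

Zero weights also let the kernel skip work entirely, which compounds the savings beyond just swapping multiplies for adds.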
A demo of BitNet b1.58 running at speed on an Apple M2 CPU.
By using a highly optimized kernel designed specifically for the BitNet architecture, the BitNet b1.58 model can also run multiple times faster than similar models on a standard full-precision transformer. The system is efficient enough to reach “speeds comparable to human reading (5-7 tokens per second)” using just a single CPU, the researchers report.
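Part of what a specialized kernel has to handle is storage: ternary values don't fit any standard data type, so weights are kept in a packed form. The 2-bits-per-weight encoding below is an illustrative scheme to show the idea, not necessarily the layout the BitNet kernel actually uses:

```python
# Pack ternary weights {-1, 0, +1} at 2 bits each, four per byte.
# Encoding is illustrative: 0b00 -> 0, 0b01 -> +1, 0b10 -> -1.
ENC = {0: 0b00, 1: 0b01, -1: 0b10}
DEC = {v: k for k, v in ENC.items()}

def pack(weights):
    out = bytearray()
    for i in range(0, len(weights), 4):
        b = 0
        for j, w in enumerate(weights[i:i + 4]):
            b |= ENC[w] << (2 * j)  # each weight occupies its own 2-bit slot
        out.append(b)
    return bytes(out)

def unpack(data, n):
    ws = []
    for b in data:
        for j in range(4):
            ws.append(DEC[(b >> (2 * j)) & 0b11])
    return ws[:n]  # drop padding slots in the final byte

w = [1, -1, 0, 1, -1, 0]
packed = pack(w)
print(len(packed))        # 2 bytes, versus 6 bytes even as int8
print(unpack(packed, 6))  # [1, -1, 0, 1, -1, 0]
```

A kernel working on this layout can unpack and accumulate several weights per memory read, which is part of how the CPU-only speedups are achieved.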
2 Comments
pvg
Thread the other day https://news.ycombinator.com/item?id=43714004
gnabgib
Discussion (107 points, 2 days ago, 30 comments) https://news.ycombinator.com/item?id=43714004