Users running a quantized 7B model on a laptop expect 40+ tokens per second. A 30B MoE model on a high-end mobile device ...
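These expectations can be sanity-checked with a back-of-envelope model: single-token decode is memory-bandwidth bound, so throughput is roughly memory bandwidth divided by the bytes of active weights read per token. For a MoE model, only the active experts' parameters count, which is why a 30B MoE can be plausible on mobile. The sketch below uses illustrative numbers (4-bit weights, ~100 GB/s laptop memory, ~3B active parameters for the MoE), not measurements.

```python
def decode_tokens_per_sec(active_params_b: float, bits_per_weight: int,
                          mem_bandwidth_gbs: float) -> float:
    """Upper bound on tokens/s, assuming every active weight is read
    once per generated token and bandwidth is the only bottleneck."""
    bytes_per_token = active_params_b * 1e9 * bits_per_weight / 8
    return mem_bandwidth_gbs * 1e9 / bytes_per_token

# Hypothetical laptop: 7B dense model, 4-bit weights, ~100 GB/s LPDDR5.
laptop = decode_tokens_per_sec(7, 4, 100)   # ~28.6 tok/s -- short of 40+

# Hypothetical mobile SoC: 30B MoE with ~3B active params per token, ~60 GB/s.
phone = decode_tokens_per_sec(3, 4, 60)     # ~40 tok/s

print(f"laptop: {laptop:.1f} tok/s, phone: {phone:.1f} tok/s")
```

Under these assumed numbers, the 7B dense model falls short of the 40 tok/s target on ordinary laptop DRAM, which motivates keeping weights closer to the compute in faster memory.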
Adding big blocks of SRAM to collections of AI tensor engines, or better still, a waferscale collection of such engines, ...