Cerebras runs the DeepSeek-R1-Distill-Llama-70B AI model on its wafer-scale processor, delivering 57x faster speeds than GPU solutions and challenging Nvidia's AI chip dominance with U.S.-based inference processing.
Cerebras Systems, the pioneer in accelerating generative AI, today announced record-breaking performance for DeepSeek-R1-Distill-Llama-70B inference, achieving more than 1,500 tokens per second – 57 times faster than GPU-based solutions.
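To put the announced numbers in perspective, here is a minimal back-of-the-envelope sketch of what the 1,500 tokens-per-second and 57x figures imply about the GPU baseline and about end-to-end generation time. Only those two numbers come from the announcement; the 4,000-token output length used below is an arbitrary illustration, not a figure from Cerebras.

```python
# Back-of-the-envelope check of the announced figures.
# Only the 1,500 tokens/s and 57x values come from the announcement;
# the 4,000-token output length is an assumed example.

CEREBRAS_TOKENS_PER_SEC = 1_500   # announced DeepSeek-R1-Distill-Llama-70B throughput
SPEEDUP_VS_GPU = 57               # announced speedup over GPU-based solutions

# Implied throughput of the GPU baseline for the same workload.
gpu_tokens_per_sec = CEREBRAS_TOKENS_PER_SEC / SPEEDUP_VS_GPU

# Time to generate a hypothetical 4,000-token reasoning-style response.
output_tokens = 4_000
cerebras_seconds = output_tokens / CEREBRAS_TOKENS_PER_SEC
gpu_seconds = output_tokens / gpu_tokens_per_sec

print(f"Implied GPU baseline: ~{gpu_tokens_per_sec:.0f} tokens/s")
print(f"4,000-token response: ~{cerebras_seconds:.1f} s on Cerebras "
      f"vs ~{gpu_seconds:.0f} s on the GPU baseline")
```

Under these assumptions the GPU baseline works out to roughly 26 tokens per second, so a long reasoning-style response that finishes in a few seconds on the wafer-scale system would take on the order of minutes on the baseline, which is the practical point behind the 57x claim.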
Aurora has a modular design that consists of four similar units, each installed in a standard server rack that is slightly ...