SambaNova Reports Fastest DeepSeek-R1 671B with High Efficiency

Palo Alto, CA – Generative AI company SambaNova announced last week that DeepSeek-R1 671B is running today on SambaNova Cloud at 198 tokens per second (t/s), “achieving speeds and efficiency that no other platform can match,” the company said. DeepSeek-R1 has reduced AI training costs by 10X, but its widespread adoption has been hindered by […]

Toward AGI: AI Innovation Will Be Driven by Applications, Not LLMs

DeepSeek’s LLM has caused a stir, but companies like OpenAI and Anthropic are aiming higher: their sights are set on artificial general intelligence, of which LLMs will be only one component. No matter how fast, powerful, or efficient they get, LLMs alone won’t be enough to achieve AGI.

HPC News Bytes 20250203: DeepSeek Lessons, Intel Reroutes GPU Roadmap, LANL and OpenAI for National Security, Nuclear Reactors for Google Data Centers

The HPC-AI world was upended last week by DeepSeek AI benchmark numbers. As the dust settles, we offer commentary on what it may, at this stage, mean: five lessons from DeepSeek, Intel GPU rack-scale architecture ….