230k GPUs, including 30k GB200s, are operational for training Grok in a single supercluster called Colossus 1 (inference is done by our cloud providers). At Colossus 2, the first batch of 550k GB200s & GB300s, also for training, starts going online in a few weeks. As Jensen