Muon outperforms every optimizer we tested (AdamW, SOAP, MAGMA). Multi-epoch training matters. And following work by Kotha et al., scaling to large parameter counts works if you pair it with aggressive regularization: weight decay up to 16x the standard value, plus dropout. The recipe sits at ~2.4x data efficiency relative to the modded-nanogpt baseline.
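Below is a minimal sketch of that recipe in PyTorch. The Newton-Schulz orthogonalization step follows Keller Jordan's public Muon writeup; the class name `MuonSketch`, the hyperparameter values, and the decoupled weight-decay placement are illustrative assumptions, not the exact code or tuned numbers behind these results.

```python
# Sketch of a Muon-style update with aggressive decoupled weight decay.
# Not the authors' implementation; coefficients follow the public Muon writeup.
import torch

def newton_schulz(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Approximately orthogonalize a 2D gradient via Newton-Schulz iteration."""
    a, b, c = 3.4445, -4.7750, 2.0315   # quintic coefficients from the Muon writeup
    X = G.float()
    X = X / (X.norm() + 1e-7)           # spectral norm <= Frobenius norm <= 1, so the iteration converges
    transposed = X.size(0) > X.size(1)
    if transposed:                      # iterate on the wide orientation for cheaper matmuls
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    if transposed:
        X = X.T
    return X.to(G.dtype)

class MuonSketch(torch.optim.Optimizer):
    """Momentum + orthogonalized update for 2D weight matrices, with
    decoupled weight decay to express the aggressive-regularization recipe."""

    def __init__(self, params, lr=0.02, momentum=0.95, weight_decay=0.0):
        super().__init__(params, dict(lr=lr, momentum=momentum,
                                      weight_decay=weight_decay))

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None or p.ndim != 2:
                    continue            # Muon targets matrices; handle the rest with AdamW
                state = self.state[p]
                if "momentum_buffer" not in state:
                    state["momentum_buffer"] = torch.zeros_like(p.grad)
                buf = state["momentum_buffer"]
                buf.mul_(group["momentum"]).add_(p.grad)
                update = newton_schulz(buf)
                p.mul_(1 - group["lr"] * group["weight_decay"])  # decoupled decay
                p.add_(update, alpha=-group["lr"])
```

In practice Muon is applied only to the 2D hidden-layer weight matrices, with embeddings, norms, and the LM head left to AdamW; to mirror the regularization described above, the `weight_decay` passed here would be set to roughly 16x whatever you normally use, with dropout added to the model itself.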