Jensen Huang Responds to DeepSeek for the First Time, Says Investors Misunderstood It (Transcript Included)
Dayu News | Finance
三言Pro reports: In late January this year, the release of DeepSeek's R1 model sent shockwaves through the tech industry. Nvidia's stock fell 16.79% in response, erasing $590 billion in market value, a record in U.S. financial history. An Nvidia spokesperson said at the time: "DeepSeek is an excellent AI advancement, and a perfect example of test-time scaling." Although Nvidia's stock has since recovered, its CEO Jensen Huang had not publicly responded to the matter until now.
On Thursday, Huang responded to DeepSeek for the first time in an interview, saying that investors had misunderstood DeepSeek's advances in AI, which led the market to react incorrectly to Nvidia's stock. After DeepSeek drew attention for its low cost and high performance, investors began to question whether tech companies' massive spending on AI infrastructure was still necessary.
Huang said the market's sharp reaction stemmed from this misreading by investors. Although R1's development appears to reduce reliance on compute, the AI industry still needs substantial computing power to support post-training methods, which are what enable AI models to reason and make predictions.
"From an investor's perspective, they believed the world was divided into two stages, pre-training and inference, where inference meant asking an AI a question and instantly getting an answer. I don't know who created this misconception, but the idea is obviously wrong."
Huang noted that pre-training remains important, but post-training is "the most important part of intelligence" and "where you learn to solve problems."
He also said the enthusiasm shown around the world since R1 was open-sourced has been incredible, calling it "so incredibly exciting."
Transcript of the interview's key exchanges:
Jensen Huang:
What's really exciting, and you probably saw it, is what happened with DeepSeek: the world's first reasoning model that's open-sourced. And it is so incredibly exciting, the energy around the world as a result of R1 becoming open-sourced. Incredible.
Interviewer:
Why do people think this could be a bad thing? I think it's a wonderful thing.
Jensen Huang:
Well, first of all, I think from an investor perspective, there was a mental model that the world was pre-training, and then inference. And inference was: you ask an AI a question and it instantly gives you an answer, a one-shot answer.
I don't know whose fault it is, but obviously that paradigm is wrong. The right paradigm is pre-training, because we want to have a foundation; you need a basic level of foundational understanding of information in order to do the second part, which is post-training. So pre-training continues to be rigorous.
The second part, and this is actually the most important part of intelligence, is what we call post-training. This is where you learn to solve problems. You have foundational information: you understand how vocabulary works, how syntax works, how grammar works, and how basic mathematics works. You now take this foundational knowledge and apply it to solve problems.
So there's a whole bunch of different learning paradigms associated with post-training, and in this paradigm the technology has evolved tremendously over the last five years, and its computing needs are intensive. So people thought, "oh my gosh, pre-training is a lot less." They forgot that post-training is really quite intense.
And now the third scaling law is: the more reasoning you do, the more thinking you do before you answer a question, the better the answer. And reasoning is a fairly compute-intensive process. So I think the market responded to R1 as "oh my gosh, AI is finished," as if it dropped out of the sky and we don't need to do any computing anymore. It's exactly the opposite.
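To make the three compute regimes Huang describes concrete, here is a minimal, purely illustrative Python sketch. It is not from the interview; the cost formulas are only the common rough approximations (about 6 FLOPs per parameter per training token, about 2 per generated token), and every quantity (model size, token counts, sample counts) is a hypothetical placeholder:

# Toy cost model (illustrative only): where compute goes in the three
# regimes described above. All numbers below are hypothetical.

def pretraining_flops(params: float, tokens: float) -> float:
    """Rough transformer training cost: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

def posttraining_flops(params: float, samples: float, tokens_each: float) -> float:
    """Post-training (e.g. RL-style fine-tuning) pays both generation and update costs."""
    generation = 2 * params * samples * tokens_each  # forward passes to produce samples
    update = 6 * params * samples * tokens_each      # training updates on those samples
    return generation + update

def inference_flops(params: float, reasoning_tokens: float) -> float:
    """Test-time scaling: per-query cost grows with how many tokens the model 'thinks'."""
    return 2 * params * reasoning_tokens

if __name__ == "__main__":
    P = 70e9  # hypothetical 70B-parameter model
    print(f"pre-train : {pretraining_flops(P, 15e12):.2e} FLOPs")      # 15T training tokens
    print(f"post-train: {posttraining_flops(P, 1e6, 4e3):.2e} FLOPs")  # 1M samples, 4K tokens each
    # One-shot answer vs. a long chain-of-thought answer:
    print(f"one-shot  : {inference_flops(P, 200):.2e} FLOPs per query")
    print(f"reasoning : {inference_flops(P, 20_000):.2e} FLOPs per query")

Even in this toy model, a long chain-of-thought answer costs roughly 100 times a one-shot answer per query, which is the point behind "it's exactly the opposite": cheaper training does not imply less total compute.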
