8 Comments

Thanks for the article. We also wrote an article about DeepSeek, China's AI ecosystem, and ChatGPT as they relate to Nvidia (NVDA) stock here:

🚨 AI just got 45x cheaper—DeepSeek built a GPT-4-level model for $5.6M, and if this scales, Nvidia’s AI monopoly might not last. 🚨

https://ghginvest.substack.com/p/ai-just-got-45x-cheaperand-it-might

I think DeepSeek has attracted a fair amount of traffic and attention to various newsletters. It's no great wonder we are all writing about it. I personally cannot read enough about this news.

100%

Amazing post, thank you!!

H800s were allowed to be exported to China before October 2023. Tencent built up a stockpile, and presumably DeepSeek did too. https://www.businessinsider.com/tencent-has-big-nvidia-h800-ai-us-chip-stockpile-2023-11

The pricing listed in the chart for GPT-4o is wrong. OpenAI charges only $2.50/1M input tokens and $10/1M output tokens. For o1, which is a more recent model than 4o, they charge $15/1M input tokens and $60/1M output tokens.

Anyone using o1 with an API key should be aware that o1 is a "thinking" model. It uses tokens as it iterates on (or "thinks about") the response it will eventually give you, and OpenAI charges output-token prices for those thinking steps. You will not have access to the model's "thoughts," only the final result.
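
To make the billing concrete, here is a back-of-the-envelope cost calculation at the o1 prices quoted above. The token counts are made up purely for illustration, not real usage data.

```python
# Rough cost arithmetic for a single o1 call.
# Prices are the per-token rates quoted above; token counts are hypothetical.
INPUT_PRICE = 15 / 1_000_000    # $ per input token
OUTPUT_PRICE = 60 / 1_000_000   # $ per output token (also applied to hidden reasoning tokens)

prompt_tokens = 2_000
visible_output_tokens = 1_000
hidden_reasoning_tokens = 5_000  # billed at output rates but never shown to you

cost = (prompt_tokens * INPUT_PRICE
        + (visible_output_tokens + hidden_reasoning_tokens) * OUTPUT_PRICE)
print(f"${cost:.2f}")  # $0.39 for this hypothetical call
```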

Also, it is possible to use older models, such as gpt-3.5-turbo-0613 (released over a year ago), manually prompt "thinking" iterations, and get results somewhat similar to o1's for just $3/1M input tokens and $4/1M output tokens.
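
For anyone curious what that looks like in practice, here is a minimal sketch of the manual "thinking" pattern using the OpenAI Python SDK. The model name, prompts, and two-step structure are illustrative assumptions, not the commenter's exact setup.

```python
# Sketch: manually prompted "thinking" with a cheaper chat model.
# Assumes OPENAI_API_KEY is set in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"  # the commenter used gpt-3.5-turbo-0613; swap in what your account offers

question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
            "than the ball. How much does the ball cost?")

# Step 1: ask the model to reason step by step, without committing to an answer.
thinking = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "system", "content": "Reason step by step. Do not give a final answer yet."},
        {"role": "user", "content": question},
    ],
).choices[0].message.content

# Step 2: feed the reasoning back in and ask for only the final answer.
answer = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "user", "content": question},
        {"role": "assistant", "content": thinking},
        {"role": "user", "content": "Given your reasoning above, give only the final answer."},
    ],
).choices[0].message.content

print(answer)
```

Unlike o1, both the intermediate reasoning and the final answer are visible here, so you can inspect (and pay attention to) exactly what you are being billed for.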

🙌🏻

Interesting!
