
DeepSeek's AI Model Outperforms OpenAI, Prompting Altman to Claim Unfair Data Usage
DeepSeek, a Chinese AI research lab, has emerged as a significant competitor to OpenAI with its new DeepSeek-R1 model, showcasing advanced capabilities in mathematical reasoning and code generation while using fewer computational resources.

Originally established as Fire-Flyer under the High-Flyer hedge fund, DeepSeek has made its flagship model and six smaller variants openly available under an MIT license, allowing free development and commercialization.
The success of DeepSeek-R1 has raised concerns at OpenAI and Microsoft, which are investigating whether the Chinese company trained its model on OpenAI's model outputs. Bloomberg and the Financial Times report that such activity could violate OpenAI's terms of service.
David Sacks, dubbed the "AI Czar," suggests there is "substantial evidence" that DeepSeek used knowledge distillation, a technique in which one model learns from another by posing it millions of questions and training on the answers to mimic its reasoning process.
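To make the distillation idea concrete, here is a minimal sketch in plain Python of the core loss used in response-matching distillation: the teacher's output logits are softened with a temperature, and the student is penalized (via KL divergence) for diverging from that softened distribution. The logits below are made-up illustrative numbers; real pipelines use deep-learning frameworks and millions of queried examples.

```python
import math

def softmax(logits, temperature=1.0):
    # Higher temperature spreads probability mass across classes,
    # exposing more of the teacher's "dark knowledge".
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence KL(teacher || student) between the softened
    # teacher distribution and the student's distribution.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical logits for a single training example
teacher = [4.0, 1.0, 0.2]
student = [2.5, 1.5, 0.5]
loss = distillation_loss(teacher, student)
```

A student that perfectly matches the teacher incurs zero loss; the further its distribution drifts, the larger the penalty, which is the gradient signal that drives the student toward the teacher's behavior.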
OpenAI CEO Sam Altman contrasted copying proven technology with creating new innovations, stating: "It is (relatively) easy to copy something that you know works. It is extremely hard to do something new, risky, and difficult when you don't know if it will work."
The situation has highlighted a notable contradiction: OpenAI defends its practice of scraping internet data for training, including from potentially illegal sources, while opposing knowledge distillation techniques used by competitors. The company is currently working with the U.S. government to protect its technological capabilities from competitors while facing lawsuits over its own data collection practices.

