From dba01e2e945a4d145c850e57de4feccd20f7db3c Mon Sep 17 00:00:00 2001
From: henry01e414022
Date: Sun, 6 Apr 2025 22:37:05 +0800
Subject: [PATCH] Add 'DeepSeek Open-Sources DeepSeek-R1 LLM with Performance Comparable To OpenAI's O1 Model'

---
 ...R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md | 2 ++
 1 file changed, 2 insertions(+)
 create mode 100644 DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md

diff --git a/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
new file mode 100644
index 0000000..6b1c28e
--- /dev/null
+++ b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
@@ -0,0 +1,2 @@
+DeepSeek open-sourced DeepSeek-R1, an LLM fine-tuned with reinforcement learning (RL) to improve its reasoning capability. DeepSeek-R1 achieves results on par with OpenAI's o1 model on several benchmarks, including MATH-500 and SWE-bench.
DeepSeek-R1 is based on DeepSeek-V3, a mix of [experts](http://94.110.125.2503000) (MoE) design just recently open-sourced by [DeepSeek](https://rapid.tube). This base model is [fine-tuned utilizing](https://centerfairstaffing.com) Group Relative Policy Optimization (GRPO), a reasoning-oriented version of RL. The research group likewise performed understanding distillation from DeepSeek-R1 to open-source Qwen and Llama designs and launched numerous versions of each \ No newline at end of file