Simon Willison's Weblog


That model was trained in part using their unreleased R1 "reasoning" model. Today they've released R1 itself, along with a whole family of new models derived from that base.

There's a whole lot of stuff in the new release.

DeepSeek-R1-Zero appears to be the base model. It's over 650GB in size and, like most of their other releases, is under a clean MIT license. DeepSeek warn that "DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing." ... so they also released:

DeepSeek-R1, which "incorporates cold-start data before RL" and "achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks". That one is also MIT licensed, and is a similar size.

I don't have the ability to run models larger than about 50GB (I have an M2 with 64GB of RAM), so neither of these two models is something I can easily play with myself. That's where the new distilled models come in.

To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen.

This is a fascinating flex! They have models based on Qwen 2.5 (14B, 32B, Math 1.5B and Math 7B) and Llama 3 (Llama-3.1 8B and Llama 3.3 70B Instruct).

Weirdly those Llama models have an MIT license attached, which I'm not sure is compatible with the underlying Llama license. The Qwen models are Apache licensed, so maybe MIT is OK?

(I also just noticed the MIT license files say "Copyright (c) 2023 DeepSeek", so they may need to pay a bit more attention to how they copied those in.)

Licensing aside, these distilled models are fascinating beasts.

Running DeepSeek-R1-Distill-Llama-8B-GGUF

Quantized versions are already starting to show up. So far I've tried just one of those, unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF released by Unsloth AI, and it's really fun to play with.

I'm running it using the combination of Ollama, LLM and the llm-ollama plugin.
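If you don't already have those tools, here's a minimal setup sketch (assuming Homebrew on macOS; Ollama also ships installers at ollama.com, and LLM can equally be installed with pip or pipx):

```bash
# Install the Ollama local model runner
brew install ollama

# Install the LLM command-line tool
brew install llm
```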

First I fetched and ran the model using Ollama itself:
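```bash
# Pull the GGUF weights straight from Hugging Face and start a chat
# (the Q8_0 quantization tag is an assumption, consistent with the
# ~8.5GB download described below)
ollama run hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0
```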

This downloads around 8.5GB of model data and starts an interactive chat interface.

Once the model has been fetched, LLM can talk to it as well. I prefer using LLM for experiments because it logs everything to SQLite for later exploration.
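Those logs land in a plain SQLite file, so past experiments are easy to dig back out. A couple of useful commands:

```bash
# Show the most recently logged prompt and response
llm logs -n 1

# Print the location of the underlying SQLite database
llm logs path
```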

I installed the plugin and ran it like this:
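```bash
# Install the Ollama plugin for LLM, then prompt the model through it
# (the model ID is the name Ollama registered in the previous step)
llm install llm-ollama
llm -m 'hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0' \
  'a joke about a pelican and a walrus who run a tea room together'
```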

Each response starts with a `<think>...</think>` pseudo-XML tag containing the chain of thought used to help generate the response. For "a joke about a pelican and a walrus who run a tea room together" that started like this:

Okay, so I need to come up with a joke about a pelican and a walrus running a tea room together. Hmm, that's an interesting combination. Let me think about how these two characters might interact in a humorous situation.

First, let's consider their characteristics. Pelicans are known for their long beaks and webbed feet, often seen near the beach or water. Walruses have huge teeth, thick fur, and they're typically found in colder climates, like icebergs or snowy areas. So, combining these two into a tea room setting is already a funny image.

[...]
It then thought for 20 paragraphs before outputting the joke! Here's the full transcript and here's the output at the end, after the `</think>` tag:

The Joke:

A pelican and a walrus decide to open a quaint little tea room together. The walrus, with its huge size, struggles to find comfortable chairs, so it sits on the table by accident, knocking over the teapot. Meanwhile, the pelican, trying to help, uses its beak to place saucers on the table, causing a few spills.

After a series of comical mishaps, the walrus looks up and says with a grin, "This isn't so fishy anymore." The pelican smirks and replies, "Maybe not, but we do have a lot of krill in our tea!"

Explanation:

- Setup: Pelican and Walrus run an unlikely tea room. Their sizes lead to comedic chaos.