Hexadecimal · posted by Track_Shovel@slrpnk.net to Lemmy Shitpost@lemmy.world · English · 3 days ago
coldsideofyourpillow@lemmy.cafe · English · edited 7 hours ago
You don’t need a background in coding at all. In fact, the skills involved in running machine learning models and in programming are almost completely separate.
Download Ollama.
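On Linux, Ollama's site offers a one-line install script (other platforms have installers on ollama.com). Note that piping curl to sh executes remote code, so review the script first if that concerns you:

```shell
# Official install script from ollama.com (Linux).
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the CLI is on your PATH:
ollama --version
```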
Depending on the power of your GPU, run one of the following commands:
- DeepSeek-R1-Distill-Qwen-1.5B: `ollama run deepseek-r1:1.5b`
- DeepSeek-R1-Distill-Qwen-7B: `ollama run deepseek-r1:7b`
- DeepSeek-R1-Distill-Llama-8B: `ollama run deepseek-r1:8b`
- DeepSeek-R1-Distill-Qwen-14B: `ollama run deepseek-r1:14b`
- DeepSeek-R1-Distill-Qwen-32B: `ollama run deepseek-r1:32b`
- DeepSeek-R1-Distill-Llama-70B: `ollama run deepseek-r1:70b`
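If you want a rough way to match a tag from the list above to your card, the sketch below picks the largest model whose estimated footprint fits in your VRAM. The ~0.75 GB per billion parameters figure (for 4-bit quantized weights, which Ollama uses by default) and the overhead constant are my own rule-of-thumb assumptions, not official numbers:

```python
# (tag, parameter count in billions), smallest to largest
MODELS = [
    ("deepseek-r1:1.5b", 1.5),
    ("deepseek-r1:7b", 7),
    ("deepseek-r1:8b", 8),
    ("deepseek-r1:14b", 14),
    ("deepseek-r1:32b", 32),
    ("deepseek-r1:70b", 70),
]

def suggest_model(vram_gb: float, gb_per_b: float = 0.75,
                  overhead_gb: float = 1.5) -> str:
    """Return the largest tag whose estimated memory use fits in vram_gb.

    gb_per_b and overhead_gb are rough assumptions, not measured values.
    """
    best = MODELS[0][0]  # fall back to the smallest model
    for tag, params_b in MODELS:
        if params_b * gb_per_b + overhead_gb <= vram_gb:
            best = tag
    return best

print(suggest_model(8))   # a typical 8 GB card
print(suggest_model(24))  # a typical 24 GB card
```

Models that don't fit in VRAM will still run (Ollama spills to system RAM), just much more slowly.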
Bigger models mean better output, but also longer generation times and higher VRAM requirements.
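Besides the interactive CLI, Ollama serves a local HTTP API (port 11434 and the `/api/generate` endpoint are from Ollama's API docs), so you can script against whichever model you pulled. A minimal stdlib-only sketch:

```python
import json
import urllib.request

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_request("deepseek-r1:7b", "Why is the sky blue?")
# Uncomment once the Ollama server is running and the model is pulled:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```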