
# Install Ollama

```sh
curl -fsSL https://ollama.com/install.sh | sh
```
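On Linux the installer sets up both the `ollama` binary and a systemd service. As a quick sanity check that the CLI landed on your PATH:

```sh
# Prints the installed version if the install succeeded.
ollama --version
```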

I prefer to make the model files available to all computers in our working group, so I put them on a NAS under /data_1/deepseek.

In that case you want to edit /etc/systemd/system/ollama.service and add your directory to the service environment (keep the existing PATH entry as it is):

```ini
Environment=[existing PATH entry] "OLLAMA_MODELS=/data_1/deepseek/models" "HOME=/data_1/deepseek"
```
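Editing the unit file in place works, but a package upgrade may overwrite it. An alternative sketch using a standard systemd drop-in override, which survives upgrades (the directory and file name follow systemd's drop-in convention; `Environment=` lines in a drop-in are added on top of the main unit, so PATH is untouched):

```sh
# Create a drop-in override instead of editing the unit directly.
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf >/dev/null <<'EOF'
[Service]
Environment="OLLAMA_MODELS=/data_1/deepseek/models"
Environment="HOME=/data_1/deepseek"
EOF
```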

After changing the configuration, reload the unit files and restart the service:

```sh
systemctl daemon-reload
systemctl restart ollama.service
```

Check the status of the service:

```sh
systemctl status ollama.service
```
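Independent of systemd, you can also ask the server itself; by default Ollama listens on localhost port 11434:

```sh
# Returns a small JSON blob with the server version if the service is up.
curl http://localhost:11434/api/version
```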

Now we can get the models:

```sh
ollama pull deepseek-r1:1.5b
ollama pull deepseek-r1:7b
ollama pull deepseek-r1:8b
ollama pull deepseek-r1:14b
ollama pull deepseek-r1:32b
ollama pull deepseek-r1:70b
```
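If you want several of them in one go, a short loop does the same; the tag list here is just an example, trim it to what your hardware can actually run:

```sh
# Pull a set of deepseek-r1 tags in sequence.
for tag in 1.5b 7b 8b 14b 32b 70b; do
    ollama pull "deepseek-r1:${tag}"
done
```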

However, you want to check first which model fits your hardware: the size shown by `ollama list` has to fit into your GPU VRAM (or, for CPU inference, into your system RAM), with some headroom left for the context window:

```sh
$ ollama list | grep deepseek
deepseek-r1:1.5b                 a42b25d8c10a    1.1 GB
deepseek-r1:671b                 739e1b229ad7    404 GB
deepseek-r1:32b                  38056bbcbb2d    19 GB
deepseek-r1:14b                  ea35dfe18182    9.0 GB
deepseek-r1:7b                   0a8c26691023    4.7 GB
deepseek-r1:70b                  0c1615a8ca32    42 GB
deepseek-r1:8b                   28f8fd6cdc67    4.9 GB
```
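To see what you have to work with, the usual tools apply (the `nvidia-smi` query assumes an NVIDIA GPU with drivers installed; other vendors need their own tools):

```sh
# System RAM, human readable.
free -h
# Total VRAM per NVIDIA GPU.
nvidia-smi --query-gpu=name,memory.total --format=csv
```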