Getting started with Ollama

Now we will install Ollama, a tool that lets you download and interact with several LLMs locally. First, make sure curl is installed:

sudo apt install curl

Now we will install Ollama:

curl -fsSL https://ollama.com/install.sh | sh
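
The install script sets Ollama up as a systemd service and starts it. You can confirm the install worked:

ollama --version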

Now we can start to pull models, but the default storage location is the /usr/share/ollama/.ollama/models directory, and these models can be big, so if you have some other spot where you store large files, consider pointing the models folder there. This can be set per user, but I prefer to set it for all users, since I may need a system user to run Ollama in the background, and duplicating the models across drives takes a lot of space. To set it system-wide, add this line to /etc/profile:

export OLLAMA_MODELS=/<Your directory path>
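
For example, assuming your large-file drive is mounted at /mnt/data (a hypothetical path, substitute your own):

export OLLAMA_MODELS=/mnt/data/ollama-models

Make sure the directory exists and that the ollama system user created by the installer can write to it:

sudo mkdir -p /mnt/data/ollama-models
sudo chown -R ollama:ollama /mnt/data/ollama-models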

Reboot, or log out and back in, so the new environment variable is picked up; /etc/profile is only read when a login session starts, so an existing terminal will not see the change. Next we verify Ollama is running and pull our models. I prefer to start with phi4:14b and deepseek-r1:14b. Together these take roughly 20 GB of drive space, and more will be needed for other features and models, which is why we moved the models to a drive with room to expand.
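
To verify the service is up before pulling anything:

systemctl status ollama

You can also hit the local API with curl http://localhost:11434, which should respond with "Ollama is running".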

ollama pull phi4:14b
ollama pull deepseek-r1:14b
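
Once the downloads finish, you can confirm the models are available:

ollama list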

So now we have some models to play with:

ollama run deepseek-r1:14b
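
Inside the interactive session, type /bye to exit. From another terminal, ollama ps shows which models are currently loaded into memory.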

Give it a prompt and go; you now have a command-line Ollama server. But what if it's on a network and you want it to be accessible from other machines? Edit the service file:

sudo nano /etc/systemd/system/ollama.service

In the [Service] section, add:

Environment="OLLAMA_HOST=0.0.0.0"

This makes Ollama listen on all available interfaces; you can also use the IP address of a specific network device instead.
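
After editing the unit file, reload systemd and restart the service so the change takes effect:

sudo systemctl daemon-reload
sudo systemctl restart ollama

From another machine on the network you can then check that it is reachable (replace <server-ip> with your server's address):

curl http://<server-ip>:11434

which should respond with "Ollama is running".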