Working with Ollama
Learn how to connect your Ollama to Octarine to use Writing Assistant and Ask Octarine
This guide provides step-by-step instructions to install Ollama, load a model, and use it in Octarine. This setup allows you to run large language models (LLMs) on your machine without relying on external services.
Prerequisites
- Operating System: Compatible with macOS, Windows, or Linux.
- Hardware Requirements: Ensure your system has sufficient resources (CPU, RAM, and storage) to run the desired LLMs.
Step 1: Install Ollama
- Download Ollama: Visit the official Ollama website and download the installer appropriate for your operating system.
- Install Ollama:
  - macOS: Open the downloaded .dmg file and follow the on-screen instructions.
  - Windows: Run the downloaded .exe file and complete the installation wizard.
  - Linux: Follow the installation instructions provided on the Ollama website for your specific distribution (a common one-line install is shown below).
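For Linux, the Ollama website currently documents a convenience install script. Treat the command below as a sketch and check the site for the up-to-date version before running it:

    # Official convenience script from ollama.com (review it before piping to your shell)
    curl -fsSL https://ollama.com/install.sh | sh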
Step 2: Pull a Model
After installing Ollama, you need to download a model to use.
- Open Terminal or Command Prompt:
  - On macOS/Linux, open the Terminal.
  - On Windows, open Command Prompt or PowerShell.
- Pull a Model:
  - Use the following command to download a model (e.g., llama2):
    ollama pull llama2
  - Replace llama2 with the name of the model you wish to use. A list of available models can be found in the Ollama documentation; a fuller example follows this list.
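For reference, a typical pull-and-verify session looks like the sketch below. The model names and tags are only examples; check the Ollama model library for what is currently available:

    # Download the default llama2 model (example name; substitute the model you want)
    ollama pull llama2
    # Optionally pull a specific variant/tag, e.g. a larger parameter size
    ollama pull llama2:13b
    # Confirm the models were downloaded and are available locally
    ollama list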
Step 3: Start the Ollama Service
- Run the Ollama Service:
  - In the same terminal or command prompt, start the Ollama service:
    ollama serve
  - This command starts the Ollama server, which runs by default on http://localhost:11434 (you can verify this with the check below).
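Before configuring Octarine, you can confirm the server is reachable. A minimal check, assuming the default address and the standard Ollama HTTP API:

    # Should return a short "Ollama is running" message if the server is up
    curl http://localhost:11434
    # Lists the locally available models as JSON
    curl http://localhost:11434/api/tags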
Step 4: Setting it up in Octarine
With the Ollama service running, you can now connect Octarine to it.
- Head over to Settings → AI Assistant → AI Providers
- Click on Ollama
- Enter the server URL from the step above (by default, http://localhost:11434) in the input box and press Save.
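If Octarine does not list any models after saving, it can help to confirm that the model responds outside Octarine first. A quick sketch using Ollama's generate endpoint (the model name is just an example; use one you pulled earlier, and note the quoting shown is for macOS/Linux shells):

    # One-off, non-streaming completion against the local server
    curl http://localhost:11434/api/generate -d '{"model": "llama2", "prompt": "Say hello in one sentence.", "stream": false}'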
Step 5: Using it in Writing Assistant/Ask Octarine
- Open Writing Assistant or Ask Octarine
- Click on the model selector and search for Ollama models
- Select one and have fun!
Additional Considerations
- Security: By default, the Ollama API is accessible only from localhost, ensuring that it is not exposed to external networks (see the note after this list if you need to change that).
- Model Management: To list all models available on your system, use:
    ollama list
  This command displays all models currently available for use.
- Stopping the Service: To stop the Ollama service, return to the terminal or command prompt where ollama serve is running and press Ctrl+C.
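If you ever need to reach this Ollama instance from another machine, the bind address can usually be changed with the OLLAMA_HOST environment variable before starting the server. Treat the value below as a sketch to verify against the current Ollama documentation, and keep in mind the API has no built-in authentication:

    # Bind to all interfaces instead of only localhost (do this only on a trusted network)
    OLLAMA_HOST=0.0.0.0:11434 ollama serve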