Working with AI

Working with Ollama

Learn how to connect Ollama to Octarine so you can use Writing Assistant and Ask Octarine

This guide provides step-by-step instructions to install Ollama, load a model, and use it in Octarine. This setup lets you run large language models (LLMs) on your own machine without relying on external services.

Prerequisites

  • Operating System: macOS, Windows, or Linux.
  • Hardware: enough CPU, RAM, and storage to run the models you plan to use; larger models require more memory.

Step 1: Install Ollama

  1. Download Ollama:

    • Visit the official download page at https://ollama.com/download and download the installer for your operating system.
  2. Install Ollama:

    • macOS: Open the downloaded .dmg file and follow the on-screen instructions.
    • Windows: Run the downloaded .exe file and complete the installation wizard.
    • Linux: Follow the installation instructions provided on the Ollama website for your specific distribution.
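
For Linux specifically, the Ollama website typically provides a one-line install script. The exact command may change, so check the site for the current version, but at the time of writing it looks like this:

    curl -fsSL https://ollama.com/install.sh | sh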

Step 2: Pull a Model

After installing Ollama, you need to download a model to use.

  1. Open Terminal or Command Prompt:

    • On macOS/Linux, open the Terminal.
    • On Windows, open Command Prompt or PowerShell.
  2. Pull a Model:

    • Use the following command to download a model (e.g., llama2):

      ollama pull llama2
    • Replace llama2 with the name of the model you wish to use. A list of available models can be found in the Ollama documentation.
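
If you want to sanity-check the download before moving on, you can optionally start a short interactive session with the model (this is not required for Octarine). Using the llama2 example from above, type /bye to exit when you are done:

    ollama run llama2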

Step 3: Start the Ollama Service

  1. Run the Ollama Service:

    • In the same terminal or command prompt, start the Ollama service:

      ollama serve
    • This command starts the Ollama server, which runs by default on http://localhost:11434.
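
To confirm the server is reachable before continuing, you can send a quick request to the root URL; it should respond with a short "Ollama is running" message. Note that if you installed the desktop app, it may already be running the server in the background, in which case ollama serve will report that the port is in use.

    curl http://localhost:11434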

Step 4: Set It Up in Octarine

With the Ollama service running, you can interact with the model via its API.

  1. Head over to Settings → AI Assistant → AI Providers
  2. Click on Ollama
  3. Enter the server URL from Step 3 (http://localhost:11434 by default) in the input box and press Save.
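
If Octarine reports a connection problem, you can check that the API itself responds by calling it directly from a terminal. This is just a quick sketch using the llama2 model pulled in Step 2; a successful call returns a JSON response containing the model's answer:

    curl http://localhost:11434/api/generate -d '{"model": "llama2", "prompt": "Hello", "stream": false}'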

Step 5: Use It in Writing Assistant/Ask Octarine

  1. Open Writing Assistant or Ask Octarine
  2. Click on the model selector and search for Ollama models
  3. Select a model and have fun!

Additional Considerations

  • Security: By default, the Ollama API is accessible only from localhost, ensuring that it is not exposed to external networks.

  • Model Management: To list all available models on your system, use:

    ollama list

    This command displays all models currently available for use; see below if you want to remove a model you no longer need.

  • Stopping the Service: To stop the Ollama service, return to the terminal or command prompt where ollama serve is running and press Ctrl+C.
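
If you want to free up disk space, a downloaded model can be removed at any time. For example, to delete the llama2 model used in this guide:

    ollama rm llama2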