
Configuring AI

Easily set up and manage connections to AI providers, including both cloud-based APIs and local models.

Octarine integrates with multiple AI providers, allowing you to choose the language models that best suit your workflow. The AI system supports both cloud-based services and local models through Ollama, giving you flexibility in how you incorporate AI assistance into your note-taking.

AI Features are only available to Pro License users.

Setting Up AI Providers

Access AI configuration through Settings → AI → Providers. Octarine supports multiple providers that can be configured simultaneously:

  • Cloud Providers: Require your own API key, obtained from the provider's console
  • Ollama: Connects to locally running models (no API key required)

Each provider can be enabled independently by entering valid credentials. The configuration automatically saves and validates your API keys, showing a checkmark when successfully connected.
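The validation step can be sketched as a cheap, read-only request against a provider's model-listing endpoint. The snippet below is illustrative only, not Octarine's actual implementation; it uses OpenAI's `/v1/models` endpoint as one concrete example, and the `validate_key` helper is an assumption.

```python
import urllib.request
import urllib.error


def auth_headers(api_key: str) -> dict:
    """Bearer-token headers, the scheme most cloud providers use."""
    return {"Authorization": f"Bearer {api_key}"}


def validate_key(api_key: str, base_url: str = "https://api.openai.com/v1") -> bool:
    """Return True if the key can list models; False on any failure.

    Listing models is read-only and inexpensive, which makes it a
    reasonable connectivity-and-credentials check.
    """
    req = urllib.request.Request(f"{base_url}/models", headers=auth_headers(api_key))
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status == 200
    except urllib.error.URLError:  # covers HTTPError (401, 403, ...) too
        return False
```

A `True` result corresponds to the checkmark shown in the settings UI; any HTTP error or unreachable host maps to a failed validation.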

Selecting Models

Once providers are configured, available models appear in the model selector dropdown. The interface displays:

  • Provider grouping: Models are organized by their respective providers
  • Model capabilities: Each model shows its context window size and key features
  • Quick switching: Change models on-the-fly without interrupting your workflow
  • The last selected model persists across sessions

Models can be changed at any time through the AI panel or via the quick switcher in the editor toolbar when AI features are active.
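The provider grouping above can be illustrated with a small sketch. The data shapes here are hypothetical, not Octarine's internal format:

```python
from collections import defaultdict

# Hypothetical model records: (provider, model id, context window in tokens)
MODELS = [
    ("OpenAI", "gpt-4o", 128_000),
    ("Anthropic", "claude-sonnet", 200_000),
    ("Ollama", "llama3", 8_192),
]


def group_by_provider(models):
    """Organize models under their provider, as the selector dropdown does."""
    groups = defaultdict(list)
    for provider, model_id, ctx in models:
        groups[provider].append((model_id, ctx))
    return dict(groups)
```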

Ollama Integration (Local Models)

Ollama enables running AI models locally on your machine, providing privacy and offline functionality:

  • Automatic detection: Octarine detects if Ollama is running on the default port (11434)
  • Model discovery: Available Ollama models are automatically listed in the model selector
  • No configuration needed: Simply start Ollama and select your preferred local model
  • Performance indicators: Local models show system resource usage during generation

To use Ollama:

  1. Install Ollama on your system and pull desired models
  2. Ensure Ollama is running (ollama serve)
  3. Select any Ollama model from the dropdown in Octarine

The connection status indicator shows when Ollama is properly connected, and models refresh automatically when new ones are pulled through the Ollama CLI.
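Detection and model discovery rely on Ollama's local HTTP API: a `GET /api/tags` request on port 11434 returns the installed models as JSON. The sketch below shows the idea; it is not Octarine's code.

```python
import json
import urllib.error
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default port


def parse_model_names(tags_json: str) -> list:
    """Extract model names from an /api/tags response body."""
    payload = json.loads(tags_json)
    return [m["name"] for m in payload.get("models", [])]


def list_ollama_models(base_url: str = OLLAMA_URL) -> list:
    """Return installed model names, or [] if Ollama is not running."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            return parse_model_names(resp.read().decode())
    except urllib.error.URLError:
        return []
```

An empty list from an unreachable host is what drives the "not connected" status; re-polling this endpoint is also how newly pulled models can show up without any manual refresh.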

Managing Multiple Providers

  • Unified model list: When multiple providers are configured, models from all of them appear in a single list
  • Seamless switching: Move between cloud and local models without reconfiguration
  • Credential management: API keys are stored on device and can be updated or removed at any time

Error states are clearly indicated if a provider becomes unavailable or credentials expire, allowing you to quickly identify and resolve connection issues.
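These error states can be told apart by the kind of failure: an HTTP 401 or 403 usually means invalid or expired credentials, while a refused connection means the provider (or the local Ollama server) is unreachable. A minimal sketch of that classification, with hypothetical names and labels:

```python
def classify_provider_error(status_code=None, connection_refused=False):
    """Map a failed request to a user-facing error state (illustrative only)."""
    if connection_refused:
        return "provider unavailable"
    if status_code in (401, 403):
        return "credentials invalid or expired"
    if status_code == 429:
        return "rate limited"
    return "unknown error"
```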