Learn how to connect online and offline Large Language Model providers to your VAKAV Autonomous platform.
1. Introduction
The VAKAV Autonomous platform has a unique capability: it can connect to multiple AI “brains.” These “brains” are Large Language Models (LLMs) responsible for text analysis and generation. Depending on your needs, you can choose from two main categories of providers and connect them to the platform:
Online Providers: Cloud services offered by companies such as Google, OpenAI, and others.
Offline Providers: Software like Ollama, which allows you to run LLMs on your own servers.
This guide helps you understand the differences between the two approaches and walks you through the general process of connecting them to Autonomous.
2. Accessing the Feature
Configuration for language model providers is managed at two levels:
Global Level: To add and configure providers for the entire instance, navigate to the main settings sidebar, click on "AI Providers," and then select the "Large Language Model" submenu. This section manages the default settings.
Workspace-specific Level: In addition to the global settings, you can select a different language model for each workspace individually. To do this, go to the settings for that specific workspace and, from the "Chat Settings" tab, change the language model provider from the system default.
3. Key Components
The LLM settings page includes the following sections:
Provider Selection: A dropdown menu listing all supported providers.
Configuration Fields: These appear after you select a provider and typically include:
API Key: Required for online providers. Think of it as a password to access their service.
Base URL: For offline providers, this is the network address where the software is running on your server.
Model Selection: Allows you to pick the specific model you want to use (e.g., gpt-oss:20b).
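To make these fields concrete, the sketch below shows the kind of values you would gather before filling in the form, expressed as Python dictionaries purely for illustration. The field names mirror the settings page, not any actual Autonomous configuration file or API, and the model names are examples.

    # Illustrative values only -- gather these before opening the settings page.

    # An online provider typically needs an API key and a model name.
    online_provider = {
        "provider": "gemini",            # chosen from the provider dropdown
        "api_key": "YOUR_API_KEY_HERE",  # issued by the provider; keep it secret
        "model": "gemini-1.5-flash",     # the specific model to use
    }

    # An offline provider needs the address of your own server instead.
    offline_provider = {
        "provider": "ollama",
        "base_url": "http://172.17.0.1:11434",  # where Ollama is listening
        "model": "gpt-oss:20b",                 # a model installed on that server
    }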
4. Core Actions
Below are the typical steps for connecting each type of provider.
Action 1: Connecting an Online Provider (Example: Google Gemini)
Obtain an API Key:
Sign up at the provider’s website (e.g., Google AI Studio) and get an API key. This long, unique text string identifies you to the service. Keep it confidential, just like a password.

Select Provider in Autonomous:
From the LLM settings dropdown, choose the provider (e.g., Gemini).
Enter Configuration Details:
The configuration fields specific to Gemini will appear. Paste your API key into the corresponding field.
Save & Use:
Save your settings. The Gemini models are now available for use across all your workspaces.
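Before (or after) pasting the key, you can confirm that it works by sending a small request to the public Gemini REST API. The sketch below is an optional sanity check, not part of Autonomous itself; it assumes your key is exported as a GEMINI_API_KEY environment variable and uses gemini-1.5-flash as an example model name.

    import os
    import requests

    # Minimal sanity check against the public Gemini REST endpoint.
    api_key = os.environ["GEMINI_API_KEY"]

    url = (
        "https://generativelanguage.googleapis.com/v1beta/"
        "models/gemini-1.5-flash:generateContent"
    )
    payload = {"contents": [{"parts": [{"text": "Reply with the word OK."}]}]}

    resp = requests.post(url, params={"key": api_key}, json=payload, timeout=30)
    resp.raise_for_status()  # an invalid key typically fails here with 400/403
    print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])

If the script prints a response, the key is valid and ready to use in Autonomous.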
Action 2: Connecting an Offline Provider (Example: Ollama)
Install on Your Server:
Your technical team needs to install and set up the provider software (e.g., Ollama) on one of your organization’s servers.
Refer to Official Documentation:
Installation and model download instructions vary by provider. Always check the official documentation. For Ollama, visit ollama.com.
Select Provider & Configure in Autonomous:
From the LLM settings page, choose Ollama from the list. Enter your server details:
Ollama Base URL: The network address where Ollama is running (e.g., http://172.17.0.1:11434).
Ollama Model: Once connected, you can select from the models installed on your Ollama server.
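If the model dropdown stays empty after you enter the Base URL, it usually means the server is unreachable from the Autonomous host. The sketch below is a quick connectivity check against Ollama's /api/tags endpoint, which lists the models installed on the server; the address is the same example value used above, so substitute your own.

    import requests

    # Verify that the Ollama server is reachable and list its installed models.
    base_url = "http://172.17.0.1:11434"  # replace with your Ollama Base URL

    resp = requests.get(f"{base_url}/api/tags", timeout=10)
    resp.raise_for_status()  # a connection error here means the address is
                             # wrong or the server is not reachable

    for model in resp.json().get("models", []):
        print(model["name"])  # e.g., "gpt-oss:20b" -- these are the names that
                              # appear in the Ollama Model dropdown

Run this from the machine hosting Autonomous (or its container network) so the check reflects the same network path the platform will use.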
5. Tips & Recommendations
Key Difference – Online vs. Offline:
Online services are quicker and easier to set up and require no hardware of your own, but your data is sent to external servers for processing. Offline solutions require technical setup and your own hardware, but they maximize security and privacy, since your data never leaves your organization's network.
API Key Security:
Treat your API keys as confidential. Never share them publicly or hardcode them into scripts; one way to keep them out of your code is shown in the sketch at the end of this section.
Always Refer to Official Docs:
For troubleshooting or advanced configurations, the provider’s official documentation is the most accurate and up-to-date resource.
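As referenced above, a common way to keep keys out of source code is to read them from an environment variable. A minimal sketch, assuming the GEMINI_API_KEY variable name used in the earlier example:

    import os

    # Read the key from the environment rather than embedding it in the script,
    # so it never ends up in version control or shared snippets.
    api_key = os.environ.get("GEMINI_API_KEY")
    if not api_key:
        raise SystemExit("Set GEMINI_API_KEY before running this script.")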