Control Panel Guide
gtadmin edited this page 2026-01-10 03:24:25 +00:00


Accessing via local network: You can connect to the host running GT AI OS with a web browser over the local network. You can also put it behind Cloudflare with a fully qualified domain name (FQDN) once the Control Panel and Tenant are fully configured.

This guide walks you through configuring GT AI OS after installation. Follow these steps in order to secure your installation and set up your private AI environment.


Quick Reference

| Page | What It Does |
|---|---|
| Tenant | Configure your organization settings (landing page) |
| Users | Create and manage user accounts |
| Models | View and configure available AI models |
| API Keys | Add cloud AI provider keys (NVIDIA, Groq) |
| Settings | Enable two-factor authentication |

Step 1: First Login

  1. Open your web browser
  2. Go to http://localhost:3002
  3. You will see the Tenant login page
  4. Enter the default credentials:
    • Email: gtadmin@test.com
    • Password: Test@123
  5. Click Sign In

Security Notice: The default account is publicly known. You must create your own admin account and delete the default one. Continue to Step 2.


Step 2: Create Your Admin Account on the Control Panel

Go to http://localhost:3001

Before doing anything else, create your own super admin account.

  1. In the left sidebar, click Users
  2. Click the Create User button (top right)
  3. Fill in the form:
    • Email: Enter your email address (e.g., admin@yourcompany.com)
    • Full Name: Enter your full name
    • Password: Create a strong password (at least 8 characters, mix of letters, numbers, symbols)
    • Confirm Password: Type the same password again
    • User Type: Select Super Admin from the dropdown
    • Require TFA: Check this box
  4. Click Create User
  5. You should see a success message and your new user in the list

Verification: Your new user should appear in the Users list with "Super Admin" badge.
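The password rule above (at least 8 characters with a mix of letters, numbers, and symbols) can be sketched as a quick check. The function name and rules here are illustrative, not part of GT AI OS:

```python
import string

def is_strong_password(password: str) -> bool:
    """Illustrative check mirroring the guide's rule: 8+ characters
    with at least one letter, one digit, and one symbol."""
    has_letter = any(c.isalpha() for c in password)
    has_digit = any(c.isdigit() for c in password)
    has_symbol = any(c in string.punctuation for c in password)
    return len(password) >= 8 and has_letter and has_digit and has_symbol
```

Note that the default Test@123 passes this rule, which is exactly why it must still be replaced: it is publicly known.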


Step 3: Verify Your New Account Works

Before deleting the default account, make sure your new account works:

  1. Click your profile name (top right corner) or find the Logout button
  2. Click Logout
  3. On the login page, enter YOUR new credentials (not the default ones)
  4. Click Sign In
  5. You'll be prompted to set up TFA now (see Step 8)
  6. Verify you can see the Control Panel (Tenant page is the landing page)

If login fails: Go back to http://localhost:3001, log in with the default credentials, and check your new user was created correctly.


Step 4: Delete the Default Admin Account

Now that your account works, remove the default account for security:

  1. In the left sidebar, click Users
  2. Find gtadmin@test.com in the user list
  3. Click the three-dot menu (⋮) on the right side of that row
  4. Click Delete
  5. A confirmation dialog will appear
  6. Click Delete to confirm

Verification: The gtadmin@test.com user should no longer appear in the list.

Important: After this step, the default credentials (gtadmin@test.com / Test@123) will no longer work. Make sure you remember your new credentials!


Step 5: Configure Your Tenant Name

The default tenant is named "GT AI OS". You can update it to your organization name:

  1. Click Tenant in the left sidebar (or you're already there - it's the landing page)
  2. You'll see one tenant: "GT AI OS"
  3. Click the Edit button (pencil icon) for this tenant
  4. Update the fields:
    • Name: Enter your organization name (e.g., "Acme Corporation")
    • Frontend URL: Leave as http://localhost:3002 (or enter your domain if you have one)
  5. Click Save

Verification: The tenant name in the list should show your new name.


Step 6: Add Your External AI Inference Provider API Key

GT AI OS needs access to local or external inference to run agents. To use external AI models, add an API key from an AI inference provider. Choose at least one provider. Both NVIDIA and Groq offer free tiers - this guide covers getting started with free API access.

Option A: Add NVIDIA NIM API Key

NVIDIA NIM offers GPU-optimized inference powered by DGX Cloud. NVIDIA Developer Program members get free, rate-limited prototyping access.

Get your free API key:

  1. Go to https://build.nvidia.com/
  2. Sign in or create a free NVIDIA account
  3. Navigate to any model page
  4. Click Get API Key and copy it

Add the key to GT AI OS:

  1. In Control Panel, click API Keys in the left sidebar
  2. Click Add API Key
  3. Fill in:
    • Provider: Select NVIDIA
    • API Key: Paste your NVIDIA API key
  4. Click Save
  5. Click Test next to your new key to verify it works

Verification: You should see a green checkmark or "Valid" status after testing.

Option B: Add Groq API Key

Groq offers fast inference powered by their custom LPU hardware. The free tier is rate-limited.

Get your free Groq API key:

  1. Go to https://console.groq.com/
  2. Sign in or create a free account
  3. Click API Keys in the left sidebar
  4. Click Create API Key
  5. Copy the key (it starts with gsk_)

Add the key to GT AI OS:

  1. In Control Panel, click API Keys in the left sidebar
  2. Click Add API Key
  3. Fill in:
    • Provider: Select Groq
    • API Key: Paste your Groq API key
  4. Click Save
  5. Click Test to verify

Verification: You should see a green checkmark or "Valid" status after testing.
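Both providers expose OpenAI-compatible chat-completions endpoints (the same URLs used when adding models later in this guide), so a key can also be exercised outside the Control Panel. This sketch only builds the request; the model IDs and helper name are illustrative - substitute your own key before sending:

```python
import json
import urllib.request

# Endpoint URLs as used elsewhere in this guide.
ENDPOINTS = {
    "nvidia": "https://integrate.api.nvidia.com/v1/chat/completions",
    "groq": "https://api.groq.com/openai/v1/chat/completions",
}

def build_chat_request(provider: str, api_key: str, model_id: str, prompt: str):
    """Return an urllib Request for an OpenAI-compatible chat completion."""
    payload = {
        "model": model_id,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 32,
    }
    return urllib.request.Request(
        ENDPOINTS[provider],
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# To actually send it (requires a valid key and network access):
# with urllib.request.urlopen(build_chat_request("groq", "gsk_...", "llama-3.1-8b-instant", "ping")) as r:
#     print(json.load(r)["choices"][0]["message"]["content"])
```

A 401 response here means the key itself is wrong; a 404 usually means the model ID is wrong.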


Step 7: Verify Models Are Available

GT AI OS automatically loads pre-configured models during installation. Verify they're available:

  1. In the left sidebar, click Models
  2. You should see 20+ models already listed

Preloaded Models

GT AI OS comes preloaded with a number of model configurations. Valid API keys are required when using external inference.

NVIDIA NIM Models (requires NVIDIA API key)

| Model | Strengths |
|---|---|
| NVIDIA Llama 3.3 Nemotron Super 49B | Flagship reasoning model, best accuracy/throughput on single GPU |
| NVIDIA Llama 3.1 Nemotron Ultra 253B | Maximum accuracy for scientific reasoning, math, and coding |
| NVIDIA Llama 3.1 Nemotron Nano 8B | Cost-effective, optimized for edge devices and low latency |
| NVIDIA Meta Llama 3.3 70B Instruct | Latest Meta Llama, excellent instruction following |
| NVIDIA Meta Llama 3.1 405B Instruct | Largest open-source LLM, exceptional quality across all tasks |
| NVIDIA Meta Llama 3.1 70B Instruct | Excellent balance of quality and speed |
| NVIDIA Meta Llama 3.1 8B Instruct | Fast and cost-effective for simpler tasks |
| NVIDIA DeepSeek R1 | Enhanced reasoning, reduced hallucination, strong math/coding |
| NVIDIA DeepSeek V3 | Hybrid Think/Non-Think modes, 128K context, strong tool use |
| NVIDIA Mistral Large | State-of-the-art general purpose MoE model |
| NVIDIA Qwen 3 235B | Ultra-long context (131K) with strong multilingual support |
| NVIDIA Kimi K2 Instruct | Long context window with enhanced reasoning capabilities |
| NVIDIA OpenAI GPT-OSS 120B | Production-grade reasoning, fits single H100 GPU |
| NVIDIA OpenAI GPT-OSS 20B | Low latency, runs in 16GB VRAM |

Groq Models (requires Groq API key)

| Model | Strengths |
|---|---|
| Groq Moonshot AI Kimi K2 | Highest quality, 1T parameters, massive 262K context window |
| Groq Compound AI Search | Intelligent blend of GPT-OSS-120B + Llama 4 Scout |
| Groq OpenAI GPT OSS 120B | Large open-source model, strong general performance |
| Groq OpenAI GPT OSS 20B | Medium open-source model, good balance of speed and quality |
| Groq Meta Llama 4 Maverick 17B | MoE architecture (17Bx128E), efficient inference |
| Groq Llama 3.1 8B Instant | Ultra-fast responses, lowest latency |
| Groq Llama Guard 4 12B | Safety and content moderation |

Embedding Model (on Ubuntu 24.04 x86_64 it uses an NVIDIA GPU if present; on NVIDIA DGX OS 7 it uses the Grace Blackwell chip; on macOS with Apple Silicon (M1 or later) it uses the CPU)

| Model | Description |
|---|---|
| BGE-M3 Multilingual Embedding | Powers document search and RAG |

Adding More Models

You can add models beyond the preloaded ones from NVIDIA NIM or Groq, as well as from Ollama, for offline local AI inference. See Ollama Setup to configure local models.

Adding NVIDIA NIM Models

  1. Go to https://build.nvidia.com/explore/discover
  2. Browse available models and click on one you want to add
  3. Note the Model ID from the API example (e.g., meta/llama-3.2-3b-instruct)
  4. Open Control Panel: http://localhost:3001
  5. Go to Models → Add Model
  6. Fill in the fields:
| Field | Value |
|---|---|
| Model ID | The model ID from NVIDIA (e.g., meta/llama-3.2-3b-instruct) |
| Name | A friendly name (e.g., "NVIDIA Llama 3.2 3B") |
| Provider | nvidia |
| Model Type | LLM |
| Endpoint URL | https://integrate.api.nvidia.com/v1/chat/completions |
| Context Window | Check the model page for context length |
| Max Tokens | Usually 4096 or 8192 |
  7. Click Save
  8. The model will appear in your agent's model dropdown

⚠️ Critical: Model ID Must Match Exactly

The Model ID must match the NVIDIA API's model identifier exactly - character for character. Common mistakes:

  • Extra spaces before or after the ID
  • Typos in the model name
  • Using a display name instead of the API ID
  • Wrong capitalization

Example: Use meta/llama-3.2-3b-instruct, NOT Meta/Llama-3.2-3B-Instruct.

If your model doesn't work, double-check the Model ID on the NVIDIA model page's API example.
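The common mistakes above can be caught mechanically before saving. This checker is purely illustrative (GT AI OS does not expose such a tool); it compares a typed Model ID against the ID shown in the provider's API example:

```python
def model_id_issues(entered: str, expected: str) -> list[str]:
    """Return a list of likely problems with a typed Model ID."""
    issues = []
    if entered != entered.strip():
        issues.append("extra whitespace before or after the ID")
    if entered.strip().lower() == expected.lower() and entered.strip() != expected:
        issues.append("wrong capitalization")
    if " " in entered.strip():
        issues.append("looks like a display name, not an API ID")
    if not issues and entered != expected:
        issues.append("typo: does not match the provider's ID")
    return issues
```

For example, " meta/llama-3.2-3b-instruct " is flagged for whitespace, and Meta/Llama-3.2-3B-Instruct for capitalization. The same checks apply to Groq model IDs below.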

Adding Ollama Models

See Ollama Setup to add local models for offline inference.

Adding Groq Models

  1. Go to https://console.groq.com/docs/models
  2. Find a model you want to add
  3. Note the Model ID (e.g., llama-3.2-1b-preview)
  4. Open Control Panel: http://localhost:3001
  5. Go to Models → Add Model
  6. Fill in the fields:
| Field | Value |
|---|---|
| Model ID | The model ID from Groq (e.g., llama-3.2-1b-preview) |
| Name | A friendly name (e.g., "Groq Llama 3.2 1B Preview") |
| Provider | groq |
| Model Type | LLM |
| Endpoint URL | https://api.groq.com/openai/v1/chat/completions |
| Context Window | Check the Groq docs for context length |
| Max Tokens | Check the Groq docs for max output tokens |
  7. Click Save
  8. The model will appear in your agent's model dropdown

⚠️ Critical: Model ID Must Match Exactly

The Model ID must match Groq's model identifier exactly - character for character. Common mistakes:

  • Extra spaces before or after the ID
  • Typos in the model name
  • Using a display name instead of the API ID
  • Missing hyphens or version numbers

Example: Use llama-3.2-1b-preview, NOT Llama 3.2 1B Preview.

If your model doesn't work, double-check the Model ID on the Groq models page.

Finding Model Information

NVIDIA NIM: browse models at https://build.nvidia.com/explore/discover

Groq: see https://console.groq.com/docs/models


Note: Models from NVIDIA NIM, Groq, and other OpenAI-compatible inference providers only work when their provider's API key is correctly configured. If a model shows errors, verify you added the correct API key in Step 6.

About the Embedding Model

The BAAI/bge-m3 embedding model is installed by default and powers document search and RAG (Retrieval Augmented Generation) in GT AI OS.

Important limitations:

  • Custom embedding models cannot be configured yet
  • For most use cases, the built-in embedding model works great, especially on:
    • Ubuntu x86 with NVIDIA GPU
    • NVIDIA DGX Spark devices using GB10 ARM architecture

Advanced: Using an external embedding endpoint

If you need to run the embedding model on a separate server:

  1. Go to Models in the Control Panel
  2. Find the BGE-M3 Multilingual Embedding model
  3. Click Edit
  4. Change Endpoint Configuration from "Local GT Edge" to "External Endpoint"
  5. Enter the IP address and port of your external inference server
  6. Click Save Configuration

Note: The external server must be running HuggingFace's BAAI/bge-m3 model with a compatible inference API.
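As a sketch of what "compatible inference API" can mean here, many embedding servers expose an OpenAI-style /v1/embeddings route. The host, port, and route below are assumptions for illustration only - match them to whatever your external server actually serves:

```python
import json
import urllib.request

def build_embedding_request(base_url: str, texts: list[str]):
    """Build a request for an OpenAI-style embeddings endpoint serving BAAI/bge-m3.

    base_url is your external server, e.g. "http://192.0.2.10:8080"
    (an example address, not a real deployment).
    """
    payload = {"model": "BAAI/bge-m3", "input": texts}
    return urllib.request.Request(
        f"{base_url}/v1/embeddings",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
```

If your server uses a different route or schema, the Endpoint Configuration in the Control Panel must point at whatever it actually exposes.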


Step 8: Enable Two-Factor Authentication

Two-factor authentication (TFA) adds an extra layer of security to your account and should always be used:

  1. In the left sidebar, click Settings
  2. Find the Security or Two-Factor Authentication section
  3. Click Enable TFA or Set Up TFA
  4. You'll see a QR code on screen
  5. Open your authenticator app on your phone:
    • Google Authenticator
    • Microsoft Authenticator
    • Authy
    • 1Password
    • Any TOTP-compatible app
  6. Scan the QR code with your authenticator app
  7. Enter the 6-digit code from your app into GT AI OS
  8. Click Verify or Enable

Verification: Next time you log in, you'll be asked for a 6-digit code after entering your password.
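TOTP, the scheme behind those 6-digit codes (RFC 6238: HMAC-SHA1 over a 30-second time counter), is simple enough to sketch with the Python standard library. This is for understanding how the QR code's shared secret turns into codes - use a real authenticator app in practice:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 TOTP code from a base32 shared secret."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

With the RFC 6238 test secret (ASCII "12345678901234567890" in base32) at time 59, this yields 287082, matching the published test vector. Your authenticator app runs the same computation, which is why any TOTP-compatible app works.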


Step 9: Create Additional Users (Optional)

If other people need to use GT AI OS, create accounts for them:

Community Edition Limit: The Community Edition allows a maximum of 10 active users.

User Types Explained

| User Type | Can Access | Best For |
|---|---|---|
| Super Admin | Everything | IT administrators |
| Tenant Admin | Users, agents, settings within their tenant | Team leads |
| Tenant User | Chat, agents, datasets | Regular team members |

Creating an Additional User

  1. Click Users in the sidebar
  2. Click Create User
  3. Fill in:
    • Email: The user's email address
    • Full Name: Their full name
    • Password: A secure password
    • User Type: Choose based on the table above
  4. Click Create User

Share with the new user: their email address, the password you set, and the Tenant App URL (http://localhost:3002).


What's Next?

Your Control Panel is now configured! Here's what to do next:

  1. Set up local AI models: If you want local/offline AI, see Ollama Setup
  2. Add more cloud models: See Adding NVIDIA NIM Models or Adding Groq Models above
  3. Use the Tenant App: See Tenant App Guide to start creating agents and chatting

Common Issues

"I can't log in after deleting the default account"

WARNING: Following these steps will delete all Users, API Key Configs, Agents, Datasets, and Conversation History on the platform.

If you deleted the default account before verifying your new account works:

  1. You'll need to reset the database
  2. Open Terminal
  3. Navigate to your GT AI OS folder:
    cd ~/Desktop/gt-ai-os-community  # Mac
    cd ~/gt-ai-os-community          # Ubuntu/DGX
    
  4. Reset everything:
    docker compose down -v
    docker compose up -d
    
  5. Wait 2-3 minutes for services to start
  6. Log in with the default credentials again
  7. This time, verify your new account works BEFORE deleting the default one

"API key test fails"

  • Re-copy the key from the provider's site and paste it again (watch for extra spaces)
  • Check that the Provider you selected matches the key (NVIDIA keys for NVIDIA, Groq keys for Groq)
  • Click Test again after saving

"Models not appearing after initialization"

  • Make sure you have at least one API key configured and tested
  • Try refreshing the page (press F5 or Cmd+R)
  • Check that the API key provider matches the model provider (NVIDIA models need NVIDIA key, Groq models need Groq key)

"I'm locked out due to TFA"

If you lost access to your authenticator app:

  1. You'll need to reset the database (see "I can't log in" above)
  2. After reset, set up TFA again.

Summary Checklist

Before moving to the Tenant App, verify you've completed these steps:

  • Created your own super admin account
  • Verified you can log in with your new account
  • Deleted the default gtadmin@test.com account
  • Updated your tenant name
  • Added at least one AI Inference Provider (NVIDIA NIM, Ollama, or Groq Cloud)
  • Enabled two-factor authentication

Next: Tenant App Guide - Learn how to create agents and start chatting!