Control Panel Guide
Accessing via local network: You can connect with a web browser to the host running GT AI OS over the local network. You can also install Cloudflare with a fully qualified domain name (FQDN) once the Control Panel and Tenant are fully configured.
This guide walks you through configuring GT AI OS after installation. Follow these steps in order to secure your installation and set up your private AI environment.
Table of Contents
- Quick Reference
- Step 1: First Login
- Step 2: Create Your Admin Account
- Step 3: Verify Your New Account Works
- Step 4: Delete the Default Admin Account
- Step 5: Configure Your Tenant Name
- Step 6: Add Your External AI Inference Provider API Key
- Step 7: Verify Models Are Available
- Preloaded Models
- Adding More Models
- Step 8: Enable Two-Factor Authentication
- Step 9: Create Additional Users (Optional)
- What's Next?
- Common Issues
- Summary Checklist
Quick Reference
| Page | What It Does |
|---|---|
| Tenant | Configure your organization settings (landing page) |
| Users | Create and manage user accounts |
| Models | View and configure available AI models |
| API Keys | Add cloud AI provider keys (NVIDIA, Groq) |
| Settings | Enable two-factor authentication |
Step 1: First Login
- Open your web browser
- Go to http://localhost:3002
- You will see the Tenant login page
- Enter the default credentials:
  - Email: `gtadmin@test.com`
  - Password: `Test@123`
- Click Sign In
Security Notice: The default account is publicly known. You must create your own admin account and delete the default one. Continue to Step 2.
Step 2: Create Your Admin Account on the Control Panel
Go to http://localhost:3001
Before doing anything else, create your own super admin account.
- In the left sidebar, click Users
- Click the Create User button (top right)
- Fill in the form:
  - Email: Enter your email address (e.g., `admin@yourcompany.com`)
  - Full Name: Enter your full name
  - Password: Create a strong password (at least 8 characters, mixing letters, numbers, and symbols)
  - Confirm Password: Type the same password again
  - User Type: Select Super Admin from the dropdown
  - Require TFA: Check this box
- Click Create User
- You should see a success message and your new user in the list
Verification: Your new user should appear in the Users list with "Super Admin" badge.
Step 3: Verify Your New Account Works
Before deleting the default account, make sure your new account works:
- Click your profile name (top right corner) or find the Logout button
- Click Logout
- On the login page, enter YOUR new credentials (not the default ones)
- Click Sign In
- You'll be prompted to set up TFA now (see Step 8)
- Verify you can see the Control Panel (Tenant page is the landing page)
If login fails: Go back to http://localhost:3001, log in with the default credentials, and check your new user was created correctly.
Step 4: Delete the Default Admin Account
Now that your account works, remove the default account for security:
- In the left sidebar, click Users
- Find `gtadmin@test.com` in the user list
- Click the three-dot menu (⋮) on the right side of that row
- Click Delete
- A confirmation dialog will appear
- Click Delete to confirm
Verification: The gtadmin@test.com user should no longer appear in the list.
Important: After this step, the default credentials (`gtadmin@test.com` / `Test@123`) will no longer work. Make sure you remember your new credentials!
Step 5: Configure Your Tenant Name
The default tenant is named "GT AI OS". You can update it to your organization name:
- Click Tenant in the left sidebar (or you're already there - it's the landing page)
- You'll see one tenant: "GT AI OS"
- Click the Edit button (pencil icon) for this tenant
- Update the fields:
  - Name: Enter your organization name (e.g., "Acme Corporation")
  - Frontend URL: Leave as `http://localhost:3002` (or enter your domain if you have one)
- Click Save
Verification: The tenant name in the list should show your new name.
Step 6: Add Your External AI Inference Provider API Key
GT AI OS needs access to local or external inference to run agents. To connect to external AI models, you need an API key from an AI inference provider. Choose at least one provider. Both NVIDIA and Groq offer free tiers, and this guide covers getting started with free API access.
Option A: Add NVIDIA NIM API Key
NVIDIA NIM offers GPU-optimized inference powered by DGX Cloud. NVIDIA Developer Program members get free, rate-limited prototyping access.
Get your free API key:
- Go to https://build.nvidia.com/
- Sign in or create a free NVIDIA account
- Navigate to any model page
- Click Get API Key and copy it
Add the key to GT AI OS:
- In Control Panel, click API Keys in the left sidebar
- Click Add API Key
- Fill in:
- Provider: Select NVIDIA
- API Key: Paste your NVIDIA API key
- Click Save
- Click Test next to your new key to verify it works
Verification: You should see a green checkmark or "Valid" status after testing.
Option B: Add Groq API Key
Groq offers fast inference powered by their custom LPU hardware. Free tier includes rate limits.
Get your free Groq API key:
- Go to https://console.groq.com/
- Sign in or create a free account
- Click API Keys in the left sidebar
- Click Create API Key
- Copy the key (it starts with `gsk_`)
Add the key to GT AI OS:
- In Control Panel, click API Keys in the left sidebar
- Click Add API Key
- Fill in:
- Provider: Select Groq
- API Key: Paste your Groq API key
- Click Save
- Click Test to verify
Verification: You should see a green checkmark or "Valid" status after testing.
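If the Test button reports an invalid key, a quick offline format check can rule out copy/paste problems before you regenerate the key. This is a sketch based only on the rules mentioned in this guide (the `gsk_` prefix for Groq keys); it does not validate anything else about the key:

```python
def check_key_format(provider: str, key: str) -> list[str]:
    """Return a list of likely copy/paste problems with an API key."""
    problems = []
    if key != key.strip():
        problems.append("leading or trailing whitespace")
    elif any(c.isspace() for c in key):
        problems.append("embedded whitespace (key pasted across lines?)")
    if provider == "groq" and not key.strip().startswith("gsk_"):
        problems.append("Groq keys start with 'gsk_'")
    return problems

print(check_key_format("groq", " gsk_example "))  # → ['leading or trailing whitespace']
```

An empty list means the key at least *looks* pasteable; the in-app Test button remains the authoritative check.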
Step 7: Verify Models Are Available
GT AI OS automatically loads pre-configured models during installation. Verify they're available:
- In the left sidebar, click Models
- You should see 20+ models already listed
What you should see:
Preloaded Models
GT AI OS comes preloaded with a number of model configurations. Valid API keys are required when using external inference.
NVIDIA NIM Models (requires NVIDIA API key)
| Model | Strengths |
|---|---|
| NVIDIA Llama 3.3 Nemotron Super 49B | Flagship reasoning model, best accuracy/throughput on single GPU |
| NVIDIA Llama 3.1 Nemotron Ultra 253B | Maximum accuracy for scientific reasoning, math, and coding |
| NVIDIA Llama 3.1 Nemotron Nano 8B | Cost-effective, optimized for edge devices and low latency |
| NVIDIA Meta Llama 3.3 70B Instruct | Latest Meta Llama, excellent instruction following |
| NVIDIA Meta Llama 3.1 405B Instruct | Largest open-source LLM, exceptional quality across all tasks |
| NVIDIA Meta Llama 3.1 70B Instruct | Excellent balance of quality and speed |
| NVIDIA Meta Llama 3.1 8B Instruct | Fast and cost-effective for simpler tasks |
| NVIDIA DeepSeek R1 | Enhanced reasoning, reduced hallucination, strong math/coding |
| NVIDIA DeepSeek V3 | Hybrid Think/Non-Think modes, 128K context, strong tool use |
| NVIDIA Mistral Large | State-of-the-art general purpose MoE model |
| NVIDIA Qwen 3 235B | Ultra-long context (131K) with strong multilingual support |
| NVIDIA Kimi K2 Instruct | Long context window with enhanced reasoning capabilities |
| NVIDIA OpenAI GPT-OSS 120B | Production-grade reasoning, fits single H100 GPU |
| NVIDIA OpenAI GPT-OSS 20B | Low latency, runs in 16GB VRAM |
Groq Models (requires Groq API key)
| Model | Strengths |
|---|---|
| Groq Moonshot AI Kimi K2 | Highest quality, 1T parameters, massive 262K context window |
| Groq Compound AI Search | Intelligent blend of GPT-OSS-120B + Llama 4 Scout |
| Groq OpenAI GPT OSS 120B | Large open-source model, strong general performance |
| Groq OpenAI GPT OSS 20B | Medium open-source model, good balance of speed and quality |
| Groq Meta Llama 4 Maverick 17B | MoE architecture (17Bx128E), efficient inference |
| Groq Llama 3.1 8B Instant | Ultra-fast responses, lowest latency |
| Groq Llama Guard 4 12B | Safety and content moderation |
Embedding Model
The embedding model uses the NVIDIA GPU if present on Ubuntu 24.04 x86_64, the Grace Blackwell chip on NVIDIA DGX OS 7, and the CPU on macOS with Apple Silicon (M1 or later).
| Model | Description |
|---|---|
| BGE-M3 Multilingual Embedding | Powers document search and RAG |
Adding More Models
Beyond the preloaded models, you can add more from NVIDIA NIM or Groq, as well as from Ollama for offline local AI inference. See Ollama Setup to configure local models.
Adding NVIDIA NIM Models
- Go to https://build.nvidia.com/explore/discover
- Browse available models and click on one you want to add
- Note the Model ID from the API example (e.g., `meta/llama-3.2-3b-instruct`)
- Open Control Panel: http://localhost:3001
- Go to Models → Add Model
- Fill in the fields:
| Field | Value |
|---|---|
| Model ID | The model ID from NVIDIA (e.g., meta/llama-3.2-3b-instruct) |
| Name | A friendly name (e.g., "NVIDIA Llama 3.2 3B") |
| Provider | nvidia |
| Model Type | LLM |
| Endpoint URL | https://integrate.api.nvidia.com/v1/chat/completions |
| Context Window | Check the model page for context length |
| Max Tokens | Usually 4096 or 8192 |
- Click Save
- The model will appear in your agent's model dropdown
⚠️ Critical: Model ID Must Match Exactly
The Model ID must match the NVIDIA API's model identifier exactly - character for character. Common mistakes:
- Extra spaces before or after the ID
- Typos in the model name
- Using a display name instead of the API ID
- Wrong capitalization
Example: Use `meta/llama-3.2-3b-instruct`, NOT `Meta/Llama-3.2-3B-Instruct`. If your model doesn't work, double-check the Model ID in the NVIDIA model page's API example.
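These checks can be scripted. A minimal sketch (the lowercase `org/model` slug pattern is an assumption generalized from the IDs shown in this guide, not an official NVIDIA rule):

```python
import re

def normalize_nim_model_id(raw: str) -> str:
    """Strip stray whitespace and reject IDs that don't look like NIM slugs."""
    candidate = raw.strip()
    # Assumed pattern: lowercase letters, digits, dots, hyphens, one slash
    if not re.fullmatch(r"[a-z0-9.\-]+/[a-z0-9.\-]+", candidate):
        raise ValueError(f"does not look like a NIM model ID: {raw!r}")
    return candidate

print(normalize_nim_model_id("  meta/llama-3.2-3b-instruct "))  # → meta/llama-3.2-3b-instruct
```

It trims the stray-space mistake and rejects display names or wrong capitalization, which covers the common errors listed above.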
Adding Ollama Models
- See Ollama Setup to configure local models.
Adding Groq Models
- Go to https://console.groq.com/docs/models
- Find a model you want to add
- Note the Model ID (e.g., `llama-3.2-1b-preview`)
- Open Control Panel: http://localhost:3001
- Go to Models → Add Model
- Fill in the fields:
| Field | Value |
|---|---|
| Model ID | The model ID from Groq (e.g., llama-3.2-1b-preview) |
| Name | A friendly name (e.g., "Groq Llama 3.2 1B Preview") |
| Provider | groq |
| Model Type | LLM |
| Endpoint URL | https://api.groq.com/openai/v1/chat/completions |
| Context Window | Check the Groq docs for context length |
| Max Tokens | Check the Groq docs for max output tokens |
- Click Save
- The model will appear in your agent's model dropdown
⚠️ Critical: Model ID Must Match Exactly
The Model ID must match Groq's model identifier exactly - character for character. Common mistakes:
- Extra spaces before or after the ID
- Typos in the model name
- Using a display name instead of the API ID
- Missing hyphens or version numbers
Example: Use `llama-3.2-1b-preview`, NOT `Llama 3.2 1B Preview`. If your model doesn't work, double-check the Model ID on the Groq models page.
Finding Model Information
NVIDIA NIM:
- Model catalog: https://build.nvidia.com/explore/discover
- Click any model to see its ID, context window, and capabilities
- API endpoint is always: `https://integrate.api.nvidia.com/v1/chat/completions`
Groq:
- Model list: https://console.groq.com/docs/models
- Shows model ID, context window, and max output tokens
- API endpoint is always: `https://api.groq.com/openai/v1/chat/completions`
Note: Models from NVIDIA NIM, Groq, or any other OpenAI-compatible inference provider only work when that provider's API key is correctly configured. If a model shows errors, verify you added the correct API key in Step 6.
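Because both providers expose OpenAI-compatible APIs, you can also list the exact model IDs programmatically. A standard-library sketch (the `/models` route is the OpenAI-compatible convention; confirm it against each provider's docs before relying on it):

```python
import json
import urllib.request

def build_models_request(base_url: str, api_key: str) -> urllib.request.Request:
    """Build a GET request for an OpenAI-compatible /models listing."""
    return urllib.request.Request(
        base_url.rstrip("/") + "/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

# To actually list models (requires a real key and network access):
# req = build_models_request("https://api.groq.com/openai/v1", "gsk_...")
# print([m["id"] for m in json.load(urllib.request.urlopen(req))["data"]])
```

Copying an ID from this listing avoids the typo and capitalization mistakes described above.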
About the Embedding Model
The BAAI/bge-m3 embedding model is installed by default; it powers document search and RAG (Retrieval-Augmented Generation) in GT AI OS.
Important limitations:
- Custom embedding models cannot be configured yet
- For most use cases, the built-in embedding model works great, especially on:
- Ubuntu x86 with NVIDIA GPU
- NVIDIA DGX Spark devices using GB10 ARM architecture
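For context on what the embedding model does: document search works by embedding the query and each document as vectors, then ranking documents by cosine similarity. A toy sketch with made-up 3-dimensional vectors (real BGE-M3 vectors are much higher-dimensional):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy stand-ins for embeddings of a query and two documents:
query = [0.9, 0.1, 0.0]
docs = {"setup guide": [0.8, 0.2, 0.1], "cooking blog": [0.0, 0.3, 0.9]}
best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)  # → setup guide
```

GT AI OS handles all of this internally; the sketch just illustrates why a working embedding model is required for document search.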
Advanced: Using an external embedding endpoint
If you need to run the embedding model on a separate server:
- Go to Models in the Control Panel
- Find the BGE-M3 Multilingual Embedding model
- Click Edit
- Change Endpoint Configuration from "Local GT Edge" to "External Endpoint"
- Enter the IP address and port of your external inference server
- Click Save Configuration
Note: The external server must be running HuggingFace's BAAI/bge-m3 model with a compatible inference API.
Step 8: Enable Two-Factor Authentication
Two-factor authentication (TFA) adds an extra layer of security to your account and should always be used:
- In the left sidebar, click Settings
- Find the Security or Two-Factor Authentication section
- Click Enable TFA or Set Up TFA
- You'll see a QR code on screen
- Open your authenticator app on your phone:
- Google Authenticator
- Microsoft Authenticator
- Authy
- 1Password
- Any TOTP-compatible app
- Scan the QR code with your authenticator app
- Enter the 6-digit code from your app into GT AI OS
- Click Verify or Enable
Verification: Next time you log in, you'll be asked for a 6-digit code after entering your password.
Step 9: Create Additional Users (Optional)
If other people need to use GT AI OS, create accounts for them:
Community Edition Limit: The Community Edition allows a maximum of 10 active users.
User Types Explained
| User Type | Can Access | Best For |
|---|---|---|
| Super Admin | Everything | IT administrators |
| Tenant Admin | Users, agents, settings within their tenant | Team leads |
| Tenant User | Chat, agents, datasets | Regular team members |
Creating an Additional User
- Click Users in the sidebar
- Click Create User
- Fill in:
- Email: The user's email address
- Full Name: Their full name
- Password: A secure password
- User Type: Choose based on the table above
- Click Create User
Share with the new user:
- Tenant App URL: http://localhost:3002
- Their email address
- Their password
What's Next?
Your Control Panel is now configured! Here's what to do next:
- Set up local AI models: If you want local/offline AI, see Ollama Setup
- Add more cloud models: See Adding NVIDIA NIM Models or Adding Groq Models above
- Use the Tenant App: See Tenant App Guide to start creating agents and chatting
Common Issues
"I can't log in after deleting the default account"
WARNING: The following steps will delete all users, API key configurations, agents, datasets, and conversation history on the platform.
If you deleted the default account before verifying your new account works:
- You'll need to reset the database
- Open Terminal
- Navigate to your GT AI OS folder: `cd ~/Desktop/gt-ai-os-community` (Mac) or `cd ~/gt-ai-os-community` (Ubuntu/DGX)
- Reset everything: `docker compose down -v` followed by `docker compose up -d`
- Wait 2-3 minutes for services to start
- Log in with the default credentials again
- This time, verify your new account works BEFORE deleting the default one
"API key test fails"
- NVIDIA: Verify you copied the complete key. Some NVIDIA keys are longer than expected. Check your account at https://build.nvidia.com/
- Groq: Make sure your key starts with `gsk_`. Check that your Groq account is active at https://console.groq.com/
"Models not appearing after initialization"
- Make sure you have at least one API key configured and tested
- Try refreshing the page (press F5 or Cmd+R)
- Check that the API key provider matches the model provider (NVIDIA models need NVIDIA key, Groq models need Groq key)
"I'm locked out due to TFA"
If you lost access to your authenticator app:
- You'll need to reset the database (see "I can't log in" above)
- After reset, set up TFA again.
Summary Checklist
Before moving to the Tenant App, verify you've completed these steps:
- Created your own super admin account
- Verified you can log in with your new account
- Deleted the default `gtadmin@test.com` account
- Updated your tenant name
- Added at least one AI Inference Provider (NVIDIA NIM, Ollama, or Groq Cloud)
- Enabled two-factor authentication
Next: Tenant App Guide - Learn how to create agents and start chatting!