# GT AI OS Community Edition
[License: Apache 2.0](LICENSE)

A self-hosted AI platform with data privacy for individuals and organizations.

Build and deploy custom generative AI agents, and bring your own local or external API inference via NVIDIA NIM, Ollama, Groq, vLLM, SGLang, and more.

GT AI OS is ideal for working with documents and files that need data privacy.
Ensure that you are using local inference, or external inference with zero data retention, if you want your data to remain private.
## Supported Platforms
| Platform | Host Architecture | Status |
|----------|-------------------|--------|
| **Ubuntu Linux** 24.04 | x86_64 | Supported |
| **NVIDIA DGX OS 7** (Optimized for Grace Blackwell Architecture) | ARM64 | Supported |
| **macOS** (Apple Silicon M1+) | ARM64 | Supported |

Ubuntu VMs running on Proxmox work with raw GPU passthrough of all functions.
Windows is currently not supported.
macOS x86_64 support is under consideration, although it would be quite slow.

Note that the install scripts are unique to each OS and hardware architecture; the sketch below illustrates the split.
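
For illustration only, here is a sketch of how a wrapper could select the script matching the current host. The script file names below are hypothetical, not the actual names shipped in this repository:

```python
import platform

# Hypothetical script names used purely for illustration; check the
# repository for the real installer entry points.
INSTALL_SCRIPTS = {
    ("Linux", "x86_64"): "install-ubuntu-x86_64.sh",
    ("Linux", "aarch64"): "install-dgx-arm64.sh",
    ("Darwin", "arm64"): "install-macos-arm64.sh",
}

def pick_install_script() -> str:
    """Return the install script that matches this host's OS and CPU architecture."""
    key = (platform.system(), platform.machine())
    if key not in INSTALL_SCRIPTS:
        raise SystemExit(f"Unsupported platform: {key[0]} on {key[1]}")
    return INSTALL_SCRIPTS[key]

if __name__ == "__main__":
    print(pick_install_script())
```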
**Embedding model GPU acceleration:**
Only NVIDIA GPUs are supported for embeddings.
Ensure that your NVIDIA GPU hardware is installed before starting the GT AI OS installation (a quick pre-install check is sketched below).
NVIDIA drivers, dependencies, and tools are installed during the prerequisites part of the runbook.

If your target install host has no NVIDIA GPU, the CPU is used to run the embedding model instead.
CPU-based embedding is much slower than GPU-accelerated embedding, so file uploads into datasets will be slow.

As of v2.0.34, once GT AI OS is installed you cannot add GPU hardware later and switch embeddings from CPU to GPU.
We are looking to fix this in a future release.
The embedding model is installed by default.
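
A minimal pre-install check, assuming the NVIDIA driver and its standard `nvidia-smi` tool are already present on the host; if they are not, the check simply reports that no GPU is visible and embeddings will run on the CPU:

```python
import shutil
import subprocess

def nvidia_gpu_present() -> bool:
    """Return True if an NVIDIA GPU is visible via nvidia-smi."""
    if shutil.which("nvidia-smi") is None:
        return False  # driver/tooling not installed on this host
    # `nvidia-smi -L` lists GPUs and exits non-zero when none are found
    result = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
    return result.returncode == 0 and "GPU" in result.stdout

if __name__ == "__main__":
    if nvidia_gpu_present():
        print("NVIDIA GPU detected: embeddings will be GPU-accelerated.")
    else:
        print("No NVIDIA GPU detected: embeddings will run on the CPU (slower uploads).")
```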
---
## Features
- **AI Agent Builder** - Create custom AI agents with your own system prompts, categorization, role-based access, and guardrails
- **Local Model Support** - Run local AI models with Ollama (completely offline); see the sketch after this list
- **Document Processing** - Upload documents into datasets and create agents to interact with them
- **Create Teams** - Set up workgroups with team-based access to agents and datasets
- **Observability** - View metrics dashboards covering agent, model, and dataset usage, chat logs, and more
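
Most of the supported backends (Ollama, vLLM, SGLang, Groq, NVIDIA NIM) expose OpenAI-compatible chat endpoints. Purely as a sketch of what such a backend looks like, the example below calls a local Ollama server on its default port (11434); the model name is only an example and must already be pulled:

```python
import requests

# Assumes a local Ollama server (default port 11434) exposing its
# OpenAI-compatible chat endpoint; "llama3.2" is only an example model name.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def chat(prompt: str, model: str = "llama3.2") -> str:
    """Send a single-turn chat request to a local OpenAI-compatible backend."""
    response = requests.post(
        OLLAMA_URL,
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Summarize why local inference keeps documents private."))
```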
---
## Documentation
| Topic | Description |
|-------|-------------|
| [Installation](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Installation) | Detailed setup instructions |
| [Updating](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Updating) | Keep GT AI OS up to date |
| [NVIDIA NIM Setup](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/NVIDIA-NIM-Setup) | Enterprise GPU-accelerated inference |
| [Ollama Setup](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Ollama-Setup) | Set up local AI models |
| [Groq Cloud Setup](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Groq-Cloud-Setup) | Ultra-fast cloud inference |
| [Cloudflare Tunnel](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Cloudflare-Tunnel-Setup) | Access GT AI OS from anywhere |
| [Troubleshooting](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Troubleshooting) | Common issues and solutions |

---
## Community vs Enterprise
| Feature | Community (Free) | Enterprise (Paid) |
|---------|------------------|-------------------|
| **Users** | Up to 50 users | User licenses per seat |
| **Support** | GitHub Issues | Dedicated human support |
| **Billing & Reports** | Not included | Full financial tracking |
| **Pro Agents** | Not included | Pre-built professional agents |
| **AI Inference** | BYO/DIY | Fully Managed |
| **Setup** | DIY | Fully Managed |
| **Uptime Guarantee** | Self-managed | 99.99% uptime SLA |

**Want Enterprise?** [Contact GT Edge AI](https://gtedge.ai/contact-us/)

---
## Architecture
```
┌────────────────────────────────────────────────────────────────┐
│                          GT AI OS                              │
├──────────────────┬──────────────────────┬──────────────────────┤
│  Control Panel   │     Tenant App       │   Resource Cluster   │
│   (Admin UI)     │     (User UI)        │(AI Inference Routing)│
├──────────────────┴──────────────────────┴──────────────────────┤
│                     Postgres DB                                │
│          Control DB        │        Tenant DB                  │
└────────────────────────────────────────────────────────────────┘
```

---
## Contributing
Found a bug? Have an idea? Open an issue: https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/issues

See [CONTRIBUTING.md](CONTRIBUTING.md) for details.

---
## Security
Found a security issue? Report it via [our contact form](https://gtedge.ai/contact-us).

See [SECURITY.md](SECURITY.md) for our security policy.

---
## License
Apache License 2.0 - See [LICENSE](LICENSE)

---
**GT AI OS Community Edition** | Made by [GT Edge AI](https://gtedge.ai)