Update README.md
Some checks failed
Build and Push Multi-Arch Docker Images / Build control-panel-backend (amd64) (push) Has been cancelled
Build and Push Multi-Arch Docker Images / Build control-panel-frontend (amd64) (push) Has been cancelled
Build and Push Multi-Arch Docker Images / Build resource-cluster (amd64) (push) Has been cancelled
Build and Push Multi-Arch Docker Images / Build tenant-app (amd64) (push) Has been cancelled
Build and Push Multi-Arch Docker Images / Build tenant-backend (amd64) (push) Has been cancelled
Build and Push Multi-Arch Docker Images / Build control-panel-backend (arm64) (push) Has been cancelled
Build and Push Multi-Arch Docker Images / Build control-panel-frontend (arm64) (push) Has been cancelled
Build and Push Multi-Arch Docker Images / Build resource-cluster (arm64) (push) Has been cancelled
Build and Push Multi-Arch Docker Images / Build tenant-app (arm64) (push) Has been cancelled
Build and Push Multi-Arch Docker Images / Build tenant-backend (arm64) (push) Has been cancelled
Build and Push Multi-Arch Docker Images / Create multi-arch manifest for control-panel-backend (push) Has been cancelled
Build and Push Multi-Arch Docker Images / Create multi-arch manifest for control-panel-frontend (push) Has been cancelled
Build and Push Multi-Arch Docker Images / Create multi-arch manifest for resource-cluster (push) Has been cancelled
Build and Push Multi-Arch Docker Images / Create multi-arch manifest for tenant-app (push) Has been cancelled
Build and Push Multi-Arch Docker Images / Create multi-arch manifest for tenant-backend (push) Has been cancelled
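The job names above suggest the standard two-phase multi-arch flow: one `docker buildx build --push` per architecture, then `docker buildx imagetools create` to assemble the per-arch tags into a single manifest list. A minimal dry-run sketch that only prints the implied commands; the registry path and build context are assumptions, not taken from this repository's workflow file:

```shell
# Dry-run sketch of the per-arch build + manifest pattern the job names imply.
# REG is a hypothetical registry path; the real one is not shown on this page.
REG="registry.example.com/gtedgeai"

for IMG in control-panel-backend control-panel-frontend resource-cluster tenant-app tenant-backend; do
  for ARCH in amd64 arm64; do
    echo "docker buildx build --platform linux/$ARCH -t $REG/$IMG:$ARCH --push ."
  done
  # A manifest list lets one tag resolve to the right image for each client arch.
  echo "docker buildx imagetools create -t $REG/$IMG:latest $REG/$IMG:amd64 $REG/$IMG:arm64"
done
```

Printing instead of executing keeps the sketch runnable without Docker; dropping the `echo`s would perform the real builds.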
README.md
@@ -5,7 +5,7 @@
 GT AI OS software is intended to provide easy-to-use "daily driver" web-based generative AI for processing documents & files with data privacy for individuals and organizations.
 You can install GT AI OS on Ubuntu x86 and NVIDIA DGX OS 7 ARM hosts using Docker.
-[Start Installation](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Installation)
+[Start Installation](https://gitea-dell-promax-gb10.gtedgeai.app/GTEdgeAI/gt-ai-os-community/wiki/Installation)
 A minimum of 4 CPU cores, 16 GB RAM, and 50 GB of SSD storage is required for the application.
 GT AI OS will usually use about 7 GB of RAM when fully installed.
@@ -22,7 +22,7 @@ It is not multimodal and can't generate or process images, videos or audio as of
 Ensure that you are using local or external inference with zero-data-retention features if you want your data to remain private.
-[GT AI OS Wiki](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki)
+[GT AI OS Wiki](https://gitea-dell-promax-gb10.gtedgeai.app/GTEdgeAI/gt-ai-os-community/wiki)
 
 ## Supported Platforms
@@ -67,13 +67,13 @@ CPU vs GPU accelerated embedding will result in slower file uploads when adding
 | Topic | Description |
 |-------|-------------|
-| [Installation](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Installation) | Detailed setup instructions |
-| [Updating](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Updating) | Keep GT AI OS up to date |
-| [NVIDIA NIM Setup](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Control-Panel-Guide#adding-nvidia-nim-models) | Enterprise GPU-accelerated inference |
-| [Ollama Setup](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Ollama-Setup) | Set up local AI models |
-| [Groq Cloud Setup](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Control-Panel-Guide#adding-groq-models) | Ultra-fast cloud inference |
-| [Cloudflare Tunnel](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Cloudflare-Tunnel-Setup) | Access GT AI OS from anywhere |
-| [Troubleshooting](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Troubleshooting) | Common issues and solutions |
+| [Installation](https://gitea-dell-promax-gb10.gtedgeai.app/GTEdgeAI/gt-ai-os-community/wiki/Installation) | Detailed setup instructions |
+| [Updating](https://gitea-dell-promax-gb10.gtedgeai.app/GTEdgeAI/gt-ai-os-community/wiki/Updating) | Keep GT AI OS up to date |
+| [NVIDIA NIM Setup](https://gitea-dell-promax-gb10.gtedgeai.app/GTEdgeAI/gt-ai-os-community/wiki/Control-Panel-Guide#adding-nvidia-nim-models) | Enterprise GPU-accelerated inference |
+| [Ollama Setup](https://gitea-dell-promax-gb10.gtedgeai.app/GTEdgeAI/gt-ai-os-community/wiki/Ollama-Setup) | Set up local AI models |
+| [Groq Cloud Setup](https://gitea-dell-promax-gb10.gtedgeai.app/GTEdgeAI/gt-ai-os-community/wiki/Control-Panel-Guide#adding-groq-models) | Ultra-fast cloud inference |
+| [Cloudflare Tunnel](https://gitea-dell-promax-gb10.gtedgeai.app/GTEdgeAI/gt-ai-os-community/wiki/Cloudflare-Tunnel-Setup) | Access GT AI OS from anywhere |
+| [Troubleshooting](https://gitea-dell-promax-gb10.gtedgeai.app/GTEdgeAI/gt-ai-os-community/wiki/Troubleshooting) | Common issues and solutions |
 
 ---
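The README's stated minimums (4 CPU cores, 16GB RAM, 50GB SSD storage) can be sanity-checked on a candidate host with a short script. A minimal sketch; the helper names (`meets_minimums`, `host_resources`) are hypothetical and not part of GT AI OS:

```python
# Check a host against the README's stated minimums for GT AI OS:
# 4 CPU cores, 16 GB RAM, 50 GB free SSD storage.
import os
import shutil

MIN_CORES = 4
MIN_RAM_GB = 16
MIN_DISK_GB = 50

def meets_minimums(cores, ram_gb, disk_gb):
    """True if the given resources satisfy all three minimums."""
    return cores >= MIN_CORES and ram_gb >= MIN_RAM_GB and disk_gb >= MIN_DISK_GB

def host_resources():
    """Best-effort probe of the local host (RAM read from /proc/meminfo on Linux)."""
    cores = os.cpu_count() or 0
    ram_gb = 0.0
    try:
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    ram_gb = int(line.split()[1]) / 1024 / 1024  # kB -> GB
                    break
    except OSError:
        pass  # non-Linux host: leave RAM unknown
    disk_gb = shutil.disk_usage("/").free / 1024 ** 3
    return cores, ram_gb, disk_gb

if __name__ == "__main__":
    print(meets_minimums(*host_resources()))
```

Note the check is against *free* disk space on `/`; adjust the path if GT AI OS's Docker data lives on another volume.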