From 014e5c7f23bbe21799160e91a4c58d22c8f11e27 Mon Sep 17 00:00:00 2001
From: daniel
Date: Sat, 10 Jan 2026 04:11:15 +0000
Subject: [PATCH] Update README.md

---
 README.md | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/README.md b/README.md
index bbfdf63..38fa051 100644
--- a/README.md
+++ b/README.md
@@ -5,7 +5,7 @@
 GT AI OS software is intended to provide easy to use "daily driver" web based generative AI for processing documents & files with data privacy for individuals and organizations. You can install GT AI OS on Ubuntu x86 and NVIDIA DGX OS 7 ARM hosts using Docker.
 
-[Start Installation ](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Installation)
+[Start Installation ](https://gitea-dell-promax-gb10.gtedgeai.app/GTEdgeAI/gt-ai-os-community/wiki/Installation)
 
 Minimum 4 CPU cores, 16GB RAM and 50GB SSD storage required for the application. GT AI OS will usually use about 7GB RAM when fully installed.
@@ -22,7 +22,7 @@
 It is not multimodal and can't generate or process images, videos or audio as of now.
 Ensure that you are using local or external inference with zero data retention features if you want your data to remain private.
 
-[GT AI OS Wiki](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki)
+[GT AI OS Wiki](https://gitea-dell-promax-gb10.gtedgeai.app/GTEdgeAI/gt-ai-os-community/wiki)
 
 ## Supported Platforms
@@ -67,13 +67,13 @@
 CPU vs GPU accelerated embedding will result in slower file uploads when adding
 
 | Topic | Description |
 |-------|-------------|
-| [Installation](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Installation) | Detailed setup instructions |
-| [Updating](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Updating) | Keep GT AI OS up to date |
-| [NVIDIA NIM Setup](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Control-Panel-Guide#adding-nvidia-nim-models) | Enterprise GPU-accelerated inference |
-| [Ollama Setup](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Ollama-Setup) | Set up local AI models |
-| [Groq Cloud Setup](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Control-Panel-Guide#adding-groq-models) | Ultra-fast cloud inference |
-| [Cloudflare Tunnel](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Cloudflare-Tunnel-Setup) | Access GT AI OS from anywhere |
-| [Troubleshooting](https://github.com/GT-Edge-AI-Internal/gt-ai-os-community/wiki/Troubleshooting) | Common issues and solutions |
+| [Installation](https://gitea-dell-promax-gb10.gtedgeai.app/GTEdgeAI/gt-ai-os-community/wiki/Installation) | Detailed setup instructions |
+| [Updating](https://gitea-dell-promax-gb10.gtedgeai.app/GTEdgeAI/gt-ai-os-community/wiki/Updating) | Keep GT AI OS up to date |
+| [NVIDIA NIM Setup](https://gitea-dell-promax-gb10.gtedgeai.app/GTEdgeAI/gt-ai-os-community/wiki/Control-Panel-Guide#adding-nvidia-nim-models) | Enterprise GPU-accelerated inference |
+| [Ollama Setup](https://gitea-dell-promax-gb10.gtedgeai.app/GTEdgeAI/gt-ai-os-community/wiki/Ollama-Setup) | Set up local AI models |
+| [Groq Cloud Setup](https://gitea-dell-promax-gb10.gtedgeai.app/GTEdgeAI/gt-ai-os-community/wiki/Control-Panel-Guide#adding-groq-models) | Ultra-fast cloud inference |
+| [Cloudflare Tunnel](https://gitea-dell-promax-gb10.gtedgeai.app/GTEdgeAI/gt-ai-os-community/wiki/Cloudflare-Tunnel-Setup) | Access GT AI OS from anywhere |
+| [Troubleshooting](https://gitea-dell-promax-gb10.gtedgeai.app/GTEdgeAI/gt-ai-os-community/wiki/Troubleshooting) | Common issues and solutions |
 
 ---