Geek Labs
May 7, 2026 · Artificial Intelligence

Running Large Language Models Locally on RTX 3090: Two Open‑Source Solutions

This article introduces two recent GitHub projects: club‑3090, which enables single‑ or dual‑RTX 3090 inference of 27‑billion‑parameter models and ships detailed performance benchmarks, and library‑skills, a tool that keeps AI agents synchronized with the latest official library APIs. It covers each project's configuration, usage steps, hardware requirements, and target audience.
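To make the dual‑GPU setup concrete, here is a minimal sketch of serving a ~27B model across two RTX 3090s with vLLM's tensor parallelism. The model name and flag values are illustrative assumptions, not configuration taken from the club‑3090 project.

```shell
# Hedged sketch (illustrative, not from the article): split a ~27B model
# across two 24 GB RTX 3090s with vLLM tensor parallelism.
vllm serve google/gemma-3-27b-it \
  --tensor-parallel-size 2 \
  --gpu-memory-utilization 0.90 \
  --max-model-len 8192
```

`--tensor-parallel-size 2` shards the weights across both cards, which is what makes a 27B model fit where a single 24 GB GPU would not (at full precision); capping `--max-model-len` keeps the KV cache within the remaining memory.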

AI agents · Docker · RTX 3090
7 min read
AI Explorer
May 5, 2026 · Artificial Intelligence

Achieving 95% SimpleQA Accuracy on a Single RTX 3090 with Local Deep Research

Local Deep Research is an open‑source AI assistant that runs entirely on a consumer RTX 3090 and reaches about 95% accuracy on the SimpleQA benchmark. It uses a plugin‑based architecture with multiple LLM and search back‑ends, stores data in an encrypted SQLCipher database, and can be launched in minutes via Docker, making it a good fit for privacy‑focused researchers and developers.
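As a rough sketch of what "launched in minutes via Docker" typically looks like for a GPU‑backed service: the image name, port, and volume path below are illustrative assumptions, not confirmed details from the Local Deep Research project.

```shell
# Hedged sketch (image name, port, and volume are placeholders):
# run a GPU-enabled container with persistent storage for the
# encrypted database, then open the web UI on the mapped port.
docker run -d \
  --gpus all \
  -p 5000:5000 \
  -v ldr_data:/data \
  example/local-deep-research:latest
```

The named volume keeps the SQLCipher database outside the container, so research history survives image upgrades.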

Docker · LLM · Local Deep Research
6 min read