9 Must‑See GitHub Projects: MacBook‑Run LLM, WeChat AI, Multi‑Agent Collaboration and More
This article reviews nine standout GitHub open‑source projects, covering a C/Metal LLM engine for MacBooks, a Claude Code commercial‑analysis skill, multi‑agent communication tools, web‑enabled AI, autonomous research automation, WeChat AI integration, a minimalist terminal, a Codex console, and a lightweight WARP proxy.
flash‑moe – 397 B‑parameter LLM inference on MacBook Pro M3 Max
Pure C/Metal inference engine that runs the 397‑billion‑parameter Qwen3.5 MoE model on a MacBook Pro M3 Max (48 GB RAM) without loading the full 209 GB model into memory.
Inference speed ≈ 4.4 tokens/second on the device.
Full tool‑calling support.
Model streamed from NVMe SSD.
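Serving a 209 GB checkpoint in 48 GB of RAM is conceptually a paging trick: memory-map the weight file and touch only the experts the current token routes to, letting the OS stream pages from NVMe on demand. A minimal Python sketch of that idea (file name and slice layout are illustrative, not flash-moe's actual format):

```python
import mmap
import os

def read_expert_slice(path: str, offset: int, length: int) -> bytes:
    """Read one expert's weights without loading the whole file.

    The OS pages in only the bytes we touch, so a huge checkpoint can be
    served from SSD with a fraction of it resident in memory.
    """
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            return mm[offset:offset + length]

# Demo with a small synthetic "checkpoint": 4 experts of 1 KiB each.
path = "demo_weights.bin"
experts = [bytes([i]) * 1024 for i in range(4)]
with open(path, "wb") as f:
    for e in experts:
        f.write(e)

# Fetch only expert 2, as a MoE router would for the active token.
chunk = read_expert_slice(path, offset=2 * 1024, length=1024)
assert chunk == experts[2]
os.remove(path)
```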
https://github.com/danveloper/flash-moe
dbskill – Commercial‑diagnosis skill for Claude Code
Extracts 12 307 commercial‑diagnosis tweets into a Skill toolbox for Claude Code, enabling the model to diagnose business models, evaluate content direction, and suggest short‑video openings.
Input a question → AI diagnoses business‑model issues.
Analyzes potential success of a content direction before creation.
Generates suggestions for short‑video openings.
npx skills add dontbesilent2025/dbskill
https://github.com/dontbesilent2025/dbskill
claude‑peers‑mcp – Multi‑agent communication for Claude Code
MCP tool that lets multiple Claude Code instances communicate in real time, forming an AI “chat room”. Messages are persisted.
Invite several AIs into a discussion group.
Results from one AI are instantly pushed to others.
Persistent message storage prevents loss.
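The underlying pattern is a persistent broadcast channel: every message is appended to a shared log, and each agent reads everything after its last-seen position, so late joiners still get full history. A toy in-memory sketch of that mechanic (class and method names are invented for illustration, not claude-peers-mcp's actual protocol):

```python
class ChatRoom:
    """Append-only message log shared by multiple agents (in-memory toy)."""

    def __init__(self):
        self.log = []       # messages persisted in arrival order
        self.cursors = {}   # agent name -> index of next unread message

    def post(self, sender: str, text: str) -> None:
        self.log.append({"from": sender, "text": text})

    def poll(self, agent: str) -> list:
        """Return every message this agent has not seen yet."""
        start = self.cursors.get(agent, 0)
        unread = self.log[start:]
        self.cursors[agent] = len(self.log)
        return unread

room = ChatRoom()
room.post("claude-1", "I'll take the backend refactor.")
room.post("claude-2", "Then I'll write the tests.")
# claude-3 joins late but still sees the full history: nothing is lost.
assert [m["text"] for m in room.poll("claude-3")] == [
    "I'll take the backend refactor.",
    "Then I'll write the tests.",
]
```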
npx claude-peers-mcp daemon
https://github.com/louislva/claude-peers-mcp
web‑access – Internet access for Claude Code
Skill that enables Claude Code to browse the web, search Google, retrieve Wikipedia entries, and automate browser actions.
Self‑initiated Google searches and Wiki lookups.
Arbitrary webpage browsing and information extraction.
Supports browser automation.
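"Browse a page and extract information" reduces to fetching HTML and stripping it to readable text. A networkless, standard-library-only sketch of the extraction half (web-access's real pipeline drives a browser and is far more capable):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping script/style content."""

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def extract_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

page = "<html><head><style>p{}</style></head><body><p>Hello, agent.</p></body></html>"
assert extract_text(page) == "Hello, agent."
```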
npx @eze-is/web-access
https://github.com/eze-is/web-access
codex‑autoresearch – Autonomous iterative research system
Implements an autonomous research loop inspired by Karpathy’s autoresearch concept. AI designs experiments, runs them, evaluates results, and iterates.
Eight AIs executed 2 430 experiments in 30 hours, achieving a 6.4 % performance improvement.
All experiment parameters and outcomes are automatically logged.
Reduces manual hyper‑parameter tuning and experiment fatigue.
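The loop itself is simple: propose a configuration, run it, score it, log everything, and feed the best result into the next proposal. A minimal sketch of such a propose-run-evaluate-log loop (the objective and parameter names are invented for illustration; codex-autoresearch's actual search logic differs):

```python
import random

def run_experiment(lr: float) -> float:
    """Stand-in objective: pretend accuracy peaks near lr = 0.1."""
    return 1.0 - abs(lr - 0.1)

def research_loop(iterations: int, seed: int = 0):
    rng = random.Random(seed)
    history = []  # every trial gets logged, as in the project
    best_lr, best_score = 0.5, run_experiment(0.5)
    for _ in range(iterations):
        # Propose: perturb the current best configuration.
        lr = max(1e-4, best_lr + rng.uniform(-0.05, 0.05))
        score = run_experiment(lr)                   # Run
        history.append({"lr": lr, "score": score})   # Log
        if score > best_score:                       # Evaluate, then iterate
            best_lr, best_score = lr, score
    return best_lr, best_score, history

best_lr, best_score, history = research_loop(200)
assert len(history) == 200
assert best_score >= run_experiment(0.5)  # never worse than the start
```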
https://github.com/leo-lilinxiao/codex-autoresearch
weixin‑agent‑sdk – Bridge any AI to WeChat
Framework that connects Claude Code, Codex, Kimi and other models to WeChat, supporting text, image, voice, video, and file exchange, as well as posting to Moments and voice calls. Messages are encrypted.
npx weixin-acp login
npx weixin-acp start -- claude-agent-acp
https://github.com/wong2/weixin-agent-sdk
ghostling – Minimal terminal emulator
Minimal viable terminal built on the libghostty C API. Starts in seconds with extremely low resource usage and retains the advantages of Ghostty.
Seconds‑level startup.
Clean interface without ads or extra features.
Lightweight base for custom terminal development.
https://github.com/ghostty-org/ghostling
codex‑console – Integrated Codex control panel
Unified console that aggregates Codex capabilities, providing task management, batch processing, automatic data export/upload, and real‑time log viewing.
Batch process multiple tasks with a single command.
Automatic export and upload of results.
Real‑time log and status monitoring.
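A console like this is essentially a task queue with per-task logging and one export step at the end. A minimal sketch of that batch pattern (task names and the export format are illustrative, not codex-console's real interface):

```python
import json
import time

def run_batch(tasks: dict):
    """Run every task, collecting results and a per-task log line."""
    results, logs = {}, []
    for name, fn in tasks.items():
        start = time.time()
        try:
            results[name] = {"status": "ok", "output": fn()}
        except Exception as exc:
            results[name] = {"status": "error", "output": str(exc)}
        logs.append(f"{name}: {results[name]['status']} "
                    f"({time.time() - start:.3f}s)")
    # "Export": serialize everything into one artifact, ready to upload.
    return json.dumps(results, indent=2), logs

export, logs = run_batch({
    "lint": lambda: "0 warnings",
    "tests": lambda: "42 passed",
    "broken": lambda: 1 / 0,   # failures are captured, not fatal
})
assert json.loads(export)["tests"]["status"] == "ok"
assert json.loads(export)["broken"]["status"] == "error"
```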
pip install codex-console
https://github.com/dou-jiang/codex-console
MicroWARP – 800 KB Cloudflare WARP proxy
Pure kernel‑mode WARP proxy that consumes only 800 KB of memory and can be deployed with Docker.
Official WARP uses >100 MB; MicroWARP needs 800 KB.
One‑click Docker deployment runs in seconds.
Performance remains high with minimal latency.
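Port 1080 is the conventional SOCKS port, so presumably any standard SOCKS5 client can point at the container. As a reference for what that handshake looks like on the wire, here is the RFC 1928 client greeting; this is generic protocol knowledge, not MicroWARP-specific code:

```python
import struct

def socks5_greeting(no_auth: bool = True) -> bytes:
    """Build the first packet a SOCKS5 client sends (RFC 1928).

    Layout: VER (0x05), NMETHODS, METHODS.
    Method 0x00 = no authentication, 0x02 = username/password.
    """
    methods = b"\x00" if no_auth else b"\x02"
    return struct.pack("BB", 0x05, len(methods)) + methods

assert socks5_greeting() == b"\x05\x01\x00"
```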
docker run -d --name microwarp -p 1080:1080 ccbkkb/microwarp
https://github.com/ccbkkb/MicroWARP
Geek Labs
Daily shares of interesting GitHub open-source projects. AI tools, automation gems, technical tutorials, open-source inspiration.
