
deploy.live
April 27, 2026
4 min read
Summary
A MacBook Pro M5 Max with 128GB of unified memory and a 40-core GPU was used to run local LLMs offline during a ten-hour flight. Gemma 4 31B and Qwen 4.6 36B were tested, alongside the top 100 most common Docker images and programming languages, making it possible to build functional sites with rich visualizations entirely offline.
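A setup like the one above typically serves the model through a local OpenAI-compatible HTTP endpoint (LM Studio's local server defaults to `http://localhost:1234/v1`). As a minimal sketch, assuming a model is already loaded and the server is running, and using the placeholder model identifier `gemma-4-31b` (not a confirmed name), a request could look like this:

```python
import json
import urllib.request

# LM Studio's local server exposes an OpenAI-compatible API;
# http://localhost:1234/v1 is its default base URL (an assumption here).
BASE_URL = "http://localhost:1234/v1"


def build_chat_request(model: str, prompt: str, temperature: float = 0.2) -> dict:
    """Build an OpenAI-style chat-completion payload for the local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }


def ask_local_model(model: str, prompt: str) -> str:
    """Send the payload to the local endpoint and return the reply text.

    Only talks to localhost, so it works with no internet connection --
    which is what makes the in-flight workflow possible.
    """
    payload = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With the server running, `ask_local_model("gemma-4-31b", "Reverse a string in Python")` returns the completion text; everything stays on localhost.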

I ran Gemma 4 as a local model in Codex CLI
Apr 12, 2026

Running Gemma 4 locally with LM Studio's new headless CLI and Claude Code
Apr 5, 2026

Darkbloom – Private inference on idle Macs
Apr 16, 2026

Flash-MoE: Running a 397B Parameter Model on a Laptop
Mar 22, 2026

MacBook M5 Pro and Qwen3.5 = Local AI Security System
Mar 20, 2026