In this next episode of our “untitled” podcast, Nohe and Rody take a “tech walk” to discuss the evolving landscape of AI development tools. We dive deep into the differences between the linear workflows of Gemini CLI and the asynchronous, project-level capabilities of Antigravity.
We also geek out on home lab setups—discussing the shift from Docker Compose to Kubernetes (K3s) on Raspberry Pi clusters—and share a game-changing workflow using NotebookLM to generate context files for your AI agents. Finally, we explore Stitch for generative UI, including how to instantly create shaders and animations from simple screenshots.
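For a taste of the Compose-to-Kubernetes conversion we discuss, here's the kind of mapping that conversion produces, written by hand as a minimal sketch (the `web` service and `nginx:alpine` image are illustrative, not from the episode): one Compose service roughly becomes a Deployment, with ports moving to the container spec.

```
# docker-compose.yml (input)
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
---
# Rough Kubernetes equivalent: a Deployment running the same image.
# (A Service object would be needed to expose host port 8080.)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - containerPort: 80
```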
Chapters:
00:00 - Intro & Gemini CLI Configurations
01:05 - Gemini CLI vs. Antigravity: Linear vs. Async Workflows
03:45 - Antigravity’s Agent Manager & Project-Level Tasks
06:30 - Managing Multiple Contexts with Git Worktrees
08:30 - Neovim, Tmux, and AI Integration
12:30 - SSH Tunneling & Remote Development with Antigravity
15:10 - Home Lab: Raspberry Pi Clusters & K3s vs. Docker Swarm
19:45 - Using AI to Convert Docker Compose to Kubernetes
22:55 - The NotebookLM Workflow: Generating Agent Rules from Docs
31:30 - Stitch: Generative UI, Shaders, and App Redesigns
35:20 - Outro & Teaser: Prompt Optimization
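If you want to try the worktree trick from the 06:30 chapter yourself, here's a minimal sketch: one checkout per task or agent, no re-cloning (the `demo` repo and `feature-x` branch names are illustrative).

```shell
# Throwaway repo to demonstrate (skip the init if you already have one)
git init demo && cd demo
git -c user.name=demo -c user.email=demo@example.com commit --allow-empty -m "init"

# Each worktree is a separate working directory on its own branch,
# so one agent can edit feature-x while another stays on the main branch:
git worktree add ../demo-feature -b feature-x   # sibling dir on a new branch
git worktree list                               # shows the main checkout and the new one
```

Remove a finished context with `git worktree remove ../demo-feature` when the task is done.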
Hosts: Nohe and Rody
#AI #GoogleCloud #GeminiCLI #Antigravity #SoftwareEngineering #NeoVim #Docker