What Needs to Change for Open-Source AI Coding?
- Hamburg, Germany
This note is also available in German.
Open-source models like Kimi K2 and DeepSeek v3 are, in principle, good enough for serious code generation; you can see that as soon as you try them out.
The problem starts with inference. Kimi K2 has 1 trillion parameters and DeepSeek v3 over 600 billion; that does not run on consumer hardware, so local use is a non-starter. The alternative would be paying through OpenRouter or directly via the API, but the models are not good enough for me to spend money on that.
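To put those parameter counts in perspective, here is a rough back-of-the-envelope sketch, my own numbers, counting only the weights and ignoring KV cache and activations:

```python
# Rough memory needed just to hold the weights, at different quantization levels.
# These are approximations, not vendor specs; real requirements are higher.

def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate memory in GB to store the model weights alone."""
    return params_billion * bytes_per_param  # 1e9 params * bytes / 1e9 bytes-per-GB

for name, params_b in [("Kimi K2", 1000), ("DeepSeek v3", 671)]:
    for label, bpp in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
        print(f"{name:12s} {label:5s} ~{weight_memory_gb(params_b, bpp):5.0f} GB")
```

Even at aggressive 4-bit quantization that is roughly 500 GB for Kimi K2 and 335 GB for DeepSeek v3, and although both are mixture-of-experts models with far fewer active parameters per token, all experts still have to sit in memory. A high-end consumer GPU tops out around 24 to 32 GB of VRAM.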
So we remain stuck in the providers' vertical stacks: Claude Max only works through Claude Code, Gemini only through Google's tools, and Copilot only within Microsoft's ecosystem. Open-source CLI tools like Aider, Continue, or Open Interpreter cannot connect to these subscriptions.
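The technical gap is simple: open-source tools build on the provider-agnostic OpenAI-compatible API pattern, an API key plus a base URL, and a consumer subscription like Claude Max exposes no such endpoint, so there is nothing for them to point at. A minimal sketch of that pattern (the base URL and model id are placeholders, not a way around the subscription wall):

```python
# The pattern open-source coding tools rely on: any OpenAI-compatible endpoint
# reachable with a pay-per-token key and a base URL. Subscriptions expose neither.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # any OpenAI-compatible endpoint
    api_key="sk-...",                          # pay-per-token key, not a subscription login
)

response = client.chat.completions.create(
    model="moonshotai/kimi-k2",  # placeholder model id, check the provider's catalog
    messages=[{"role": "user", "content": "Write a function that parses a CSV header."}],
)
print(response.choices[0].message.content)
```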
For this to change, we need models with 3 to 7 billion parameters that run locally while still coding at Opus level. Whether that happens anytime soon, I do not know. Until then, open-source tools remain a niche for enthusiasts. I will keep using my Claude Max subscription with Claude Code, and if I am ever unhappy with that, I would switch to another CLI platform and take my subscription with me. That is as far as I would go.