It shows that a company you and your organization trust with your data, and allow full control over your devices 24/7, is failing to properly secure its own software.
It is a client written in an interpreted language running on your own computer; there is nothing to secure or hide, as the source was already provided to you. Or am I mistaken?
Even with the same model (--self-review), that makes a huge difference and immediately highlights how bad the first iterations of an LLM's output can be.
I looked at the leaked code expecting some "secret sauce", but honestly didn't find anything interesting.
I don't get the hype around Claude Code. There's nothing new or unique. The real strength is the models.