They absolutely do; the CEO has come out and said a few engineers have told him that they don't even write code by hand anymore. To some people that sounds horrifying, but a good engineer would not just take code blindly; they would read it and refine it using Claude, while still saving hundreds of man-hours.
> They absolutely do; the CEO has come out and said a few engineers have told him that they don't even write code by hand anymore. To some people that sounds horrifying, but a good engineer would not just take code blindly; they would read it and refine it using Claude, while still saving hundreds of man-hours.
TBH, that isn't sustainable. Skills atrophy. At some point they are going to take the code blindly.
Considering what they have said in the past about agentic code changes, they are already doing just that: blindly approving code from the agent. I say this because when I last read what one of their engineers on CC tweeted/posted/whatever, I thought to myself, "No human can review that many lines of code per month"[1].
---------
[1] IIRC, it was something stupid like 30kLoc reviewed in a month by a single engineer.
I keep telling my friends that while experienced devs feel extremely productive, the newer ones will likely not develop the skills needed to work with the finer aspects of code.
This might work for a while, but after a year or two of it, even a small Python script will feel like yak shaving.
I would love to hear/see a definitive answer to this, but I read somewhere that the relationship between MS and \A is such that the Copilot version of the \A models has a smaller context window than what you get through CC.
This would explain the "secret sauce", if it's true. But perhaps it's not and a lot is LLM nondeterminism mixing with human confirmation bias.
Agreed. I was an early adopter of Claude Code, and at work we only had Copilot. But the Copilot CLI isn't too bad now: you've got slash commands, Agents.MD and skills.md files for controlling your context, and access to Sonnet & Opus 4.5.
Maybe Microsoft is just using it internally, to finish copying the rest of the features from Claude Code.
Much like the article states, I use Claude Code beyond just its coding capabilities.
Same situation: once I discovered the CLI and got it set up, my happiness went up a lot. It's pretty good; for my purposes at work it's probably as good as Claude Code.
I'm amazed that a company that's supposedly one of the big AI stocks seemingly won't spare a single QA position for a major development tool. It really validates Claude's CLI-first approach.
> Zig where I used to use C/Rust (but admittedly I spent the least time here).
I really don't understand how that fits with the “I want something that allows me to focus my mental facilities on the complexities of the actual problem domain”.
For low-level stuff, Rust lets you offload the cognitive load of maintaining the ownership requirements to the machine. Zig, by contrast, is exactly like C: it forces you to think about them all the time, or you shoot yourself in the foot at the first opportunity…
For stuff that can be done with managed languages, then absolutely: the GC lets you ignore that aspect completely, at the cost of some performance you don't always care about given how fast modern hardware is.
What is the difference between Kuberesolver and using a Headless Service?
In the README.md file, they compare it with a ClusterIP service, but not with a Headless on "ClusterIP: None".
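For readers unfamiliar with the distinction: a Headless Service is just a normal Service with `clusterIP: None`, which makes cluster DNS return the individual pod IPs instead of a single virtual IP. A minimal sketch (names and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-grpc-service     # placeholder name
spec:
  clusterIP: None           # headless: DNS returns one A/AAAA record per ready pod
  selector:
    app: my-grpc-app        # placeholder selector
  ports:
    - port: 50051
```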
The advantage of using Kuberesolver is that you do not need to change DNS refresh and cache settings. Even so, I would think the DNS-based approach is preferable to having the application call the Kubernetes API directly.
I can give an n=1 anecdote here: the DNS resolver used to have hard-coded caching, which meant it was unresponsive to pod updates.
That meant that deploying a service which drained in less than 30s would have a little mini-outage for that service until the in-process DNS cache expired, with of course no way to configure it.
Kuberesolver streams updates, and thus lets clients talk to new pods almost immediately.
I think things are a little better now, but based on my reading of https://github.com/grpc/grpc/issues/12295, it looks like the DNS resolver still might not resolve new pod names quickly in some cases.
No rebinding, a better fit for the grain of OTP, no AST macros. Last I checked, the debugging experience with Elixir was pretty subpar. Erlang also has a fundamentally nicer syntax that I find a great deal more readable. I'm not really sure what the appeal of Elixir as a language is actually supposed to be, outside of people who have spent a lot of time writing Ruby code.
Full disclosure: I started with Erlang, I get paid to work with Elixir every day, I love Erlang still.
Why someone might like Elixir:
- slightly less crufty stdlib for a lot of the basic stuff (though we still use the Erlang stdlib all the time)
- the Elixir community started off using binaries instead of charlists so everything uses binaries
- great general collections libraries in the stdlib that operate on interfaces/protocols rather than concrete collections (Enum, Stream)
- macros allow for default impls and a good deal less boilerplate, great libraries like Phoenix and Ecto, and the community seems to be pretty judicious with their use
- protocols allow datatype polymorphism in a really nice way (I know about behaviours, they are also good)
- very standard build tool/project layout/generators that have been there from the start (Erlang has caught up here with rebar, it seems)
- a lot of high quality libraries for web stuff, specifically
- convenience stuff around common OTP patterns like Task, Task.Supervisor, Agent, etc.
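As an illustration of that last point, Task wraps the common spawn-and-await pattern in a couple of lines. A minimal sketch using only the standard library:

```elixir
# Task.async spawns a linked, monitored process; Task.await blocks
# until the function's return value arrives (or times out).
task = Task.async(fn -> Enum.sum(1..100) end)
Task.await(task)
# => 5050
```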
For me, I love the clarity and brevity of Erlang the language but I find Elixir a lot more pleasant to use day-to-day. This is just personal, I am not making a general statement saying Elixir is better.
> Last I checked, the debugging experience with elixir was pretty subpar.
Just curious, why is this? All of the Erlang debugging stuff seems to work.
> Just curious, why is this? All of the Erlang debugging stuff seems to work.
But you'd see a decompiled Erlang-ish code in the (WX-based, graphical) debugger, no? Genuinely curious, I think it was like that last I checked, but that was in 2019.
(Assuming you're not trolling: you chose to focus on features that can only be judged subjectively, and therefore can only be discussed as preferences. It's ok to have them, but actively displaying them is a bit pointless. Objectively measurable features of both languages put them very close together, with both having slight advantages over the other in different areas, on average making them almost equivalent. Especially compared to anything non-BEAM.)
I'm not trolling, but I'm very serious about language design after going through a long gauntlet. I don't think making mutation easy, and also having the ability to hide code execution, is necessarily a good practice in a system principally designed for safe, robust and efficient concurrency. Don't use a flathead on a phillips screw.
Rebinding is not mutation. This seems pedantic, but it is an important distinction: none of the semantics of the runtime change, and the data remains immutable. You probably know this. However, for the benefit of readers who may be less familiar: Erlang does not allow variables to be rebound, so it's somewhat typical to see Erlang code like this:
X1 = 8.
X2 = X1 + 1.
X3 = X2 * 302.
You cannot, say, do this:
X1 = 8.
X1 = X1 + 1.
This is because in Erlang (and in Elixir) the `=` is not just assignment; it is also the operator for pattern matching. This has implications that are too broad for this post, but the key point here is that it's attempting to see if the left side and the right side "can match".
Whereas writing the same thing in Elixir would look like:
x = 8
x = x + 1
x = x * 302
This is because Elixir allows `x` to be rebound, in effect changing what data `x` points to, but not mutating the underlying data itself. Under the hood Elixir rewrites each expression into something resembling the Erlang version.
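To make the rebinding-versus-matching distinction concrete, Elixir's pin operator `^` opts back into Erlang-style matching against a variable's current value. A small sketch:

```elixir
x = 8        # binds x
x = x + 1    # rebinds x to 9; the integer 8 itself is untouched
^x = 9       # pin: pattern-match against x's current value (9); succeeds
# ^x = 10    # would raise MatchError instead of rebinding
```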
The practical effect of this is that if you for example insert a process spawn somewhere in between any of the lines that references `x`, that process gets its own totally immutable version of the data that `x` points to at that point in time. This applies in both Erlang and Elixir, as data in both is completely immutable.
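A quick sketch of that point: the spawned process closes over the value `x` had at spawn time, and a later rebinding in the parent does not affect it.

```elixir
x = [1, 2, 3]

# The spawned process captures the list that x points to right now.
pid =
  spawn(fn ->
    receive do
      :show -> IO.inspect(x)
    end
  end)

x = [0 | x]        # rebinds x in this process only; the old list is unchanged
send(pid, :show)   # the spawned process still prints [1, 2, 3]
Process.sleep(100) # give it a moment to print before the script exits
```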
It should also be noted that handling state like that is not really idiomatic Erlang. State is updated at a process level, thus traditionally you spawn another process, which is trivial to do. On the BEAM that is fast enough for 95% of cases. If you really need mutation of local variables for performance reasons, you should already be writing NIFs anyway.
State variables are what I think corpos call a "code smell". The BEAM/OTP isn't a number cruncher; there are better tools out there if you're doing a lot of that. Erlang is, at its core, about constraint logic programming. It is best thought of as a tool for granular, scalable, distributable userspace scheduling. If you need something outside of that, NIFs or Ports. Both are quite nice.
This has nothing to do with math or number crunching on the BEAM. This has nothing to do with mutation. This has nothing to do with performance.
This kind of process and function-local static single-assignment code is all over the place in Erlang codebases. It's incredibly common. The other popular method is tail recursion.
I searched for literally 30 seconds and found these:
> It should also be noted that handling state like that is not really idiomatic Erlang.
It's not about the state but about intermediate results. When you have a value that you pass to one function, and then you need to pass the result to another function, you're not dealing with a "state" as OTP defines it, unless the calls are asynchronous. Often, they're not, and that's where variable rebinding comes in.
Worth noting: the `|>` macro operator in Elixir serves a similar purpose, as long as you don't need pattern matching between calls. Where it applies, you don't have to name intermediate results at all, resulting in cleaner code.
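For instance, the X1/X2/X3 chain from upthread collapses under `|>` (operators can be piped through their `Kernel` function forms):

```elixir
# The X1 = 8, X2 = X1 + 1, X3 = X2 * 302 chain, with no named intermediates:
8
|> Kernel.+(1)
|> Kernel.*(302)
# => 2718
```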
> State variables are what I think corpos call a "code smell".
Having to call multiple functions in a sequence is the most natural thing to do, and Erlang code is littered with "X1 = ..., X2 = ...(X1), X3 = ...(X2)" kind of code everywhere.
There are some libraries (based on parse transforms) that introduce a sort of "do" notation to deal with this issue (erlando and its variations come to mind).
I love it. I didn't know. It's going to take a while to make this a pervasive feature of most Erlang codebases, but it seems like a good feature to introduce.
I know there are monad libraries using parse transforms and/or list comprehensions, but I often found their use is frowned upon in the Erlang community. I kind of assumed the GP in this thread would reject them, given their negative opinion on macros.
I was in a similar situation, ended up relying on libs that used parse transforms a lot and then found out most of my usage could have been replaced by the new `maybe` expression.
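For comparison, Elixir's `with` expression covers similar ground without parse transforms; `safe_div` here is a made-up helper for illustration:

```elixir
# safe_div returns {:ok, _} | {:error, _} tuples, as is conventional.
safe_div = fn
  _, 0 -> {:error, :div_by_zero}
  a, b -> {:ok, div(a, b)}
end

# `with` chains ok-tuples and short-circuits on the first clause that
# fails to match, much like Erlang's `maybe ... ?= ... end` (OTP 25+).
with {:ok, x} <- safe_div.(10, 2),
     {:ok, y} <- safe_div.(x, 0) do
  {:ok, y}
end
# => {:error, :div_by_zero} (the non-matching value falls through unchanged)
```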
Not to pick on you, but there are always posts like this in every Erlang thread. One is not strictly superior to the other, and the BEAM community benefits from the variety IMO.
How about "service proxy" vs "web proxy", rather than "reverse proxy" and "proxy"? That makes it clearer that one is a proxy on the service side and the other is a proxy on the client side. "Service proxy" and "client proxy" might be even better.
https://github.com/features/copilot/cli