Incorrect. They need to appease/trick/threaten/etc. those who are paying for their services. Shareholders just demand they do so at the greatest (often short-term) rate.
No, MCP is just a server that returns prompts to the LLM. The server can be/do whatever. You can have an echo MCP that just echoes back whatever you send it.
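A minimal sketch of the point, not using any official SDK: MCP is JSON-RPC under the hood, so an "echo" server is little more than a handler that copies the client's arguments back into the result. The request and result shapes below are simplified illustrations, not the full protocol (a real server also handles initialization, tool listing, transport, etc.).

```python
import json

def handle_tool_call(request: dict) -> dict:
    """Toy echo handler: whatever text the client sends comes straight back.

    Simplified JSON-RPC shapes for illustration only.
    """
    text = request["params"]["arguments"]["text"]
    return {
        "jsonrpc": "2.0",
        "id": request["id"],
        "result": {"content": [{"type": "text", "text": text}]},
    }

# Example round trip:
req = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "echo", "arguments": {"text": "hello"}},
}
print(json.dumps(handle_tool_call(req)))
```

The point being: nothing about the protocol constrains what the server does with the input.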
This should be all of Information Technology's take. Your computers get hacked - IT's fault. Users complain about how hard your software is to use, or that it breaks all the time - IT's fault.
The fact users deal with almost everything being objectively not very good if not outright bad is a testament to people adapting to bad circumstances more than anything.
If you read the history you’ll see the appropriate word is “restarted” the EV revolution. It was on and off again in a slow march to the point that allowed Tesla to exist. I’m not diminishing the role Tesla played, but it has to be taken in context. They stood on shoulders.
I think looking at every carmaker's lineup should make it obvious that they don't give a crap what powers a car, they are just trying to sell what's popular. EVs were trendy for a couple years and a margin-subsidizing $7000 was available, so everybody enthusiastically brought out EVs. Now they're less popular, so they're all pulling back. Arguably even Tesla is doing so, given that Musk has intimated that he didn't really think Tesla was going to keep selling cars forever.
When the demand is sufficient, the cars will be sold in numbers to match it. Demand will increase as it becomes practical to own an EV for more people. This mainly has to do with charging infrastructure at every level, which is capital intensive for both individuals and governments.
Do you suggest we ignore or include in this history the original contributions of the first electric cars, from all the way back in the first decade of the 1900s?
There was a long time between those cars and the modern electric car where the only thing electric was "golf carts" (not general purpose cars), or homemade conversions. The EV1 was the first commercial car in the memory of most people alive today. The 1900s ones were fun/interesting historical things, but not practical.
So the problem with Chris' take is "This one for-fun project didn't produce anything particularly interesting."
So, setting aside the fact that we now have magic that can just produce "conventional" compilers, take it to a Moore's Law situation. Start 1000 "create a compiler" projects, each with a temperature to try new things, experiment, mutate. Collate, find the novel results, reiterate: another 1000 runs seeded with some of those findings. Assume this is effectively free to do.
The stance that this - which can be done (albeit badly) today and will get better and/or cheaper - won’t produce new directions for software engineering seems entirely naive.
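The loop being described is basically evolutionary search. A toy sketch of the shape of it, with a trivial numeric stand-in for "run an LLM at some temperature and score the result" (the `mutate` and `score` functions here are entirely hypothetical placeholders):

```python
import random

def mutate(candidate: list[float], temperature: float) -> list[float]:
    # Hypothetical stand-in for an LLM run at a given temperature:
    # perturb an existing candidate to try new things.
    return [g + random.gauss(0, temperature) for g in candidate]

def score(candidate: list[float]) -> float:
    # Hypothetical fitness measure; lower is better.
    return sum(g * g for g in candidate)

def evolve(runs: int = 1000, generations: int = 5, keep: int = 10) -> float:
    # Start N independent projects with random starting points.
    population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(runs)]
    for _ in range(generations):
        # Collate: keep the most promising findings...
        population.sort(key=score)
        elites = population[:keep]
        # ...and reiterate: another batch of runs seeded with them.
        population = [mutate(random.choice(elites), temperature=0.1)
                      for _ in range(runs)]
    return score(min(population, key=score))
```

The interesting question in the thread is exactly whether this kind of brute iteration, applied to real compilers instead of toy vectors, surfaces genuinely new directions rather than rediscovering the training set.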
Moore's law states that the number of transistors in an integrated circuit doubles about every two years. It has nothing to say about the capabilities of statistical models.
In fact, in statistics we have another rule of thumb: the more parameters you add, the more you risk overfitting. And overfitting already seems to be a major problem with state-of-the-art LLMs. When you start overfitting you are pretty much just re-creating stuff which is already in the dataset.
In their example it doesn't matter in this case whether the models get better or not. It matters whether inference gets cheap enough that we can afford to throw huge numbers of tokens at exploring the problem space.
Further model improvements would be a bonus, but they're not required for us to get much further.
> Modern LLMs showed that overfitting disappears if you add more and more parameters.
I have not seen that. In fact this is the first time I've heard this claim, and frankly it sounds ludicrous. I don't know how modern LLMs deal with overfitting, but I would guess there is simply a content-matching algorithm after the inference, and if there is a copyright match the program does something to alter or block the generation. That is, I suspect the overfitting prevention is algorithmic and not part of the model.