> Unlike the machines of the first Industrial Revolution, A.I. does not necessarily need more input; it can sustain itself.
I'm not going to get into a bunch of debates about AI capabilities, and I don't take an extreme stance one way or the other about what current AIs are or will be capable of, but this sentence strikes me as very strange. Modern AI systems have significant problems that will likely only be solved through heavy amounts of training and data curation, and maintaining good performance over time will likely require regular infusions of newer data.
The idea that these systems are self-sufficient is fiction; they're not even close to that point, and they have serious gaps in capability. Maybe those gaps will get closed -- again, I'm not going to debate what AI can theoretically do in the future or how fast it will improve; that discussion is a little too speculative for my tastes. But while modern LLMs can do a lot of impressive stuff, the idea that we've hit some threshold where further jumps won't be needed, or where those jumps won't require additional data, training, or input, is buying into the worst of LLM hype.
What we're seeing is that for many tasks involving creative output, feeding new data into LLMs to keep them current and up-to-date is important and is likely to stay important. And the typical shortcuts around retraining (doing web searches, merging contexts) don't always work as well as retraining does, and they require their own up-to-date inputs anyway. Again, this isn't a commentary on whether LLMs are good at those tasks; I'm just pointing out that AI very definitely needs continuous input for many of the highest-profile tasks people want to use it for. And the areas where it doesn't need that input are, in most cases, probably not the tasks people are most worried about being automated.
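To make the retrieval-shortcut point concrete, here's a deliberately toy sketch (not a real LLM or any real API; every name and value in it is made up). The "model" has knowledge frozen at training time, and a retrieval step can patch in fresher facts -- but only if someone keeps the external corpus current, which is exactly the kind of continuous input I'm talking about:

```python
# Toy illustration: frozen "training knowledge" plus a retrieval corpus.
# All data here is fictional; this just shows that retrieval shifts the
# freshness burden onto the corpus rather than eliminating it.

FROZEN_KNOWLEDGE = {"widget price": "$10 (as of training)"}

# The retrieval corpus is the up-to-date input that still has to be maintained.
CORPUS = {"widget price": "$12 (current)"}

def answer(query: str, corpus: dict) -> str:
    # Prefer retrieved (fresh) context; fall back to frozen training knowledge.
    if query in corpus:
        return corpus[query]
    return FROZEN_KNOWLEDGE.get(query, "unknown")

print(answer("widget price", CORPUS))  # fresh corpus overrides stale knowledge
print(answer("widget price", {}))      # empty corpus -> stale training answer
```

The second call is the failure mode: if nobody refreshes the corpus, retrieval quietly degrades into the stale baseline.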