I'd be curious if there's an LLM prompt equivalent of a zip bomb that will explode the context window. I know there's deterministic limits on context window, but future LLMs _are_ going to have strange loops and going to be very susceptible to circular reasoning.
Before AGI, there will be an untenably gullible general intelligence.
I've seen LLMs get into loops because they forgot what they were trying to do. For instance, I asked an LLM to write some code to search for certain types of wordplay, and it started making a word list (rather than writing code to pull in a standard dictionary), and then it got distracted and just kept listing words until it ran out of time.
One of the things that will likely _characterize_ AGI are nondeterministic loops.
My bet is that if AGI is possible it will take a form that looks something like
x_(n+1) = A * x_n (1 - x_n)
where x is a vector with billions of entries and the parameters in A (sizeof(x)^2 of them?) are trained, and also tuned to have period 3, or nearly period 3, for a metastable, near-chaotic progression of x.
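The scalar version of that recurrence is the classic logistic map, and the period-3 behavior is easy to see numerically. A minimal sketch (scalar x rather than a vector, r standing in for A; r = 3.83 sits just inside the known period-3 window, which opens at r = 1 + sqrt(8) ≈ 3.8284):

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n)
def logistic_orbit(r, x0=0.1234, burn_in=1000, n=12):
    """Iterate the map, discard a transient, and return n rounded samples."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n):
        orbit.append(round(x, 4))
        x = r * x * (1 - x)
    return orbit

# Inside the period-3 window: the orbit settles onto a 3-cycle.
print(logistic_orbit(3.83))
# At r = 4 the map is fully chaotic: the orbit never settles.
print(logistic_orbit(4.0))
```

The interesting regime the comment gestures at is right at the edge of that window, where long chaotic transients ("intermittency") alternate with near-periodic stretches.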
What's confusing to me is the dual use of the word entropy in both the physical sciences and in communication theory. The local minima are somehow stable in a world of increasing entropy. How do these local minima ever form when there's such a strong arrow of entropy?
Certainly intelligence is a reduction of entropy, but it's also certainly not stable. Just as with cellular automata (https://record.umich.edu/articles/simple-rules-can-produce-c...), loops that are stable can't evolve, but loops that are unstable have too much entropy.
So, we're likely searching for a system that's metastable within a small range of input entropy (physical) and output entropy (informational).
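The cellular-automata point can be made concrete with an elementary CA: the same update scheme produces frozen rules, noisy rules, and rules on the edge. A sketch (Rule 110 chosen as the standard example of complex, non-repeating behavior from a trivially simple local rule; it is known to be Turing-complete):

```python
# Elementary cellular automaton: each cell's next state depends only on
# itself and its two neighbours, looked up in an 8-entry rule table.
def step(cells, rule):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Rule 110 from a single live cell: structured but non-repeating patterns,
# i.e. neither frozen (too stable to evolve) nor pure noise (too much entropy).
cells = [0] * 31
cells[15] = 1
for _ in range(12):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, 110)
```

Compare Rule 0 (everything dies immediately) or Rule 30 (statistically random) under the same `step` function to see the two failure modes the comment describes.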
If you have any system that tries to gravitate to a local minimum, it's almost impossible not to end up with something like Newton's fractal. Classical feed-forward network training looks a lot like Newton's method to me. Please take a look at https://en.m.wikipedia.org/wiki/Newton%27s_method
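Newton's fractal is just the basin-of-attraction map of Newton's method on a polynomial with several roots. A minimal sketch for f(z) = z^3 - 1 (color each starting point by which cube root of unity it converges to and the fractal boundary appears):

```python
import cmath

# The three cube roots of unity: the attractors of Newton's method on z^3 - 1.
ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def newton_basin(z, steps=50, tol=1e-9):
    """Return the index of the root that z converges to, or -1 if none."""
    for _ in range(steps):
        if abs(z) < tol:                      # f'(z) = 3z^2 vanishes at 0
            return -1
        z = z - (z**3 - 1) / (3 * z**2)       # Newton update: z - f(z)/f'(z)
        for i, root in enumerate(ROOTS):
            if abs(z - root) < tol:
                return i
    return -1

# Points near a root fall into that root's basin...
print(newton_basin(1.1))         # -> 0 (the root at 1)
print(newton_basin(-0.5 + 0.9j)) # -> 1 (the root at exp(2*pi*i/3))
```

...but the boundaries between the three basins are fractal: arbitrarily small perturbations of a boundary point flip which minimum you land in, which is the sensitivity the comment is pointing at.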