I did my Ph.D. in computational neuroscience (2020 grad).
This project is often used to joke about the limitations of computational modelling of nervous systems. If you can't compute the behaviour of an effing worm with a mere ~300 neurons, what's the point of all the hot air around connectomics (mapping the connections of the brain)? Connectomics was a big word when I started my Ph.D. The apologists are always like, "a real neuron is way too complicated!"
IMHO, chemical computation is often overlooked in the neural "computation" community, and it is extremely hard to model. Modelling aside, we don't even know the reaction parameters of most of the proteins and other molecules involved. The electrical side of computation is easy to measure, and one can understand why we started with it. There are thousands of types of proteins even in a small structure such as a synapse, and an individual protein can implement interesting non-linear computation. E.g., CaMKII can implement a bistable switch (flip-flop) and thus store 1 bit of memory using just a few molecules (the real story is much more complicated).
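To make the flip-flop point concrete, here's a toy sketch of how a bistable switch arises from cooperative autophosphorylation plus linear dephosphorylation. This is a generic positive-feedback model, not measured CaMKII kinetics; all parameters are made up for illustration:

```python
def simulate(pulse, T=40.0, dt=0.01):
    """Euler-integrate a toy bistable kinase model:

        da/dt = k1 * a^2 / (K^2 + a^2) - k2 * a + stimulus(t)

    The cooperative (Hill-type) activation term plus linear decay
    yields two stable states: "off" (a ~ 0) and "on" (a ~ 1),
    separated by an unstable threshold. Parameters are illustrative.
    """
    k1, K, k2 = 1.0, 0.5, 0.8
    a = 0.0  # start in the "off" state
    for step in range(int(T / dt)):
        t = step * dt
        # a brief stimulus pulse between t=1 and t=2 (if enabled)
        stim = 0.5 if (pulse and 1.0 <= t < 2.0) else 0.0
        a += dt * (k1 * a**2 / (K**2 + a**2) - k2 * a + stim)
    return a

print(simulate(pulse=False))  # stays "off"
print(simulate(pulse=True))   # crosses threshold, latches "on" after the pulse ends
```

The key property is that the "on" state persists after the stimulus is gone, so the switch stores one bit. That's the sense in which a handful of molecules with the right feedback loop can act as memory.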
Yeah, like hormones (I'm assuming that's what you're calling chemical computation). Maybe this isn't forgotten, just not figured out yet. Maybe future GPTs will have some other layer of weights, or a different level of feedback, that could be called 'hormones'.
I used to follow their progress, but I unsubscribed from the mailing list due to donation-request spam. There have been very few public updates in the past year. There is still some movement, but from my perspective it seems to be happening at a geologic pace.