Cognitive architectures (like Soar, ACT-R, Sigma) are half psychology and cognitive science (building general, computational models of the human mind to understand it) and half AI (a continuation of GOFAI, using similar symbolist/structured approaches to intelligence).
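For readers who haven't looked at these systems, the "symbolist/structured" core they share boils down to a production system: a working memory of facts and a match-fire loop over condition-action rules. Here's a minimal illustrative sketch of that loop (toy facts and rules invented for the example; real Soar adds goal stacks, preference-based conflict resolution, chunking, and much more):

```python
# Toy production system: working memory + condition-action rules.
# Illustrative only -- not actual Soar syntax or semantics.

working_memory = {("sees", "enemy"), ("fuel", "low")}

# Each rule: (name, conditions that must all hold, facts to add on firing)
rules = [
    ("engage", {("sees", "enemy"), ("fuel", "ok")}, {("goal", "intercept")}),
    ("refuel", {("fuel", "low")}, {("goal", "return-to-base")}),
]

fired = set()
changed = True
while changed:  # match-fire cycle, repeated until quiescence
    changed = False
    for name, conditions, additions in rules:
        if name not in fired and conditions <= working_memory:
            working_memory |= additions
            fired.add(name)
            changed = True

print(working_memory)
```

With fuel low, only the "refuel" rule matches, so the system adopts a return-to-base goal; the "engage" rule never fires because its conditions aren't all present.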
These projects are a source of fascination to me. However, a persistent question for me is how these more symbolic approaches to cognitive modelling figure into today's world of ML and data-driven AI. I'm very curious to know where, and to what extent, the symbolic approaches of the past (and present) meet with ML.
I ask because you clearly have some exposure to these sorts of projects - any sources you can provide would be appreciated.
>I'm very curious to know where and to what extent the symbolic approaches of the past (and present) meet with ML?
If you had a good answer to that, you'd probably be well on your way to a Ph.D., if not a Turing Award. The question of symbolic/sub-symbolic integration has been a big outstanding question in the AI world for a very long time now. I don't think many people were actively working on it for quite a while, but it seems like there has been at least a small uptick in interest in that idea recently. My personal belief is that this kind of integration will be essential, at least in the short term, to achieving something like what we might actually call AGI. And while I'm hardly alone in thinking this, this position is by no means universally held. There are people (Geoff Hinton among others, if memory serves correctly) who believe that "neural nets are completely sufficient".
And frankly, in the long (enough) term that might be right. Build ANNs that are sufficiently deep, sufficiently wide, and with just the right initial architecture, and maybe you get something that develops "the master algorithm" and figures it all out on its own. I think that's probably possible in principle; my doubt is more about how realistic it is, especially over shorter time scales.
Anyway, if you're really interested in the topic, Ben Goertzel's OpenCog system includes a strong focus on symbolic/sub-symbolic integration, and borrows a lot of ideas from some well-known cognitive architecture work (LIDA, in particular).
Also, googling "symbolic / sub-symbolic integration" will turn up a ton of sites / papers / books / etc. that go into far more detail.
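To make the integration question a bit more concrete: one common pattern in this space is to let a learned (sub-symbolic) model produce soft perceptual judgments, and then run crisp symbolic rules over thresholded versions of those judgments. A toy sketch of that pattern, with the weights and the rule invented purely for illustration (a real system would learn the perceptual layer from data):

```python
import math

# "Sub-symbolic" layer: a hand-weighted logistic unit standing in for a
# trained classifier that scores how red an object's RGB values look.
def perceive_red(rgb):
    r, g, b = rgb
    score = 6.0 * r - 3.0 * g - 3.0 * b - 1.0  # invented weights
    return 1.0 / (1.0 + math.exp(-score))       # probability-like output

# "Symbolic" layer: crisp logical rules over perceptual predicates.
def classify(rgb, is_round):
    facts = set()
    if perceive_red(rgb) > 0.5:   # threshold the soft judgment into a fact
        facts.add("red")
    if is_round:
        facts.add("round")
    if {"red", "round"} <= facts:  # rule: red AND round -> apple
        return "apple"
    return "unknown"

print(classify((0.9, 0.1, 0.1), is_round=True))
```

The interesting (and unsolved) part, of course, is everything this sketch dodges: learning the rules themselves, propagating uncertainty through them instead of thresholding, and letting the symbolic layer shape what the perceptual layer learns.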
I went very deep into OpenCog and finally had to concede that there just wasn't enough rigor and coordination between the components. Goertzel seems easily distracted by various other subjects. I realize that he has to figure out ways to fund his work, so I'm not being judgmental.
In addition to symbolic and deep learning, future AI systems will most likely have a causal learning component. Judea Pearl has been working on this subject for years. http://bayes.cs.ucla.edu/jp_home.html
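Pearl's core point can be shown in a few lines: with a confounder, the naive observational contrast E[Y|X=1] - E[Y|X=0] differs from the causal effect, but backdoor adjustment (stratifying on the confounder) recovers it. A small simulation sketch, with numbers invented for illustration; the true causal effect of X on Y here is zero:

```python
import random

random.seed(0)
n = 100_000

# Confounded data: Z causes both X and Y; X has NO causal effect on Y.
rows = []
for _ in range(n):
    z = random.random() < 0.5
    x = random.random() < (0.8 if z else 0.2)  # treatment driven by Z
    y = random.random() < (0.9 if z else 0.1)  # outcome driven by Z only
    rows.append((z, x, y))

def mean_y(subset):
    return sum(y for _, _, y in subset) / len(subset)

# Naive observational contrast: badly biased by the confounder Z.
naive = mean_y([r for r in rows if r[1]]) - mean_y([r for r in rows if not r[1]])

# Backdoor adjustment: stratify on Z, average per-stratum contrasts by P(Z).
adjusted = 0.0
for zval in (True, False):
    stratum = [r for r in rows if r[0] == zval]
    p_z = len(stratum) / n
    treated = [r for r in stratum if r[1]]
    control = [r for r in stratum if not r[1]]
    adjusted += p_z * (mean_y(treated) - mean_y(control))

print(f"naive: {naive:.3f}, adjusted: {adjusted:.3f}")
```

The naive estimate comes out large and positive even though X does nothing, while the adjusted estimate lands near the true effect of zero. Knowing *which* variables to adjust for is exactly what Pearl's do-calculus and causal graphs are about.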
Good points all around. I think OpenCog has a lot of good ideas, but I won't claim that it's the "be all, end all" as of today. That said, I think to some extent the statement "there just wasn't enough rigor and coordination between the components" may be true exactly because that is the central challenge that still remains to be solved.
At the very least, I think reading Goertzel's books[1] and looking at OpenCog is a good introduction to the issues at hand in a general sense.
Totally agree on the causal learning thing. And that's an area that also seems to have had a resurgence of interest and activity lately.
[1]: Here I specifically mean Engineering General Intelligence, Volumes 1 & 2
Probably the most famous application of Soar is TacAir-Soar: https://soartech.com/portfolio-posts/automated-intelligent-p...
>TACAIR-SOAR is an intelligent, rule-based system that generates believable humanlike behavior for largescale, distributed military simulations. The innovation of the application is primarily a matter of scale and integration. The system is capable of executing most of the airborne missions that the U.S. military flies in fixed-wing aircraft. It accomplishes its missions by integrating a wide variety of intelligent capabilities, including real-time hierarchical execution of complex goals and plans, communication and coordination with humans and simulated entities, maintenance of situational awareness, and the ability to accept and respond to new orders while in flight.
For a successor project, there's the Sigma cognitive architecture, built by Paul Rosenbloom, who used to work on Soar: https://cogarch.ict.usc.edu/