Most of the time, these things are resource hogs arriving way before their time to shine, either needing Moore's law to drag the hardware up to them, or some nerd to wrestle with the combinatorial explosion and win. Transformers can be seen as a variation on Markov chains, but the innovation of attention means you can use a vocabulary of hundreds of thousands of tokens and sequences thousands of tokens long without the problem space going all Buzz Lightyear on you.
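To put rough numbers on the "Buzz Lightyear" part, here's a back-of-the-envelope sketch (the vocabulary size, context length, and model width are illustrative assumptions, not taken from any particular model): an order-n Markov model's context space grows as vocab**n, while self-attention's per-layer work grows only as roughly context**2 * d_model.

```python
# Back-of-the-envelope comparison (illustrative numbers, not any specific model):
# an order-n Markov model over vocabulary V has on the order of |V|**n possible
# contexts, while self-attention's per-layer work scales roughly with
# sequence_length**2 * d_model.

vocab_size = 100_000   # assumed vocabulary size
context = 2_048        # assumed sequence length in tokens
d_model = 768          # assumed hidden width

markov_contexts = vocab_size ** 3         # even a mere 3-token Markov context
attention_work = context ** 2 * d_model   # rough per-layer attention cost

print(f"3-token Markov context space: {markov_contexts:.1e}")  # 1.0e+15
print(f"per-layer attention work:     {attention_work:.1e}")   # 3.2e+09
```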
Ultra Hal was a best-in-class chatbot when fixed-response systems like ALICE/AIML were the standard. It used Markov chains and some clever pruning, but it dealt with a vocabulary of only a few hundred word tokens and looked just 2 or 3 tokens ahead in a sequence. It occasionally produced novel and relevant output, like a really shitty GPT-2.
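For a sense of how little machinery that kind of generator needs, here's a minimal sketch in the same spirit (not Zabaware's actual code, which isn't public in this form): learn which word follows each two-word context, then sample forward.

```python
import random
from collections import defaultdict

# Minimal Markov-chain text generator in the Ultra Hal spirit (illustrative
# only): a 2-word context maps to the words seen following it, and generation
# just samples that table forward.

def train(lines):
    chain = defaultdict(list)
    for line in lines:
        words = line.split()
        for i in range(len(words) - 2):
            chain[(words[i], words[i + 1])].append(words[i + 2])
    return chain

def generate(chain, seed, max_words=20):
    words = list(seed)
    for _ in range(max_words):
        followers = chain.get((words[-2], words[-1]))
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

corpus = [
    "the cat sat on the mat",
    "the cat chased the dog",
    "the dog sat on the porch",
]
print(generate(train(corpus), ("the", "cat")))
```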
I think we may see a resurgence of expert systems soon, as GPT-3 and transformers have proved capable of automating rule creation in systems like Cyc. Direct lookups into static databases have already been bolted onto GPT-style / RETRO-type models. Incorporating predicate-logic inference engines seems like the logical and potent next step: GPT could serve as the personality and process engine that eliminates the fatal flaw (tedium) in the massive, human-powered micro-tasking systems of GOFAI.
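A very rough, purely hypothetical sketch of that division of labor (llm_propose_rules is a stand-in for whatever model call and output parsing you'd actually use; the rules shown are canned): the language model drafts Cyc-style rules, and a dumb forward-chaining engine does the actual deduction.

```python
# Hypothetical split: a language model proposes rules as (antecedents,
# consequent) pairs; a tiny forward-chaining engine derives new facts.

def llm_propose_rules(prompt):
    # Stand-in for a real model call; pretend these came back and were parsed.
    return [
        ({"is_bird(tweety)"}, "can_fly(tweety)"),
        ({"can_fly(tweety)", "is_caged(tweety)"}, "wants_out(tweety)"),
    ]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

rules = llm_propose_rules("Write rules about caged birds as Horn clauses.")
print(forward_chain({"is_bird(tweety)", "is_caged(tweety)"}, rules))
```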
It's worth going through the literature all the way back to the 1956 Dartmouth summer workshop and hunting for ideas that just didn't work yet.
https://www.zabaware.com/ultrahal/
https://en.wikipedia.org/wiki/Dartmouth_workshop