The Abstraction and Reasoning Corpus (ARC) challenge by François Chollet has gained renewed attention due to the $1M prize announcement. This challenge is interesting to me because the idea of “abstraction” as “synthesizing cognitive programs” is something my team has worked on and
"Pure reason you Kan’t"
This is the best part. :P
“Concepts as cognitive programs” is a phrase I like. I think the ARC challenge is really going to be an induction challenge.
I always look forward to your writings. Your work at Vicarious and now DeepMind has greatly influenced my thinking and understanding of ML. Keep up the great work, Dr. George!
My intuition is that real brains run a massively parallel search algorithm, applying known/learned "motions", not just the prediction or pattern matching usually assumed.
It's as if (mini?)columns are an army of simpletons, each attempting its own unique, stupid, simple thing.
PS: more precisely, by search I mean a path search between a source state and a target state: source(state0) -> action1 -> state1 -> action2 -> state2 -> ... -> actionN -> target state.
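The path-search framing above can be sketched as a toy breadth-first search over states, where each "motion" is an action that transforms one state into another. The states, actions, and `path_search` helper here are illustrative assumptions of mine, not anything from the post:

```python
from collections import deque

def path_search(source, target, actions):
    """Breadth-first search for a sequence of actions with
    source -> action1 -> state1 -> ... -> actionN -> target."""
    frontier = deque([(source, [])])
    visited = {source}
    while frontier:
        state, path = frontier.popleft()
        if state == target:
            return path
        for name, apply_action in actions:
            nxt = apply_action(state)
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [name]))
    return None  # no action sequence found

# Toy "motions" on integer states: increment and double.
actions = [("inc", lambda s: s + 1), ("double", lambda s: s * 2)]
print(path_search(2, 9, actions))  # -> ['double', 'double', 'inc']
```

A brain would of course search a far richer state space, and (per the intuition above) would run many such searches in parallel rather than one serial BFS.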
That's also just a far-fetched theory. What are the actions there supposed to be?
Sure, an intuition is about as far-fetched as it gets.
Regarding the question, let's call the million simpletons "agents".
I would experiment with a big global state (think of a million-element activation vector) projecting into each local agent's simplified view of the global state.
Each agent's response (action) is projected back, changing the global state.
If they're all under an "animal's" virtual skull, then some of the global state generates motions (e.g. camera/wheel movement), and sensory inputs project into the global state too.
Sparsity is assumed, both for the global state encoding and for the subset of active agents at any given time, mostly for scaling/cost reasons.
Some global reward feedback, which also projects onto recent past local actions, is needed too.
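A minimal sketch of the agents-over-a-global-state idea above. All the sizes, random projections, and update rules are my own toy assumptions: the global state is a vector, each agent reads a small fixed projection of it, only the top-k most activated agents respond on a given step, and each response is projected back into the global state:

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_AGENTS, VIEW_DIM, K_ACTIVE = 256, 64, 16, 4

# Fixed random read/write projections per agent (stand-ins for learned ones).
read_proj = rng.standard_normal((N_AGENTS, VIEW_DIM, STATE_DIM)) / np.sqrt(STATE_DIM)
write_proj = rng.standard_normal((N_AGENTS, STATE_DIM, VIEW_DIM)) / np.sqrt(VIEW_DIM)

def step(global_state):
    # Each agent's simplified local view of the global state.
    views = np.einsum('avs,s->av', read_proj, global_state)
    # Sparsity: only the k agents with the strongest activation act this step.
    activation = np.linalg.norm(views, axis=1)
    active = np.argsort(activation)[-K_ACTIVE:]
    # Each active agent's response (action) is projected back,
    # changing the global state.
    for a in active:
        action = np.tanh(views[a])  # stand-in for a learned local policy
        global_state += 0.1 * write_proj[a] @ action
    return global_state, active

state = rng.standard_normal(STATE_DIM)
state, active = step(state)
print(sorted(active.tolist()))
```

Sensorimotor wiring and the global reward feedback mentioned above are left out; reward would presumably be broadcast to the recently active agents to adjust their read/write projections.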