Evolving Neural Programs
Neural networks, probabilistic inference, and combinational devices form a subset of low-level neural processing. These systems can all be iteratively clocked, forming different representations of general processing elements. These low-level processing elements can be represented as frames with slots for the specific type of process and its variables (neural network weights, probabilistic tables, or combinational relations). A frame's inputs can reference another frame's outputs. A frame can be duplicated and mutated in order to search for more useful representations through Hebbian similarity/recognition/learning.
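A minimal sketch of this frame idea in Python, under assumptions not in the note: the class name `Frame`, the slot names, and the weighted-sum "neural" process are all hypothetical stand-ins for the general processing element described above.

```python
import random

class Frame:
    """A generic processing element: slots for a process type, its
    variables, and references to other frames' outputs."""

    def __init__(self, process_type, params, input_refs):
        self.process_type = process_type    # e.g. "neural", "probabilistic", "combinational"
        self.params = list(params)          # weights, table entries, or relations
        self.input_refs = list(input_refs)  # names of frames whose outputs feed this one
        self.output = 0.0

    def clock(self, frames):
        # One clock tick: gather the referenced outputs, apply this frame's process.
        inputs = [frames[name].output for name in self.input_refs]
        if self.process_type == "neural":
            # Hypothetical stand-in: a bare weighted sum for a neural unit.
            self.output = sum(w * x for w, x in zip(self.params, inputs))
        # ...slots for probabilistic and combinational processes would go here.

    def duplicate_and_mutate(self, scale=0.1):
        # Search step from the note: copy the frame and perturb its variables.
        child = Frame(self.process_type, self.params, self.input_refs)
        child.params = [p + random.gauss(0, scale) for p in child.params]
        return child
```

Iteratively clocking a dictionary of such frames, then duplicating and mutating the ones that prove useful, is one concrete reading of the search process the note proposes.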
How can a system composed of general frame-based parallel processing elements be debugged? Who or what recognizes bugs, proposes solutions, tries solutions, and remembers successful and failed strategies (i.e., good ways to divide a space or simplify a process)? Can these critics replace themselves through their own process of debugging, perhaps by creating imaginary copies of themselves in order to debug themselves and test the outcome against other critics?
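The critic's cycle named above (recognize, propose, try, remember) can be sketched as a loop. Everything here is an assumption for illustration: the function names, the hill-climbing acceptance rule, and the use of plain lists as the critic's memory of strategies.

```python
def critic_search(evaluate, mutate, solution, steps=10):
    """A minimal critic loop: propose a change, try it, keep it if it
    scores better, and remember both what worked and what failed."""
    successes, failures = [], []            # the critic's memory of strategies
    best, best_score = solution, evaluate(solution)
    for _ in range(steps):
        candidate = mutate(best)            # propose a solution
        score = evaluate(candidate)         # try it
        if score > best_score:              # the critic recognizes an improvement
            successes.append(candidate)
            best, best_score = candidate, score
        else:
            failures.append(candidate)      # remembered so it is not blindly retried
    return best, best_score, successes, failures
```

A critic that debugged itself, in this picture, would pass a copy of `critic_search` itself in as the `solution` and let another critic score the outcome.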
Why has the field of neural networks come to a dead end without becoming self-reflective? Are there examples of self-reflective neural networks?