Recurrent Neural Networks
This has just dawned on me, and it is exciting for some strange, low-level (but inspirationally high-level and as yet inexplicable) reason: recurrent neural networks can model iterative processes such as the belief propagation algorithm for Markov random fields. The weights of a recurrent net could probably also be tinkered with so that a subnet within the network learns a feedforward function. I suppose this makes sense because a recurrent network is probably a Turing-complete system, but it still excites me for some reason. I should play with finite state machines a little more, to see whether this same thing can or cannot be reduced to just bits and more basic programming ideas.
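To make the analogy concrete, here is a small sketch (my own illustration, with made-up potentials, not anything from a particular paper): sum-product belief propagation on a tiny three-node binary chain MRF, written so that one parallel message sweep is a fixed function applied to a "hidden state" of messages. That is exactly the shape of a recurrent update h_{t+1} = f(h_t); an RNN with the right weights could in principle implement the same update.

```python
import numpy as np

# Tiny chain MRF over binary variables: x0 -- x1 -- x2.
phi = np.array([[1.0, 2.0],    # unary potential of x0
                [1.5, 0.5],    # unary potential of x1
                [0.8, 1.2]])   # unary potential of x2
psi = np.array([[2.0, 1.0],    # symmetric pairwise potential,
                [1.0, 2.0]])   # shared by edges (0,1) and (1,2)

def bp_step(h):
    """One parallel sum-product sweep; h stacks the four directed messages.

    This is the recurrent update: same function, applied to its own output.
    """
    m01, m21, m12, m10 = h
    return np.stack([
        psi.T @ phi[0],          # m_{0->1}: leaf node, no incoming messages
        psi.T @ phi[2],          # m_{2->1}: leaf node
        psi.T @ (phi[1] * m01),  # m_{1->2} folds in the message from x0
        psi   @ (phi[1] * m21),  # m_{1->0} folds in the message from x2
    ])

# Unroll the "RNN" to its fixed point (exact on a tree after two sweeps).
h = np.ones((4, 2))
for _ in range(5):
    h = bp_step(h)

# Belief at the middle node from the converged messages.
b1 = phi[1] * h[0] * h[1]
b1 /= b1.sum()

# Sanity check against the exact marginal by brute-force enumeration.
joint = np.einsum('i,j,k,ij,jk->ijk', phi[0], phi[1], phi[2], psi, psi)
exact = joint.sum(axis=(0, 2)) / joint.sum()
assert np.allclose(b1, exact)
```

The point of writing it this way is that the loop body never changes: all the problem-specific structure lives in the fixed transition function, and inference is just iterating that function on a state vector, which is precisely what a recurrent network does.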