The energy function for a (restricted) Boltzmann machine is usually E(v, h) = -v^T W h, where v is the vector of visible units, W is the weight matrix, and h is the vector of hidden units. The form of the energy function determines both the activation function of each neuron and the learning rule that goes with it. In principle you can start with any energy function you like (this bilinear form just happens to be the simplest); you would then have to re-derive the activation function and learning rule.
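To make this concrete, here is a minimal sketch in NumPy of the bilinear energy E(v, h) = -v^T W h and the sigmoid activation it induces for the hidden units (biases omitted for brevity; the network sizes and random weights are illustrative, not from the original):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 3))  # hypothetical 4 visible x 3 hidden weight matrix
v = np.array([1, 0, 1, 1])              # example binary visible state
h = np.array([0, 1, 0])                 # example binary hidden state

def energy(v, h, W):
    """Bilinear energy E(v, h) = -v^T W h (bias terms omitted)."""
    return -v @ W @ h

def p_h_given_v(v, W):
    """Activation derived from the energy: flipping hidden unit j changes E
    by -(v^T W)_j, which under the Boltzmann distribution gives a sigmoid."""
    return 1.0 / (1.0 + np.exp(-(v @ W)))

print(energy(v, h, W))       # a scalar energy for this joint state
print(p_h_given_v(v, W))     # per-hidden-unit activation probabilities in (0, 1)
```

Changing the energy function would change the expression inside `p_h_given_v`, which is the sense in which the energy defines the activation function.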
Just what would the energy function look like for real neurons? I don't know, but we do know that the activation function would have to "spike" in bursts, so that is a clue. We also have rudimentary ideas about the learning rules used in biological neural networks, so you would want to take those into account when determining the actual energy function. Finally, real neurons do not send retrograde signals but are instead wired recurrently, which must also be taken into consideration.