Entropic Neuronal Summation


Context

Natural intelligences, like the human brain, and artificial intelligences, like Assothink, involve numerous nodes exchanging signals.

Focusing on one node, it is assumed that this node receives a finite (discrete) number of input signals (inflows). The input signals may be considered and described as positive real values.

These signals determine the local excitation level of the node.

The question is mathematical: how do the various inflows combine into a global inflow value?

Principle

The first answer to the question above is 'summation'. Neuroscience documents do not discuss this point (as far as I know), but it is generally assumed that a summation effect occurs. It would be hard to prove that the summation effect is the correct model for the combination of inflows, but hard also to prove that it is not the correct model. So we can just make assumptions.
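As a point of reference, here is a minimal sketch of the simple summation model in Python (the function name and the choice of language are mine, not part of the original model): the global inflow is just the arithmetic sum of the individual inflows, so it cannot distinguish one strong source from several weaker ones with the same total.

<syntaxhighlight lang="python">
def simple_summation(inflows):
    """Baseline model: the global inflow is the plain sum of the input signals."""
    return sum(inflows)

# Two half-strength sources and one full-strength source give the same result:
print(simple_summation([0.5, 0.5]))  # 1.0
print(simple_summation([1.0]))       # 1.0
</syntaxhighlight>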

In this document, another assumption is proposed: the ENS (Entropic Neuronal Summation).

Obviously the model is quite similar to a simple summation model, but there is one point attracting our attention. The inflow coming from two different sources operates more strongly than the inflow coming from one single source, even when the simple summation of the values is identical. There is no demonstration for this; it is based on introspective considerations.

But anyway, why would it be worse than the simple summation model?

The critical point mentioned above, the difference between the summation model and the ENS model, is that for the ENS we want a function <math>\psi</math> such that

<math>\psi(a/2, a/2) > \psi(a)</math>
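To make the required property concrete, here is one hypothetical candidate for <math>\psi</math> (my own illustrative choice, not the actual ENS formula): multiply the plain sum by one plus the Shannon entropy of the normalized inflows. A single source then gains nothing extra (its entropy is zero), while the same total split over two sources is amplified, so the inequality above holds.

<syntaxhighlight lang="python">
import math

def entropic_summation(inflows):
    """Hypothetical ENS candidate: the plain sum scaled by (1 + entropy)
    of the normalized inflow distribution. Inflow spread over several
    sources therefore counts for more than the same total from one source."""
    total = sum(inflows)
    if total == 0:
        return 0.0
    shares = [x / total for x in inflows]
    entropy = -sum(p * math.log(p) for p in shares if p > 0)
    return total * (1.0 + entropy)

a = 1.0
print(entropic_summation([a]))         # 1.0 (entropy is zero for a single source)
print(entropic_summation([a/2, a/2]))  # about 1.69, strictly greater than 1.0
</syntaxhighlight>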