Domain-Based Dendral Network.

Volodymyr Bykov
Ozolio
Jul 22, 2019


The DBDN project is the result of a collaboration between Ozolio Inc. and a small group of engineers who believe in the idea of synthetic consciousness. This idea is much older and bigger than the reproduction of particular brain functions using computer algorithms. At the same time, computer simulation is the most effective approach available to us today. The only question is:

Where is the line between simulation of a biological prototype and a pure algorithm?

Although there is no single answer to this question, we believe the line is already pushed quite far towards pure algorithms. Nowadays, most ANN models use a variety of free-roaming algorithms in order to train and normalize the network. This approach is known as “back-propagation.” The confusing thing about back-propagation is that the brain does not use such algorithms. In other words, there is no process or entity that roams around the cerebral cortex and randomly adjusts neurons based on its own agenda.

On the other hand, it is common knowledge that the biological brain has more neural connections than there are stars in our galaxy. Neural connections end with synapses. Some synapses are excitatory, which means they can increase the probability of producing an output signal. Some synapses are inhibitory, which means they can decrease the probability of producing an output signal. There are also synapses that can trigger an output signal without affecting the status of the neuron. It’s not clear why the brain forms particular pathways, but we do know that it’s not a simple all-to-all routing.

Let’s take a look at the basic ANN model, the Multilayer Perceptron.

There are many variations of this model, but I want to focus on two common problems:

a) The Multilayer Perceptron does not assume that the connection structure plays any role in the cognitive process. Each neuron of a particular layer is simply connected to all neurons of the next layer.

b) When it comes to training (a.k.a. “learning”), things get unclear. The training process is not tied to any neural connection. We have to use an algorithm that decides to adjust a particular neuron based on the status of the surrounding neurons or of the entire ANN.
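To make both issues concrete, here is a minimal C++ sketch of a conventional fully connected layer trained by a textbook back-propagation step. It is not part of the DBDN engine; the activation and update rules are simply the standard ones.

```cpp
// Minimal sketch of a conventional fully connected layer (not the DBDN engine).
#include <cmath>
#include <cstdlib>
#include <vector>

struct Layer {
    std::vector<std::vector<double>> weights; // [neuron][input]: every neuron sees every input
    std::vector<double> outputs;

    Layer(size_t inputs, size_t neurons)
        : weights(neurons, std::vector<double>(inputs)), outputs(neurons, 0.0) {
        for (auto& row : weights)
            for (auto& w : row)
                w = std::rand() / (double)RAND_MAX - 0.5; // small random init
    }

    // Problem (a): the connection structure carries no meaning;
    // each neuron is simply wired to every input.
    void forward(const std::vector<double>& in) {
        for (size_t n = 0; n < weights.size(); ++n) {
            double sum = 0.0;
            for (size_t i = 0; i < in.size(); ++i)
                sum += weights[n][i] * in[i];
            outputs[n] = 1.0 / (1.0 + std::exp(-sum)); // sigmoid activation
        }
    }

    // Problem (b): training is an external pass; the adjustment of one neuron
    // is derived from the error of the whole network, not from anything
    // local to that neuron's own connections.
    void backward(const std::vector<double>& in, const std::vector<double>& error, double rate) {
        for (size_t n = 0; n < weights.size(); ++n) {
            double delta = error[n] * outputs[n] * (1.0 - outputs[n]);
            for (size_t i = 0; i < in.size(); ++i)
                weights[n][i] -= rate * delta * in[i]; // error = output - target convention
        }
    }
};
```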

What if we do not want to use such a free-roaming algorithm? Is it possible to generate a universal tuning signal and distribute this signal using a persistent (connection-based) algorithm? Can a neuron adjust its own status without any knowledge about the status of the surrounding neurons? And the most interesting question:

How do we plug such a tuning signal into the conventional model of an artificial neuron?

We can draw an extra arrow that points at the neuron body, but not without encountering a logical issue: the tuning signal should adjust a synapse that already has an incoming connection. One solution would be a conjoined synapse that receives two simultaneous signals. But to avoid any confusion in describing our concept, we decided to add a new element that combines several synapses. We named it the “Dendron,” following the tradition of naming artificial elements after their closest biological prototypes.

Finally, we came up with the Idealistic Model:

In this model:

- “Running Signal” is an info signal that represents the Neuron’s Input or Output.

- “Excitatory Signal” is a tuning signal that can increase the Dendron’s Weight.

- “Inhibitory Signal” is a tuning signal that can decrease the Dendron’s Weight.

- “Weight” is a measure of the contribution of the Input Signal to the Dendron’s Vector.

- “Vector” is a measure of the contribution of each Dendron to the Neuron’s Factor.

- “Factor” is a measure of the contribution of the Neuron to its Output Signal.

It is imperative to understand that the Idealistic Model is not an “implementation guide.” An actual engine may not even use the same terminology. We created this model in order to understand how different types of signals can enter the neuron and affect its status.
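With that caveat in mind, here is a minimal C++ sketch of how the terms above could fit together. Every numeric rule in it (the plain sums, the fixed tuning step, the threshold-style output) is an illustrative assumption, not part of the model itself.

```cpp
// Minimal sketch of the Idealistic Model, using the terms defined above.
// All numeric rules here are assumptions made purely for illustration.
#include <cstddef>
#include <vector>

struct Dendron {
    double weight = 0.0; // contribution of the Input Signal to this Dendron's Vector
    double vector = 0.0; // contribution of this Dendron to the Neuron's Factor

    // Running Signal: the info signal passing through this Dendron.
    void receiveRunning(double inputSignal) { vector = weight * inputSignal; }

    // Tuning Signals enter through the same Dendron and touch only its own Weight.
    void receiveExcitatory(double step) { weight += step; } // increases the Weight
    void receiveInhibitory(double step) { weight -= step; } // decreases the Weight
};

struct Neuron {
    std::vector<Dendron> dendrons;
    double factor = 0.0; // contribution of the Neuron to its Output Signal

    double fire(const std::vector<double>& runningSignals) {
        factor = 0.0;
        for (std::size_t i = 0; i < dendrons.size(); ++i) {
            dendrons[i].receiveRunning(runningSignals[i]);
            factor += dendrons[i].vector;
        }
        return factor > 0.0 ? factor : 0.0; // assumed threshold-style Output Signal
    }
};
```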

At this stage, we did not know how to form the tuning signal. However, during our experiments with back-propagation, the ANN consistently formed several groups of neurons, each reacting to a particular class of patterns. Some algorithms produced distributed groups (the neurons in a group were not exactly neighbors), but the correlation between the number of groups and the number of classes remained stable.

We named these groups “Neural Domains” and came up with a hypothesis:

The cerebral cortex naturally forms several crossing pathways through the same area. These pathways play a key role in the cross-referencing mechanism that essentially allows the biological brain to recognize and classify patterns.

Of course, there is a high chance that this hypothesis does not reflect reality, but the idea seemed logical enough. In order to test our assumption, we decided to implement our own C++ engine with the following Abstract Schema:

This schema requires a couple of clarifications:

- There is no input layer. Commonly, the input layer acts as a buffer/repeater that distributes an input signal to the second layer. However, if the ANN is to receive and process multiple patterns simultaneously, the logic of the input layer becomes ambiguous. Instead, we’ve added a multifunctional I/O buffer called the Signal Bus. Other elements can read signals from the Signal Bus or write signals into it.

- The Pattern Comparator calculates the Tuning Pattern by comparing the Output Pattern with the Control Pattern.

- Uniform Routing is the process of distributing Running Signals across the receiving layer (the conventional all-to-all connection schema).

- Domain Routing is the process of distributing Tuning Signals across Neural Domains.

The implementation of the engine is definitely more complex than this schema, and we are still learning how to determine the number of Domains and route the Tuning Signal properly. But we have already confirmed that a neuron can adjust its own status based solely on the Tuning Signal. No information about the status of other neurons is necessary.
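As a rough illustration of that claim, the self-contained sketch below shows one way the tuning path could be wired: the Pattern Comparator derives a Tuning Pattern, Domain Routing delivers it to the neurons of each Neural Domain, and every neuron adjusts its weights from that value alone. The comparison rule, the domain map, and the step size are illustrative assumptions rather than the actual engine's logic.

```cpp
// Illustrative sketch of the tuning path; not the actual engine's logic.
#include <cstddef>
#include <vector>

// Pattern Comparator: +1 requests an excitatory tuning signal,
// -1 an inhibitory one, 0 no adjustment (assumed sign-based rule).
std::vector<int> comparePatterns(const std::vector<double>& output,
                                 const std::vector<double>& control) {
    std::vector<int> tuning(output.size(), 0);
    for (std::size_t i = 0; i < output.size(); ++i) {
        if (control[i] > output[i])      tuning[i] = +1;
        else if (control[i] < output[i]) tuning[i] = -1;
    }
    return tuning;
}

// Domain Routing: each class owns a Neural Domain (a list of neuron indices);
// the tuning value for a class reaches only the neurons of its domain, and
// each neuron updates its own weights without looking at any other neuron.
void routeAndTune(const std::vector<int>& tuning,
                  const std::vector<std::vector<std::size_t>>& domains,
                  std::vector<std::vector<double>>& weights, // [neuron][dendron]
                  double step = 0.01) {
    for (std::size_t c = 0; c < domains.size(); ++c) {
        if (tuning[c] == 0) continue;
        for (std::size_t n : domains[c])
            for (double& w : weights[n])
                w += (tuning[c] > 0 ? step : -step); // purely local update
    }
}
```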

The two videos below demonstrate fragments of the first and the third sequences of a training session with the MNIST database. This session was not meant to break any records with respect to the error rate. The test network consists of 40 neurons in the first layer and 10 neurons in the second layer. The purpose of the session was to confirm that even with such a small network we can achieve an acceptable result. We deliberately slowed the engine down so you can see how an empty network forms neural domains and how the error rate decreases over time.

This is the third sequence of the same session.
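For a sense of scale, the snippet below shows one hypothetical way to partition the 40 first-layer neurons of the test network into 10 Neural Domains, one per digit class. The even round-robin split is an assumption made purely for illustration.

```cpp
// Hypothetical domain layout for the test network: 40 first-layer neurons
// partitioned into 10 Neural Domains, one per MNIST digit class.
// The even round-robin split is only an illustrative assumption.
#include <cstddef>
#include <vector>

std::vector<std::vector<std::size_t>> buildDomains(std::size_t neurons = 40,
                                                   std::size_t classes = 10) {
    std::vector<std::vector<std::size_t>> domains(classes);
    for (std::size_t n = 0; n < neurons; ++n)
        domains[n % classes].push_back(n); // round-robin assignment of neurons to domains
    return domains;
}
```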

It is too early to talk about the practical application of this technology. We are continuing our testing and gathering information that will define future research. Even though our results are pretty good for a small network, we believe that a single NN should not recognize such a complex class of patterns as handwritten digits. We can use more neurons or layers and increase the complexity of Domain Routing in order to reduce the error rate. However, the bigger issue is that the concept of a handwritten digit implies that the network already knows at least the concept of a digit. Instead, we are exploring the idea of combining elementary DBDN networks into a multilevel cluster.

For more information about this project, please contact dbdn@ozolio.com.
