Neural networks: example, definition, meaning, scope

Artificial intelligence and neural networks are powerful machine learning techniques used to solve many real-world problems. A simple example of a neural network at work is one that learns the punctuation, grammar, and spelling rules of a body of text in order to automatically generate completely new text that follows those rules.

History of the neural network

Computer scientists have long tried to model the human brain. In 1943, Warren S. McCulloch and Walter Pitts developed the first conceptual model of an artificial neural network. In their article "A Logical Calculus of the Ideas Immanent in Nervous Activity", they described the concept of a neuron: a single cell living in a network of cells that receives input, processes it, and generates output signals.

Their work, like that of many other scientists, was not intended to accurately describe how the biological brain works. Rather, the artificial neural network was developed as a computational model that operates on the brain's functional principles to solve a wide range of problems.

Obviously, there are tasks that are easy for a computer to solve but difficult for a human, such as taking the square root of a ten-digit number: a neural network computes this in less than a millisecond, while a human needs minutes. On the other hand, there are tasks that are incredibly easy for a human but hard for a computer, such as separating the background of an image.

Scientists who discovered AI

Scientists have spent a lot of time researching and implementing complex solutions. The most common example of a neural network application in computing is pattern recognition. Applications range from optical character recognition (converting printed or handwritten scans into digital text) to face and photo recognition.

Biological computers

The human brain is exceptionally complex and the most powerful computer known. Its inner workings are organized around neurons and their connections, known as biological neural networks. The brain contains about 100 billion neurons joined by these networks.

At a high level, neurons interact with each other through an interface consisting of axon terminals connected to dendrites across a gap, the synapse. In simple terms, one neuron sends a message to another via this interface if the sum of the weighted inputs from one or more neurons exceeds a threshold. When the threshold is exceeded, the neuron is said to activate, and the message is passed on to the next neuron.

The summation process can be mathematically complex. Each input signal is weighted, and the weight determines how strongly that input affects subsequent calculations and the final output of the network.
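The weighted-sum-and-threshold behavior described above can be sketched in a few lines of Python (the weights and threshold here are arbitrary illustrative values, not parameters from any real network):

```python
def neuron_fires(inputs, weights, threshold):
    """Return True if the weighted sum of the inputs exceeds the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return weighted_sum > threshold

# The same two input signals, but with different weights: only the
# stronger weighting pushes the sum over the threshold of 1.0.
strong = neuron_fires([1.0, 1.0], [0.6, 0.7], 1.0)   # 1.3 > 1.0 -> fires
weak = neuron_fires([1.0, 1.0], [0.3, 0.4], 1.0)     # 0.7 > 1.0 -> does not fire
```

Each weight scales how much its input contributes, which is exactly the "different effect on the final output" described above.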

Neural model elements

Deep learning is a term used for complex neural networks with multiple layers. Layers are made up of nodes. A node is simply a place where computation takes place; it fires when it encounters enough stimuli. A node combines its inputs using a set of coefficients, or weights, that either amplify or attenuate each signal, thereby assigning each input a significance for the problem at hand.

Deep learning networks differ from common neural networks, which have a single hidden layer. Kohonen networks (self-organizing maps) are one example of trainable neural networks.

In deep learning networks, each layer learns a given set of features based on the output of the previous layer. The further you go into the network, the more complex the features its nodes can recognize, since they combine and recombine features from the previous layer.

Unlike most traditional algorithms, deep learning networks perform automatic feature extraction without human intervention, and they end with an output layer: a classifier (for example, softmax) that assigns a probability to each possible outcome, which is called a prediction.
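The softmax step mentioned above can be sketched as a minimal standalone function (real networks use library implementations; the scores below are arbitrary):

```python
import math

def softmax(scores):
    """Turn raw output-layer scores into probabilities that sum to 1."""
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Three class scores: the largest score gets the highest probability,
# and that class would be reported as the network's prediction.
probs = softmax([2.0, 1.0, 0.1])
```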

Black box ANN

Artificial neural networks (ANNs) are statistical models partially modeled on biological neural networks. They are capable of handling non-linear relationships between inputs and outputs in parallel. Such models are characterized by the presence of adaptive weights along the paths between neurons, which can be adjusted by the learning algorithm to improve the entire model.

Artificial neural network (ANN) architecture

A simple example of a neural network is a feedforward ANN architecture consisting of:

  • an input layer;
  • a hidden layer;
  • an output layer.

It is modeled using layers of artificial neurons, or computational units, that take input and use an activation function together with a threshold to determine whether messages are transmitted.

In a simple model, the first layer is the input layer, followed by one hidden layer and finally the output layer. Each layer can contain one or more neurons. Models become more complex, gaining abstraction and problem-solving capability, as the number of hidden layers, the number of neurons in a given layer, and the number of paths between them increase.
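A forward pass through such a simple input-hidden-output model can be sketched with NumPy (the layer sizes, random weights, and tanh activation are all illustrative assumptions, not trained values):

```python
import numpy as np

def forward(x, w_hidden, w_output):
    """One forward pass: input -> hidden (tanh) -> linear output."""
    hidden = np.tanh(x @ w_hidden)   # input layer to hidden layer
    return hidden @ w_output         # hidden layer to output layer

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 3))          # one sample with three input features
w_hidden = rng.normal(size=(3, 5))   # 3 inputs -> 5 hidden neurons
w_output = rng.normal(size=(5, 1))   # 5 hidden neurons -> 1 output
y = forward(x, w_hidden, w_output)   # a single scalar prediction
```

Adding more hidden layers or neurons simply adds more weight matrices and paths, which is how the models grow more complex.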

Architecture and model tuning are the main components of ANN methods, in addition to the learning algorithms themselves. ANNs are extremely powerful, yet they are considered black-box algorithms, meaning that their inner workings are very difficult to understand and explain.

Deep learning algorithms

Deep learning sounds grand, but it is really just a term for certain types of neural networks and their associated algorithms, which consume raw input data and pass it through many layers of non-linear transformations to compute a target output.

Unsupervised feature extraction is also an area where deep learning excels. SKIL (the Skymind Intelligence Layer) is one example of a platform for training such networks.

Traditionally, in most other machine learning approaches it is the responsibility of the data scientist or programmer to perform feature extraction, along with feature selection and engineering.

Optimal algorithm parameters

Feature learning algorithms allow a machine to learn a particular task from a well-tuned set of features: in other words, they learn to learn. This principle has been used successfully in many applications and is considered one of the most advanced methods in artificial intelligence. Such algorithms are used for supervised, unsupervised, and semi-supervised tasks.

Deep learning models have more layers than shallow learning algorithms. Shallow algorithms are less complex but require deeper prior knowledge of the optimal features, which involves manual selection and engineering. In contrast, deep learning algorithms rely more on choosing a good model and optimizing it through tuning. They are better suited to tasks where prior knowledge of features is unavailable and fixed, hand-crafted features are neither available nor required.

Input data is transformed through all of a network's layers by artificial neurons or processing units. The length of this chain of transformations is measured by the CAP (credit assignment path).

CAP value

CAP depth is used to characterize the architecture of a deep learning model. Most researchers in the field agree that deep learning involves a CAP depth of more than two non-linear layers, and some consider CAPs deeper than ten layers to be very deep learning.

A detailed discussion of the many model architectures and algorithms for this kind of learning is beyond the scope of this article. The most studied are:

  1. Feedforward neural networks.
  2. Recurrent neural networks.
  3. Multilayer perceptrons (MLP).
  4. Convolutional neural networks.
  5. Recursive neural networks.
  6. Deep belief networks.
  7. Convolutional deep belief networks.
  8. Self-organizing maps.
  9. Deep Boltzmann machines.
  10. Stacked denoising autoencoders.

Top modern architectures

Perceptrons are considered first-generation neural networks: computational models of a single neuron. They were introduced in 1958 by Frank Rosenblatt in "The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain". The perceptron is a feedforward model: it transmits information from front to back.

RNNs turn an input sequence into an output sequence in a different domain, for example, changing a sequence of sound pressures into a sequence of word identifiers.

John Hopfield introduced the Hopfield network in the 1982 paper "Neural Networks and Physical Systems with Emergent Collective Computational Abilities". In a Hopfield network (HN), every neuron is connected to every other neuron. The network is trained by setting the neurons to the desired pattern, after which the weights can be computed.
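The idea of setting the network to a desired pattern and then computing the weights can be sketched with the classic Hebbian outer-product rule (a minimal illustration assuming bipolar +1/-1 neurons, not Hopfield's original formulation in full):

```python
import numpy as np

pattern = np.array([1, -1, 1, -1, 1])   # the stored "desired pattern"

# Hebbian outer-product weights, with self-connections zeroed out.
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0)

# Start from a corrupted copy (one flipped bit) and update repeatedly:
# each neuron takes the sign of its weighted input from all the others.
state = np.array([1, -1, 1, 1, 1])
for _ in range(5):
    state = np.where(W @ state >= 0, 1, -1)
# state has now settled back to the stored pattern.
```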

Boltzmann machine

The Boltzmann machine is a type of stochastic recurrent neural network that can be seen as an analogue of the Hopfield network. It was one of the first networks capable of learning internal representations and solving hard combinatorial problems. Input neurons become output neurons at the end of a full update.

Ian Goodfellow's generative adversarial network (GAN) consists of two networks, often a combination of feedforward and convolutional neural nets. One network (the generator) produces content, while the other (the discriminator) evaluates it.

Getting started with SKIL from Python

Deep learning of a neural network, as in the Python example, matches inputs with outputs and finds correlations. A neural network is known as a universal approximator because it can learn to approximate an unknown function f(x) = y between any input "x" and any output "y", assuming they are related by correlation or causation.

In the learning process, the network finds the correct "f", that is, a way to turn "x" into "y", whether f(x) = 3x + 12 or f(x) = 9x - 0.1.
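As a plain illustration of "finding f", the sketch below fits w and b in f(x) = w*x + b to samples drawn from f(x) = 3x + 12 by gradient descent, with no neural-network library (the learning rate and iteration count are arbitrary choices):

```python
# Training data generated from the target function f(x) = 3x + 12.
xs = [float(i) for i in range(10)]
ys = [3 * x + 12 for x in xs]

w, b = 0.0, 0.0                      # initial guess for the parameters
lr = 0.01                            # learning rate
for _ in range(20000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b
# w converges toward 3 and b toward 12.
```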

Classification tasks rely on labeled datasets, so the neural network learns the correlation between labels and data. The following are well-known supervised learning tasks:

  • face recognition;
  • identification of people in images;
  • recognition of facial expressions: angry, joyful;
  • identification of objects in images: stop signs, pedestrians, lane markings;
  • gesture recognition in video;
  • speaker voice identification;
  • spam text classification.

Convolutional Neural Network Example

A convolutional neural network (CNN) is similar to a multilayer perceptron network. The main differences lie in how a CNN is structured and what it is used for. CNNs were inspired by biological processes: their structure mimics the visual cortex found in animals. They are applied in the field of computer vision and have achieved state-of-the-art performance in various areas of research.

Before coding a CNN, a library such as Keras with a TensorFlow backend is used to build the model. First, perform the necessary imports. Load the MNIST dataset via Keras. Import the sequential Keras model, to which you can add convolution and pooling layers, as well as dense layers used for label prediction. The dropout layer reduces overfitting, and the flatten layer transforms a 3D tensor into a 1D vector. Finally, import NumPy for matrix operations:

  • Y=2: the value 2 means that the image contains the digit 2;
  • Y=[0, 0, 1, 0, 0, 0, 0, 0, 0, 0]: the third position in the vector is set to 1;
  • here the class value has been converted to a binary class matrix.
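This one-hot conversion can be sketched without Keras (in Keras itself the helper is keras.utils.to_categorical; the function below is an illustrative stand-in):

```python
def to_one_hot(label, num_classes=10):
    """Convert a digit label into a binary class vector (one-hot encoding)."""
    vec = [0] * num_classes
    vec[label] = 1
    return vec

# The label 2 becomes a vector with a 1 in the third position.
encoded = to_one_hot(2)   # [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
```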

Building the model:

  1. Add convolutional layers and max pooling to the sequential model.
  2. Add dropout layers between them. Dropout randomly disables some neurons in the network, which forces the data to find new paths and reduces overfitting.
  3. Add dense layers, which are used for class (0–9) prediction.
  4. Compile the model with the categorical cross-entropy loss function, the Adadelta optimizer, and an accuracy metric.
  5. After training, evaluate the loss and accuracy of the model on the test data and print them.
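The convolution and max-pooling operations that step 1 relies on can be sketched independently of Keras. This is a minimal NumPy illustration of the two operations, not the Keras implementation (the sample image and filter are arbitrary):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as CNN layers compute it)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling, shrinking each spatial dimension."""
    h, w = fmap.shape
    trimmed = fmap[:h - h % size, :w - w % size]
    return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)     # toy 6x6 "image"
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])         # toy diagonal filter
features = max_pool(conv2d(image, kernel))           # 6x6 -> 5x5 -> 2x2
```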

Matlab simulation

Let's look at a simple example of modeling a neural network in MATLAB.

Assume the model has three inputs "a", "b" and "c" and generates an output "y".

For data generation purposes: y=5a + bc + 7c.

First write a small script to generate the data:

  • a = rand(1, 1000);
  • b = rand(1, 1000);
  • c = rand(1, 1000);
  • n = rand(1, 1000) * 0.05;
  • y = 5*a + b.*c + 7*c + n;

where n is noise added specifically to make the data look real. The noise is uniform, with an amplitude of 0.05.

So the input is a set of "a", "b" and "c" and the output is:

  • I=[a; b; c];
  • O=y.

Next, use the built-in MATLAB function newff to generate the model.

Examples of neural network tasks

First create a 3×2 matrix R. The first column holds the minimum of each of the three inputs, and the second column holds the maximum. In this case all three inputs lie between 0 and 1, so:

R = [0 1; 0 1; 0 1].

Now create a size vector S with the sizes of all layers (five hidden neurons and one output neuron): S = [5 1].

Now call the newff function as follows:

net = newff([0 1; 0 1; 0 1], S, {'tansig', 'purelin'}).

The cell array {'tansig', 'purelin'} specifies the transfer functions of the two layers.

Train it with the data that was created earlier: net=train(net, I, O).

Once the network is trained, you can watch the performance curve as it learns.

Now simulate the network on the same data and compare the output:

O1=sim(net, I);

plot(1:1000, O, 1:1000, O1).

So the trained weight matrices will be:

  • net.IW{1}
  • -0.3684 0.0308 -0.5402
  • 0.4640 0.2340 0.5875
  • 1.9569 -1.6887 1.5403
  • 1.1138 1.0841 0.2439
  • net.LW{2, 1}
  • -11.1990 9.4589 -1.0006 -0.9138

AI Applications

Examples of neural network implementations include online self-service solutions and the construction of robust workflows. Deep learning models are already used for chatbots, and as they continue to evolve, this area can be expected to expand across a wide range of businesses.


  1. Automatic machine translation. This is nothing new, but deep learning improves automatic text translation with stacked networks and also enables the translation of text within images.
  2. A simple application of neural networks is adding color to black-and-white images and videos, which can be done automatically with deep learning models.
  3. Machines learn the punctuation, grammar, and style of a piece of text and can use the model they develop to automatically generate completely new text with proper spelling, grammar, and style.

ANNs and more sophisticated deep learning techniques are among the most advanced tools for solving complex problems. While an application boom is unlikely anytime soon, the progress of AI technologies and applications will certainly be exciting.

Although deductive reasoning, inference, and computer-assisted decision-making are still far from perfect today, significant advances have already been made in applying artificial intelligence methods and related algorithms.
