Parallel programming: description, technology, tasks and benefits

The ideas of parallel computing and information processing were long the preserve of specialists and a rather serious implementation problem. They acquired broad significance and mass interest not so long ago.

It can be argued that it was the development of Internet technologies that gave parallel programming a new impetus and new consumer qualities. This brought not only obvious progress in technologies and programming languages; it also reshaped the very understanding of the parallel process.

Parallel programming technologies have changed dramatically. The initial use of computing devices as calculators smoothly turned into their use as information processors. Rigid architectural solutions have given way to semantics and to a flexible distribution of software functionality across "hardware executors".

Parallel computing: meaning and implementation

Initially, the foundations of parallel programming were laid in the architecture of computing devices, and a classification based on the concept of a stream was proposed.

A sequence of instructions, a sequence of data, or a functionally complete sequential algorithm was treated as an object that could be executed in parallel with another similar object.

With this approach, the essence of each object did not matter; what mattered was a division into parallel sections of code that could be executed independently, that is, whose input and output data did not intersect. A thread did not depend on any other thread, and if it needed data from another thread, it went into a waiting mode.
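
A minimal Python sketch of this "waiting mode", assuming a simple producer thread and consumer thread connected by a queue (all names here are illustrative):

    import threading
    import queue

    channel = queue.Queue()              # the only data shared by the two threads

    def producer():
        for value in range(5):
            channel.put(value * value)   # this thread's output
        channel.put(None)                # sentinel: no more data

    def consumer():
        while True:
            value = channel.get()        # blocks: the "waiting mode"
            if value is None:
                break
            print("received:", value)

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()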

This idea led to four canonical architectures, known as Flynn's taxonomy:

  • SISD - single instruction stream, single data stream;
  • MISD - multiple instruction stream, single data stream;
  • SIMD - single instruction stream, multiple data stream;
  • MIMD - multiple instruction stream, multiple data stream.
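
As a loose illustration of the SIMD idea, one operation applied to many data elements at once, here is a sketch using NumPy vectorization (assuming NumPy is installed; the array sizes are arbitrary):

    import numpy as np

    a = np.arange(1_000_000, dtype=np.float64)
    b = np.ones(1_000_000, dtype=np.float64)

    # "SISD-style": one instruction applied to one data element at a time.
    c_scalar = [x + y for x, y in zip(a, b)]

    # "SIMD in spirit": one operation over many data elements; NumPy dispatches
    # it to optimized machine code that often uses real SIMD instructions.
    c_vector = a + b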

These ideas existed for a relatively long time but did not produce spectacular results. Today they read as the story of a difficult beginning, but that beginning set the stage for today's advances.

An architectural flaw: lack of semantics

Like the design of a residential building, the architecture of a computing system was not concerned with semantics. How the tenants would live in the building, what repairs they would be able to make, and what furniture they would decide to install never concerned the builders.

At the very beginning, parallel programming systems likewise attached no importance to the algorithm that would have to be executed. The processor itself divided the code and data into sections that it executed in parallel. This gave a noticeable performance boost, but it also raised a number of concerns, in particular (the first two are illustrated by the sketch after this list):

  • the problem of sharing memory between processes;
  • the logic by which one thread waits for the results of another thread;
  • the mechanism for protecting the memory of one process from another process;
  • the logic of interaction between independent processors and cores;
  • the logic of switching between processes;
  • on-the-fly data exchange between processes…
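
A minimal Python sketch of the first two concerns, sharing memory between threads and protecting it, using a lock (the counter and the iteration counts are illustrative):

    import threading

    counter = 0                  # memory shared by all threads
    lock = threading.Lock()      # the protection mechanism

    def safe_increment():
        global counter
        for _ in range(100_000):
            with lock:           # only one thread at a time may enter
                counter += 1     # read-modify-write is not atomic by itself

    threads = [threading.Thread(target=safe_increment) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)               # 400000, deterministic thanks to the lock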

Developers focused more on hardware mechanisms, which deprived parallel multi-threaded programming of semantics and did not let the programmer manage processes in a way adequate to the task being solved.

Industrial application of parallelism

The first purpose of computers was complex mathematical calculations, industrial applications, and everything unrelated to everyday life, mobility, and the Internet. Naturally, when the tasks of parallel programming are so "limited", it is difficult to expect interesting achievements.

When computers became mass-market products and the Internet and mobile devices appeared, the requirements for parallelism changed dramatically, and developers had to radically change their style and pace of work.

The idea of messaging between processes was the first sign of change. The MPI message-passing interface became an intermediate step between parallel programming, the needs of developers, and the expectations of consumers.
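
A sketch of this message-passing style using mpi4py, a Python binding to MPI (assuming mpi4py and an MPI runtime are installed; run with, for example, mpiexec -n 2 python script.py):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()       # each process has a unique rank

    if rank == 0:
        # Process 0 sends a message; no memory is shared between processes.
        comm.send({"payload": [1, 2, 3]}, dest=1, tag=11)
    elif rank == 1:
        data = comm.recv(source=0, tag=11)
        print("process 1 received:", data)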

Windows and similar systems consolidated this idea and effectively made it the norm: concurrency and messaging are a single whole for any multiprocessor or multicore system, and in fact for any information system.

From computing to information processing

Computing is a special case of information processing. From parallel architectures implemented in hardware to mobile software solutions, dedicated parallel programming languages have genuinely become history. A modern language provides real parallelism of processes, and it does not need special operators in its syntax or additional libraries to do so.
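
Python is one example: real parallelism of processes is available from the standard library alone, with no special operators in the syntax (the worker function below is illustrative):

    from concurrent.futures import ProcessPoolExecutor

    def work(n):
        # A CPU-bound computation executed in a separate process.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(work, [10_000, 20_000, 30_000]))
        print(results)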

"Industrial" thinking in programming, when parallel multi-threaded programming is the goal, not the means, did not exist for long. It is difficult to say what fundamental results it led to. However, there is no doubt that the programming that was before the era of Internet programming became the basis for great ideas and good potential of modern languages and tools.

Hardware

The first computers were monsters: they occupied a quarter of a football field and generated so much heat that one could safely heat a small town with it rather than spend money on building heating plants.

The next generation of computers was personal. Personal computers sat on the desktop, and cell phones could be carried on the shoulder. Personal computers changed quickly, acquired their modern look, and gave life to laptops, tablets, and other devices, while cell phones turned into convenient multifunctional smartphones.

Electronics manufacturers have taken full advantage of the ideas of past years, and parallel programming now exists in any device, regardless of how any particular software developer feels about it.

Today, the number of processor cores, the number of processors, the level of technology, and the parallelism and functionality of the code matter even to an uninitiated user.

Mathematical apparatus

Graph theory and queuing theory as particular cases, and the calculation of lines and curves for the visual display of information as the basis of video cards, defined a clear functionality for the hardware component, which acquired the status and quality of a standard.

One can debate the number of cores in a device's main processor, but the processor for displaying information has long stood apart and does its job. A video card may have more than one processor and more than one core, and the mathematical apparatus is built into it.

The computer's processor merely issues simple commands to display information or fetch it from video memory; the rest is the concern of the video card's processor.

In fact, mathematical calculations were long ago moved out of the main processor into a math coprocessor. This, too, is the norm these days.

In fact, considering parallel programming at the hardware level, one can picture a modern computer as a set of parallel subsystems that provide the developer with everything necessary to implement all sorts of ideas for distributed and parallel information processing.

It is generally accepted that the fundamental hardware resources for any information system are in perfect condition and are developing steadily. The only thing left for a programmer is to write quality code.

Object-oriented programming

In classical programming, the solution algorithm is a sequence of commands. In object-oriented programming, a solution algorithm is a collection of objects, each of which has its own data and its own methods.

Objects interact with one another through methods, which means the programmer is least concerned with how they will be executed by the hardware of the computer (device). The logic of how the objects interact, however, is the programmer's domain.

An information system built on objects is, as a rule, a system of abstractions that allows various options for creating objects of various types and purposes. Described at the level of abstractions, information systems can provide various combinations of objects, including objects that themselves create other objects.

To put it simply, in object-oriented programming it is difficult to bind the execution of an object to a particular core or processor in order to ensure its parallel execution; doing so would significantly slow down the overall process. One object can exist in ten instances, but this does not mean that each instance must wait for the previous one to finish.
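
A sketch of that last point: ten instances of one class executing without any instance waiting for the previous one (the class and method names are invented for illustration):

    from concurrent.futures import ThreadPoolExecutor

    class Task:
        """An independent object: its own data, its own method."""
        def __init__(self, ident):
            self.ident = ident

        def run(self):
            return "task %d done" % self.ident

    instances = [Task(i) for i in range(10)]

    # Each instance runs independently; none waits for the previous one.
    with ThreadPoolExecutor() as pool:
        for result in pool.map(lambda t: t.run(), instances):
            print(result)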

Clusters and distributed parallelism

For complex and unique problems, modern Internet programming offers the only possible solution: manual work! For everyday and commercial purposes, many and varied content management systems are used.

Characteristic of internet programming:

  • uncertainty;
  • plurality;
  • simultaneity.

When creating a website, a programmer (more often a team of programmers) does not know how many visitors the web resource will have to receive, but knows for sure that the site must provide every client with the same, minimal response time to any action.

The obvious solution is to place the site on multiple servers or clusters on a territorial basis, so that each region is served by a specific server. But a modern site not only provides information, it also uses it: an online store cannot sell air, and if one item is purchased in Moscow, it must disappear from the warehouse for the consumer in Vladivostok.
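
A deliberately simplified sketch of that requirement: a check-and-decrement on shared stock that stays consistent no matter which regional server performs it. The warehouse model is invented for illustration; a real system would use a database transaction or a distributed lock:

    import threading

    class Warehouse:
        """Single source of truth for stock, shared by all regional servers."""
        def __init__(self, stock):
            self._stock = stock
            self._lock = threading.Lock()

        def try_sell(self, region):
            with self._lock:     # atomic check-and-decrement
                if self._stock > 0:
                    self._stock -= 1
                    print("sold in %s, %d left" % (region, self._stock))
                    return True
                return False

    warehouse = Warehouse(stock=1)

    # Moscow and Vladivostok race for the last item; exactly one sale succeeds.
    threads = [
        threading.Thread(target=warehouse.try_sell, args=("Moscow",)),
        threading.Thread(target=warehouse.try_sell, args=("Vladivostok",)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()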

To make information processing genuinely distributed means to ensure the parallel operation of the same functionality on different servers for different groups of consumers, provided that the consumers' actions are reflected in the system and do not contradict one another.

In this context, parallel programming takes on a completely different meaning. Where the developer once focused on the mechanism for implementing parallelism without regard to the task itself, today the developer is least concerned with how parallelism is implemented at the hardware or tool level. What interests him is parallelism at the client level, that is, in the task itself, in the very domain of the web resource.

Cluster as a variant of parallel implementation

It is generally accepted that a cluster is a form of distributed parallel information processing: a collection of computers connected by high-speed communication lines.

Characteristically, a cluster can consist of different computers located in different places on the planet, yet by definition a cluster is a single whole. Cluster-based site management systems do not allow direct control of the individual computers that make up the cluster, but they provide hidden parallel control of all processes at the level of the task being solved.

A developer working with clusters can plan and implement his own functionality for parallel, distributed information processing. This is a very significant advance in modern development.

"Life" of modernobject

Today it is very difficult to find a web resource based on static pages, or on dynamic pages generated in their entirety. A modern site is a set of dynamic pages that are filled in parallel using AJAX technology.

A modern dynamic page consists of various kinds of content, and each part of the page can be loaded independently, depending on the visitor's behavior. In this context, object-oriented programming reveals far from its full potential.

Indeed, the client's behavior triggers a request to the server to update part of the page. The request is processed, a mass of objects is created, and the server sends back the result. The next request … again a mass of objects, again a result is sent. In fact, it turns out that with the modern approach the server "does not remember" what it sent, when, and to whom. With each call it repeats the minimum necessary actions and creates the same systems of objects all over again.

The programmer cannot change the logic of the server, but he can easily emulate a server of his own at the level available to him. The result is a completely new quality of distributed parallel information processing.

Such a server keeps the required system of objects up to date, which significantly speeds up the processing of requests both from a single page and from all pages open anywhere on the Internet for a specific web resource.
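
A minimal sketch of such a "server of one's own": a long-lived process that keeps its system of objects alive between requests instead of rebuilding it on every call (all names here are invented for illustration):

    class PageFragment:
        """An object that is expensive to build on every request."""
        def __init__(self, name):
            self.name = name
            self.content = "<div>%s</div>" % name   # imagine heavy work here

    class ObjectServer:
        """Keeps the required system of objects up to date between requests."""
        def __init__(self):
            self._cache = {}

        def handle(self, request):
            # Reuse a live object instead of recreating it for each call.
            fragment = self._cache.get(request)
            if fragment is None:
                fragment = PageFragment(request)
                self._cache[request] = fragment
            return fragment.content

    server = ObjectServer()
    print(server.handle("header"))   # built once
    print(server.handle("header"))   # served from the live object system

In essence, this is the same idea behind session state and application-level caches: the object system survives between requests, and parallelism is managed at the level of the task rather than the hardware.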
