Shared memory in multi-threaded applications

Lecture



Shared memory is the fastest way to exchange data between processes. [1]

With other means of interprocess communication (IPC), the exchange of information between processes passes through the kernel, which requires switching between the process and the kernel and therefore costs performance. [2]

The shared memory technique allows information to be exchanged through a memory segment common to the processes, without kernel system calls. The shared memory segment is mapped into a free region of each process's virtual address space [3]. As a result, the same location in the attached shared memory may have different addresses in two different processes.

[Figure: visual representation of shared memory]

Content

  • 1 Brief description of operation
  • 2 Implementation of the client-server technology
    • 2.1 Shared memory usage scenario
  • 3 Software implementation
    • 3.1 On UNIX-like operating systems
      • 3.1.1 UNIX System V style shared memory
      • 3.1.2 POSIX shared memory
    • 3.2 On Windows operating systems
    • 3.3 Support in programming languages
  • 4 See also
  • 5 Notes

Brief description of operation

After a shared memory segment has been created, any user process can attach it to its own virtual address space and work with it like an ordinary memory segment. The drawback of this form of information exchange is the lack of any built-in synchronization; to overcome it, the semaphore technique can be used.

Implementation of the client-server technology

In a scheme for exchanging data between two processes (a client and a server) through shared memory, a pair of semaphores has to be used. The first semaphore guards access to the shared memory: a value of 1 permits access and a value of 0 blocks it. The second semaphore signals the server that the client has started; while the client holds access to the shared memory and is reading data from it, access stays blocked, so if the server tries to acquire it, the server's operation is suspended until the client releases the memory.

Shared memory usage scenario

  1. The server obtains access to the shared memory using the semaphore, locking it for other processes.
  2. The server writes data to the shared memory.
  3. Once the data has been written, the server releases access to the shared memory using the semaphore.
  4. The client obtains access to the shared memory, locking access to it for other processes using the semaphore.
  5. The client reads the data from the shared memory and then releases access to it using the semaphore.
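
A minimal sketch of the server side of this scenario, using POSIX shared memory and a POSIX named semaphore, might look as follows; the object names "/demo_shm" and "/demo_sem" and the segment size are purely illustrative, and error handling is omitted.

    #include <fcntl.h>
    #include <semaphore.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SHM_SIZE 4096

    int main(void)
    {
        /* Semaphore with initial value 1: 1 permits access, 0 blocks it. */
        sem_t *sem = sem_open("/demo_sem", O_CREAT, 0600, 1);

        /* Create the shared memory object, size it, and map it in. */
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, SHM_SIZE);
        char *mem = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);

        sem_wait(sem);                         /* step 1: lock access          */
        strcpy(mem, "hello from the server");  /* step 2: write the data       */
        sem_post(sem);                         /* step 3: release access; steps
                                                  4 and 5 happen in the client */

        munmap(mem, SHM_SIZE);
        close(fd);
        sem_close(sem);
        return 0;
    }

The client would perform steps 4 and 5 symmetrically: open the same two objects by name, wait on the semaphore, read the data, and post the semaphore. On some systems such a program must be linked with -lrt and -pthread.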

Software implementation

In software, shared memory refers to one of the following:

  • A method of interprocess communication (IPC), that is, a way of exchanging data between programs running at the same time: one process creates a region of memory that other processes can access.
  • A method of conserving memory by directing accesses to what would ordinarily be separate copies of the same data to a single instance instead, typically by means of virtual memory mappings or the method described above. This approach is most commonly used for shared libraries and for XIP (execute in place).

Since both processes can access the shared memory region as ordinary memory, this is a very fast communication method (unlike other IPC mechanisms such as named pipes, UNIX domain sockets, or CORBA). On the other hand, this method is less flexible: for example, the communicating processes must run on the same machine (of the IPC methods listed, only network sockets, not to be confused with UNIX domain sockets, can exchange data across the network), and care must be taken to avoid problems when shared memory is used from different processor cores on hardware without a coherent cache.

Data exchange via shared memory is used, for example, to transfer images between an application and the X server on Unix systems, or inside the IStream object returned by CoMarshalInterThreadInterfaceInStream in the COM library under Windows.

Dynamic libraries are usually loaded into memory once and mapped into several processes; only the pages specific to an individual process (for example, because some identifiers differ) are duplicated, usually by means of a mechanism known as copy-on-write, which, on an attempted write to the shared memory, transparently copies the affected memory pages for the writing process and then applies the write to that copy.

On UNIX-like operating systems

POSIX provides a standardized API for working with shared memory, POSIX Shared Memory. One of the key features of the UNIX family of operating systems is the process-copying mechanism (the fork() system call), which makes it possible to create anonymous shared memory regions before a process is copied and have descendant processes inherit them. After the process has been copied, the shared memory is available to both the parent and the child process. [3] [4]
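
As an illustration of this inheritance, here is a minimal sketch (error handling omitted) of an anonymous shared mapping created before fork() and then used by both processes; note that MAP_ANONYMOUS is a widespread extension rather than part of POSIX.1-2001.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* Anonymous shared memory: not backed by any file, inherited on fork(). */
        char *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);

        if (fork() == 0) {                /* child: write into the mapping    */
            strcpy(mem, "written by the child process");
            return 0;
        }
        wait(NULL);                       /* parent: wait, then read the data */
        printf("%s\n", mem);
        munmap(mem, 4096);
        return 0;
    }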

There are two different approaches to connecting and using shared memory:

  • in UNIX System V style, using the XSI extension functions of POSIX.1-2001: shmget, shmctl, shmat, and shmdt [5];
  • via the POSIX functions shm_open, shm_unlink, ftruncate, and mmap (standard POSIX.1-2001) [6].

UNIX System V style shared memory

UNIX System V provides a set of C language functions that allow you to work with shared memory [7]:

  • shmget — creates a shared memory segment bound to an integer key, or an anonymous shared memory segment (if IPC_PRIVATE is specified instead of a key) [8];
  • shmctl — sets the parameters of a memory segment [9];
  • shmat — attaches a segment to the process's address space [4];
  • shmdt — detaches a segment from the process's address space [10].

Named shared memory means that each memory segment is associated with a numeric key that is unique within the operating system and through which the shared memory can later be attached in another process. [8]
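
A minimal sketch of these calls (error handling omitted; the key value is illustrative, real code would normally derive one with ftok()):

    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        key_t key = 0x1234;               /* illustrative numeric key         */

        /* shmget: create (or look up) a segment bound to the key */
        int shmid = shmget(key, 4096, IPC_CREAT | 0600);

        /* shmat: attach the segment to this process's address space */
        char *mem = shmat(shmid, NULL, 0);
        strcpy(mem, "hello via System V shared memory");

        /* shmdt: detach; shmctl with IPC_RMID marks the segment for removal */
        shmdt(mem);
        shmctl(shmid, IPC_RMID, NULL);
        return 0;
    }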

POSIX shared memory

POSIX allows a file descriptor to be associated with a shared memory object, which is a more uniform mechanism than the UNIX System V one. The following C functions are used to work with such memory:

  • shm_open — creates or opens a POSIX shared memory object by name [6];
  • shm_unlink — removes a shared memory object by name (the segment itself continues to exist until it has been detached from all processes) [11];
  • ftruncate — sets or changes the size of the shared memory (or of a memory-mapped file) [12];
  • mmap — maps an existing shared memory object (or creates an anonymous one) into the process's address space [3].
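
To complement the server-side sketch in the client-server section above, a minimal reader of the same (illustrative) "/demo_shm" object could look like this; error handling is again omitted.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SHM_SIZE 4096

    int main(void)
    {
        /* shm_open without O_CREAT: open an existing object by its name */
        int fd = shm_open("/demo_shm", O_RDONLY, 0);
        char *mem = mmap(NULL, SHM_SIZE, PROT_READ, MAP_SHARED, fd, 0);

        printf("%s\n", mem);              /* read what the writer stored      */

        munmap(mem, SHM_SIZE);
        close(fd);
        shm_unlink("/demo_shm");          /* the object persists until every
                                             process has detached from it     */
        return 0;
    }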

On Windows operating systems

On Windows, the CreateSharedMemory function [13] from the Win32 SDK can be used to create shared memory. Alternatively, the CreateFileMapping and MapViewOfFile functions [14] documented in MSDN can be used.
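
For illustration, a minimal pagefile-backed mapping using CreateFileMapping and MapViewOfFile might look like this; the mapping name is illustrative and error handling is omitted.

    #include <windows.h>
    #include <string.h>

    int main(void)
    {
        /* INVALID_HANDLE_VALUE: back the mapping by the system paging file */
        HANDLE hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                         PAGE_READWRITE, 0, 4096,
                                         "Local\\DemoSharedMemory");

        /* Map a view of the shared section into this process's address space */
        char *mem = (char *)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 4096);
        strcpy(mem, "hello via a Windows file mapping");

        /* Another process can open the same region by name with
           OpenFileMappingA() and then call MapViewOfFile() on that handle. */

        UnmapViewOfFile(mem);
        CloseHandle(hMap);
        return 0;
    }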

Support in programming languages

Some C++ libraries offer cross-platform support for working with shared memory. For example, the Boost library provides the boost::interprocess::shared_memory_object class [15] for POSIX-compatible operating systems, and the Qt library provides the QSharedMemory class, which unifies access to shared memory across operating systems with some restrictions [16].

In Java 7 under GNU/Linux, shared memory can be implemented by mapping a file from the /dev/shm/ directory (or /run/shm/, depending on the distribution) into memory [17] using the map method of java.nio's FileChannel, which returns a MappedByteBuffer [18].

Support for shared memory is implemented in many other programming languages as well. For example, PHP provides an API [19] for creating shared memory whose functions are similar to the POSIX ones.

See also

  • Race condition
  • Critical section
  • Semaphore
  • Inter-Process Communication (IPC)
  • ABA problem

Notes

