Baoyan postgraduate entrance examination interview - operating system
2022-06-26 07:53:00 【moluggg】
Having been admitted through baoyan to a mid-tier 985 university, I prepared these important operating-system fundamentals that the interview may ask about:
1. What are the common process states, and what are the transitions between them?
- Ready: the process has been allocated all necessary resources except the CPU; as soon as it gets the CPU it can execute immediately.
- Running: the process holds the CPU and its program is executing.
- Blocked: the running process cannot continue for the moment because it is waiting for some event (such as an I/O request, or a failed buffer request).
2. Process synchronization
The main task of process synchronization is to coordinate the execution order of related processes, so that concurrently executing processes can share resources and cooperate effectively, making program execution reproducible.
Principles of a synchronization mechanism:
(1) Enter when free: a process may enter the critical section when no other process is inside it.
(2) Wait when busy: if the critical section is occupied, other processes must wait (this guarantees mutually exclusive access to the critical section).
(3) Bounded waiting: a process should only have to wait a bounded amount of time, so that no process waits forever.
(4) Yield while waiting: a process that cannot enter its critical section should release the processor rather than fall into busy waiting.
Common process synchronization primitives: mutexes, read/write locks, condition variables, record locks (record locking), semaphores, and barriers.
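The principles above can be sketched with a condition variable, which combines "wait when busy" with "yield while waiting": a thread that cannot proceed releases the lock and sleeps instead of busy-waiting. This is an illustrative example (the bounded buffer, its capacity, and the thread names are assumptions, not from the notes):

```python
import threading

buffer, CAPACITY = [], 2
cond = threading.Condition()          # a lock plus wait/notify in one object
results = []

def producer():
    for item in range(4):
        with cond:                    # mutual exclusion on the buffer
            while len(buffer) == CAPACITY:
                cond.wait()           # yield while waiting, no busy loop
            buffer.append(item)
            cond.notify_all()

def consumer():
    for _ in range(4):
        with cond:
            while not buffer:
                cond.wait()
            results.append(buffer.pop(0))
            cond.notify_all()

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)                        # items come out in production order
```

The `while` (not `if`) around each `wait()` re-checks the condition after wakeup, which is the standard guard against spurious wakeups.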
3. What inter-process communication (IPC) mechanisms are there?
Inter-process communication is the exchange of information between processes, whether a small amount (a single status flag or value) or a large amount (many kilobytes). Using semaphores for mutual exclusion and synchronization between processes exchanges very little information, so it is classed as low-level communication.
High-level IPC means communication in which the user transfers large amounts of data through a set of communication primitives provided by the operating system. The OS hides the implementation details of the communication; in other words, the communication is transparent to the user.
High-level communication mechanisms fall into three categories:
(1) Shared-memory systems (a shared region carved out of memory). A familiar example is the clipboard (which is really a memory area maintained and managed by the system): pressing Ctrl+C in a Word process and Ctrl+V in a PowerPoint process is communication between the Word process and the PowerPoint process. Copying puts the data on the clipboard, and pasting takes it off and displays it in the PowerPoint window.
(2) Message-passing systems (processes exchange data in units of messages). In today's popular microkernel operating systems, communication between the microkernel and its servers is, without exception, done by message passing.
(3) Pipe systems (a pipe is a shared "pipe file" connecting a reading process and a writing process, behaving like a first-in-first-out queue: one process writes, another reads).
Pipes: **a pipe is a one-way, first-in-first-out, unstructured byte stream of fixed capacity that connects the standard output of one process to the standard input of another.** The writing process writes data at one end of the pipe and the reading process reads at the other. Data that has been read is removed from the pipe, so no other reading process can read it again. Pipes provide a simple flow-control mechanism: a process that tries to read from an empty pipe blocks until data is written, and likewise a process that tries to write to a full pipe blocks until another process removes data from it.
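A minimal sketch of those pipe semantics (an assumed example, requiring a POSIX system for `os.fork()`): a one-way byte stream between a parent and a forked child, where the parent's read blocks until the child writes.

```python
import os

r, w = os.pipe()                 # r = read end, w = write end
pid = os.fork()
if pid == 0:                     # child: the writing process
    os.close(r)
    os.write(w, b"hello from child")
    os.close(w)
    os._exit(0)
else:                            # parent: the reading process
    os.close(w)
    data = os.read(r, 1024)      # blocks until the child writes
    os.close(r)
    os.waitpid(pid, 0)
    print(data.decode())
```

Each side closes the end it does not use; leaving the write end open in the parent would prevent the reader from ever seeing end-of-file.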
Semaphores: a semaphore is a counter that can be used to control access by multiple processes to a shared resource. It is often used as a locking mechanism, preventing other processes from accessing a shared resource while one process is using it. It therefore serves mainly as a means of synchronization between processes, and between threads within the same process.
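A small sketch of the counter behavior (the worker count, sleep time, and variable names are illustrative assumptions): a semaphore initialized to 2 caps how many threads hold the "resource" at once.

```python
import threading, time

sem = threading.Semaphore(2)       # counter starts at 2
active, peak = 0, 0
lock = threading.Lock()            # protects the bookkeeping counters

def worker():
    global active, peak
    with sem:                      # acquire: counter -1, blocks at 0
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)           # hold the resource briefly
        with lock:
            active -= 1            # release happens when the with-block exits

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)                        # never exceeds the semaphore's count of 2
```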
Message queues: a queue of messages maintained by the system kernel, stored inside the kernel as a linked list of messages. Message queues overcome the limitations of signals (which carry little information) and of pipes (which carry only unformatted byte streams and have limited buffer size).
Shared memory: shared memory allows two or more processes to access the same logical memory. The region is mapped into the address space of each participating process; information one process writes into shared memory can be used by any other process attached to that region through a simple memory read, which is how the communication happens. **If one process writes data into shared memory, the change is immediately visible to every other process that can access the same shared memory.**
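A sketch of shared-memory IPC (an assumed example, using Python's `multiprocessing.shared_memory` on a platform where child processes are forked): the child performs a plain memory write into a block mapped by both processes, and the parent sees the change directly.

```python
from multiprocessing import Process, shared_memory

def child_write(name):
    # attach to the existing block by name and write into it
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[:5] = b"hello"
    shm.close()

shm = shared_memory.SharedMemory(create=True, size=16)
p = Process(target=child_write, args=(shm.name,))
p.start()
p.join()
data = bytes(shm.buf[:5])        # the child's write is visible here
print(data)
shm.close()
shm.unlink()                     # free the block once both sides are done
```

Note that shared memory by itself provides no synchronization; in real use a semaphore or lock would coordinate the reader and writer (here `join()` plays that role).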
Sockets: sockets are also an inter-process communication mechanism; unlike the others, they can be used for communication between processes on different machines.
4. Context switches
Context: a static description of the whole execution activity, i.e., the current process's register contents plus details such as its memory tables.
On a single-core, single-threaded CPU, only one CPU instruction stream can execute at a time. A context switch is the mechanism by which the CPU is reassigned from one process to another. From the user's point of view, the computer runs many processes in parallel; this is exactly the result of the operating system switching contexts rapidly. During a switch, the OS first saves the state of the current process (including pointers into its memory space, the currently executing instruction, and so on), then loads the state of the next process and executes it.
5. What are the differences and connections between processes and threads?
- A process is one running activity of a program with some independent function over a data set; the process is the system's independent unit of resource allocation and scheduling.
- A thread is an entity within a process and the basic unit of CPU dispatching; it is a basic unit of execution smaller than a process that can run independently.
The relationship between processes and threads:
(1) A thread belongs to exactly one process, while a process can have many threads but must have at least one. The thread is the smallest unit of execution and scheduling that the operating system recognizes.
(2) Resources are allocated to processes, and all threads of a process share all of its resources. Threads of the same process share the code segment (code and constants), the data segment (global and static variables), and the heap. But each thread has its own stack segment (also called the run-time stack), which holds all its local and temporary variables.
(3) Threads need to cooperate and synchronize during execution. Threads of different processes must synchronize by means of message-based communication.
Differences between processes and threads:
(1) A process has its own independent address space; a thread does not.
(2) The process is the smallest unit of resource allocation; the thread is the smallest unit of CPU scheduling.
(3) They communicate differently. Communication between threads is more convenient: threads of the same process share data (for example, global and static variables), so communicating through that data is both fast and convenient, although handling the synchronization and mutual exclusion of those accesses is the hard part of writing multithreaded programs. Processes, on the other hand, can only communicate through inter-process communication.
(4) A process context switch is expensive; a thread switch has low overhead.
(5) If one process crashes, other processes are unaffected; if one thread crashes, the other threads of its process are affected too.
(6) In general, operations on processes are expensive, while operations on threads are cheap.
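Difference (1) can be demonstrated directly (an illustrative sketch; the counter and function names are assumptions, and the process half relies on POSIX fork semantics): a thread mutates the parent's data because it shares the address space, while a forked process mutates only its own copy.

```python
import threading
from multiprocessing import Process

counter = {"n": 0}

def bump():
    counter["n"] += 1

# A thread shares the process's address space, so the change is visible.
t = threading.Thread(target=bump)
t.start(); t.join()
after_thread = counter["n"]        # incremented to 1

# A child process gets its own copy of the address space,
# so its increment never reaches the parent's dict.
p = Process(target=bump)
p.start(); p.join()
after_process = counter["n"]       # still 1 in the parent
print(after_thread, after_process)
```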
Why is a process context switch more expensive than a thread context switch?
A process switch has two steps:
1. Switch the page directory to use the new address space.
2. Switch the kernel stack and the hardware context.
On Linux, the biggest difference between a thread and a process is the address space. For a thread switch, step 1 is unnecessary; step 2 is what both process switches and thread switches must do.
The set of data that must be loaded into registers before a process can resume execution is called the hardware context.
The performance cost of switching:
The main difference between a thread context switch and a process context switch is that a thread switch keeps the same virtual memory space, whereas a process switch does not. Both kinds of context switch go through the operating-system kernel, and the most significant performance cost of that kernel path is swapping the register contents out.
Reference: https://www.zhihu.com/question/25532384/answer/81152571
Both processes and threads are descriptions of a span of time: descriptions of CPU working time.
- The big difference between a process and a thread: processes have their own address spaces. Threads inside a process are invisible to other processes, and process A cannot read or write process B's storage directly by passing addresses; communication between processes requires inter-process communication (IPC). By contrast, threads of the same process can pass information directly via addresses or global variables.
- **The process is the operating system's basic unit of resource ownership and independent scheduling, and it can own multiple threads.** Usually one program running on the operating system corresponds to one process. Within the same process, a thread switch does not cause a process switch; a thread switch across processes, for example switching from a thread in one process to a thread in another, does cause a process switch. Compared with a process switch, a thread switch is much cheaper, so combining threads with processes improves the system's running efficiency.
Threads fall into two categories:
- User-level threads (user level thread): very efficient, because no switch into kernel space is needed, but concurrency is limited.
- Kernel-level threads (kernel level thread): the kernel can assign different threads to different CPUs, achieving true parallel computing.
In practice, modern operating systems often combine the two approaches: thread creation is done entirely in user space, and an application's many user-level threads are mapped onto some number of kernel-level threads. It is a compromise between the two models.
6. Process scheduling
Types of scheduling
- High-level scheduling (High-Level Scheduling), also called job scheduling: decides, by some algorithm, which jobs in the backup queue on external storage are brought into memory; it creates processes for them, allocates the necessary resources, and puts them into the ready queue.
- Low-level scheduling (Low-Level Scheduling), also called process scheduling: decides which process in the ready queue gets the CPU.
- Intermediate scheduling (Intermediate-Level Scheduling): introduced together with virtual memory; it handles swapping processes between memory and the swap area on external storage.
Preemptive vs. non-preemptive scheduling
- Non-preemptive: once the dispatcher allocates the processor to a process, the process keeps running until it completes or a scheduling event occurs and the process blocks; only then is the processor given to another process.
- Preemptive: the operating system forcibly pauses the running process, and the scheduler reassigns the CPU to another ready process.
Designing a scheduling policy
- Response time: the time from user input to the first reaction.
- Turnaround time: the time from task submission to task completion.
CPU tasks divide into interactive tasks and batch tasks. The ultimate goals of scheduling are to use the CPU fairly, to make the response times of interactive tasks short enough that users perceive no delay, and at the same time to make the turnaround times of batch tasks as short as possible, reducing user waiting.
Scheduling algorithms:
FIFO, or First Come, First Served (FCFS)
- Tasks are scheduled in the order in which they arrive in the ready queue.
- Fair, simple (a FIFO queue), non-preemptive; not suitable for interactive workloads.
- It ignores task characteristics, so the average waiting time can be long.
Shortest Job First (SJF)
- Schedule the shortest job (the one with the smallest next CPU burst) first.
- SJF provably minimizes the average waiting time.
Shortest Remaining Job First (SRJF)
- The preemptive version of SJF; it performs even better than SJF.
- For SJF/SRJF: how do we know the length of the next CPU burst? Predict it from history, using exponential averaging.
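The exponential-averaging prediction mentioned above is tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n, where t_n is the measured length of burst n and tau_n the previous prediction. A minimal sketch (the choice of alpha = 0.5, the initial guess tau_0 = 10, and the sample bursts are assumptions for illustration):

```python
def predict_bursts(actual_bursts, alpha=0.5, tau0=10.0):
    """Return the predicted next-burst length before each measured burst."""
    tau = tau0
    predictions = [tau]
    for t in actual_bursts:
        # blend the latest measurement with the running history
        tau = alpha * t + (1 - alpha) * tau
        predictions.append(tau)
    return predictions

print(predict_bursts([6, 4, 6, 4]))   # predictions decay toward recent bursts
```

With alpha = 0.5 each older burst's weight halves, so recent behavior dominates the prediction.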
Priority scheduling
- Each task is associated with a priority, and the task with the highest priority is scheduled.
- Note: a task with too low a priority can stay ready forever and never get to run, producing "starvation".
Round-Robin (RR)
- Set a time slice and schedule by time slice (the "round-robin" algorithm).
- Advantages: regular responses and short waiting times. Disadvantage: more context switches.
- If the time slice is too small, context switches dominate, throughput drops, and turnaround times grow; if it is too long, response time suffers and RR degenerates to FCFS.
Multilevel queue scheduling
- Processes are placed into several queues according to fixed rules.
- Different queues have fixed priorities (higher-priority queues can preempt).
- Different queues can be given different time slices and use different scheduling algorithms.
- Problem 1: there is no way to distinguish I/O-bound tasks from CPU-bound ones.
- Problem 2: some degree of "starvation" still occurs.
Multilevel feedback queues
- Building on multilevel queues, tasks can move between queues, giving finer task differentiation.
- A task's queue changes according to how much CPU time it has "enjoyed", which prevents starvation.
- This is the most common scheduling algorithm; most operating systems use it or a variant, e.g., UNIX and Windows.
How multilevel feedback queue scheduling works:
- A process waiting to be scheduled first goes to the highest-priority queue, Q1.
- **Processes in the higher-priority queue are scheduled first. Only when a higher-priority queue has no process to schedule are the processes in the next queue scheduled.** For example, with three queues Q1, Q2, Q3: Q2 is scheduled only when Q1 has no waiting process, and likewise Q3 only when both Q1 and Q2 are empty.
- **Within the same queue, processes are scheduled round-robin.** For instance, if Q1's time slice is N and a job in Q1 is still unfinished after N time units, it moves down to wait in Q2; if it exhausts Q2's slice as well, it keeps moving down, queue by queue, until it completes.
- If a new job arrives while a process in a low-priority queue is running, then after the current time slice the CPU is immediately given to the newly arrived job (preemptive).
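The demotion rule above can be sketched as a tiny simulation (a deliberately simplified model with assumed parameters: two queues with time slices 2 and 4, all jobs present at the start, and no preemption by new arrivals):

```python
from collections import deque

def mlfq(jobs, slices=(2, 4)):
    """jobs: {name: total_cpu_time}; returns names in completion order."""
    queues = [deque(jobs.items()), deque()]
    finished = []
    while any(queues):
        # always serve the highest-priority non-empty queue
        level = 0 if queues[0] else 1
        name, remaining = queues[level].popleft()
        run = min(slices[level], remaining)
        remaining -= run
        if remaining == 0:
            finished.append(name)
        elif level + 1 < len(queues):
            queues[level + 1].append((name, remaining))  # used up slice: demote
        else:
            queues[level].append((name, remaining))      # bottom queue: round-robin
    return finished

print(mlfq({"A": 2, "B": 6, "C": 3}))
```

The short job A finishes inside Q1's slice; the longer jobs B and C are demoted to Q2 and complete there, matching the "until completion" behavior described above.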
7. What are the conditions for deadlock, and how is deadlock handled?
Definition: if every process in a set is waiting for an event that can only be triggered by another process in the same set, the set of processes is deadlocked. Equivalently: among two or more concurrent processes, if each holds a resource and waits for the others to release the resources they hold, and none can proceed until that changes, the processes are deadlocked. Informally, two or more processes are blocked indefinitely, waiting on each other.
The necessary conditions for deadlock:
- Mutual exclusion: a resource cannot be shared; it can be used by only one process at a time.
- Hold and wait: a process already holding resources may request new resources.
- No preemption: resources already allocated cannot be forcibly taken away from the process holding them.
- Circular wait: several processes in the system form a cycle in which each process waits for a resource occupied by the next.
Handling deadlocks:
Deadlock prevention: design protocols that break one of the conditions.
Deadlock avoidance: allocate resources carefully and dynamically so the system always stays in a safe state; the banker's algorithm can be used.
Deadlock detection and recovery: resource-allocation graphs, preempting resources, terminating processes.
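The banker's algorithm mentioned above grants a request only if the resulting state is safe. Below is a minimal sketch of the safety check at its core (the example matrices are classic textbook-style assumptions, not taken from the notes):

```python
def is_safe(available, allocation, need):
    """Return (safe?, a safe completion sequence of process indices)."""
    work = list(available)
    finish = [False] * len(allocation)
    sequence = []
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finish):
            # process i can run to completion if its remaining need fits in work
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]  # reclaim
                finish[i] = True
                sequence.append(i)
                progress = True
    return all(finish), sequence

safe, seq = is_safe(available=[3, 3, 2],
                    allocation=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]],
                    need=[[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]])
print(safe, seq)
```

If no process can finish with the currently available resources, `all(finish)` is false and the state is unsafe, so the request that would lead there must be denied.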
8. Critical resources
- In the operating system, the process is the smallest unit that owns resources (a thread can access all the resources of its process but owns none itself, or only the few it strictly needs). But some resources can be occupied by only one process at a time; resources of this kind are called critical resources. Typical examples are physical printers, and variables or data on disk or in memory that are shared by several processes (if such resources were not protected as critical resources, data would very likely be lost or corrupted).
- **Access to a critical resource must be mutually exclusive.** That is, while a critical resource is occupied, any other process that requests it is blocked until the critical resource it requested is released. The code in a process that accesses a critical resource is called its critical section.
9. The whole journey of a program from start to finish (four stages)
1. Preprocessing: handles conditional compilation, header-file inclusion, and macro substitution, producing a .i file.
2. Compilation: translates the preprocessed file into assembly language, producing a .s file.
3. Assembly: turns the assembly into object code (machine code), producing a .o file.
4. Linking: links the object code together, producing an executable.
10. Memory pools, process pools, and thread pools
First, a concept: "pooling". Pooling means setting aside a large amount of some resource in advance and reusing it when needed. The technique is widely used: memory pools, thread pools, connection pools, and so on. For memory pools in particular, the implementations in the open-source web servers Apache and Nginx are worth studying.
In practice, allocating memory and creating processes or threads involve system calls, and a system call forces the program to switch from user mode to kernel mode, which is a very expensive operation. So when a program must frequently allocate and free memory, or create and destroy processes or threads, memory pools, process pools, and thread pools are used to improve its performance.
Thread pool: the principle is simple and resembles a buffer in the operating system. A number of threads are started up front and put to sleep. When a thread is needed for some specific piece of work, one of the sleeping threads in the pool is woken up to do it; when the work is finished, the thread goes back to sleep instead of being destroyed.
Process pools work the same way as thread pools.
Memory pool: **a memory pool means the program requests a sufficiently large block of memory from the operating system in advance. Afterwards, when the program needs memory, it does not ask the operating system but takes it directly from the pool; likewise, when the program frees memory, it does not really return it to the operating system but returns it to the pool.** Only when the program exits (or at specific moments) does the memory pool truly release the memory it requested.
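The thread-pool description above can be sketched in a few lines (an illustrative toy, not a production implementation; class and variable names are assumptions): workers sleep on a task queue and are "woken" when work arrives, instead of being created and destroyed per task.

```python
import threading, queue

class ThreadPool:
    def __init__(self, num_workers=3):
        self.tasks = queue.Queue()
        self.workers = [threading.Thread(target=self._worker, daemon=True)
                        for _ in range(num_workers)]
        for w in self.workers:
            w.start()

    def _worker(self):
        while True:
            func, args = self.tasks.get()   # sleeps until a task arrives
            func(*args)
            self.tasks.task_done()

    def submit(self, func, *args):
        self.tasks.put((func, args))

    def wait(self):
        self.tasks.join()                   # block until all tasks are done

pool = ThreadPool()
results = []
lock = threading.Lock()

def task(x):
    with lock:                              # results list is shared state
        results.append(x * x)

for i in range(5):
    pool.submit(task, i)
pool.wait()
print(sorted(results))
```

`queue.Queue.get()` blocks the worker, which is exactly the "put the thread to sleep until woken" behavior the notes describe.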
11. The difference between static and dynamic link libraries
Static libraries
- A static library is a collection of external functions and variables: typically a set of variables and functions decided by the programmer. Its content is less complicated than a dynamic library's. At compile time the compiler and linker integrate it into the application, producing object files and an independently runnable executable. That executable, together with the process of compiling it, constitutes a static build of the program.
Dynamic (shared) libraries:
- Static libraries are very convenient, but even if we only want one function from the library, everything in it still has to be linked in. The more modern approach is shared libraries, which avoid massive duplication of the static library's contents across files.
- Dynamic linking can happen at first load (load-time linking). This is the standard practice on Linux, done by the dynamic linker ld-linux.so. The standard C library (libc.so), for example, is usually dynamically linked, so all programs can share the same copy of the library instead of packaging it separately.
- Dynamic linking can also happen while the program is running (run-time linking); on Linux this is done through the dlopen() interface (function pointers can then be used). It is common in distributed software and on high-performance servers. Shared libraries can also be shared among multiple processes.
- Linking lets us build programs out of multiple object files, and it can happen at different stages (compile time, load time, run time). Understanding linking helps us avoid strange errors.
Differences:
- When a static library is used, it participates in compilation: during the link step before the executable is produced, all the library's instructions are linked directly into the executable file. A dynamic library instead provides a way for a process to call functions that are not part of its own executable code: the functions' executable code lives in a .dll (or .so) file containing one or more functions that are compiled, linked, and stored separately from the processes that use them.
- A static library cannot contain other static or dynamic libraries; a dynamic library can contain both.
- With a static library, the library functions are loaded into the program at compile time; a dynamic library's functions must be loaded at run time, so using a static library is faster.
12. Virtual memory: what is it, and what are its advantages and disadvantages?
Definition: a memory system with demand loading and replacement, which can logically expand the capacity of main memory. Its logical capacity is determined by the sum of main memory and external storage.
Compared with conventional memory, virtual memory has three main characteristics:
- Multiplicity: a job need not be loaded into memory all at once in order to run; it may be brought into memory in multiple installments.
- Swappability: a job need not stay resident in memory for its whole run; it may be swapped in and out as needed.
- Virtuality: memory capacity is expanded logically, so the amount of memory the user sees is much larger than the actual physical memory.
There are two ways to implement virtual memory:
- Demand paging.
- Demand segmentation.
13. Page replacement algorithms
The operating system manages memory in pages and brings the relevant parts of a process into memory only when needed. When a page fault occurs and memory is full, a page must be chosen to evict. If the page to be swapped out has been modified in memory, becoming a "dirty" page, it must first be written back to disk. A page replacement algorithm chooses the most suitable page so that replacement is as effective as possible. There are many such algorithms; here is a brief survey, focusing on the more important LRU and its implementations.
I. Optimal page replacement (OPT)
Ideally, we would mark the pages and evict the one whose next use lies farthest in the future. Of course, such an algorithm cannot be implemented, because we cannot be sure when a page will next be used.
II. First-in, first-out (FIFO) and its refinements
The idea is the same as a queue: the algorithm always evicts the page that entered memory first, i.e., the page that has been resident in memory the longest. Implementation: link the pages a process has brought into memory into a queue in arrival order, with a pointer always pointing at the oldest page. Drawback: frequently accessed pages, such as those containing global variables, common functions, or shared routines, are given no protection from eviction.
III. Least Recently Used (LRU)
LRU decides based on how pages have been used since being brought into memory: it evicts the page that has gone unused for the longest time. Two implementations:
1. Give each page in memory a shift register. A periodic timing signal shifts each register right by one bit; the page whose register holds the smallest value is the one unused for the longest time.
2. Keep a special stack of the page numbers currently in use. Whenever a process references a page, remove that page's number from the stack and push it onto the top. The top of the stack is therefore always the most recently referenced page number, and the bottom is the least recently referenced one.
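The stack-style LRU bookkeeping above can be sketched with an ordered dictionary playing the role of the stack: the most recently used page sits at one end, the least recently used at the other (the frame count of 3 and the reference string are assumptions for illustration):

```python
from collections import OrderedDict

def lru_page_faults(reference_string, frames=3):
    memory = OrderedDict()                  # ordered LRU -> MRU
    faults = 0
    for page in reference_string:
        if page in memory:
            memory.move_to_end(page)        # "move to the top of the stack"
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)  # evict the least recently used
            memory[page] = True
    return faults

print(lru_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]))
```

Each hit promotes the page; each miss counts a fault and, when the frames are full, evicts the page at the bottom of the "stack".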
See also: segmented memory management.
14. Interrupts and system calls
**An interrupt occurs when, during the execution of a computer program, some special event makes the CPU suspend the current program and instead run the program that handles that event; once those special events are handled, execution returns to the original program.** Interrupts fall into three categories:
- Interrupts caused by exceptional conditions or faults in the computer hardware: internal exception interrupts.
- Interrupts caused by executing an instruction in a program that raises an interrupt: soft interrupts (these are the ones related to the system calls explained below).
- Interrupts raised by requests from external devices: external interrupts. Put simply, an interrupt is how special events get handled.
A concept closely tied to interrupts is the interrupt handler. When an interrupt occurs, the system must process it; the processing is done by specific functions in the operating-system kernel, and those specific functions are what we call interrupt handlers.
Another closely related concept is interrupt priority. Interrupt priority specifies which interrupts the processor will accept while it is already handling one, and it also indicates how urgently each interrupt must be handled. **Each interrupt has a corresponding priority; while the processor is handling an interrupt, only interrupts of higher priority can be accepted and processed.** Interrupts with priority lower than the one currently being handled are ignored for the time being.
A typical interrupt priority order is:
- machine error > clock > disk > network devices > terminal > software interrupt
Before discussing system calls, note first that a process executes at two levels on the system: user level and kernel level, also called user mode and kernel mode.
User space is the memory region where user processes live; correspondingly, system space is the memory region occupied by the operating system. The data of user processes and system processes all reside in memory. A program in user mode can access only user space, while a program in kernel mode can access both user space and kernel space.
The ways to switch from user mode to kernel mode are:
- System call: programs generally execute in user mode, but when a program needs a service provided by the operating system, such as opening a device, creating a file, or reading and writing a file (these are all system calls), it must issue a request to the operating system to invoke the service. That request is the system call.
- Exception: while the CPU is executing a user-mode program, an unexpected event may occur, which switches execution from the currently running process to the kernel routine that handles the exception, entering kernel mode. A page fault is one example.
- **Peripheral-device interrupt:** when a peripheral device completes an operation the user requested, it sends the CPU a corresponding interrupt signal. The CPU then suspends the next instruction it was about to execute and runs the handler for that interrupt signal. If the previously executing instruction belonged to a user-mode program, the transition is naturally from user mode to kernel mode. For example, when a disk read/write completes, the system switches to the disk interrupt handler to carry out the follow-up work.
What is the difference between user mode and kernel mode?
The privileges differ.
- A user-mode process can access its own instructions and data, but it cannot access kernel instructions and data (or those of other processes).
- **A process in kernel mode can access both kernel and user addresses. Some machine instructions are privileged instructions; executing a privileged instruction in user mode causes a fault.** In the system, the kernel is not some collection of processes running in parallel with the user processes.
15. C++ multithreading, mutual exclusion, and synchronization
Synchronization and mutual exclusion
When there are multiple threads, it is often necessary to **synchronize** (note: synchronization does not mean simultaneity) their access to the same data or resource.
**Synchronization** refers to program fragments in different processes that must run in a strictly specified order, an order determined by the particular task to be done. Defined in terms of resource access, **synchronization is mutual exclusion (in most cases) plus some further mechanism that makes visitors access the resource in order.** In most cases synchronization already achieves mutual exclusion; in particular, all writes to a resource must be mutually exclusive. In a few cases, multiple visitors may be allowed to access the resource at the same time.
**Mutual exclusion** refers to program fragments scattered among different processes: while one process is running such a fragment, no other process may run any of them until the first has finished. Defined in terms of resource access, mutual exclusion means only one visitor may access the resource at a time, uniquely and exclusively. But mutual exclusion cannot constrain the order in which visitors access the resource; the accesses are unordered.
Ways to implement multithreaded synchronization and mutual exclusion
Synchronization between threads comes in two kinds: user mode and kernel mode. As the names suggest, kernel-mode synchronization relies on the atomicity of system kernel objects and requires switching between kernel mode and user mode, while user-mode synchronization stays entirely in user mode with no switch needed.
User-mode methods: atomic operations (e.g. on a single global variable) and critical sections.
Kernel-mode methods: events, semaphores, and mutexes.
1. Critical section: serializes access by multiple threads to a shared resource or a piece of code; fast; suitable for controlling access to data.
2. Mutex: coordinates exclusive access by multiple accessors to a single shared resource.
3. Semaphore: controls a resource that allows a limited number of concurrent users.
4. Event: notifies a thread that some event has occurred, so it can start its next task.
16. Logical address vs physical address vs virtual memory
- **A logical address is the address a computer user (e.g. a program developer) sees.** For example, when you create an integer array of length 100, the operating system returns a logically contiguous space: a pointer to the memory address of the array's first element. Since an integer is 4 bytes, the address of the second element is the start address plus 4, and so on. In fact, a logical address is not necessarily where the element is really stored: the physical addresses of the array's elements (in the memory module) may be non-contiguous. The operating system simply maps logical addresses so that they appear contiguous, which better matches human intuition.
- Another important concept is virtual memory. Memory is several orders of magnitude faster to read and write than disk, but it is also comparatively expensive and cannot be expanded on a large scale. The operating system therefore moves some rarely used data out of memory and stores it in a cheaper disk cache, effectively extending memory. The operating system can also use algorithms to predict which part of the data in the disk cache will be read or written, and bring that data back into memory in advance. The virtual memory space is much smaller than the disk, so even searching the virtual memory space is faster than searching the disk directly. The only case slower than the disk is when the required data is in neither memory nor virtual memory and must finally be read from the hard disk. This is why memory and virtual memory should hold data that will be read and written repeatedly; otherwise caching loses its point. Modern computers have a dedicated translation lookaside buffer (TLB) to speed up the translation from virtual addresses to physical addresses.
There are two other concepts related to memory/virtual memory; see the reference link.
Reference link: https://blog.csdn.net/newcong0123/article/details/52792070
17. Internal fragmentation and external fragmentation
In memory management, **internal fragmentation** is allocated memory space that exceeds the memory the request actually needs.
**External fragmentation** consists of free blocks that have not been allocated, but are too small to satisfy the request of any new process asking for space.
Fixed partitioning produces internal fragmentation; variable partition allocation produces external fragmentation.
Paged virtual storage systems have internal fragmentation; segmented virtual storage systems have external fragmentation.
To use memory effectively and reduce fragmentation, memory is paged and used page by page; the last page is usually not full, which is how internal fragments form.
Segmentation serves sharing; swapping segments in and out forms external fragments. For example, after a 5K segment is swapped out and a 4K segment is placed where it was, a 1K external fragment is produced.
18. The difference between synchronization and mutual exclusion
When there are multiple threads, it is often necessary to synchronize their access to the same data or resource. For example, suppose one thread of a program reads a file into memory while another counts the characters in the file. Counting before the whole file is in memory is of course meaningless; but because each operation has its own thread, the operating system treats the two as unrelated tasks and runs them independently, so the character count might start before the whole file has been loaded. To solve this, the two threads must be made to work in step.
**Synchronization** refers to program fragments in different processes that must run in a strictly specified order, an order determined by the particular task to be done. Defined in terms of resource access, synchronization is mutual exclusion (in most cases) plus some further mechanism that makes visitors access the resource in order. In most cases synchronization already achieves mutual exclusion; in particular, all writes to a resource must be mutually exclusive. In a few cases, multiple visitors may be allowed to access the resource at the same time.
**Mutual exclusion** refers to program fragments scattered among different processes: while one process is running such a fragment, no other process may run any of them until the first has finished. Defined in terms of resource access, mutual exclusion means only one visitor may access the resource at a time, uniquely and exclusively. But mutual exclusion cannot constrain the order in which visitors access the resource; the accesses are unordered.
19. What is thread safety
A program is "thread safe" if running it with multiple threads gives predictable results, identical to the results of running it single-threaded.
20. Synchronous and asynchronous
Synchronous:
- Definition: when a process issues a request that takes some time to answer, the process waits and does not continue until the response arrives.
- Characteristics:
  - Synchronous calls block;
  - Synchronous calls execute in order: one finishes before the next starts, so the parties wait for and coordinate with each other.
Asynchronous:
- The process does not keep waiting; it continues with subsequent work regardless of the state of other processes. When the response arrives, the system notifies the process to handle it, which improves efficiency.
- Characteristics:
  - Asynchronous calls are non-blocking and need no waiting;
  - Asynchronous tasks are independent of one another: while waiting for an event, a process keeps doing its own work instead of waiting for the event to finish. Threads are one way to implement asynchrony.
Pros and cons of synchronous and asynchronous:
- Synchronization can avoid deadlock and dirty reads. When a resource is shared and everyone may modify it, simultaneous modification of a file can let one party read content another party has already deleted; synchronization prevents such errors. But synchronous access must wait for the resource access to finish, which wastes time and lowers efficiency.
- Asynchrony improves efficiency, but is less safe.
21. The difference between system calls and library functions
- **A system call is how a program requests a service from the operating system kernel.** Services include hardware-related ones (e.g. accessing the hard disk), creating new processes, scheduling other processes, and so on. System calls are the essential interface between programs and the operating system.
- Library functions: commonly used functions collected into a file and called when writing applications; they are provided by third parties and run entirely in the user address space.
- Portability: system calls generally differ between operating systems, so they port poorly; library functions conforming to ANSI C behave the same under every compliant compiler.
- Call overhead: a system call must switch between user space and the kernel environment, which is expensive; a library function call is an ordinary procedure call and costs much less.
22. Daemon, zombie, and orphan processes
- Daemon: a special process that runs in the background, detached from any controlling terminal, and periodically performs some task.
- Zombie process: a process forks a child, the child exits, but the parent never calls wait/waitpid on it, so the child's process descriptor remains in the system. Such a child is called a zombie process.
- Orphan process: a parent process exits while one or more of its children are still running; these children are called orphan processes. (Orphans are adopted by the init process, which collects their exit status.)
23. Semaphore vs Mutex
- When a user creates multiple threads/processes, simultaneous reads and writes of the same data may produce read/write errors or inconsistent data. Locking is then needed to control access to the critical section. **A semaphore can be initialized to allow a chosen number of threads/processes into the critical section at the same time;** the other threads/processes block until someone releases the semaphore.
- A mutex is equivalent to **a semaphore that admits only one thread/process.** In addition, as needs arose, people also implemented the read-write lock, which allows multiple readers to exist at the same time but at most one writer at any moment, and the writer cannot coexist with readers.
24. I/O multiplexing
How should a server respond to multiple independent I/O events? With I/O multiplexing, the kernel notifies a process as soon as any of the I/O conditions the process specified becomes ready for reading. I/O multiplexing is used in the following situations:
- When a client handles multiple descriptors (typically interactive input plus a network socket), I/O multiplexing must be used.
- When a client handles multiple sockets at the same time (possible, but rare).
- When a TCP server must handle both its listening socket and its connected sockets, I/O multiplexing is generally used.
- When a server must handle both TCP and UDP, I/O multiplexing is generally used.
- When a server must handle multiple services or protocols, I/O multiplexing is generally used.
- Compared with multiple processes or threads, the biggest advantage of I/O multiplexing is low system overhead: the system need not create or maintain extra processes/threads, which greatly reduces cost.
25. Thread safety
If code that several threads of a process may run at the same time produces, on every run, the same result as a single-threaded run, with every other variable holding its expected value, the code is thread safe. Put another way: the interface a class or program provides is an atomic operation with respect to threads, or switching among threads cannot make the interface's result ambiguous, so callers need not consider synchronization at all.
Thread safety problems are all caused by **global variables** and **static variables**.
If every thread only reads a global or static variable and none writes it, the variable is generally thread safe; if several threads write it at the same time, thread synchronization generally has to be considered, or thread safety may be broken.
26. Resources threads share and resources they own exclusively
All threads in a process share the process's address space, but each thread has its own (private) stack; on Windows the default thread stack size is 1 MB. The heap is allocated differently from the stack: a process generally has one C runtime heap that all of its threads share. A Windows process also has a so-called process default heap, and users can create heaps of their own.
In operating-system terms, switching threads actually switches a structure called the thread control block (TCB), which stores all the information needed to restore the thread's environment later, including every register set that must be saved, the thread's state, and so on.
**Heap:** a shared space, divided into the global heap and local heaps. The global heap is all the space not yet allocated; a local heap is space the user has allocated. The heap is set up when the operating system initializes the process; more can be requested from the system at run time, but it must be returned to the operating system afterwards, or the result is a memory leak.
**Stack:** private to each thread; it holds the thread's running state and local automatic variables. The stack is initialized when the thread starts, and each thread's stack is independent of the others, so the stack is thread safe. The operating system switches stacks automatically when it switches threads, by switching the SS/ESP registers. In high-level languages, stack space need not be explicitly allocated or freed.
Additional notes:
1. What are kernel mode and user mode? How does the system switch between them?
These are the two states in which programs execute: in kernel mode, all instructions, including privileged ones, can be executed and all of storage can be accessed; in user mode, only the user address space is accessible.
Switching: besides system calls (soft interrupts), switches are also triggered by exceptions and external interrupts.
2. Process communication
Low-level data communication: shared data structures, such as the bounded buffer between producer and consumer.
High-level: shared memory regions.
Pipe communication: a shared file, with a writing process and a reading process exchanging a character stream.
Message passing: communication via primitives, either directly or through an intermediate entity, e.g. mailbox communication.
3. When process scheduling can and cannot occur
Scheduling can occur when: the current process finishes execution, blocks, uses up its time slice, returns from a system call, or is preempted by a process with higher scheduling priority.
Scheduling cannot occur: while an interrupt handler is executing, or inside a critical section of kernel code.
Virtual memory
Virtual memory logically extends physical memory. Taking demand-paged virtual storage as an example: main memory and secondary storage are divided into pieces of the same page size, and data is swapped in and out page by page. A process consists of several pages; the logical address of the data the CPU wants to access is translated by the address-translation mechanism into a physical address, which locates the data either in memory or on external storage. (Deciding whether the page is in memory, and bringing it in if not, is the job of page-fault handling.)
To the user, virtual memory offers a speed close to main memory together with a capacity close to secondary storage.
LRU and LFU implementation mechanisms
LRU (Least Recently Used):
Hardware support: a stack or registers.
A dedicated stack holds the numbers of the pages currently in use: whenever a page is accessed, its number is moved to the top of the stack, so the page at the bottom of the stack is the one unused for the longest time.
LFU (Least Frequently Used):
The page that has been used least often is replaced.
Each page is given a shift register: every time the page is accessed, the register's highest bit is set to 1, and at fixed intervals the register is shifted right. The page whose register holds the smallest value is taken as the least frequently used.
Advantages and disadvantages of paging and segmentation:
Paging advantages: no external fragmentation, and each internal fragment is smaller than one page.
Paging disadvantages: the whole program must be loaded into memory, and hardware support is required; for example, generating page faults in the address-translation mechanism and selecting victim pages both need hardware, which raises machine cost and system overhead.
Segmentation:
Advantages: modules can be written and compiled separately; different types of segments can be given different protection; sharing can be done per segment, including code sharing through dynamic linking.
Data of the same kind can be placed in one segment of a process, which makes operations such as sharing convenient.
Disadvantage: it produces external fragmentation.