
On the difference between process and thread

2022-06-24 13:19:00 Software up

What is a process?

First, at the operating system level, a process (Process) is the basic unit of resource allocation and system scheduling; it can also be understood as the basic execution entity of a program. When a program has been loaded into memory and is ready to execute, it becomes a process. When a process is created, the operating system assigns it a unique, non-repeating ID, which is used to distinguish different processes.
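
As a small illustration of that process ID, a Java 9+ program can ask the operating system for the ID of the process it is running in (the class name PidDemo is made up for this sketch):

public class PidDemo {
    public static void main(String[] args) {
        // ProcessHandle.current() represents the process this JVM is running in
        long pid = ProcessHandle.current().pid();
        System.out.println("This program is running as process " + pid);
    }
}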

What is a thread?

A thread (Thread) is the smallest unit of CPU scheduling and the smallest unit of a program's execution flow. A thread cannot own resources independently (resources are shared by the threads of a process). Creating a thread is much cheaper than creating a process, because a new thread only needs its own stack and program counter, whereas a new process requires the operating system to allocate a new address space, data resources, and so on, which costs far more. Every process (program) has at least one thread; a process is a container for threads. Running multiple threads at the same time inside a single program to do different work is called multithreading.

The difference between processes and threads

The differences between processes and threads can be summarized in the following points:

  • A process can contain several threads, and it contains at least one thread; a thread can only belong to one process. In other words, a thread depends on a process.
  • Threads in the same process are not independent of each other; they share the process's resources. Processes, by contrast, are basically independent and do not interfere with one another.
  • A thread is a lightweight process; creating and destroying a thread takes much less time and fewer resources than a process.
  • In the operating system, a process can own resources of its own, while a thread cannot own resources independently.

Process scheduling

In an ordinary operating system, the processes a user runs, such as QQ, music players, and browsers, usually outnumber the CPU cores, so they compete with each other for the CPU while running. The operating system therefore needs policies for allocating the CPU to processes. There are five commonly used process scheduling algorithms.

  • First-come, first-served algorithm

First-come, first-served (FCFS) is one of the simplest scheduling algorithms; it can be used for both job scheduling and process scheduling. When used for job scheduling, each scheduling decision selects the one or more jobs that entered the backup job queue earliest, loads them into memory, allocates resources to them, creates processes for them, and places the processes in the ready queue. When FCFS is used for process scheduling, each scheduling decision selects the process at the head of the ready queue, assigns it the processor, and lets it run. The process does not give up the processor until it completes or blocks on some event.
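
A rough sketch of FCFS, assuming all jobs arrive at time 0 and using made-up burst times (the class name FcfsDemo is illustrative; Java 16+ for records):

import java.util.List;

public class FcfsDemo {
    // a job is just a name plus an estimated CPU burst in milliseconds (values are made up)
    record Job(String name, int burst) {}

    public static void main(String[] args) {
        List<Job> arrivalOrder = List.of(new Job("A", 30), new Job("B", 10), new Job("C", 20));
        int clock = 0;
        for (Job j : arrivalOrder) {              // serve jobs strictly in the order they arrived
            System.out.printf("%s waits %d ms, then runs for %d ms%n", j.name(), clock, j.burst());
            clock += j.burst();                   // the next job starts only after this one finishes
        }
    }
}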

  • Shortest-job-first algorithm

Shortest-job (process)-first scheduling gives priority to short jobs or short processes; it can be used for job scheduling and for process scheduling respectively. The shortest-job-first (SJF) algorithm selects from the backup queue the one or more jobs with the shortest estimated running time and loads them into memory to run. The shortest-process-first (SPF) algorithm selects from the ready queue the process with the shortest estimated running time, assigns it the processor, and lets it run until it finishes, or until it blocks on some event and gives up the processor, at which point scheduling happens again.
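
Under the same assumptions as the FCFS sketch above (everything available at time 0, made-up burst times), shortest-job-first only changes the order in which the queue is served:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SjfDemo {
    record Job(String name, int burst) {}

    public static void main(String[] args) {
        List<Job> jobs = new ArrayList<>(List.of(new Job("A", 30), new Job("B", 10), new Job("C", 20)));
        jobs.sort(Comparator.comparingInt(Job::burst)); // shortest estimated running time first
        int clock = 0;
        for (Job j : jobs) {
            System.out.printf("%s waits %d ms, then runs for %d ms%n", j.name(), clock, j.burst());
            clock += j.burst();
        }
    }
}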

  • Time-slice round-robin algorithm

The system queues all ready processes on a first-come, first-served basis. On each scheduling decision, the CPU is assigned to the process at the head of the queue, which runs for one time slice. The time slice ranges from a few milliseconds to a few hundred milliseconds. When the time slice expires, a timer raises a clock interrupt, and the scheduler stops the running process and moves it to the tail of the ready queue; it then assigns the processor to the new head of the ready queue and lets that process run for a time slice. This guarantees that every process in the ready queue gets a slice of processor time within a given period. In other words, the system can respond to all users' requests within a given time.
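
A minimal round-robin sketch, assuming a fixed 20 ms time slice and three processes with made-up remaining times:

import java.util.ArrayDeque;
import java.util.Deque;

public class RoundRobinDemo {
    public static void main(String[] args) {
        int quantum = 20;                         // time slice in ms (illustrative value)
        Deque<int[]> ready = new ArrayDeque<>();  // each entry: {process id, remaining time}
        ready.add(new int[]{1, 50});
        ready.add(new int[]{2, 30});
        ready.add(new int[]{3, 20});
        while (!ready.isEmpty()) {
            int[] p = ready.poll();               // take the process at the head of the ready queue
            int run = Math.min(quantum, p[1]);
            p[1] -= run;
            System.out.printf("P%d runs %d ms, %d ms left%n", p[0], run, p[1]);
            if (p[1] > 0) {
                ready.add(p);                     // not finished: back to the tail of the queue
            }
        }
    }
}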

  • Multilevel feedback queue scheduling algorithm

The multilevel feedback queue scheduling algorithm does not need to know the execution times of processes in advance, yet it can satisfy the needs of many kinds of processes, so it is currently regarded as a fairly good general-purpose process scheduling algorithm. In a system using this algorithm, scheduling works as follows:

1) Set up multiple ready queues and give each queue a different priority. The first queue has the highest priority, the second the next highest, and so on, with priority decreasing queue by queue. Each queue also gets a different time slice: the higher a queue's priority, the smaller the time slice given to its processes. For example, the time slice of the second queue is twice as long as that of the first queue, and in general the time slice of queue i+1 is twice as long as that of queue i.

2) When a new process enters memory, it is first placed at the tail of the first queue and waits to be scheduled according to FCFS. When its turn comes, if it finishes within its time slice, it can leave the system; if it has not finished when the time slice expires, the scheduler moves it to the tail of the second queue, where it again waits to be scheduled by FCFS. If it still has not finished after running one time slice in the second queue, it is moved to the third queue, and so on. After a long job (process) has been demoted from the first queue all the way down to the n-th queue, it is scheduled in the n-th queue by round robin.

3) The scheduler runs processes in the second queue only when the first queue is empty, and runs processes in the i-th queue only when queues 1 through i-1 are all empty. If, while the processor is serving a process from the i-th queue, a new process enters a higher-priority queue (any of queues 1 through i-1), the new process preempts the processor: the scheduler stops the process running from the i-th queue, puts it back in its queue, and assigns the processor to the newly arrived higher-priority process.
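
The following is only a simplified sketch of the demotion mechanism in rules 1) and 2); with no new arrivals the preemption in rule 3) never triggers, and queue counts, time slices, and process data are all made up:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class MlfqDemo {
    public static void main(String[] args) {
        int levels = 3;
        int baseQuantum = 10;                          // each lower-priority queue gets twice the previous slice
        List<Deque<int[]>> queues = new ArrayList<>();
        for (int i = 0; i < levels; i++) {
            queues.add(new ArrayDeque<>());
        }
        // new processes enter the highest-priority queue; each entry: {process id, remaining time}
        queues.get(0).add(new int[]{1, 35});
        queues.get(0).add(new int[]{2, 8});
        boolean busy = true;
        while (busy) {
            busy = false;
            for (int level = 0; level < levels; level++) {
                if (queues.get(level).isEmpty()) {
                    continue;                          // serve a lower queue only if all higher ones are empty
                }
                busy = true;
                int quantum = baseQuantum << level;    // queue i+1 has twice the time slice of queue i
                int[] p = queues.get(level).poll();
                int run = Math.min(quantum, p[1]);
                p[1] -= run;
                System.out.printf("P%d runs %d ms in queue %d, %d ms left%n", p[0], run, level, p[1]);
                if (p[1] > 0) {
                    int next = Math.min(level + 1, levels - 1);
                    queues.get(next).add(p);           // unfinished: demote; the bottom queue is plain round robin
                }
                break;                                 // rescan from the highest-priority queue each time
            }
        }
    }
}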

  • Priority scheduling algorithm

This algorithm is often used in batch processing systems as a job scheduling algorithm; it is also used as a process scheduling algorithm in many operating systems, and it can be used in real-time systems as well. When applied to job scheduling, the system selects the several highest-priority jobs from the backup queue and loads them into memory. When used for process scheduling, the algorithm assigns the processor to the highest-priority process in the ready queue. At this point, the algorithm can be further divided into the following two types.

  • Non-preemptive priority scheduling algorithm. In this way, once the system has assigned the processor to the highest-priority process in the ready queue, that process keeps running until it finishes, or until it gives up the processor because of some event; only then is the processor reassigned to another highest-priority process.
  • Preemptive priority scheduling algorithm. In this way, the system likewise assigns the processor to the highest-priority process and lets it run. During its execution, however, as soon as another process with a higher priority appears, the scheduler immediately stops the currently running process and reassigns the processor to the newly arrived higher-priority process. Thus, whenever a new ready process i appears in the system, its priority Pi is compared with the priority Pj of the running process j. If Pi ≤ Pj, process j simply continues; but if Pi > Pj, process j is stopped immediately, a process switch is performed, and process i is put into execution. Obviously, this preemptive priority scheduling algorithm can better satisfy urgent jobs, so it is often used in real-time systems with strict requirements, as well as in batch and time-sharing systems with high performance requirements.
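
A minimal sketch of the non-preemptive variant, using a priority queue and made-up priorities (in this sketch a larger number means a higher priority; the class name PrioritySchedulingDemo is illustrative):

import java.util.Comparator;
import java.util.PriorityQueue;

public class PrioritySchedulingDemo {
    record Proc(String name, int priority, int burst) {}

    public static void main(String[] args) {
        // the ready queue always hands out the highest-priority process first
        PriorityQueue<Proc> ready =
                new PriorityQueue<>(Comparator.comparingInt(Proc::priority).reversed());
        ready.add(new Proc("A", 1, 30));
        ready.add(new Proc("B", 5, 10));
        ready.add(new Proc("C", 3, 20));
        while (!ready.isEmpty()) {
            Proc p = ready.poll();   // non-preemptive: once picked, the process runs to completion
            System.out.printf("%s (priority %d) runs for %d ms%n", p.name(), p.priority(), p.burst());
        }
    }
}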

Default threads in Java

How many threads does a Java program have by default?

At least two. One thread runs the main method, which by itself is a single-threaded program. But this so-called single-threaded program is just one thread inside the JVM; the JVM itself is a multithreaded program. Besides the main thread there is also the GC thread (the garbage collector thread).
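
You can check which threads the JVM has already created before you start any of your own; modern JVMs usually show several additional helper threads besides main and the GC-related ones. A small sketch (the class name DefaultThreads is made up for illustration):

public class DefaultThreads {
    public static void main(String[] args) {
        // list every thread that is already alive when main starts
        Thread.getAllStackTraces().keySet().forEach(t ->
                System.out.println(t.getName() + " (daemon=" + t.isDaemon() + ")"));
    }
}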

Can Java really start threads by itself?

public class Test {
    public static void main(String[] args) {
        Thread thread = new Thread(); // create the thread object; nothing runs yet
        thread.start();               // start the thread; internally this calls the native start0()
    }
}

The above is example code for starting a thread. So can Java really start threads by itself? Take a look at the implementation of the start() method, shown below:
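
An abridged sketch following the JDK 8 implementation of Thread.start() (details differ slightly between JDK versions):

public synchronized void start() {
    // a thread may only be started once
    if (threadStatus != 0)
        throw new IllegalThreadStateException();

    group.add(this);

    boolean started = false;
    try {
        start0();                      // native method: the JVM asks the operating system to create the thread
        started = true;
    } finally {
        try {
            if (!started) {
                group.threadStartFailed(this);
            }
        } catch (Throwable ignore) {
            // do nothing; if start0 threw, that Throwable propagates up the call stack
        }
    }
}

private native void start0();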

The start() method enables multithreading by calling the native method start0(); under the hood, native (C/C++) code in the JVM asks the operating system to create the thread. Java cannot operate on the hardware directly.

Thread states

Calling getState() on a thread returns a value of the State enum, which is defined as follows:

public enum State {

    NEW,            // newly created, not yet started

    RUNNABLE,       // running, or ready to run, on the JVM

    BLOCKED,        // blocked, waiting to acquire a monitor lock

    WAITING,        // waiting indefinitely for another thread

    TIMED_WAITING,  // waiting with a specified timeout

    TERMINATED;     // execution has finished
}

That is, a Java thread can be in any of the six states above.
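
A small sketch (the class name StateDemo and the timings are made up) that makes some of these states observable; the exact states printed depend on timing:

public class StateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(200);                   // the worker sits in TIMED_WAITING while sleeping
            } catch (InterruptedException ignored) {
            }
        });
        System.out.println(t.getState());            // NEW: created but not started
        t.start();
        Thread.sleep(50);
        System.out.println(t.getState());            // most likely TIMED_WAITING at this point
        t.join();
        System.out.println(t.getState());            // TERMINATED: run() has finished
    }
}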

The difference between wait and sleep

Although both wait and sleep can put a thread into a waiting state, they are completely different in behavior and usage.

  • They are used in different places

wait() must be called inside synchronized code, such as a synchronized block or method (when using a Lock, the corresponding mechanism is its Condition); sleep() requires no synchronization and can be called anywhere.

  • They belong to different classes

wait() is defined in the Object class, while sleep operates on the current thread and is defined in the java.lang.Thread class.

  • They release locks differently

Calling wait() releases the lock currently held; sleep does not release any lock (the thread keeps its locks while sleeping).
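
A minimal sketch of the first and third points (the class name WaitVsSleep and the timings are made up):

public class WaitVsSleep {
    private static final Object LOCK = new Object();

    public static void main(String[] args) throws InterruptedException {
        // sleep(): no lock is needed, and any lock held would be kept while sleeping
        Thread.sleep(100);

        // wait(): must hold the object's monitor, and releases it while waiting
        synchronized (LOCK) {
            LOCK.wait(100);   // gives up LOCK until notified or the timeout expires
        }
    }
}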

Ways to implement multithreading in Java

1. Extend the Thread class

public class Test {
    public static void main(String[] args) {
        MyThread thread = new MyThread();
        thread.start();
    }
}
class MyThread extends Thread{

    @Override
    public void run() {
        System.out.println("new thread");
    }
}

By extending the Thread class and overriding its run method, we can create a thread.

  • First define a class that extends the Thread class and override its run method.
  • Then create an instance of the subclass and call its start method to start the thread.

2. Implement the Runnable interface

public class Test {
    public static void main(String[] args) {
        MyThread thread = new MyThread();
        new Thread(thread).start();
    }
}
class MyThread implements Runnable{

    @Override
    public void run() {
        System.out.println("new Thread");
    }
}

By implementing Runnable and its run method, you can also create a thread.

  • First define a class that implements the Runnable interface and implement its run method.
  • Then create an instance of the Runnable implementation and pass it as the target to the Thread constructor.
  • Finally call start to start the thread.

The code above is not ideal; to reduce coupling between the task class and the thread class, you can use a lambda expression to create the thread, as below (recommended):

public class Test {
    public static void main(String[] args) {
        MyThread myThread = new MyThread();
        Thread thread = new Thread(() -> { // create the thread with a lambda expression
            myThread.print();
        });
        thread.start();
    }
}

class MyThread {
    public void print() {
        System.out.println("I am a thread class");
    }
}

3. Implement the Callable interface

import java.util.Random;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

public class Test {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        FutureTask<Integer> task = new FutureTask<>(new MyThread());
        new Thread(task).start();
        Integer result = task.get(); // get the thread's result; blocks until it is available
        System.out.println(result);
    }
}

class MyThread implements Callable<Integer> {

    @Override
    public Integer call() throws Exception {
        return new Random().nextInt(100);
    }
}
  • First define an implementation class of Callable and implement its call method; call has a return value.
  • Then pass this Callable implementation into the FutureTask constructor.
  • Use the FutureTask as the target of the Thread class to create the Thread object.
  • Get the thread's execution result through FutureTask's get method.

4. Create threads with a thread pool

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Test {
    public static void main(String[] args) {
        ExecutorService executorService = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 10; i++) {
            executorService.execute(new MyThread());
        }
        executorService.shutdown();
    }
}

class MyThread implements Runnable {

    @Override
    public void run() {
        System.out.println(Thread.currentThread().getName() + " => Yes");
    }
}

Here the JDK's built-in Executors factory is used to create a thread pool object.

  • First, define a class that implements Runnable and override its run method.
  • Then create a thread pool with a fixed number of threads.
  • Finally, pass the task objects to the ExecutorService's execute method.

Concurrency and parallelism

  • Concurrency: a single processor handles multiple tasks by switching among them.
  • Parallelism: multiple processors, or a multi-core processor, handle multiple different tasks at the same time.

The former is simultaneous only in a logical sense; the latter is simultaneous in a physical sense.

  • Concurrency is the ability to deal with multiple activities at once; concurrent events do not have to occur at the same instant.
  • Parallelism means two or more events actually occur at the same time. Parallelism implies concurrency, but concurrency does not necessarily imply parallelism.


Copyright notice
This article was created by [Software up]. Please include a link to the original when reposting. Thanks.
https://yzsam.com/2021/05/20210525165026951z.html
