Java's Concurrent AQS Principle Explained


Disclosure: This article was reprinted from https://juejin.im/post/5c11d6376fb9a049e82b6253. A really great write-up; thanks to the original author, Lao Qian, for sharing.

Opening Java's Two Veins - The Building Blocks of Concurrent Data Structures

A question every advanced Java programmer needs to ask after some experience with multi-threaded development is: how are the locks built into Java actually implemented? The most commonly used and simplest lock is ReentrantLock. If it does not acquire the lock immediately, it blocks the current thread, waits for another thread to release the lock, and then retries. How does a thread block itself? How do other threads wake it up after releasing the lock? How does the current thread even know that it failed to acquire the lock? This article answers all of these questions from the ground up.

Thread blocking primitives

Java's thread blocking and waking is done through the park and unpark methods of the Unsafe class.

public class Unsafe {
  ...
  public native void park(boolean isAbsolute, long time);
  public native void unpark(Thread t);
  ...
}

Both of these methods are native methods, implemented in C/C++ inside the JVM. park means to pause: it puts the currently running thread Thread.currentThread() to sleep; unpark wakes the specified thread. Under the hood they are implemented with the synchronization primitives provided by the operating system. The implementation details live in C code, so we won't dig into them here. The two parameters of park control how long to sleep: the first, isAbsolute, indicates whether the second is an absolute deadline in milliseconds or a relative duration in nanoseconds, and a time of 0 means park indefinitely.

Once started, a thread keeps running (apart from pauses imposed by the OS scheduler) and suspends itself only when park is called. The mystery of how a lock can suspend a thread is resolved precisely because the lock calls park under the hood.

parkBlocker

The thread object Thread has an important field, parkBlocker, which records what the current thread is parked for. It is like a parking lot full of cars whose owners are all attending an auction, each waiting to win the item they want before driving away. The parkBlocker corresponds to that "auction": it is the coordinator of a group of conflicting threads, the object that decides which threads should sleep and which should wake.

class Thread {
  ...
  volatile Object parkBlocker;
  ...
}

This property will be set to null when the thread is woken up by unpark. Unsafe.park and unpark don't set the parkBlocker property for us; the tool class responsible for managing this property is LockSupport, which wraps these two methods of Unsafe in a simple way.

class LockSupport {
  ...
  public static void park(Object blocker) {
    Thread t = Thread.currentThread();
    setBlocker(t, blocker);
    U.park(false, 0L);
    setBlocker(t, null);  // clear the blocker after waking up
  }

  public static void unpark(Thread thread) {
    if (thread != null)
      U.unpark(thread);
  }
  ...
}

Java's lock data structures implement sleeping and waking precisely by calling LockSupport. The parkBlocker field of a thread object holds the "queue manager" we discuss next.
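To make park/unpark concrete, here is a minimal, self-contained demonstration (the class name ParkDemo is mine, not from the JDK): the main thread parks itself on a blocker object, and a helper thread unparks it shortly afterwards. A useful property of the permit mechanism: if unpark happens first, a later park returns immediately.

```java
import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    // Parks the calling thread until a helper thread unparks it.
    public static boolean runDemo() {
        final Thread parked = Thread.currentThread();
        Thread waker = new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            LockSupport.unpark(parked);       // hand the parked thread its permit
        });
        waker.start();
        LockSupport.park("demo-blocker");     // blocks here until unparked (or a spurious wakeup)
        try { waker.join(); } catch (InterruptedException ignored) {}
        return true;
    }
}
```

Note that because park may return spuriously, real locking code never relies on a single park returning; it re-checks its condition in a loop, as the article explains below.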

Queue Manager

When multiple threads compete for the same lock, there must be a queuing mechanism to string together the threads that did not get it. When the lock is released, the lock manager picks a suitable thread from the queue to take possession of it. Every lock has such a queue manager inside, and the manager maintains a queue of waiting threads. The queue manager inside ReentrantLock is AbstractQueuedSynchronizer, whose wait queue is a doubly linked list with the following node structure.

class AbstractQueuedSynchronizer {
   volatile Node head;   // The thread at the head of the queue will get the lock first
   volatile Node tail;   // Threads that failed to grab a lock are appended to the end of the queue
   volatile int state;  // Lock Count
}

class Node {
  Node prev;
  Node next;
   Thread thread;  // One thread per node
  
   // The following two special fields are explained later
   Node nextWaiter;  // Whether the request is for a shared or exclusive lock
   int waitStatus;   // Fine-grained state descriptor
}

When locking fails, the current thread appends itself to the end of the wait queue and then calls LockSupport.park to put itself to sleep. When another thread unlocks, it takes a node from the head of the list and calls LockSupport.unpark to wake it.

The AbstractQueuedSynchronizer class is abstract and is the parent of all lock queue managers. Every form of lock in the JDK builds its internal queue manager on it; it is the core building block of the Java concurrency world. For example, the internal queue managers of ReentrantLock, ReadWriteLock, CountDownLatch, Semaphore, and ThreadPoolExecutor are all subclasses of it. This abstract class exposes a number of methods for subclasses to implement, and each kind of lock customizes the manager accordingly. All of the concurrent data structures built into the JDK are protected by these locks; they are the foundation of the JDK's multithreaded high-rise.

The lock manager maintains nothing more than an ordinary queue in the form of a doubly linked list, a simple data structure, yet maintaining it correctly is quite complex, because every line of code must carefully account for multi-threaded concurrency.

The JDK's lock manager was implemented by Douglas S. Lea, who wrote the Java concurrency package almost single-handedly; in the world of algorithms, the more intricate something is, the better suited it is to being crafted by one person.

Douglas S. Lea is Professor of Computer Science and current Chair of the Department of Computer Science at SUNY Oswego, specializing in concurrent programming and the design of concurrent data structures. He is an executive committee member of the Java Community Process and chairs JSR 166, which adds concurrency utilities to the Java programming language.

Later we will abbreviate AbstractQueuedSynchronizer as AQS. I must warn readers that AQS is complex enough that getting frustrated on the way to understanding it is normal. There is no book on the market that makes AQS easy to understand; people who have digested it thoroughly are few and far between, and I do not count myself among them.

Fair vs. non-fair locks

A fair lock preserves the order in which locks are requested and granted: if a lock becomes free at some moment and a thread then tries to acquire it, a fair lock must first check whether any other threads are already queued, whereas a non-fair lock can simply jump in. Think of the queue at KFC for a burger.

Perhaps you ask: if a lock is in a free state, how can threads be queued for it? Suppose the thread holding the lock has just released it and woken the first thread in the wait queue. The woken thread has just returned from park and will next try to acquire the lock; between park returning and the lock being acquired, the lock is free. That window is very short, but other threads may try to acquire the lock during it.

Second, note that a thread that has called LockSupport.park does not necessarily stay asleep until another thread unparks it; it may wake up at any time for other reasons. Looking at the source code comments, park can return for four reasons:

  1. Other threads unpark the current thread
  2. Time to wake up (park has a time parameter)
  3. Other threads interrupt the current thread
  4. Other unknown causes ("spurious wakeups")

The documentation does not state exactly what causes a spurious wakeup, but it does state that park returning does not mean the lock is free; a woken thread that fails to reacquire the lock parks itself again. So the locking process must be written as a loop, and it may take several attempts before the lock is acquired.
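The "lock in a loop" idea can be sketched with a toy park-based lock. This is my own minimal illustration, not the JDK's implementation, and it supports only a single waiter: park may return for any of the four reasons above, so the loop simply re-checks the CAS and parks again.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

public class LoopLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);
    private volatile Thread waiter;   // single waiter, for simplicity

    public void lock() {
        while (!locked.compareAndSet(false, true)) {  // try to acquire
            waiter = Thread.currentThread();
            LockSupport.park(this);   // may wake spuriously; the loop re-checks
        }
    }

    public void unlock() {
        locked.set(false);
        Thread w = waiter;
        if (w != null) LockSupport.unpark(w);  // wake the waiter so it can retry
    }

    public boolean isLocked() {
        return locked.get();
    }
}
```

The real AQS replaces the single `waiter` field with the wait queue described in the next section.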

Non-fair locks serve the computer world more efficiently than fair locks, so Java uses non-fair locks by default. But in the real world non-fairness seems less efficient: at KFC, if people kept cutting in line, the scene would be chaos. The difference presumably comes down to the fact that in the computer world a thread cutting in line does not cause the other threads to complain.

public ReentrantLock() {
    this.sync = new NonfairSync();
}

public ReentrantLock(boolean fair) {
    this.sync = fair ? new FairSync() : new NonfairSync();
}

Shared vs. exclusive locks

ReentrantLock locks are exclusive locks, held by one thread, and all other threads must wait. The read lock inside ReadWriteLock is not an exclusive lock, it allows multiple threads to hold the read lock at the same time, it is a shared lock. Shared locks and exclusive locks are distinguished by the nextWaiter field inside the Node class.

class AQS {
  static final Node SHARED = new Node();
  static final Node EXCLUSIVE = null;

  boolean isShared() {
    return this.nextWaiter == SHARED;
  }
}

So why isn't this field named mode or type or simply shared? Because nextWaiter serves a different purpose in other scenarios; it is reused much like a field of a C union type, except that the Java language has no union type.

Conditional variables

The first question to ask about condition variables is why they are needed at all; aren't locks alone enough? Consider the following pseudo-code, which does something only when a certain condition is met.

void doSomething() {
  locker.lock();
  while (!condition_is_true()) {  // check whether we can proceed
    locker.unlock();              // if not, release the lock and rest a moment
    sleep(1);
    locker.lock();                // the lock is needed both to act and to re-check the condition
  }
  justdoit();                     // do the work
  locker.unlock();
}

When the condition is not met, the code loops and retries (other threads acquire the lock and modify the condition), but it needs to sleep between attempts, otherwise the CPU spikes from busy-waiting. The problem is that the sleep duration is hard to choose: too long an interval slows the whole thing down and can even miss the moment (the condition becomes true briefly and is immediately reset); too short an interval burns CPU. Condition variables solve this problem.

void doSomethingWithCondition() {
  cond = locker.newCondition();
  locker.lock();
  while(!condition_is_true()) {
    cond.await();
  }
  justdoit();
  locker.unlock();
}

The await() method blocks on the condition variable cond until another thread calls cond.signal() or cond.signalAll(). While blocked, await() automatically releases the lock held by the current thread. After being awakened, await() tries to reacquire the lock (possibly queueing again); only once the lock is acquired does await() return.

Threads blocked on a condition variable are strung together into a condition wait queue. When signalAll() is called, all of the blocked threads are awakened and start competing for the lock again. If signal() is called instead, only the thread at the head of the queue is awakened, which avoids the "thundering herd" problem.

The await() method must release the lock immediately, otherwise no other thread could modify the state of the critical section and the result of condition_is_true() would never change. This is why a condition variable must be created from a lock object: the condition variable needs a reference to the lock so it can release it while blocked and reacquire it when awakened by a signal. Releasing a shared lock in await() would not guarantee that other threads can modify the state of the critical section; only an exclusive lock protects modifications to the critical section. That is why the newCondition method of the ReadWriteLock.ReadLock class is defined as follows.

public Condition newCondition() {
    throw new UnsupportedOperationException();
}

With condition variables, the problem of tuning the sleep interval disappears. When the condition is met, calling signal() or signalAll() wakes the blocked thread immediately, with almost no delay.
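Here is a small, runnable version of the pattern using the real ReentrantLock and Condition API (the class CondDemo and its method names are mine): one thread awaits until a flag is set, another sets the flag and signals.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class CondDemo {
    private final ReentrantLock locker = new ReentrantLock();
    private final Condition cond = locker.newCondition();
    private boolean ready = false;

    public void waitUntilReady() throws InterruptedException {
        locker.lock();
        try {
            while (!ready)      // loop: await may wake spuriously
                cond.await();   // atomically releases the lock while blocked
        } finally {
            locker.unlock();
        }
    }

    public void makeReady() {
        locker.lock();
        try {
            ready = true;
            cond.signal();      // wake one waiter; signalAll() would wake all
        } finally {
            locker.unlock();
        }
    }
}
```

The while loop around await() is essential for exactly the reasons given above: waking up does not by itself guarantee the condition holds.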

Conditional Waiting Queue

When multiple threads call await() on the same condition variable, a condition wait queue is formed. Multiple condition variables can be created for the same lock, in which case multiple condition wait queues exist. This queue is very similar to the AQS queue structure, except that it is singly linked rather than doubly linked. The nodes are of the same class as the AQS wait-queue nodes, but nodes are linked through nextWaiter rather than prev and next.

class AQS {
  ...
  class ConditionObject {
      Node firstWaiter;   // Points to the first node in the queue
      Node lastWaiter;    // Points to the last node in the queue
  }
  
  class Node {
    static final int CONDITION = -2;
    static final int SIGNAL = -1;
     Thread thread;      // The waiting thread
     Node nextWaiter;    // Points to the next node in the condition queue
  
    Node prev;
    Node next;
    int waitStatus;  // waitStatus = CONDITION
  }
  ...
}


ConditionObject is an inner class of AQS; each instance holds a hidden this$0 pointer to the enclosing AQS object, so a ConditionObject can directly access all of the AQS object's fields and methods (including locking and unlocking). All nodes in a condition wait queue have waitStatus marked CONDITION, indicating that the node is waiting on a condition variable.

Queue transfer

When a condition variable's signal() method is called, the node at the head of the condition wait queue is taken off that queue and transferred into the AQS wait queue, where it lines up to try to reacquire the lock. During the transfer the node's CONDITION status is cleared, and its predecessor in the AQS queue is marked SIGNAL, indicating that it now has a successor to wake.

class AQS {
  ...
  boolean transferForSignal(Node node) {
    // Clear the CONDITION status
    if (!node.compareAndSetWaitStatus(Node.CONDITION, 0))
      return false;
    Node p = enq(node);  // append to the AQS wait queue; p is the predecessor node
    int ws = p.waitStatus;
    // Mark the predecessor SIGNAL so that it wakes this node when it releases
    if (ws > 0 || !p.compareAndSetWaitStatus(ws, Node.SIGNAL))
      LockSupport.unpark(node.thread);
    return true;
  }
  ...
}

The meaning of the nextWaiter field of the node being transferred has also changed; in the conditional queue it is a pointer to the next node, and in the AQS wait queue it is an indication of whether it is a shared or mutually exclusive lock.

(Figure: dependency structure of the common classes in the Java concurrency package)

ReentrantLock locking process

Below we analyze the locking process in detail to understand the lock's control logic in depth. It must be said that even though Doug Lea's code is written in a minimalist style like the code below, it is still quite difficult to read and understand.

class ReentrantLock {
    ...
    public void lock() {
      sync.acquire(1);
    }
    ...
}

class Sync extends AQS {
  ...
  public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
      acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
         selfInterrupt();
  }
  ...
}

The if statement in acquire has three parts. tryAcquire attempts to acquire the lock; if that fails, the thread must queue, so addWaiter is called to enqueue the current thread. Then acquireQueued is called, entering a loop of park, wake, and retry; if locking still fails, it parks again. The acquire method does not return until the lock is acquired.

The acquireQueued method will return true if it is interrupted by another thread during a loop retry to add a lock. At this point the thread needs to call the selfInterrupt() method to set an interrupted identifier bit for the current thread.

// Interrupting the current thread really just sets a flag bit
static void selfInterrupt() {
    Thread.currentThread().interrupt();
}

How does a thread know it has been interrupted by another thread? It can call Thread.interrupted() after park wakes it up, but this method reports the interrupt only once, because it clears the interrupt flag as soon as it is called. That is why selfInterrupt() is called inside acquire: it re-sets the interrupt flag so that higher-level logic can still learn of the interrupt through Thread.interrupted().
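The one-shot behavior of the flag is easy to demonstrate (the wrapper class InterruptFlagDemo is my own name for illustration):

```java
public class InterruptFlagDemo {
    // Sets the current thread's interrupt flag, then probes it twice.
    public static boolean[] probe() {
        Thread.currentThread().interrupt();    // set the flag
        boolean first = Thread.interrupted();  // true, and clears the flag
        boolean second = Thread.interrupted(); // false: the flag was already cleared
        return new boolean[] { first, second };
    }
}
```

This is exactly why acquire() must re-set the flag via selfInterrupt() before returning: the first check inside the loop consumed it.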

The acquireQueued and addWaiter methods are provided by the AQS class, and tryAcquire needs to be implemented by the subclass itself. Different locks will have different implementations. Let's take a look at the implementation of ReentrantLock's fair lock tryAcquire method
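Since the original code listing is not reproduced here, below is a simplified, self-contained sketch of the fair-lock tryAcquire logic, modeled on the JDK's ReentrantLock.FairSync. The field names follow the JDK, but the queue check hasQueuedPredecessors() is stubbed as a plain boolean and state is an AtomicInteger rather than the inherited AQS state, so this is an illustration rather than the real source.

```java
import java.util.concurrent.atomic.AtomicInteger;

class FairTryAcquireSketch {
    final AtomicInteger state = new AtomicInteger(0); // lock (reentry) count
    Thread exclusiveOwnerThread;                      // current lock holder
    boolean someoneQueuedAhead = false;               // stand-in for hasQueuedPredecessors()

    boolean tryAcquire(int acquires) {
        Thread current = Thread.currentThread();
        int c = state.get();
        if (c == 0) {
            // The lock is free: a fair lock first checks the queue, then competes via CAS
            if (!someoneQueuedAhead && state.compareAndSet(0, acquires)) {
                exclusiveOwnerThread = current;
                return true;
            }
        } else if (current == exclusiveOwnerThread) {
            // Reentry: the holder just bumps the count
            state.set(c + acquires);
            return true;
        }
        return false;
    }
}
```

The discussion below walks through exactly these branches: the reentry branch, the CAS contention for a free lock, and the queue check that makes the lock fair.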

There is an if-else branch; the else-if part handles lock reentry: the thread trying to acquire the lock is the one that already holds it, i.e. the same thread locking again, so it only needs to increase the count. The lock's state field records the lock count, incremented by one on each reentry. The AQS object has an exclusiveOwnerThread field that records the thread currently holding the exclusive lock.

if (c == 0) means the lock is free, the count being zero. At this point the lock must be contended for, since several threads may call tryAcquire at the same time. The contention uses the CAS operation compareAndSetState: the thread that successfully changes the lock count from 0 to 1 gets the lock and records itself in exclusiveOwnerThread.

The code also contains a hasQueuedPredecessors() check, which is very important: it checks whether any other threads are queued in the AQS wait queue. A fair lock must perform this check before acquiring; if anyone is queued, it may not jump the queue. A non-fair lock skips the check. The entire difference between the fair and non-fair implementations comes down to this one check.

Let's take a look at the implementation of the addWaiter method. The parameter mode indicates whether it is a shared or exclusive lock, and it corresponds to the Node.nextWaiter property.

addWaiter appends a new node to the end of the AQS wait queue. If the tail is null, the queue has not been initialized yet, so it must be initialized first; an AQS queue gets a redundant (dummy) head node at initialization, whose thread field is null.

Appending a new node to the tail must also account for multi-threaded concurrency, so the code again uses a CAS operation, compareAndSetTail, to compete for the tail pointer. Threads that lose the race continue into the next round of the for(;;) loop and retry the CAS until the new node is appended to the tail.
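The tail-append race can be sketched with a simplified node type and an AtomicReference standing in for the CAS on the tail field. This illustrates the retry loop, not the JDK's addWaiter itself (the real AQS also lazily initializes the dummy head, which is pre-created here).

```java
import java.util.concurrent.atomic.AtomicReference;

public class EnqueueSketch {
    public static class Node {
        final Thread thread;
        volatile Node prev, next;
        Node(Thread t) { thread = t; }
    }

    // Start with a dummy head node, as the AQS queue does after initialization.
    final AtomicReference<Node> tail = new AtomicReference<>(new Node(null));

    public Node addWaiter() {
        Node node = new Node(Thread.currentThread());
        for (;;) {                          // retry until our CAS on the tail wins
            Node t = tail.get();
            node.prev = t;
            if (tail.compareAndSet(t, node)) {
                t.next = node;              // link forward only after winning the race
                return node;
            }
        }
    }
}
```

Note the order of operations: prev is set before the CAS, but next only after, which is why AQS traversals that must be reliable walk the queue backwards via prev.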

Let's look at the implementation of the acquireQueued method, which repeats the loop: park, wake, retry the lock, and park again on failure.

Before attempting to acquire the lock, acquireQueued checks whether its node is the first in the AQS wait queue; if not, it parks again. This means that fair and non-fair locks alike behave fairly at this point, waiting their turn in the queue. In other words: once in line, always in line.

private final boolean parkAndCheckInterrupt() {
    LockSupport.park(this);
    return Thread.interrupted();
}

The thread checks whether it has been interrupted as soon as it wakes from park. But even if an interrupt occurred, it continues trying to acquire the lock, and if it fails it goes back to sleep, so the interrupt status is only reported after the lock is finally acquired. This means that interrupting a thread cannot pull it out of the lock-waiting state.

We should also note that lock acquisition can be cancelled via cancelAcquire(); to be precise, what is cancelled is the thread's waiting-for-lock state while it sits in the AQS wait queue. So what could throw an exception and cause the acquisition to be cancelled? The only possibility is the tryAcquire method, which is implemented by subclasses and whose behavior AQS does not control. When a subclass's tryAcquire throws an exception, the best AQS can do is cancel the acquisition: cancelAcquire removes the current node from the wait queue.

ReentrantLock unlocking process

The unlocking process is a bit simpler: bring the lock count down to zero, then wake the first valid node in the wait queue.

public final boolean release(int arg) {
    if (tryRelease(arg)) {
        Node h = head;
        if (h != null && h.waitStatus != 0)
            unparkSuccessor(h);
        return true;
    }
    return false;
}

protected final boolean tryRelease(int releases) {
    int c = getState() - releases;
    // "Whoever tied the bell must untie it": only the holder may release the lock
    if (Thread.currentThread() != getExclusiveOwnerThread())
        throw new IllegalMonitorStateException();
    boolean free = false;
    if (c == 0) {
        free = true;
        setExclusiveOwnerThread(null);
    }
    setState(c);
    return free;
}

For reentrant locks, we must check whether the count has dropped to zero to know whether the lock is fully released; only when it is fully released may the successor node be awakened. unparkSuccessor skips invalid (cancelled) nodes and finds the first valid node, calling unpark() to wake the corresponding thread.

read-write lock

Read-write locks are split into two lock objects, ReadLock and WriteLock, which share a single AQS instance. The AQS lock count state is divided in two: the high 16 bits hold the shared ReadLock count and the low 16 bits hold the exclusive WriteLock count. The exclusive half records the reentry count of the current write lock, while the shared half records the total hold count of all threads currently holding the read lock.
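The split can be sketched with two helper functions; the names sharedCount and exclusiveCount match the JDK's internal helpers, while the wrapper class StateSplit is mine.

```java
public class StateSplit {
    static final int SHARED_SHIFT = 16;
    static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1; // 0xFFFF

    // Read-lock hold count: the high 16 bits of the state word
    public static int sharedCount(int c)    { return c >>> SHARED_SHIFT; }

    // Write-lock reentry count: the low 16 bits of the state word
    public static int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; }
}
```

For example, a state of (3 << 16) | 2 means three read-lock holds and a write lock reentered twice, which is possible at the same time only because of lock downgrading, described below.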

Read-write locks must likewise consider fair and non-fair modes. The fair locking strategy for both the shared and the exclusive lock is the same as for ReentrantLock: check whether any other threads are currently queued, and dutifully join the end of the queue. The non-fair strategy is different and is biased toward giving write locks more opportunities. Even if threads with read or write requests are queued in the AQS queue, a write lock may contend directly; but if the head of the queue is a write-lock request, a read lock must yield the opportunity to the write lock and go to the end of the queue. After all, read-write locks suit workloads with many reads and few writes, so the occasional write request deserves higher priority.

Write locking and locking process

The write-lock acquisition logic of a read-write lock is essentially identical to that of ReentrantLock; the difference lies in the tryAcquire() method.

public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
      acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
         selfInterrupt();
}

protected final boolean tryAcquire(int acquires) {
    Thread current = Thread.currentThread();
    int c = getState();
    int w = exclusiveCount(c);
    if (c != 0) {
        // Nonzero count: fail unless this is a write-lock reentry by the holder
        if (w == 0 || current != getExclusiveOwnerThread())
            return false;
        if (w + exclusiveCount(acquires) > MAX_COUNT)
            throw new Error("Maximum lock count exceeded");
        setState(c + acquires);
        return true;
    }
    // Free lock: the fair/non-fair policy decides whether to queue, then CAS to contend
    if (writerShouldBlock() || !compareAndSetState(c, c + acquires))
        return false;
    setExclusiveOwnerThread(current);
    return true;
}

Write locks must also handle reentry: if the thread currently holding the AQS exclusive lock is the same thread acquiring it, this is a write-lock reentry, and reentry only needs to bump the count. When c != 0, the nonzero count could come from read locks or from a write lock; checking w == 0 determines whether the count is due solely to read locks.

If the count is zero, contention begins. Depending on fairness, writerShouldBlock() is called first to decide whether the thread must queue; if not, it competes with a CAS operation, and the thread that successfully moves the count from 0 to 1 exclusively owns the write lock.

Read locking and locking process

Read-lock acquisition is considerably more complex than write-lock acquisition. The overall flow is the same, but the details differ greatly; in particular, a read-lock count must be maintained for each thread, and that logic takes up quite a bit of code.

public final void acquireShared(int arg) {
    // If the attempt to acquire fails, enqueue, sleep, and retry in a loop
    if (tryAcquireShared(arg) < 0)
        doAcquireShared(arg);
}

If the current thread already holds the write lock, it may still acquire the read lock; this logic must be supported in order to allow lock downgrading. Lock downgrading means acquiring the read lock while holding the write lock, and then releasing the write lock. Compared with releasing the write lock first and then acquiring the read lock, this avoids queueing twice. Because of lock downgrading, the read and write halves of the lock count can be nonzero at the same time.

wlock.lock();
if(whatever) {
   // Downgrading
  rlock.lock();
  wlock.unlock();
  doRead();
  rlock.unlock();
} else {
   // No downgrading
  doWrite();
  wlock.unlock();
}

In order to do a lock count for each read lock thread, it sets a ThreadLocal variable.

private transient ThreadLocalHoldCounter readHolds;

static final class HoldCounter {
    int count;
    final long tid = LockSupport.getThreadId(Thread.currentThread());
}

static final class ThreadLocalHoldCounter extends ThreadLocal<HoldCounter> {
    public HoldCounter initialValue() {
        return new HoldCounter();
    }
}

But ThreadLocal access is not efficient enough, so a cache is layered on top: cachedHoldCounter stores the hold counter of the last thread to acquire a read lock. When thread contention is not particularly frequent, reading the cache directly is faster.

private transient HoldCounter cachedHoldCounter;

Doug Lea decided that even cachedHoldCounter wasn't efficient enough, so yet another cache layer was added: firstReader records the first thread to raise the read count from 0 to 1, together with its hold count. When there is no thread contention at all, reading these two fields directly is fastest.

private transient Thread firstReader;
private transient int firstReaderHoldCount;

final int getReadHoldCount() {
    // First check the read half of the global lock count
    if (getReadLockCount() == 0)
        return 0;

    // Then check firstReader
    Thread current = Thread.currentThread();
    if (firstReader == current)
        return firstReaderHoldCount;

    // Then check the cached counter of the most recent reader
    HoldCounter rh = cachedHoldCounter;
    if (rh != null && rh.tid == LockSupport.getThreadId(current))
        return rh.count;

    // Finally, fall back to the ThreadLocal
    int count = readHolds.get().count;
    if (count == 0) readHolds.remove();
    return count;
}

So we see the author went to great lengths to record this read-lock count. What is it for? It is the value through which a thread can know whether it currently holds the read lock, and how many times.

Read locking also involves a spin: so-called spinning means that after a failed attempt the thread simply loops and retries instead of sleeping, a bit like busy-wait retry.

static final int SHARED_UNIT = 1 << 16;  // the read count occupies the high 16 bits

final int fullTryAcquireShared(Thread current) {
  for (;;) {
    int c = getState();
    // If another thread holds the write lock, go back to sleep
    if (exclusiveCount(c) != 0) {
      if (getExclusiveOwnerThread() != current)
        return -1;
    }
    ...
    // Count limit exceeded
    if (sharedCount(c) == MAX_COUNT)
      throw new Error("Maximum lock count exceeded");
    if (compareAndSetState(c, c + SHARED_UNIT)) {
      // Got the read lock
      ...
      return 1;
    }
    ...
    // Loop and retry
  }
}

Acquiring a read lock modifies the lock's total read count with a CAS operation, and a successful CAS obtains the read lock. A failed CAS only means readers raced each other on the CAS; it does not mean the lock is held by someone else and unavailable. A few more attempts will certainly succeed, and that is what the spin is for. Releasing a read lock involves the same kind of CAS retry loop.
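The spin described above can be isolated into a tiny runnable sketch (my own illustration, with AtomicInteger standing in for the AQS state): each acquisition bumps the high 16 bits by one SHARED_UNIT, retrying until its CAS wins.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedCountSpin {
    static final int SHARED_UNIT = 1 << 16;   // one read-lock hold in the high 16 bits
    final AtomicInteger state = new AtomicInteger(0);

    public int acquireSharedCount() {
        for (;;) {                            // spin: a failed CAS just means a racing reader
            int c = state.get();
            if (state.compareAndSet(c, c + SHARED_UNIT))
                return c + SHARED_UNIT;       // new state after this acquisition
        }
    }
}
```

Releasing would be the mirror image: the same loop subtracting SHARED_UNIT, as the tryReleaseShared code below shows.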

protected final boolean tryReleaseShared(int unused) {
   ...
   for (;;) {
       int c = getState();
       int nextc = c - SHARED_UNIT;
       if (compareAndSetState(c, nextc)) {
         return nextc == 0;
       }
   }
   ...
}

