Sunday, December 20, 2015

what happened to the old sstables after apache cassandra compaction is done

In the last article we studied Apache Cassandra 1.0.8 compaction; in this article, we will focus on what happens after the sstables have been compacted. We start by reading the class CompactionTask, method execute(...), with the snippet below covering the compacted sstables.

1:      ...  
2:      ...  
3:      cfs.replaceCompactedSSTables(toCompact, sstables, compactionType);  
4:      // TODO: this doesn't belong here, it should be part of the reader to load when the tracker is wired up  
5:      for (Entry<SSTableReader, Map<DecoratedKey, Long>> ssTableReaderMapEntry : cachedKeyMap.entrySet())  
6:      {  
7:        SSTableReader key = ssTableReaderMapEntry.getKey();  
8:        for (Entry<DecoratedKey, Long> entry : ssTableReaderMapEntry.getValue().entrySet())  
9:          key.cacheKey(entry.getKey(), entry.getValue());  
10:      }  

After the sstables compaction process is done, we see that the new sstable is persisted and the old sstables are replaced. After that, the key cache is also updated. The sstable replacement is what we are interested in for this article, so let's trace down the execution calls made.

ColumnFamilyStore.java

1:    public void replaceCompactedSSTables(Collection<SSTableReader> sstables, Iterable<SSTableReader> replacements, OperationType compactionType)  
2:    {  
3:      data.replaceCompactedSSTables(sstables, replacements, compactionType);  
4:    }  

DataTracker.java 

1:    public void replaceCompactedSSTables(Collection<SSTableReader> sstables, Iterable<SSTableReader> replacements, OperationType compactionType)  
2:    {  
3:      replace(sstables, replacements);  
4:      notifySSTablesChanged(sstables, replacements, compactionType);  
5:    }  

DataTracker.java

1:    private void replace(Collection<SSTableReader> oldSSTables, Iterable<SSTableReader> replacements)  
2:    {  
3:      View currentView, newView;  
4:      do  
5:      {  
6:        currentView = view.get();  
7:        newView = currentView.replace(oldSSTables, replacements);  
8:      }  
9:      while (!view.compareAndSet(currentView, newView));  
10:    
11:      addNewSSTablesSize(replacements);  
12:      removeOldSSTablesSize(oldSSTables);  
13:    
14:      cfstore.updateCacheSizes();  
15:    }  

DataTracker.java

1:    public void notifySSTablesChanged(Iterable<SSTableReader> removed, Iterable<SSTableReader> added, OperationType compactionType)  
2:    {  
3:      for (INotificationConsumer subscriber : subscribers)  
4:      {  
5:        INotification notification = new SSTableListChangedNotification(added, removed, compactionType);  
6:        subscriber.handleNotification(notification, this);  
7:      }  
8:    }  

At this point, replacing compacted sstables consists of the actual replacement and a notification that the sstables changed. First we will take a look at the replacement process.

DataTracker.View.java

1:      public View replace(Collection<SSTableReader> oldSSTables, Iterable<SSTableReader> replacements)  
2:      {  
3:        List<SSTableReader> newSSTables = newSSTables(oldSSTables, replacements);  
4:        IntervalTree intervalTree = buildIntervalTree(newSSTables);  
5:        return new View(memtable, memtablesPendingFlush, Collections.unmodifiableList(newSSTables), compacting, intervalTree);  
6:      }  

DataTracker.View.java

1:      private List<SSTableReader> newSSTables(Collection<SSTableReader> oldSSTables, Iterable<SSTableReader> replacements)  
2:      {  
3:        ImmutableSet<SSTableReader> oldSet = ImmutableSet.copyOf(oldSSTables);  
4:        int newSSTablesSize = sstables.size() - oldSSTables.size() + Iterables.size(replacements);  
5:        assert newSSTablesSize >= Iterables.size(replacements) : String.format("Incoherent new size %d replacing %s by %s in %s", newSSTablesSize, oldSSTables, replacements, this);  
6:        List<SSTableReader> newSSTables = new ArrayList<SSTableReader>(newSSTablesSize);  
7:        for (SSTableReader sstable : sstables)  
8:        {  
9:          if (!oldSet.contains(sstable))  
10:            newSSTables.add(sstable);  
11:        }  
12:        Iterables.addAll(newSSTables, replacements);  
13:        assert newSSTables.size() == newSSTablesSize : String.format("Expecting new size of %d, got %d while replacing %s by %s in %s", newSSTablesSize, newSSTables.size(), oldSSTables, replacements, this);  
14:        return newSSTables;  
15:      }  

DataTracker.View.java

1:      private IntervalTree buildIntervalTree(List<SSTableReader> sstables)  
2:      {  
3:        List<Interval> intervals = new ArrayList<Interval>(sstables.size());  
4:        for (SSTableReader sstable : sstables)  
5:          intervals.add(new Interval<SSTableReader>(sstable.first, sstable.last, sstable));  
6:        return new IntervalTree<SSTableReader>(intervals);  
7:      }  

DataTracker.java

1:    private void addNewSSTablesSize(Iterable<SSTableReader> newSSTables)  
2:    {  
3:      for (SSTableReader sstable : newSSTables)  
4:      {  
5:        assert sstable.getKeySamples() != null;  
6:        if (logger.isDebugEnabled())  
7:          logger.debug(String.format("adding %s to list of files tracked for %s.%s",  
8:                sstable.descriptor, cfstore.table.name, cfstore.getColumnFamilyName()));  
9:        long size = sstable.bytesOnDisk();  
10:        liveSize.addAndGet(size);  
11:        totalSize.addAndGet(size);  
12:        sstable.setTrackedBy(this);  
13:      }  
14:    }  

DataTracker.java  

1:    private void removeOldSSTablesSize(Iterable<SSTableReader> oldSSTables)  
2:    {  
3:      for (SSTableReader sstable : oldSSTables)  
4:      {  
5:        if (logger.isDebugEnabled())  
6:          logger.debug(String.format("removing %s from list of files tracked for %s.%s",  
7:                sstable.descriptor, cfstore.table.name, cfstore.getColumnFamilyName()));  
8:        liveSize.addAndGet(-sstable.bytesOnDisk());  
9:        sstable.markCompacted();  
10:        sstable.releaseReference();  
11:      }  
12:    }    

SSTableReader.java

1:    /**  
2:     * Mark the sstable as compacted.  
3:     * When calling this function, the caller must ensure that the SSTableReader is not referenced anywhere  
4:     * except for threads holding a reference.  
5:     */  
6:    public void markCompacted()  
7:    {  
8:      if (logger.isDebugEnabled())  
9:        logger.debug("Marking " + getFilename() + " compacted");  
10:      try  
11:      {  
12:        if (!new File(descriptor.filenameFor(Component.COMPACTED_MARKER)).createNewFile())  
13:          throw new IOException("Unable to create compaction marker");  
14:      }  
15:      catch (IOException e)  
16:      {  
17:        throw new IOError(e);  
18:      }  
19:    
20:      boolean alreadyCompacted = isCompacted.getAndSet(true);  
21:      assert !alreadyCompacted : this + " was already marked compacted";  
22:    }  

SSTableReader.java

1:    public void releaseReference()  
2:    {  
3:      if (references.decrementAndGet() == 0 && isCompacted.get())  
4:      {  
5:        // Force finalizing mmapping if necessary  
6:        ifile.cleanup();  
7:        dfile.cleanup();  
8:    
9:        deletingTask.schedule();  
10:      }  
11:      assert references.get() >= 0 : "Reference counter " + references.get() + " for " + dfile.path;  
12:    }   

SSTableDeletingTask.java

1:    public void schedule()  
2:    {  
3:      StorageService.tasks.submit(this);  
4:    }  

ColumnFamilyStore.java  

1:    /**  
2:     * Resizes the key and row caches based on the current key estimate.  
3:     */  
4:    public synchronized void updateCacheSizes()  
5:    {  
6:      long keys = estimateKeys();  
7:      keyCache.updateCacheSize(keys);  
8:      rowCache.updateCacheSize(keys);  
9:    }  

As shown above, there is quite a lot going on even in the replacement process! We can summarize, based on the code trace above:

  • an interval tree is built using the replacement sstables, and a new view containing them is returned.
  • the process above is repeated in a compare-and-set loop until the view swap succeeds (see the sketch after this list).
  • addNewSSTablesSize makes the replacement sstables active by adding their sizes to the live and total counters.
  • finally it is time to remove the old sstables.
  • the old sstables are marked as compacted and then removed once they are no longer referenced by any thread.
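
The compare-and-set loop in DataTracker.replace() is a common lock-free pattern: build a new immutable view from the current one and retry if another thread swapped the view in the meantime. A minimal, self-contained sketch of the same idea (hypothetical names, not Cassandra code) looks like this:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicReference;

    public class ViewSwapSketch
    {
        // the current immutable "view"; here just a list of sstable names
        private static final AtomicReference<List<String>> view =
                new AtomicReference<List<String>>(Collections.<String>emptyList());

        // swap old entries for replacements, retrying until no concurrent writer interferes
        static void replace(List<String> oldOnes, List<String> replacements)
        {
            List<String> current, updated;
            do
            {
                current = view.get();
                List<String> next = new ArrayList<String>(current);
                next.removeAll(oldOnes);
                next.addAll(replacements);
                updated = Collections.unmodifiableList(next);
            }
            while (!view.compareAndSet(current, updated));
        }

        public static void main(String[] args)
        {
            replace(Collections.<String>emptyList(), Arrays.asList("sstable-1", "sstable-2"));
            replace(Arrays.asList("sstable-1"), Arrays.asList("sstable-3"));
            System.out.println(view.get()); // [sstable-2, sstable-3]
        }
    }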


Moving on to the method notifySSTablesChanged(),

DataTracker.java

1:    public void notifySSTablesChanged(Iterable<SSTableReader> removed, Iterable<SSTableReader> added, OperationType compactionType)  
2:    {  
3:      for (INotificationConsumer subscriber : subscribers)  
4:      {  
5:        INotification notification = new SSTableListChangedNotification(added, removed, compactionType);  
6:        subscriber.handleNotification(notification, this);  
7:      }  
8:    }  

Each of the subscribers is notified of the sstable list change, and classes that implement the interface are expected to handle the change.
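
To give a feel for the subscriber side, a rough sketch of a consumer is shown below. The class name is an assumption and this is not actual Cassandra source; only the handleNotification signature comes from the snippet above.

    // hypothetical subscriber sketch
    public class CompactionLogSubscriber implements INotificationConsumer
    {
        public void handleNotification(INotification notification, Object sender)
        {
            // only react to sstable list changes caused by compaction
            if (notification instanceof SSTableListChangedNotification)
                System.out.println("sstable list changed by " + sender + ": " + notification);
        }
    }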

Saturday, December 19, 2015

Learning into samsung ssd 850 PRO

In the last article, I talked about an ssd vs hdd comparison, which can be found here. In this article, I will spend some time learning about the samsung ssd 850 pro, a solid state drive. At this moment, this drive produces a staggering 100,000 read IOPS and 90,000 write IOPS[19], and it supports a SATA interface of 6 gigabits per second.

Additional characteristics of this model.

  • 4 KB aligned random I/O at QD32
  • 10,000 read IOPS, 36,000 write IOPS at QD1
  • 550 MB/s sequential read, 520 MB/s sequential write on 256 GB and larger models
  • 550 MB/s sequential read, 470 MB/s sequential write on 128 GB model[19]

The capacities for this model are 128GB, 256GB, 512GB, 1024GB and 2048GB. It is a 2.5 inch form factor with dimensions of 100 x 69.85 x 6.8 mm (W x H x D). The drive is built around a Samsung 3-core MEX controller with DRAM as the caching memory. The cache size with respect to the ssd capacity is 256 MB (128 GB), 512 MB (256 GB & 512 GB) or 1 GB (1 TB) of LPDDR2, or 2 GB of LPDDR3 for the 2 TB model. From the official site, the following performance figures are outlined.

 Sequential Read         Max. 550 MB/s  
 Sequential Write        Max. 520 MB/s (256 GB/512 GB/1 TB/2 TB)  
                         Max. 470 MB/s (128 GB)  
 4KB Random Read (QD1)   Max. 10,000 IOPS  
 4KB Random Write (QD1)  Max. 36,000 IOPS  
 4KB Random Read (QD32)  Max. 100,000 IOPS  
 4KB Random Write (QD32) Max. 90,000 IOPS  


With this blazing speed, as a software engineer, I really think it helps in terms of application launching, like eclipse, and especially code grepping or file finding. But I think this speed is definitely an ideal match for software like cassandra and elasticsearch, where disk I/O speed is much sought after. With the numbers shown above and my previous hdparm comparison, I think switching from a normal spinning hdd to this ssd will make a very big difference.

Full support for TRIM, garbage collection, and SMART technology is guaranteed. With a staggering reliability of 2 million hours mean time between failures, 150 terabytes written endurance and a ten year warranty period, I must say I am impressed and convinced. The power consumption is also very low: during active read and write, a maximum of 3.5W and 3.0W respectively. If you are using this in a laptop, where power is always a matter of concern, an ssd is a wise decision.

With 24 awards under its belt, I think even the skeptical crowd can now safely start to use ssd. If you have the budget, rather than just upping the cpu and ram, a switch from hdd to ssd should make a very big difference too.


I leave this article with samsung 850 pro ssd videos




Friday, December 18, 2015

Learn java util concurrent part5

This is the last part of the series on learning the java util concurrent package. If you have not read the series before, you can find part1, part2, part3 and part4 at the respective links. In this part, we will study the remaining 19 classes in the java.util.concurrent package.

ForkJoinTask<V>

  • Abstract base class for tasks that run within a ForkJoinPool.

1:        ForkJoinTask<Integer> fjt = ForkJoinTask.adapt(new Summer(44,55));  
2:        fjt.invoke();  
3:        Integer sum = fjt.get();  
4:        System.out.println(sum);  
5:        System.out.println(fjt.isDone());  
6:        fjt.join();  

Notice that a new callable was adapted into a ForkJoinTask. Execution commences by invoking the task, and you can check whether the task is complete using the isDone method.

ForkJoinWorkerThread

  • A thread managed by a ForkJoinPool, which executes ForkJoinTasks.

1:        ForkJoinWorkerThreadFactory customFactory = new ForkJoinWorkerThreadFactory() {  
2:           @Override  
3:           public ForkJoinWorkerThread newThread(ForkJoinPool pool) {  
4:              // ForkJoinWorkerThread's constructor is protected, so return an anonymous subclass bound to the pool  
5:              return new ForkJoinWorkerThread(pool) {};  
6:           }  
7:        };  

As explained in the ForkJoinWorkerThread javadoc, a ForkJoinWorkerThreadFactory creates the worker threads that a ForkJoinPool uses to execute its tasks.

FutureTask<V>

  • A cancellable asynchronous computation.
  • A FutureTask can be used to wrap a Callable or Runnable object. Because FutureTask implements Runnable, a FutureTask can be submitted to an Executor for execution.

1:        FutureTask<Integer> ft = new FutureTask<Integer>(new Summer(66,77));  
2:        ft.run();  
3:        System.out.println(ft.get());  

If you have a long running task, you can use FutureTask so that the task can be cancelled. Next, we will go into another three queues.

LinkedBlockingDeque<E>

  • An optionally-bounded blocking deque based on linked nodes.
  • The capacity, if unspecified, is equal to Integer.MAX_VALUE.
  • Most operations run in constant time (ignoring time spent blocking). Exceptions include remove, removeFirstOccurrence, removeLastOccurrence, contains, iterator.remove(), and the bulk operations, all of which run in linear time.


LinkedBlockingQueue<E>

  • An optionally-bounded blocking queue based on linked nodes.
  • This queue orders elements FIFO (first-in-first-out).
  • The head of the queue is that element that has been on the queue the longest time.
  • The tail of the queue is that element that has been on the queue the shortest time.
  • New elements are inserted at the tail of the queue, and the queue retrieval operations obtain elements at the head of the queue.
  • The capacity, if unspecified, is equal to Integer.MAX_VALUE.


LinkedTransferQueue<E>

  • An unbounded TransferQueue based on linked nodes.
  • This queue orders elements FIFO (first-in-first-out) with respect to any given producer.
  • The head of the queue is that element that has been on the queue the longest time for some producer.
  • the size method is NOT a constant-time operation.

1:        LinkedBlockingDeque<Integer> lbd = new LinkedBlockingDeque<Integer>();  
2:        lbd.add(1);  
3:        lbd.add(2);  
4:        lbd.add(3);  
5:    
6:        LinkedBlockingQueue<Integer> lbq = new LinkedBlockingQueue<Integer>();  
7:        lbq.add(4);  
8:        lbq.add(5);  
9:        lbq.add(6);  
10:          
11:        LinkedTransferQueue<Integer> ltq = new LinkedTransferQueue<Integer>();  
12:        ltq.add(7);  
13:        ltq.add(8);  
14:        ltq.add(9);  

Like the queues we talked about in part4, these queues only show their benefits under multithreaded code.

Phaser

  • A reusable synchronization barrier, similar in functionality to CyclicBarrier and CountDownLatch but supporting more flexible usage.
  • This implementation restricts the maximum number of parties to 65535.


1:        Phaser phaser = new Phaser();  
2:        phaser.register();  
3:        System.out.println("current phase number : " + phaser.getPhase());  
4:        testPhaser(phaser, 2000);  
5:        testPhaser(phaser, 4000);  
6:        testPhaser(phaser, 6000);  
7:          
8:        phaser.arriveAndDeregister();  
9:        Thread.sleep(10000);  
10:        System.out.println("current phase number : " + phaser.getPhase());  
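
The helper testPhaser used above is not shown in the snippet. A minimal sketch, assuming it registers one more party and starts a thread that waits at the barrier before doing some timed work, could be:

    static void testPhaser(final Phaser phaser, final long sleepMillis) {
       phaser.register();
       new Thread(() -> {
          try {
             phaser.arriveAndAwaitAdvance();   // wait for every registered party to arrive
             Thread.sleep(sleepMillis);        // simulate some work
             System.out.println(Thread.currentThread().getName() + " done after " + sleepMillis + " ms");
          } catch (InterruptedException e) {
             Thread.currentThread().interrupt();
          } finally {
             phaser.arriveAndDeregister();     // leave the phaser when finished
          }
       }).start();
    }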

PriorityBlockingQueue<E>

  • An unbounded blocking queue that uses the same ordering rules as class PriorityQueue and supplies blocking retrieval operations.
  • This class does not permit null elements.
  • The Iterator provided in method iterator() is not guaranteed to traverse the elements of the PriorityBlockingQueue in any particular order


1:        PriorityBlockingQueue<Integer> pbq = new PriorityBlockingQueue<Integer>();  
2:        pbq.add(10);  
3:        pbq.add(11);  
4:        pbq.add(12);  

RecursiveAction

  • A recursive resultless ForkJoinTask.


1:        long[] array = {1,3,2,5,4,9,5,7,8};  
2:        RecursiveAction ar = new SortTask(array);  
3:        ar.invoke();  
4:        System.out.println("array " + array[0]);  
5:        System.out.println("array " + array[1]);  
6:        System.out.println("array " + array[2]);  
7:        System.out.println("array " + array[3]);  
8:        System.out.println("array " + array[4]);  
9:        System.out.println("array " + array[5]);  
10:        System.out.println("array " + array[6]);  
11:        System.out.println("array " + array[7]);  
12:        System.out.println("array " + array[8]);  
13:    
14:     static class SortTask extends RecursiveAction {  
15:          
16:        final long[] array;  
17:        final int lo, hi;  
18:          
19:        SortTask(long[] array, int lo, int hi) {  
20:           this.array = array;  
21:           this.lo = lo;  
22:           this.hi = hi;  
23:        }  
24:          
25:        SortTask(long[] array) {  
26:           this(array, 0, array.length);  
27:        }  
28:    
29:        @Override  
30:        protected void compute() {  
31:           if (hi - lo < THRESHOLD)  
32:              sortSequentially(lo,hi);  
33:           else {  
34:              int mid = (lo + hi) >>> 1;  
35:              invokeAll(new SortTask(array, lo, mid), new SortTask(array, mid, hi));  
36:              merge(lo, mid, hi);  
37:           }  
38:        }  
39:          
40:        // implementation details follow:  
41:        static final int THRESHOLD = 1000;  
42:          
43:        void sortSequentially(int lo, int hi) {  
44:           Arrays.sort(array, lo, hi);  
45:        }  
46:          
47:        void merge(int lo, int mid, int hi) {  
48:           long[] buf = Arrays.copyOfRange(array, lo, mid);  
49:           for (int i = 0, j = lo, k = mid; i < buf.length; j++)  
50:              array[j] = (k == hi || buf[i] < array[k]) ? buf[i++] : array[k++];  
51:        }  
52:           
53:     }  

Recursive sorting of the array is commenced by calling invoke on the object ar.

RecursiveTask<V>

  • A recursive result-bearing ForkJoinTask.


1:        RecursiveTask<Integer> fibo = new Fibonacci(10);  
2:        fibo.invoke();  
3:        System.out.println(fibo.get());  
4:          
5:     static class Fibonacci extends RecursiveTask<Integer> {  
6:          
7:        final int n;  
8:    
9:        Fibonacci(int n) {  
10:           this.n = n;  
11:        }  
12:    
13:        protected Integer compute() {  
14:           if (n <= 1)  
15:              return n;  
16:           Fibonacci f1 = new Fibonacci(n - 1);  
17:           f1.fork();  
18:           Fibonacci f2 = new Fibonacci(n - 2);  
19:           return f2.compute() + f1.join();  
20:        }  
21:     }  

ScheduledThreadPoolExecutor

  • A ThreadPoolExecutor that can additionally schedule commands to run after a given delay, or to execute periodically.


1:        ScheduledThreadPoolExecutor stpe = new ScheduledThreadPoolExecutor(10);  
2:        Future<Integer> total = stpe.submit(new Summer(88,99));  
3:        System.out.println(total.get());  
4:        stpe.shutdown();  
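
The snippet above only uses submit; to see the scheduling part, here is a small sketch of delayed and periodic execution (the delays are just arbitrary values for illustration):

    ScheduledThreadPoolExecutor stpe2 = new ScheduledThreadPoolExecutor(1);
    // run once after a 2 second delay
    stpe2.schedule(() -> System.out.println("ran after delay"), 2, TimeUnit.SECONDS);
    // run every second after an initial 1 second delay
    ScheduledFuture<?> periodic =
          stpe2.scheduleAtFixedRate(() -> System.out.println("tick"), 1, 1, TimeUnit.SECONDS);
    Thread.sleep(5000);      // let a few ticks happen
    periodic.cancel(false);
    stpe2.shutdown();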

Semaphore

  • A counting semaphore.
  • Semaphores are often used to restrict the number of threads than can access some (physical or logical) resource.


1:        ConnectionLimiter cl = new ConnectionLimiter(3);  
2:        URLConnection conn = cl.acquire(new URL("http://www.google.com"));  
3:        conn = cl.acquire(new URL("http://www.yahoo.com"));  
4:        conn = cl.acquire(new URL("http://www.youtube.com"));  
5:        cl.release(conn);  
6:          
7:     static class ConnectionLimiter {  
8:        private final Semaphore semaphore;  
9:          
10:        private ConnectionLimiter(int max) {  
11:           semaphore = new Semaphore(max);  
12:        }  
13:          
14:        public URLConnection acquire(URL url) throws IOException, InterruptedException {  
15:           semaphore.acquire();  
16:           return url.openConnection();  
17:        }  
18:          
19:        public void release(URLConnection conn) {  
20:           try {  
21:              // blahblah  
22:           } finally {  
23:              semaphore.release();  
24:           }  
25:        }  
26:     }  

SynchronousQueue<E>

  • A blocking queue in which each insert operation must wait for a corresponding remove operation by another thread, and vice versa.
  • This queue does not permit null elements.

1:        final SynchronousQueue<String> queue = new SynchronousQueue<String>();  
2:        Thread a = new Thread(new QueueProducer(queue));  
3:        a.start();  
4:        Thread b = new Thread(new QueueConsumer(queue));  
5:        b.start();  
6:          
7:        Thread.sleep(1000);  
8:          
9:        a.interrupt();  
10:        b.interrupt();  
11:          
12:          
13:     static class QueueProducer implements Runnable {  
14:          
15:        private SynchronousQueue<String> queue;  
16:    
17:        public QueueProducer(SynchronousQueue<String> queue) {  
18:           this.queue = queue;  
19:        }  
20:    
21:        @Override  
22:        public void run() {  
23:           String event = "SYNCHRONOUS_EVENT";  
24:           String another_event = "ANOTHER_EVENT";  
25:             
26:           try {  
27:              queue.put(event);  
28:              System.out.printf("[%s] published event : %s %n", Thread.currentThread().getName(), event);  
29:                
30:              queue.put(another_event);  
31:              System.out.printf("[%s] published event : %s %n", Thread.currentThread().getName(), another_event);  
32:           } catch (InterruptedException e) {  
33:           }  
34:             
35:        }  
36:          
37:     }  
38:       
39:     static class QueueConsumer implements Runnable {  
40:          
41:        private SynchronousQueue<String> queue;  
42:    
43:        public QueueConsumer(SynchronousQueue<String> queue) {  
44:           this.queue = queue;  
45:        }  
46:    
47:        @Override  
48:        public void run() {  
49:           try {  
50:              String event = queue.take();  
51:              // thread will block here  
52:              System.out.printf("[%s] consumed event : %s %n", Thread.currentThread().getName(), event);  
53:           } catch (InterruptedException e) {  
54:           }  
55:             
56:        }  
57:          
58:     }  

ThreadLocalRandom

  • A random number generator isolated to the current thread.


1:        ThreadLocalRandom tlr = ThreadLocalRandom.current();  
2:        System.out.println(tlr.nextInt());  

ThreadPoolExecutor

  • An ExecutorService that executes each submitted task using one of possibly several pooled threads, normally configured using Executors factory methods.


1:        BlockingQueue<Runnable> blockingQueue = new ArrayBlockingQueue<Runnable>(4);  
2:        ThreadPoolExecutor tpe = new ThreadPoolExecutor(1, 1, 1, TimeUnit.SECONDS, blockingQueue);  

The last four classes are rejected-task policies of the previous ThreadPoolExecutor, each of which executes under a specific condition.

ThreadPoolExecutor.AbortPolicy

  • A handler for rejected tasks that throws a RejectedExecutionException.


ThreadPoolExecutor.CallerRunsPolicy

  • A handler for rejected tasks that runs the rejected task directly in the calling thread of the execute method, unless the executor has been shut down, in which case the task is discarded.


ThreadPoolExecutor.DiscardOldestPolicy

  • A handler for rejected tasks that discards the oldest unhandled request and then retries execute, unless the executor is shut down, in which case the task is discarded.


ThreadPoolExecutor.DiscardPolicy

  • A handler for rejected tasks that silently discards the rejected task.


1:        ThreadPoolExecutor.AbortPolicy ap = new ThreadPoolExecutor.AbortPolicy();  
2:        try {  
3:        ap.rejectedExecution(() -> System.out.println("abort"), tpe);  
4:        } catch (Exception e) {  
5:             
6:        }  
7:          
8:        ThreadPoolExecutor.CallerRunsPolicy crp = new ThreadPoolExecutor.CallerRunsPolicy();  
9:        try {  
10:        crp.rejectedExecution(() -> System.out.println("run"), tpe);  
11:        } catch (Exception e) {  
12:             
13:        }  
14:          
15:        ThreadPoolExecutor.DiscardOldestPolicy dop = new ThreadPoolExecutor.DiscardOldestPolicy();  
16:        try {  
17:        dop.rejectedExecution(() -> System.out.println("discard oldest"), tpe);  
18:        } catch (Exception e) {  
19:             
20:        }  
21:          
22:        ThreadPoolExecutor.DiscardPolicy dp = new ThreadPoolExecutor.DiscardPolicy();  
23:        try {  
24:        dp.rejectedExecution(() -> System.out.println("discard"), tpe);  
25:        } catch (Exception e) {  
26:             
27:        }  
28:    

That's it for this long learning series on java util concurrent.

Sunday, December 6, 2015

Learn java util concurrent part4

This is yet another part of the series on learning the java util concurrent package. If you have not read the series before, you can find part1, part2 and part3 at the respective links. In this part, we will study the classes in the java.util.concurrent package.

AbstractExecutorService
Provides default implementations of ExecutorService execution methods. This class implements the submit, invokeAny and invokeAll methods using a RunnableFuture returned by newTaskFor, which defaults to the FutureTask class provided in this package.

1:        AbstractExecutorService aes = null;  
2:          
3:        aes = new ForkJoinPool();  
4:        System.out.println(aes.isShutdown());  
5:        Future<Integer> total = aes.submit(new Summer(33, 44));  
6:        System.out.println(total.get());  
7:        aes.shutdown();  
8:          
9:          
10:        BlockingQueue<Runnable> blockingQueue = new ArrayBlockingQueue<Runnable>(4);  
11:        aes = new ThreadPoolExecutor(1, 1, 1, TimeUnit.SECONDS, blockingQueue);  
12:        System.out.println(aes.isShutdown());  
13:        total = aes.submit(new Summer(33, 44));  
14:        System.out.println(total.get());  
15:        aes.shutdown();  

In the example above, we see two concrete implementations of AbstractExecutorService, ForkJoinPool and ThreadPoolExecutor, both invoking the submit method from the abstract class AbstractExecutorService.

ArrayBlockingQueue
A bounded blocking queue backed by an array. This queue orders elements FIFO (first-in-first-out). The head of the queue is that element that has been on the queue the longest time. The tail of the queue is that element that has been on the queue the shortest time. New elements are inserted at the tail of the queue, and the queue retrieval operations obtain elements at the head of the queue.

1:        ArrayBlockingQueue<Integer> abq = new ArrayBlockingQueue<Integer>(5);  
2:        abq.add(1);  
3:        abq.offer(2);  
4:        System.out.println(abq.size());  

A simple queue implementation; just like any other collection in the java collections framework, you can add to, remove from, or drain the collection.

CompletableFuture
A Future that may be explicitly completed (setting its value and status), and may be used as a CompletionStage, supporting dependent functions and actions that trigger upon its completion.

1:        CompletableFuture<Integer> cf = new CompletableFuture<Integer>();  
2:        System.out.println(cf.isCancelled());  
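
The snippet above only constructs the future; a slightly fuller sketch of what "explicitly completed" and "dependent stages" mean (the values are arbitrary):

    CompletableFuture<Integer> answer = new CompletableFuture<Integer>();
    new Thread(() -> answer.complete(42)).start();              // explicitly completed elsewhere
    answer.thenApply(n -> n + 1)                                // dependent function runs on completion
          .thenAccept(n -> System.out.println("result " + n));  // prints "result 43"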

ConcurrentHashMap
A hash table supporting full concurrency of retrievals and high expected concurrency for updates.
However, even though all operations are thread-safe, retrieval operations do not entail locking, and there is not any support for locking the entire table in a way that prevents all access.
this class does not allow null to be used as a key or value.

1:        ConcurrentHashMap<String,Integer> chm = new ConcurrentHashMap<String,Integer>();  
2:        chm.put("one", 1);  
3:        chm.put("two", 2);  
4:        chm.put("six", 6);  

ConcurrentHashMap.KeySetView<K,V>
A view of a ConcurrentHashMap as a Set of keys, in which additions may optionally be enabled by mapping to a common value.

1:        ConcurrentHashMap.KeySetView<String, Integer> keys = chm.keySet(10);  
2:        System.out.println(keys.isEmpty());  
3:        System.out.println(keys.toString());  
4:        keys.add("ten");  
5:          
6:        keys.forEach((s) -> System.out.println(s));  
7:        System.out.println(chm.toString());  
8:          
9:        ConcurrentHashMap.KeySetView<String, Boolean> keys1 = chm.newKeySet();  
10:        System.out.println(keys1.isEmpty());  
11:        System.out.println(keys1.toString());  
12:        keys1.add("four");  
13:          
14:        keys1.forEach((s) -> System.out.println(s));  
15:        System.out.println(chm.toString());  

The above gives two examples of using ConcurrentHashMap.KeySetView. In the first one, notice that changes to the keys affect the original ConcurrentHashMap chm, whilst in the second they do not. So read the javadoc and pick the implementation that suits your requirements.

Now, we will take a look at two of the concurrent linked queues.

ConcurrentLinkedDeque<E>

  • An unbounded concurrent deque based on linked nodes.
  • Concurrent insertion, removal, and access operations execute safely across multiple threads. 
  • this class does not permit the use of null elements.
  • the size method is NOT a constant-time operation. Because of the asynchronous nature of these deques, determining the current number of elements requires a traversal of the elements, and so may report inaccurate results if this collection is modified during traversal.

ConcurrentLinkedQueue<E>

  • An unbounded thread-safe queue based on linked nodes.
  • This queue orders elements FIFO (first-in-first-out).
  • The head of the queue is that element that has been on the queue the longest time. 
  • The tail of the queue is that element that has been on the queue the shortest time. 
  • this class does not permit the use of null elements.


  insert                              element  
  always                              oldest  
  at tail                             at head  
  and                                 and retrieve  
  youngest                            here  
    +------------------------------------+  
    |                                    |  
    |                                    |  
    +------------------------------------+  
   tail                               head  

1:    
2:        ConcurrentLinkedDeque<Integer> cldq = new ConcurrentLinkedDeque<Integer>();   
3:        cldq.add(1);  
4:        cldq.add(2);  
5:        cldq.add(3);  
6:          
7:    
8:        ConcurrentLinkedQueue<Integer> clq = new ConcurrentLinkedQueue<Integer>();  
9:        clq.add(4);  
10:        clq.add(5);  
11:        clq.add(6);  

With the examples above, the apparent benefits are not actually expressed, as the main thread is the only one adding elements to the queues, serially. As the javadoc mentions, you should really use these queues in multithreaded situations.

ConcurrentSkipListMap<K,V>

  • A scalable concurrent ConcurrentNavigableMap implementation.
  • The map is sorted according to the natural ordering of its keys, or by a Comparator provided at map creation time, depending on which constructor is used.
  • providing expected average log(n) time cost for the containsKey, get, put and remove operations and their variants. 
  • Insertion, removal, update, and access operations safely execute concurrently by multiple threads.
  • Ascending key ordered views and their iterators are faster than descending ones.
  • the size method is not a constant-time operation.

ConcurrentSkipListSet<E>

  • A scalable concurrent NavigableSet implementation based on a ConcurrentSkipListMap.
  • The elements of the set are kept sorted according to their natural ordering, or by a Comparator provided at set creation time, depending on which constructor is used.
  • expected average log(n) time cost for the contains, add, and remove operations and their variants.
  • Insertion, removal, and access operations safely execute concurrently by multiple threads.
  • Ascending ordered views and their iterators are faster than descending ones.
  • the size method is not a constant-time operation. 

1:        ConcurrentSkipListMap<String,Integer> cslm = new ConcurrentSkipListMap<String, Integer>();  
2:        cslm.put("one", 1);  
3:        cslm.put("two", 2);  
4:        cslm.put("six", 6);  
5:          
6:        ConcurrentSkipListSet<Integer> csls = new ConcurrentSkipListSet<Integer>();  
7:        csls.add(1);  
8:        csls.add(1);  
9:        System.out.println("set size " + csls.size());  

With the examples above, the apparent benefits are not actually expressed, as the main thread is the only one adding elements to the collections, serially. As the javadoc mentions, you should really use these collections in multithreaded situations.

CopyOnWriteArrayList<E>

  • A thread-safe variant of ArrayList in which all mutative operations (add, set, and so on) are implemented by making a fresh copy of the underlying array.
  • This is ordinarily too costly, but may be more efficient than alternatives when traversal operations vastly outnumber mutations, and is useful when you cannot or don't want to synchronize traversals, yet need to preclude interference among concurrent threads.
  • All elements are permitted, including null.



CopyOnWriteArraySet<E>

  • A Set that uses an internal CopyOnWriteArrayList for all of its operations.
  • It is best suited for applications in which set sizes generally stay small, read-only operations vastly outnumber mutative operations, and you need to prevent interference among threads during traversal.
  • It is thread-safe.
  • Mutative operations (add, set, remove, etc.) are expensive since they usually entail copying the entire underlying array.


1:        CopyOnWriteArrayList<Integer> cowal = new CopyOnWriteArrayList<Integer>();  
2:        cowal.add(1);  
3:        cowal.add(2);  
4:        cowal.add(3);  
5:          
6:        CopyOnWriteArraySet<Integer> cowas = new CopyOnWriteArraySet<Integer>();  
7:        cowas.add(1);  
8:        cowas.add(1);  
9:        System.out.println("set size " + cowas.size());  

Just like the four collections above, their benefits are best shown in multithreaded applications.

CountDownLatch

  • A synchronization aid that allows one or more threads to wait until a set of operations being performed in other threads completes.
  • A CountDownLatch is initialized with a given count. The await methods block until the current count reaches zero due to invocations of the countDown() method, after which all waiting threads are released and any subsequent invocations of await return immediately. This is a one-shot phenomenon -- the count cannot be reset


1:        int N = 10;  
2:        CountDownLatch startSignal = new CountDownLatch(1);  
3:        CountDownLatch doneSignal = new CountDownLatch(N);  
4:          
5:        for (int i = 0; i < N; ++i) // create and start threads  
6:           new Thread(new Worker(startSignal, doneSignal)).start();  
7:    
8:        doSomethingElse();     // don't let run yet  
9:        startSignal.countDown();  // let all threads proceed  
10:        doSomethingElse();  
11:        doneSignal.await();    // wait for all to finish  
12:    
13:    
14:     private static void doSomethingElse() throws InterruptedException {  
15:          Thread.sleep(3000);  
16:        System.out.println(Thread.currentThread().getName() + " doing something else");  
17:    
18:     }  
19:    
20:    
21:     static class Worker implements Runnable {  
22:          private final CountDownLatch startSignal;  
23:          private final CountDownLatch doneSignal;  
24:          Worker(CountDownLatch startSignal, CountDownLatch doneSignal) {  
25:           this.startSignal = startSignal;  
26:           this.doneSignal = doneSignal;  
27:          }  
28:          public void run() {  
29:           try {  
30:            startSignal.await();  
31:            doWork();  
32:            doneSignal.countDown();  
33:           } catch (InterruptedException ex) {} // return;  
34:          }  
35:    
36:          void doWork() {  
37:           System.out.println(Thread.currentThread().getName() + " doing work");  
38:           try { Thread.sleep(200); } catch (InterruptedException ex) { }  
39:          }  
40:     }  

We see that ten worker threads are started, but each waits in its run method until startSignal counts down; only then do all the worker threads proceed. Each individual worker thread then counts doneSignal down once, so in total it is counted down ten times. In the main thread, doneSignal.await() blocks until all the worker threads have finished.

CountedCompleter<T>

  • A ForkJoinTask with a completion action performed when triggered and there are no remaining pending actions.
  • Sample Usages.
  • Parallel recursive decomposition.
  • Searching. 
  • Recording subtasks. 
  • Completion Traversals. 
  • Triggers.
1:        // CountedCompleter<T>  
2:        Integer[] numbers = {1,2,3,4,5};  
3:        // null ?  
4:        MapReducer<Integer> numbersReducer = new MapReducer<Integer>(null, numbers, new MyMapper(), new MyReducer(), 1, 10);  
5:        Integer result = numbersReducer.getRawResult();  
6:        System.out.println(result);  

CyclicBarrier

  • A synchronization aid that allows a set of threads to all wait for each other to reach a common barrier point.
  • CyclicBarriers are useful in programs involving a fixed sized party of threads that must occasionally wait for each other. 
  • The barrier is called cyclic because it can be re-used after the waiting threads are released. 
  • The CyclicBarrier uses an all-or-none breakage model for failed synchronization attempts: If a thread leaves a barrier point prematurely because of interruption, failure, or timeout, all other threads waiting at that barrier point will also leave abnormally via BrokenBarrierException (or InterruptedException if they too were interrupted at about the same time). 

1:        // CyclicBarrier  
2:        float[][] matrix = {{1,2}, {2,3}};  
3:        new Solver(matrix);  
4:    
5:  public class Solver {  
6:    
7:     final int N;  
8:     final float[][] data;  
9:     final CyclicBarrier barrier;  
10:    
11:     class Worker implements Runnable {  
12:        int myRow;  
13:        boolean done;  
14:    
15:        Worker(int row) {  
16:           myRow = row;  
17:        }  
18:    
19:        public void run() {  
20:           while (!done()) {  
21:              processRow(myRow);  
22:    
23:              try {  
24:                 barrier.await();  
25:              } catch (InterruptedException ex) {  
26:                 return;  
27:              } catch (BrokenBarrierException ex) {  
28:                 return;  
29:              }  
30:           }  
31:        }  
32:          
33:        public boolean done() {  
34:           return done;  
35:        }  
36:          
37:        private void processRow(int row) {  
38:           System.out.println(Thread.currentThread().getName() + " processing row " + row );  
39:           done = true;  
40:        }  
41:     }  
42:    
43:     public Solver(float[][] matrix) {  
44:        data = matrix;  
45:        N = matrix.length;  
46:        Runnable barrierAction = new Runnable() {   
47:           public void run() {   
48:              //mergeRows(...);   
49:              System.out.println("merging row");  
50:           }  
51:        };  
52:        barrier = new CyclicBarrier(N, barrierAction);  
53:    
54:        List<Thread> threads = new ArrayList<Thread>(N);  
55:        for (int i = 0; i < N; i++) {  
56:         Thread thread = new Thread(new Worker(i));  
57:         threads.add(thread);  
58:         thread.start();  
59:        }  
60:    
61:        // wait until done  
62:        for (Thread thread : threads)  
63:           try {  
64:              thread.join();  
65:           } catch (InterruptedException e) {  
66:              e.printStackTrace();  
67:           }  
68:       }  
69:  }  

In the example above, we see the main class initializes a new Solver object, passing a two dimensional floating point matrix to process. In the Solver class, we see that a CyclicBarrier is initialized with a runnable barrier action. The matrix length determines both the number of barrier parties and the number of worker threads started. The Solver then waits until all the worker threads are done.

In the worker thread, I simplified processRow to just print out and set done to true; you can of course process data[myRow] to bring the sample code nearer to a real world problem. Notice that the barrier in each individual worker thread calls the await method. In this example, once both workers are done processing their rows and barrier.await() has been executed, the barrierAction object runs the final row merging.


DelayQueue<E extends Delayed>

  • An unbounded blocking queue of Delayed elements, in which an element can only be taken when its delay has expired.
  • The head of the queue is that Delayed element whose delay expired furthest in the past.
  • If no delay has expired there is no head and poll will return null.
  • Expiration occurs when an element's getDelay(TimeUnit.NANOSECONDS) method returns a value less than or equal to zero.
  • the size method returns the count of both expired and unexpired elements.
  • This queue does not permit null elements.

1:        DelayQueue<SalaryDelay> delayQueue = new DelayQueue<SalaryDelay>();  
2:        delayQueue.add(new SalaryDelay("August", 1));  
3:        delayQueue.add(new SalaryDelay("September", 2));  
4:        delayQueue.add(new SalaryDelay("October", 3));  
5:          
6:        System.out.println(delayQueue.size());  
7:        System.out.println(delayQueue.poll());  

Like the queues before it, you add instances of a class that implements Delayed, and they are placed in this DelayQueue.
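
The SalaryDelay class comes from the source linked at the end of an earlier part of this series. A minimal sketch of what such a Delayed implementation might look like (assumed, not necessarily identical to the original) is:

    // assumed sketch of a Delayed element: expires delaySeconds after construction
    static class SalaryDelay implements Delayed {
       private final String month;
       private final long expireAt;   // absolute time in milliseconds when the delay expires

       SalaryDelay(String month, long delaySeconds) {
          this.month = month;
          this.expireAt = System.currentTimeMillis() + TimeUnit.SECONDS.toMillis(delaySeconds);
       }

       @Override
       public long getDelay(TimeUnit unit) {
          return unit.convert(expireAt - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
       }

       @Override
       public int compareTo(Delayed other) {
          return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
       }

       @Override
       public String toString() {
          return "SalaryDelay[" + month + "]";
       }
    }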

Exchanger<V>

  • A synchronization point at which threads can pair and swap elements within pairs.

1:        Exchanger<?> exchanger = new Exchanger<>();  
2:        ExchangerRunnable exchangerRunnable1 = new ExchangerRunnable(exchanger, "keychain");  
3:        ExchangerRunnable exchangerRunnable2 = new ExchangerRunnable(exchanger, "chocalate");  
4:          
5:        new Thread(exchangerRunnable1).start();  
6:        new Thread(exchangerRunnable2).start();  
7:          
8:  public class ExchangerRunnable implements Runnable {  
9:    
10:     Exchanger exchanger = null;  
11:     Object object = null;  
12:    
13:     public ExchangerRunnable(Exchanger exchanger, Object object) {  
14:        this.exchanger = exchanger;  
15:        this.object = object;  
16:     }  
17:    
18:     public void run() {  
19:        try {  
20:           Object previous = this.object;  
21:    
22:           this.object = this.exchanger.exchange(this.object);  
23:    
24:           System.out.println(Thread.currentThread().getName() + " exchanged "  
25:                 + previous + " for " + this.object);  
26:        } catch (InterruptedException e) {  
27:           e.printStackTrace();  
28:        }  
29:     }  
30:    
31:  }  

With the code above, there are two threads that exchange a string object with each other.

ExecutorCompletionService<V>

  • A CompletionService that uses a supplied Executor to execute tasks.

1:        ExecutorService executorService = Executors.newFixedThreadPool(1);  
2:        CompletionService<Integer> longRunningCompletionService = new ExecutorCompletionService<Integer>(executorService);  
3:        longRunningCompletionService.submit(() -> {System.out.println("done"); return 1;});  
4:        longRunningCompletionService.take();  
5:        executorService.shutdown();  

Moving on to our last two classes in this lesson.

Executors
- Factory and utility methods for Executor, ExecutorService, ScheduledExecutorService, ThreadFactory, and Callable classes defined in this package.

1:        Executors.newCachedThreadPool();  
2:        Executors.defaultThreadFactory();  
3:        Executors.newFixedThreadPool(10);  
4:        Executors.newScheduledThreadPool(1);  
5:        Executors.newSingleThreadExecutor();  
6:        Executors.privilegedThreadFactory();  
7:        Executors.newWorkStealingPool();  

Just try the different pool implementations in java to get some idea of what each specific pool does.

ForkJoinPool

  • An ExecutorService for running ForkJoinTasks.
  • This implementation restricts the maximum number of running threads to 32767. 
1:        ForkJoinPool fjPool = new ForkJoinPool();  
2:        Future<Integer> sum = fjPool.submit(new Summer(11, 89));  
3:        System.out.println(sum.get());  
4:        fjPool.shutdown();  

A trivial example of using ForkJoinPool to submit a callable task which returns a result.

That's it for this long but brief lesson. This article ends with the source code, which you can get from the links below.

https://github.com/jasonwee/videoOnCloud/commit/0e7549249490ef5f9e528e101354440b7ad5cbdb
https://github.com/jasonwee/videoOnCloud/blob/master/src/java/play/learn/java/concurrent/ExchangerRunnable.java
https://github.com/jasonwee/videoOnCloud/blob/master/src/java/play/learn/java/concurrent/LearnConcurrentClassP3.java
https://github.com/jasonwee/videoOnCloud/blob/master/src/java/play/learn/java/concurrent/MapReducer.java
https://github.com/jasonwee/videoOnCloud/blob/master/src/java/play/learn/java/concurrent/MyMapper.java
https://github.com/jasonwee/videoOnCloud/blob/master/src/java/play/learn/java/concurrent/MyReducer.java
https://github.com/jasonwee/videoOnCloud/blob/master/src/java/play/learn/java/concurrent/Solver.java

Saturday, December 5, 2015

Learn java util concurrent part3

This series is the next part of learning java.util.concurrent. You should read the first part and the second part too. Today we will learn the remaining ten interfaces in the package java.util.concurrent.

Okay, let's start with ForkJoinPool.ForkJoinWorkerThreadFactory. Programming wise, you should not worry, as the class ForkJoinPool takes care of this implementation. Within the class ForkJoinPool, we see that there are two classes


  • DefaultForkJoinWorkerThreadFactory
  • InnocuousForkJoinWorkerThreadFactory

which implement ForkJoinWorkerThreadFactory with default access modifiers. So unless you know exactly what you want and how ForkJoinPool works internally, there is no need to subclass ForkJoinPool; for a beginner reading this article, it is sufficient to just use ForkJoinPool.

ManagedBlocker is an interface for extending managed parallelism for tasks running in ForkJoinPools. There are two methods to implement: block() and isReleasable().

1:  public class QueueManagedBlocker<T> implements ManagedBlocker {  
2:       
3:     final BlockingQueue<T> queue;  
4:     volatile T value = null;  
5:       
6:     QueueManagedBlocker(BlockingQueue<T> queue) {  
7:        this.queue = queue;  
8:     }  
9:    
10:     @Override  
11:     public boolean block() throws InterruptedException {  
12:        if (value == null)  
13:           value = queue.take();  
14:        return true;  
15:     }  
16:    
17:     @Override  
18:     public boolean isReleasable() {  
19:        return value != null || (value = queue.poll()) != null;  
20:     }  
21:       
22:     public T getValue() {  
23:        return value;  
24:     }  
25:    
26:  }  
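
A small usage sketch for the class above (assumed; the queue contents are just illustrative). ForkJoinPool.managedBlock tells the pool that the current task may block, so the pool can compensate by activating a spare worker thread:

    BlockingQueue<String> queue = new LinkedBlockingQueue<String>();
    queue.offer("hello");
    QueueManagedBlocker<String> blocker = new QueueManagedBlocker<String>(queue);
    try {
       ForkJoinPool.managedBlock(blocker);       // calls isReleasable()/block() on our behalf
       System.out.println(blocker.getValue());   // prints "hello"
    } catch (InterruptedException e) {
       Thread.currentThread().interrupt();
    }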

Next, we have the interface Future<V>, which we have seen before in the previous parts of the series.

1:  ExecutorService executorService = Executors.newFixedThreadPool(1);  
2:  Future<Integer> future = executorService.submit(new Summer(11,22));  

It's very clear that you can obtain the result via the future variable above. The interface RejectedExecutionHandler is mostly for error handling.

1:  BlockingQueue<Runnable> worksQueue = new ArrayBlockingQueue<Runnable>(10); // assumed declaration, not shown in the original snippet  
2:  RejectedExecutionHandler executionHandler = new MyRejectedExecutionHandlerImpl();  
3:  ThreadPoolExecutor executor = new ThreadPoolExecutor(3, 3, 10, TimeUnit.SECONDS, worksQueue, executionHandler);  
4:    
5:  public class MyRejectedExecutionHandlerImpl implements RejectedExecutionHandler {  
6:    
7:     @Override  
8:     public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {  
9:        System.out.println(r.toString() + " : I've been rejected !");  
10:     }  
11:    
12:  }  

So you can set the implementing class on a ThreadPoolExecutor, and if a task cannot be executed by the ThreadPoolExecutor, rejectedExecution will be invoked. Moving on to the next interface, RunnableFuture<V>.

1:  RunnableFuture<Integer> rf = new FutureTask<Integer>(new Summer(22,33));  

So we see an initialization of a FutureTask object with the callable Summer class which we created in the previous part of the series. The interface RunnableScheduledFuture, which extends the previous interface RunnableFuture, has an additional method to implement.

1:  RunnableScheduledFuture<Integer> rsf = new Summer1();  
2:  System.out.println(rsf.isPeriodic());  

In the class Summer1, you must determine whether the task is periodic or not. ScheduledExecutorService is pretty common; if you google this interface you will find code like the example below.

1:  ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);  
2:  scheduler.scheduleAtFixedRate(() -> System.out.println("hihi"), 1, 1, TimeUnit.SECONDS);  
3:  Thread.sleep(3000);  
4:  scheduler.shutdown();  

So we see a task is executed every second.

1:  ScheduledFuture<Integer> sf = new ScheduledFutureImpl();  
2:  sf.isCancelled();  

ScheduledFuture<V> is a delayed result-bearing action that can be cancelled. Usually a scheduled future is the result of scheduling a task with a ScheduledExecutorService. If you have a future task which gets delayed for whatever reason, or which may get cancelled, you will want to look further into this class.

ThreadFactory is another interface, one which creates new threads on demand. Using thread factories removes hardwiring of calls to new Thread, enabling applications to use special thread subclasses, priorities, and so on.

1:  ThreadFactory tf = Executors.defaultThreadFactory();  
2:  tf.newThread(()->System.out.println("ThreadFactory")).start();  
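
The default factory is just one option; a custom factory sketch (with an assumed thread name) shows why this is useful, for example to name every thread and mark it as a daemon:

    ThreadFactory named = runnable -> {
       Thread t = new Thread(runnable, "my-worker");
       t.setDaemon(true);
       return t;
    };
    named.newThread(() -> System.out.println(Thread.currentThread().getName() + " running")).start();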

Finally for this part, we take a look at the last interface, TransferQueue. A TransferQueue may be useful, for example, in message passing applications in which producers sometimes (using method transfer(E)) await receipt of elements by consumers invoking take or poll, while at other times enqueue elements (via method put) without waiting for receipt.

1:  TransferQueue<Integer> tq = new LinkedTransferQueue<Integer>();  
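
To see the hand-off behaviour described above, a small sketch continuing with tq (the value 42 is arbitrary): transfer() only returns once a consumer has actually taken the element.

    new Thread(() -> {
       try {
          System.out.println("consumer received " + tq.take());
       } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
       }
    }).start();

    try {
       tq.transfer(42);                          // blocks until the consumer has the element
       System.out.println("producer handed off");
    } catch (InterruptedException e) {
       Thread.currentThread().interrupt();
    }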

That's it for this learning series. Thank you. Oh, and the source code.

https://github.com/jasonwee/videoOnCloud/commit/ce479e5befaf7abe84d3d85930d5196a639e2643

https://github.com/jasonwee/videoOnCloud/blob/master/src/java/org/just4fun/concurrent/MyRejectedExecutionHandlerImpl.java
https://github.com/jasonwee/videoOnCloud/blob/master/src/java/org/just4fun/concurrent/ExampleThreadPoolExecutor.java
https://github.com/jasonwee/videoOnCloud/blob/master/src/java/org/just4fun/concurrent/DemoExecutor.java

https://github.com/jasonwee/videoOnCloud/blob/master/src/java/play/learn/java/concurrent/LearnConcurrentInterfaceP2.java
https://github.com/jasonwee/videoOnCloud/blob/master/src/java/play/learn/java/concurrent/QueueManagedBlocker.java
https://github.com/jasonwee/videoOnCloud/blob/master/src/java/play/learn/java/concurrent/ScheduledFutureImpl.java
https://github.com/jasonwee/videoOnCloud/blob/master/src/java/play/learn/java/concurrent/Summer1.java


Friday, December 4, 2015

Learn java util concurrent part2

This series is the next part of learning java.util.concurrent. You should read the first part here too. Today we will learn ten interfaces in the package java.util.concurrent.

Okay, let's start with the queue interface BlockingDeque. Some characteristics of this interface include

  • blocking
  • thread safe
  • does not permit null elements
  • may (or may not) be capacity-constrained.

and we can add and remove items in this queue. Example below.

1:  BlockingDeque<Integer> bd = new LinkedBlockingDeque<Integer>(3); // assumed declaration (capacity 3), not shown in the original snippet  
2:  bd.add(1);  
3:  bd.add(2);  
4:  System.out.println("size: " + bd.size());  
5:          
6:  bd.add(3);  
7:  System.out.println("size: " + bd.size());  
8:  //bd.add(4); // exception  
9:          
10:  bd.forEach(s -> System.out.println(s));  

Try playing around with this class and its different methods to get a basic understanding of it. Next, we have a similar queue called BlockingQueue. Its characteristics are the same as BlockingDeque's; the difference is that a deque supports insertion and removal at both ends, while a plain queue only inserts at the tail and removes from the head. The JDK has many classes implementing BlockingQueue.

1:  BlockingQueue<Integer> bq = new ArrayBlockingQueue<Integer>(10);  
2:  bq = new DelayQueue();  
3:  bq = new LinkedBlockingDeque<Integer>();  
4:  bq = new LinkedTransferQueue<Integer>();  
5:  bq = new PriorityBlockingQueue<Integer>();  
6:  bq = new SynchronousQueue<Integer>();  

Next we have the Callable interface, which is similar to Runnable with one clear distinction: Callable returns a value.

1:  ExecutorService executorService = Executors.newFixedThreadPool(1);  
2:    
3:  Future<Integer> future = executorService.submit(new Summer(11,22));  
4:    
5:  try {  
6:     Integer total = future.get();  
7:     System.out.println("sum " + total);  
8:  } catch (Exception e) {  
9:     e.printStackTrace();  
10:  }  
11:    
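
The Summer class referenced here comes from the source linked at the end of the article; a minimal sketch of such a Callable, assuming it simply adds its two constructor arguments, could be:

    static class Summer implements Callable<Integer> {
       private final int a;
       private final int b;

       Summer(int a, int b) {
          this.a = a;
          this.b = b;
       }

       @Override
       public Integer call() {
          return a + b;
       }
    }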

As can be read above, Summer is an implementation of the interface Callable, and it is submitted to an executor service to be executed. CompletableFuture.AsynchronousCompletionTask is an interesting interface whose official documentation says: "A marker interface identifying asynchronous tasks produced by async methods. This may be useful for monitoring, debugging, and tracking asynchronous activities."

1:  CompletableFuture<Integer> cf = new CompletableFuture<Integer>();  
2:  ForkJoinPool.commonPool().submit(  
3:        (Runnable & CompletableFuture.AsynchronousCompletionTask)()->{  
4:      try {  
5:         cf.complete(1);  
6:      } catch (Exception e) {  
7:         cf.completeExceptionally(e);  
8:      }  
9:   });  

As can be read above, we submit an anonymous function to the ForkJoinPool, where this anonymous function is cast to the intersection of the interfaces Runnable and CompletableFuture.AsynchronousCompletionTask. Moving on, we have the interface CompletionService. If you have a long running service, you might want to look into this interface. An example can be read below.

1:  CompletionService<Integer> longRunningCompletionService = new ExecutorCompletionService<Integer>(executorService);  
2:    
3:  longRunningCompletionService.submit(() -> {System.out.println("done"); return 1;});  
4:    
5:  try {  
6:     Future<Integer> result = longRunningCompletionService.take();  
7:     System.out.println(result.get());  
8:  } catch (Exception e) {  
9:     // TODO Auto-generated catch block  
10:     e.printStackTrace();  
11:  }  

Persist the object longRunningCompletionService throughout your application, and the result can be retrieved in the future. Pretty handy. Moving on, we have a new interface, CompletionStage, which debuted in jdk8. From the CompletionStage javadoc: a stage of a possibly asynchronous computation that performs an action or computes a value when another CompletionStage completes. A stage completes upon termination of its computation, but this may in turn trigger other dependent stages.

Example code using CompletionStage is as follows.

1:  ListenableFuture<String> springListenableFuture = createSpringListenableFuture();  
2:    
3:  CompletableCompletionStage<Object> completionStage = factory.createCompletionStage();  
4:  springListenableFuture.addCallback(new ListenableFutureCallback<String>() {  
5:    @Override  
6:    public void onSuccess(String result) {  
7:       System.out.println("onSuccess called");  
8:      completionStage.complete(result);  
9:    }  
10:    @Override  
11:    public void onFailure(Throwable t) {  
12:       System.out.println("onFailure called");  
13:      completionStage.completeExceptionally(t);  
14:    }  
15:  });  
16:    
17:  completionStage.thenAccept(System.out::println);  

Up to here, if you don't understand something, you should take the code and start working on it. In this concurrent package, we have two Maps to use, namely ConcurrentMap and ConcurrentNavigableMap.

1:  ConcurrentMap<String, String> cm = new ConcurrentHashMap();  
2:  cm = new ConcurrentSkipListMap<String, String>();  
3:    
4:  ConcurrentNavigableMap<String, String> cnm = new ConcurrentSkipListMap<String, String>();  

ConcurrentMap provides thread safety and atomicity guarantees, whilst ConcurrentNavigableMap additionally supports NavigableMap operations, and recursively so for its navigable sub-maps. Then we have the interface Delayed.

1:  Random random = new Random();  
2:  int delay = random.nextInt(10000);  
3:  Delayed employer = new SalaryDelay("a lot of bs reasons", delay);  
4:  System.out.println("bullshit delay this time " + employer.getDelay(TimeUnit.SECONDS));  

Delayed is an interface where you implement two required methods, getDelay and compareTo. Fundamentally, you may have a few objects which are fed to the executor to be processed, but not immediately; they may get delayed for any reason. As an example, pretty common in the working world, an employer delays salary for all sorts of reasons.

The last two interfaces are related to each other, in that ExecutorService is a sub-interface of Executor. For ExecutorService, we have seen an example above, and below is one for Executor.

1:  Executor executor = new ForkJoinPool();  
2:  executor = new ScheduledThreadPoolExecutor(1);  
3:    
4:  BlockingQueue<Runnable> blockingQueue = new ArrayBlockingQueue<Runnable>(4);  
5:  executor = new ThreadPoolExecutor(1, 1, 1000, TimeUnit.SECONDS, blockingQueue);  

We see there are many classes implementing the Executor interface. About the Executor interface:


  • An object that executes submitted Runnable tasks. 
  • Executor interface does not strictly require that execution be asynchronous


That's it for this article; we will continue with the rest in the next article!

Oh, before that, you can download the full source at the following links.

https://github.com/jasonwee/videoOnCloud/commit/3d291610df89c610215b8ecdbcc59cbb028ba14b
https://github.com/jasonwee/videoOnCloud/blob/master/src/java/play/learn/java/concurrent/LearnConcurrentInterfaceP1.java
https://github.com/jasonwee/videoOnCloud/blob/master/src/java/play/learn/java/concurrent/SalaryDelay.java
https://github.com/jasonwee/videoOnCloud/blob/master/src/java/play/learn/java/concurrent/Summer.java
https://github.com/jasonwee/videoOnCloud/blob/master/src/java/play/learn/java/CompletableFuture/LearnCompletionStage.java