
How does Java make use of multiple cores?

Posted by: admin, December 20, 2017

Questions:

A JVM runs in a single process, and threads in a JVM share the heap belonging to that process. How, then, does the JVM make use of multiple cores, which provide multiple OS threads for high concurrency?

Answers:

You can make use of multiple cores by using multiple threads. But using more threads than there are cores on the machine can simply waste resources. You can use Runtime.getRuntime().availableProcessors() to get the number of cores.
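As a minimal sketch of that advice (the class name and task bodies are illustrative, not from the original answer), you can size a fixed thread pool to the core count and let the OS spread the workers across CPUs:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CoreSizedPool {
    public static void main(String[] args) throws InterruptedException {
        // Number of logical cores the JVM can see
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // Submit one task per core; each runs on its own OS thread
        for (int i = 0; i < cores; i++) {
            final int id = i;
            pool.submit(() -> System.out.println(
                    "Task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```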

Java 7 introduced the fork/join framework to make use of multiple cores.
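For example, a divide-and-conquer sum with the fork/join framework might look like this (the task, threshold, and data are illustrative; the API shown is the Java 7 ForkJoinPool/RecursiveTask API):

```java
import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Splits the range in half until it is small enough, letting the
// fork/join pool run the halves in parallel on different cores.
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10_000;
    private final long[] data;
    private final int from, to;

    SumTask(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;
        SumTask left = new SumTask(data, from, mid);
        left.fork();                            // run the left half asynchronously
        long rightSum = new SumTask(data, mid, to).compute();
        return rightSum + left.join();          // wait for the forked half
    }
}

public class ForkJoinDemo {
    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        Arrays.fill(data, 1L);
        long sum = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(sum); // prints 1000000
    }
}
```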

Answers:

Green threads (user-level threads scheduled by the JVM rather than by the OS) were replaced by native threads in Java 1.2.

Answers:

Java will benefit from multiple cores if the OS distributes threads over the available processors; the JVM itself does nothing special to get its threads scheduled evenly across multiple cores. A few things to keep in mind:

  • While implementing parallel algorithms, it is often best to spawn as many threads as there are cores (Runtime.getRuntime().availableProcessors()). Not more, not fewer.
  • Make use of the facilities provided by the java.util.concurrent package.
  • Make sure that you have Java Concurrency in Practice in your personal library.
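To illustrate the second point, here is one possible sketch using the java.util.concurrent facilities (the tasks and values are illustrative): a pool of Callables submitted with invokeAll, which blocks until every task has completed.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSquares {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // One Callable per unit of work; invokeAll runs them on the pool
        List<Callable<Integer>> tasks = new ArrayList<>();
        for (int i = 1; i <= 4; i++) {
            final int n = i;
            tasks.add(() -> n * n);
        }

        int total = 0;
        for (Future<Integer> f : pool.invokeAll(tasks)) {
            total += f.get();   // 1 + 4 + 9 + 16
        }
        pool.shutdown();
        System.out.println(total); // prints 30
    }
}
```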
Answers:

A bunch of good answers here, but I thought I’d add some more details for posterity.

A JVM runs in a single process, and threads in a JVM share the heap belonging to that process. How, then, does the JVM make use of multiple cores, which provide multiple OS threads for high concurrency?

As mentioned by others, Java uses the underlying OS threads to do the actual job of executing the code on different CPUs, if it is running on a multi-CPU machine. When a Java thread is started, it creates an associated OS thread, and the OS is responsible for scheduling it. The JVM certainly does some management and tracking of the thread, and Java language constructs like volatile, synchronized, notify(), and wait() all affect the run status of the OS thread.
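As an illustrative sketch of how those constructs gate a thread's run state (the class and field names are hypothetical), a minimal wait()/notify() handoff, where the waiting OS thread is parked by the scheduler until it is notified:

```java
public class Handoff {
    private final Object lock = new Object();
    private boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Handoff h = new Handoff();

        Thread waiter = new Thread(() -> {
            synchronized (h.lock) {
                while (!h.ready) {
                    try {
                        h.lock.wait();   // OS parks this thread until notified
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
            System.out.println("woken up");
        });
        waiter.start();

        synchronized (h.lock) {
            h.ready = true;
            h.lock.notify();             // makes the waiter runnable again
        }
        waiter.join();
    }
}
```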

A JVM runs in a single process, and threads in a JVM share the heap belonging to that process.

The JVM doesn’t necessarily “run in a single process”, because even the garbage collector and other JVM code run in separate threads, and the OS can represent those threads as separate entries. On Linux, for example, the single process you see in the process list may actually be masquerading a bunch of thread entries (older Linux thread implementations listed each thread as its own process). This is true even on a single-core machine.

However, you are correct that the threads all share the same heap space. In fact they share the same entire address space, which includes code, interned strings, stack space, etc.

How, then, does the JVM make use of multiple cores, which provide multiple OS threads for high concurrency?

Threads get their performance improvements for a couple of reasons. Obviously, straight concurrency often makes the program run faster: being able to do multiple CPU tasks at the same time can (though not always) improve the throughput of the application. You can also isolate IO operations to a single thread, meaning that other threads keep running while one thread is waiting on IO (reading or writing to disk, network, etc.).
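A minimal sketch of that IO-isolation pattern (the class name is hypothetical, and the blocking call is simulated with sleep): a dedicated thread blocks on "IO" while the main thread continues CPU-bound work, then the two are joined with a latch.

```java
import java.util.concurrent.CountDownLatch;

public class IoOverlap {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);

        // Dedicated "IO" thread: blocks without holding up the work below
        Thread io = new Thread(() -> {
            try {
                Thread.sleep(100);   // stand-in for a blocking read/write
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            done.countDown();
        });
        io.start();

        // CPU-bound work proceeds while the IO thread is blocked
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i;

        done.await();                // wait for the "IO" to complete
        System.out.println("compute result: " + sum);
    }
}
```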

But in terms of memory, threads get a lot of their performance improvement from local per-CPU memory caches. When a thread runs on a CPU, the CPU’s local high-speed cache lets the thread satisfy storage requests locally, without spending the time to read from or write to central memory. This is why volatile and synchronized include memory-synchronization semantics: cached memory has to be flushed to main memory or invalidated when threads need to coordinate their work or communicate with each other.
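To illustrate (the class and field names are hypothetical), a minimal sketch of a volatile stop flag: the volatile write forces the update out of the writer’s CPU cache so the spinning reader thread is guaranteed to see it.

```java
public class VolatileFlag {
    // Without volatile, the reader thread could keep reading a stale
    // cached copy of the flag and spin forever.
    private static volatile boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!stop) {
                // spin until the writer's update becomes visible
            }
            System.out.println("saw stop = true");
        });
        reader.start();

        Thread.sleep(50);
        stop = true;   // volatile write: made visible to other cores
        reader.join();
    }
}
```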