A lot of people who are new to multi-threading think that using threads automatically makes an application go faster. In fact, it is a lot more complicated than that. But one thing that we can state with certainty is that for any computer there is a hard limit on the number of threads that can actually execute in parallel at the same time: the number of available processor cores.
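That limit can be queried from a running JVM. A minimal sketch (the class name is illustrative):

```java
public class CoreCount {
    public static void main(String[] args) {
        // Number of hardware threads available to the JVM; the OS can
        // only run this many threads truly in parallel at any instant.
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Available processors: " + cores);
    }
}
```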
This tells us that simply creating more and more Java threads cannot make the application go faster and faster. But there are other considerations as well:
Each thread requires an off-heap memory region for its thread stack. The typical (default) thread stack size is 512 KB or 1 MB, so with a large number of threads the memory usage can be significant.
Each active thread refers to a number of objects in the heap. That increases the working set of reachable objects, which adds to garbage collection pressure and physical memory usage.
The overhead of switching between threads is non-trivial. It typically entails a switch into the OS kernel to make a thread scheduling decision.
The overheads of thread synchronization and inter-thread signaling (e.g. wait(), notify() / notifyAll()) can be significant.
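To make the last point concrete, here is a minimal sketch of the wait() / notifyAll() handshake (the class and method names are illustrative). Every pass through it involves acquiring a monitor and potentially parking and unparking a thread in the kernel, which is where the cost comes from:

```java
public class Signal {
    private final Object lock = new Object();
    private boolean ready = false;

    // Called by the producing thread to signal waiters.
    void produce() {
        synchronized (lock) {
            ready = true;
            lock.notifyAll();   // wake all threads waiting on this monitor
        }
    }

    // Called by a consuming thread; blocks until produce() has run.
    void await() throws InterruptedException {
        synchronized (lock) {
            while (!ready) {    // loop guards against spurious wakeups
                lock.wait();    // releases the monitor while waiting
            }
        }
    }
}
```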
Depending on the details of your application, these factors generally mean that there is a "sweet spot" for the number of threads. Beyond that, adding more threads gives minimal performance improvement, and can make performance worse.
If your application creates a new thread for each new task, then an unexpected increase in the workload (e.g. a high request rate) can lead to catastrophic behavior.
A better way to deal with this is to use a bounded thread pool whose size you can control (statically or dynamically). When there is too much work to do, the application needs to queue the requests. If you use an ExecutorService, it will take care of the thread pool management and task queuing.
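A minimal sketch of that pattern, using a fixed-size pool (the pool size and task count here are arbitrary example values):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // A bounded pool: at most 4 worker threads ever exist. Excess
        // tasks wait in the executor's internal queue instead of each
        // spawning a fresh thread.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        for (int i = 0; i < 10; i++) {
            final int taskId = i;
            pool.submit(() -> System.out.println(
                "Task " + taskId + " on " + Thread.currentThread().getName()));
        }

        pool.shutdown();                              // stop accepting new tasks
        pool.awaitTermination(10, TimeUnit.SECONDS);  // wait for queued tasks
    }
}
```

Because the pool is bounded, a burst of requests grows the queue rather than the thread count, so the memory and context-switching costs described above stay under control.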