
JVM tuning (II) empirical parameter settings

Tuning settings specific analysis

   Heap Size Setting

     The maximum heap size in the JVM is limited in three ways: by the data model of the operating system (32-bit or 64-bit), by the system's available virtual memory, and by the system's available physical memory. On a 32-bit system it is generally limited to 1.5G~2G; a 64-bit operating system imposes essentially no limit on memory.

Tested on Windows Server 2003 with 3.5G of physical memory and JDK 5.0, the maximum workable setting was 1478m. A typical setup:

  • java -Xmx3550m -Xms3550m -Xmn2g -Xss128k
  • -Xmx3550m: sets the maximum available heap memory of the JVM to 3550M. -Xms3550m: sets the initial heap memory of the JVM to 3550M; setting it equal to -Xmx avoids the JVM reallocating memory after each garbage collection completes.
  • -Xmn2g: sets the young generation size to 2G. Total heap size = young generation size + old generation size + permanent generation size. The permanent generation generally has a fixed size of 64m, so increasing the young generation reduces the old generation. This value has a large impact on system performance; the official Sun recommendation is 3/8 of the entire heap. (Note: starting with Java 8, the permanent generation has been removed from the HotSpot VM.)
  • -Xss128k: sets the stack size per thread. From JDK 5.0 onward the default stack size is 1M per thread; previously it was 256K. Adjust it to the memory actually needed by the application's threads. Decreasing this value allows more threads in the same amount of physical memory, but the operating system still limits the number of threads per process and cannot create them indefinitely; an empirical ceiling is around 3000~5000.
  • java -Xmx3550m -Xms3550m -Xss128k -XX:NewRatio=4 -XX:SurvivorRatio=4 -XX:MaxPermSize=16m -XX:MaxTenuringThreshold=0
  • -XX:NewRatio=4: sets the ratio of the young generation (including Eden and the two Survivor regions) to the old generation (excluding the permanent generation). Set to 4, the ratio of young to old is 1:4, so the young generation occupies 1/5 of the entire heap.
  • -XX:SurvivorRatio=4: sets the ratio of Eden to one Survivor region in the young generation. Set to 4, Eden : one Survivor = 4:1 (Eden : both Survivors = 4:2), so one Survivor region occupies 1/6 of the whole young generation.
  • -XX:MaxPermSize=16m: sets the permanent generation size to 16m.
  • -XX:MaxTenuringThreshold=0: sets the maximum tenuring age of objects. If set to 0, young generation objects skip the Survivor regions and go straight to the old generation, which can be more efficient for applications with many long-lived objects. A larger value means young generation objects are copied between the Survivor regions multiple times, which increases their lifetime in the young generation and the probability that they are collected there.
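The flag descriptions above reduce to simple arithmetic. The sketch below, using illustrative class and method names (not any JVM API), computes the region sizes implied by -Xmn, -XX:NewRatio and -XX:SurvivorRatio:

```java
// Sketch: how -Xmn, -XX:SurvivorRatio and -XX:NewRatio translate into
// region sizes, following the flag descriptions above. Class and method
// names are illustrative, not part of any JVM API. Sizes are in MB.
public class GenerationSizing {

    // With -XX:SurvivorRatio=n, young gen = Eden + 2 Survivors,
    // and Eden : one Survivor = n : 1, so Eden = young * n / (n + 2).
    static long edenSize(long youngGen, int survivorRatio) {
        return youngGen * survivorRatio / (survivorRatio + 2);
    }

    // One Survivor region = young / (n + 2), i.e. 1/6 of young when n = 4.
    static long survivorSize(long youngGen, int survivorRatio) {
        return youngGen / (survivorRatio + 2);
    }

    // With -XX:NewRatio=n, young : old = 1 : n,
    // so the young generation is 1/(n+1) of the heap.
    static long youngFromNewRatio(long heap, int newRatio) {
        return heap / (newRatio + 1);
    }

    public static void main(String[] args) {
        long youngGen = 2048; // -Xmn2g
        System.out.println("eden     = " + edenSize(youngGen, 4) + "M");
        System.out.println("survivor = " + survivorSize(youngGen, 4) + "M");
        System.out.println("young (heap 3550M, NewRatio=4) = "
                + youngFromNewRatio(3550, 4) + "M");
    }
}
```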

  1. Collector selection. The JVM offers three choices: the serial collector, the parallel collector, and the concurrent collector. The serial collector is only suitable for small data volumes, so the choice here is mainly between the parallel and concurrent collectors. Before JDK 5.0, the serial collector was used by default; to use another collector, the appropriate parameters must be added at startup. From JDK 5.0 onward, the JVM chooses based on the current system configuration.
    1. Throughput-first parallel collector. As mentioned above, the parallel collector aims primarily at reaching a given throughput and is suitable for scientific computing, back-office processing, and the like. Typical configuration:
      • java -Xmx3800m -Xms3800m -Xmn2g -Xss128k -XX:+UseParallelGC -XX:ParallelGCThreads=20 -XX:+UseParallelGC: selects the parallel collector as the garbage collector. This configuration applies to the young generation only; that is, with the above configuration, the young generation uses parallel collection while the old generation still uses serial collection. -XX:ParallelGCThreads=20: configures the number of threads for the parallel collector, i.e. how many threads garbage-collect together at the same time. This value is best set equal to the number of processors.
      • java -Xmx3550m -Xms3550m -Xmn2g -Xss128k -XX:+UseParallelGC -XX:ParallelGCThreads=20 -XX:+UseParallelOldGC -XX:+UseParallelOldGC: configures the old generation to collect in parallel as well. JDK 6.0 supports parallel collection for the old generation.
      • java -Xmx3550m -Xms3550m -Xmn2g -Xss128k -XX:+UseParallelGC -XX:MaxGCPauseMillis=100 -XX:MaxGCPauseMillis=100: sets the maximum time for each young generation garbage collection; if this time cannot be met, the JVM automatically adjusts the young generation size to meet it.
      • java -Xmx3550m -Xms3550m -Xmn2g -Xss128k -XX:+UseParallelGC -XX:MaxGCPauseMillis=100 -XX:+UseAdaptiveSizePolicy -XX:+UseAdaptiveSizePolicy: with this option set, the parallel collector automatically selects the young generation size and the corresponding Survivor ratio to meet the minimum response time or collection frequency specified for the system. This option is recommended, and should be kept on, when using the parallel collector.
    2. Response-time-first concurrent collector. As mentioned above, the concurrent collector mainly aims to guarantee the system's response time and to reduce pauses during garbage collection. Suitable for application servers, telecommunications, etc. Typical configuration:
      • java -Xmx3550m -Xms3550m -Xmn2g -Xss128k -XX:ParallelGCThreads=20 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+UseConcMarkSweepGC: sets the old generation to concurrent collection. In testing, after configuring this the -XX:NewRatio=4 setting failed for unknown reasons, so at that point the young generation size is best set with -Xmn. -XX:+UseParNewGC: sets the young generation to parallel collection; it can be used together with CMS. On JDK 5.0 and above, the JVM sets this itself based on the system configuration, so it need not be set explicitly.
      • java -Xmx3550m -Xms3550m -Xmn2g -Xss128k -XX:+UseConcMarkSweepGC -XX:CMSFullGCsBeforeCompaction=5 -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction: since the concurrent collector does not compact the memory space, "fragmentation" appears after a while, making allocation less efficient. This value sets after how many GC runs the memory space is compacted. -XX:+UseCMSCompactAtFullCollection: turns on compaction of the old generation. It may affect performance, but it eliminates fragmentation.
  2. Auxiliary information. The JVM provides many command-line parameters that print information useful for debugging. The main ones are as follows:
    • -XX:+PrintGC Output form:[GC 118250K->113543K(130112K), 0.0094143 secs] [Full GC 121376K->10414K(130112K), 0.0650971 secs]
    • -XX:+PrintGCDetails Output form:[GC [DefNew: 8614K->781K(9088K), 0.0123035 secs] 118250K->113543K(130112K), 0.0124633 secs] [GC [DefNew: 8614K->8614K(9088K), 0.0000665 secs][Tenured: 112761K->10414K(121024K), 0.0433488 secs] 121376K->10414K(130112K), 0.0436268 secs]
    • -XX:+PrintGCTimeStamps: can be combined with the two options above. Output form: 11.851: [GC 98328K->93620K(130112K), 0.0082960 secs]
    • -XX:+PrintGCApplicationConcurrentTime: Print the uninterrupted execution time of the program before each garbage collection. Can be mixed with the above. Output form:Application time: 0.5291524 seconds
    • -XX:+PrintGCApplicationStoppedTime: prints the time the program was paused during garbage collection. Can be mixed with the above. Output form: Total time for which application threads were stopped: 0.0468229 seconds
    • -XX:+PrintHeapAtGC: prints detailed heap information before and after each GC. Output form:
      34.702: [GC {Heap before gc invocations=7:
       def new generation   total 55296K, used 52568K [0x1ebd0000, 0x227d0000, 0x227d0000)
        eden space 49152K,  99% used [0x1ebd0000, 0x21bce430, 0x21bd0000)
        from space 6144K,   55% used [0x221d0000, 0x22527e10, 0x227d0000)
        to   space 6144K,    0% used [0x21bd0000, 0x21bd0000, 0x221d0000)
       tenured generation   total 69632K, used 2696K [0x227d0000, 0x26bd0000, 0x26bd0000)
        the space 69632K,    3% used [0x227d0000, 0x22a720f8, 0x22a72200, 0x26bd0000)
       compacting perm gen  total 8192K, used 2898K [0x26bd0000, 0x273d0000, 0x2abd0000)
        the space 8192K,    35% used [0x26bd0000, 0x26ea4ba8, 0x26ea4c00, 0x273d0000)
        ro space 8192K,     66% used [0x2abd0000, 0x2b12bcc0, 0x2b12be00, 0x2b3d0000)
        rw space 12288K,    46% used [0x2b3d0000, 0x2b972060, 0x2b972200, 0x2bfd0000)
      34.735: [DefNew: 52568K->3433K(55296K), 0.0072126 secs] 55264K->6615K(124928K)
      Heap after gc invocations=8:
       def new generation   total 55296K, used 3433K [0x1ebd0000, 0x227d0000, 0x227d0000)
        eden space 49152K,   0% used [0x1ebd0000, 0x1ebd0000, 0x21bd0000)
        from space 6144K,   55% used [0x21bd0000, 0x21f2a5e8, 0x221d0000)
        to   space 6144K,    0% used [0x221d0000, 0x221d0000, 0x227d0000)
       tenured generation   total 69632K, used 3182K [0x227d0000, 0x26bd0000, 0x26bd0000)
        the space 69632K,    4% used [0x227d0000, 0x22aeb958, 0x22aeba00, 0x26bd0000)
       compacting perm gen  total 8192K, used 2898K [0x26bd0000, 0x273d0000, 0x2abd0000)
        the space 8192K,    35% used [0x26bd0000, 0x26ea4ba8, 0x26ea4c00, 0x273d0000)
        ro space 8192K,     66% used [0x2abd0000, 0x2b12bcc0, 0x2b12be00, 0x2b3d0000)
        rw space 12288K,    46% used [0x2b3d0000, 0x2b972060, 0x2b972200, 0x2bfd0000)
      } , 0.0757599 secs]
    • -Xloggc:filename: Used in conjunction with the above, logs relevant log information to a file for analysis.
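As an illustration of putting this output to use, below is a hypothetical parser for the simple -XX:+PrintGC line format shown above. The regex and class are assumptions for this sketch, not part of any JVM tooling:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: parsing the -XX:+PrintGC line format shown above, e.g.
// "[GC 118250K->113543K(130112K), 0.0094143 secs]".
// The line format comes from the flag's output; this parser class and
// its method names are purely illustrative.
public class GcLineParser {

    private static final Pattern GC_LINE = Pattern.compile(
        "\\[(GC|Full GC) (\\d+)K->(\\d+)K\\((\\d+)K\\), ([\\d.]+) secs\\]");

    // Returns the amount of memory freed by the collection (in K),
    // or -1 if the line does not match the simple PrintGC format.
    static long freedKb(String line) {
        Matcher m = GC_LINE.matcher(line);
        if (!m.find()) return -1;
        return Long.parseLong(m.group(2)) - Long.parseLong(m.group(3));
    }

    public static void main(String[] args) {
        // Minor GC frees 118250K - 113543K = 4707K
        System.out.println(freedKb("[GC 118250K->113543K(130112K), 0.0094143 secs]"));
        // Full GC frees 121376K - 10414K = 110962K
        System.out.println(freedKb("[Full GC 121376K->10414K(130112K), 0.0650971 secs]"));
    }
}
```

Note the parser only handles the flat -XX:+PrintGC lines; the nested -XX:+PrintGCDetails format would need extra patterns.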
  3. Common Configuration Summary
    1. heap setup
      • -Xms:initial heap size
      • -Xmx:maximum heap size
      • -XX:NewSize=n: set the size of the young generation
      • -XX:NewRatio=n: sets the ratio of the young generation to the old generation. For example, if set to 3, the ratio of young to old is 1:3, and the young generation accounts for 1/4 of the combined young and old generations
      • -XX:SurvivorRatio=n: the ratio of the Eden region to one Survivor region in the young generation. Note that there are two Survivor regions. E.g. 3 means Eden : one Survivor = 3:1 (Eden : both Survivors = 3:2), and one Survivor region accounts for 1/5 of the entire young generation
      • -XX:MaxPermSize=n: set persistent generation size
    2. Collector Settings
      • -XX:+UseSerialGC: set serial collector
      • -XX:+UseParallelGC: set parallel collector
      • -XX:+UseParallelOldGC: set parallel old generation collector
      • -XX:+UseConcMarkSweepGC: set concurrent collector
    3. Garbage collection statistics
      • -XX:+PrintGC
      • -XX:+PrintGCDetails
      • -XX:+PrintGCTimeStamps
      • -Xloggc:filename
    4. Parallel collector settings
      • -XX:ParallelGCThreads=n: sets the number of threads the parallel collector uses when collecting, i.e. the number of parallel collection threads; usually matched to the number of CPUs
      • -XX:MaxGCPauseMillis=n: sets the maximum pause time for parallel collection
      • -XX:GCTimeRatio=n: sets the target share of garbage collection time as a fraction of program run time; the formula is 1/(1+n)
    5. Concurrent Collector Settings
      • -XX:+CMSIncrementalMode:Set to incremental mode. Suitable for single CPU cases.
      • -XX:ParallelGCThreads=n: sets the number of parallel threads used when the concurrent collector's young generation collection runs in parallel
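The 1/(1+n) formula for -XX:GCTimeRatio above can be sanity-checked with a tiny computation; the class name is illustrative:

```java
// Sketch: -XX:GCTimeRatio=n sets a goal of spending at most 1/(1+n)
// of total run time in garbage collection, per the formula above.
public class GcTimeRatio {

    static double gcFraction(int n) {
        return 1.0 / (1 + n);
    }

    public static void main(String[] args) {
        // HotSpot's default of 99 corresponds to a 1% GC-time goal.
        System.out.println("GCTimeRatio=99 -> " + gcFraction(99));
        System.out.println("GCTimeRatio=19 -> " + gcFraction(19));
    }
}
```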

Tuning Summary

  1. Young Generation Size Choice
    • Response-time-priority applications: set it as large as possible, up to just below the system's minimum response-time requirement (chosen as appropriate). In such cases young generation collections are also least frequent, and fewer objects reach the old generation.
    • Throughput-first applications: set it as large as possible, potentially at the gigabyte level. Because there is no response-time requirement, garbage collection can proceed in parallel; this generally suits applications with more than 8 CPUs.
  2. Older generation size options
    • Response-time-first applications: the old generation uses the concurrent collector, so its size needs careful setting, generally taking into account parameters such as the concurrent session rate and session duration. If the heap is set small, it can cause memory fragmentation, a high collection frequency, and application pauses from falling back to traditional mark-sweep collection; if the heap is large, collection takes longer. The optimal size generally needs to be derived from the following data:
      • Concurrent garbage collection of information
      • Number of concurrent collections of persistent generations
      • Traditional GC information
      • Proportion of time spent on recycling for younger and older generations

      Reducing the time spent collecting the young and old generations will generally improve application efficiency

    • Throughput-first applications: generally throughput-first applications have a large young generation and a smaller old generation. The reason is that this reclaims most of the short-term objects and reduces the medium-term ones as much as possible, while the older generations do their best to store the long-term surviving objects.
  3. Fragmentation problems caused by smaller heaps. Because the old generation's concurrent collector uses a mark-sweep algorithm, it does not compact the heap. When collecting, it merges adjacent free spaces so that they can be allocated to larger objects. However, when the heap space is small, "fragmentation" appears after a while, and if the concurrent collector cannot find enough space, it stops and falls back to the traditional mark-sweep-compact approach. If "fragmentation" occurs, the following configuration may be needed:
    • -XX:+UseCMSCompactAtFullCollection: turns on compaction of the old generation when using the concurrent collector.
    • -XX:CMSFullGCsBeforeCompaction=0: with the option above turned on, sets how many Full GCs occur before the old generation is compacted.

Bottlenecks in garbage collection

   The traditional generational garbage collection approach has, to an extent, minimized the burden of garbage collection on the application and pushed application throughput toward a limit. But the one problem it cannot solve is the application pause caused by Full GC. In some application scenarios with high real-time requirements, the request pile-ups and request failures caused by GC pauses are unacceptable. Such applications may require requests to return within a few hundred or even a few tens of milliseconds; if generational garbage collection is to achieve this metric, it can only limit the maximum heap to a relatively small size, but that in turn limits the application's own processing power, which is equally unacceptable.

   The generational garbage collection approach does also provide concurrent collectors considering real-time requirements and supports maximum pause time settings, but it is also not very effective due to the limited memory partitioning model of generational garbage collection.

   In order to meet real-time requirements (in fact, the Java language was originally designed for embedded systems), a new garbage collection method was called for, one that supports both short pauses and large memory allocations. It can solve the problems of the traditional generational approach well.

Evolution of incremental collection

   The incremental collection approach can, in theory, solve the problems posed by the traditional generational approach. Incremental collection divides the heap space into a series of memory blocks. In use, only part of them is used at first (not all); during garbage collection, the surviving objects from the in-use portion are copied into the unused portion, achieving a collect-while-using effect and avoiding the traditional pattern of using up the whole space and then pausing to collect.

   Of course, the traditional generational approach also provides concurrent collection, but it has the fatal flaw of treating the whole heap as one memory block: on the one hand this causes fragmentation (no compaction), and on the other hand each of its collections targets the whole heap and cannot be selective, so it remains weak at controlling pause time. The incremental approach, by chunking the memory space, solves exactly these problems.

Garbage First (G1)

   The main reference for this section is the G1 algorithm paper; this section is essentially a paraphrase of it, with nothing added.


   In terms of design goals the G1 is entirely intended for large scale applications.

  • Supports very large heaps with high throughput
  • Supports multiple CPUs and garbage collection threads
  • Uses parallel collection while the main thread is suspended
  • Uses concurrent collection while the main thread is running
  • Real-time goal: configurable to spend at most M milliseconds on garbage collection within any N milliseconds

   Of course, to meet real-time requirements, G1 gives up some performance relative to traditional generational collection algorithms.

Algorithm Detail

Figure 1 G1 collector

  G1 can be said to take the best of all worlds in pursuit of a kind of perfection. It adopts the advantage of incremental collection, dividing the entire heap into equal-sized regions, with memory reclaimed and divided by region. It also borrows from CMS, dividing garbage collection into several phases and spreading a single collection cycle out over time. Moreover, G1 agrees with the idea of generational collection, that different objects have different life cycles and can be collected differently, so it also supports a generational mode. To make collection time predictable, G1 sorts regions by the size of the live objects in them after scanning, and collects the regions with few live objects first in order to reclaim space quickly (fewer live objects to copy). Because those regions are mostly garbage, this approach is called Garbage First (G1): garbage-first collection.
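The region-ordering idea described above can be sketched as follows; Region, the names and the byte counts are illustrative stand-ins, not HotSpot structures:

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch of the "garbage first" selection idea: regions with the least
// live data are collected first, because they free the most space for
// the least copying. Region is an illustrative stand-in for a heap
// region, not a real HotSpot data structure.
public class GarbageFirstOrder {

    static class Region {
        final String name;
        final long liveBytes; // live data found by marking
        Region(String name, long liveBytes) {
            this.name = name;
            this.liveBytes = liveBytes;
        }
    }

    // Sort regions by ascending live data: "garbage first".
    static Region[] collectionOrder(Region[] regions) {
        Region[] sorted = regions.clone();
        Arrays.sort(sorted, Comparator.comparingLong((Region r) -> r.liveBytes));
        return sorted;
    }

    public static void main(String[] args) {
        Region[] heap = {
            new Region("r0", 900_000),
            new Region("r1", 10_000),  // mostly garbage: collected first
            new Region("r2", 400_000)
        };
        for (Region r : collectionOrder(heap)) System.out.println(r.name);
    }
}
```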

   Collection steps:

Initial Marking

   G1 stores two marking bitmaps for each region, a previous marking bitmap and a next marking bitmap, which contains a bit of address information to point to the start of the object.

   Before Initial Marking starts, the next marking bitmap is first cleared concurrently; then all application threads are stopped and the objects directly reachable from the roots in each region are identified; the region's top value is stored into next top at mark start (TAMS); after that all application threads are resumed.

   The conditions that trigger this step are:

  G1 defines a threshold h as a percentage of the JVM heap size, and an H whose value is (1-h) × heap size. H is currently fixed, although G1 may later make it dynamic based on how the JVM is running. In the generational mode, G1 also defines a u and a soft limit, where soft limit = H - u × heap size. When the memory used in the heap exceeds the soft limit, this step is performed as soon as possible after a clean up, within the GC pause time the application allows. In the pure (non-generational) mode, G1 forms marking and clean up into a ring so that clean up can fully use the marking information. When clean up starts reclaiming, it first reclaims the regions that yield the most memory; when, after many clean ups, only regions yielding little space remain, G1 re-initializes a new marking/clean-up ring.

Concurrent Marking

   The objects found by the preceding Initial Marking are traversed to determine the liveness of the objects below them. Reference changes made concurrently by application threads during this period are recorded in remembered set logs; newly created objects are placed in an address interval above the top value, such new objects are treated as live by default, and the top value is updated.

Final Marking Pause (FMP)

   While an application thread's remembered set log is not yet full, it is not moved into the filled RS buffers; in that case, the card changes recorded in these remembered set logs still need to be applied. So this step is needed: all it does is process the remembered set logs still held by application threads and update the remembered sets accordingly. This step requires suspending the application and runs in parallel.

Live Data Counting and Cleanup

   It is worth noting that in G1, Cleanup does not necessarily run as soon as the Final Marking Pause has run. Since this step requires suspending the application, G1 must plan when to execute Cleanup according to the maximum GC pause time specified by the user, in order to meet its quasi-real-time goal. There are also several other situations that trigger this step:

G1 collects by copying and must ensure enough "to space" for each collection, so G1 performs a Cleanup when the used memory reaches H. As for G1's generational modes, full-young and partially-young, there are further situations that trigger Cleanup. In full-young mode, G1 estimates, from the pause time the application can accept and the time needed to collect young regions, a number of young regions; when the JVM has allocated objects into that many young regions, Cleanup is executed. In partially-young mode, G1 tries to execute Cleanup as often as possible within the acceptable pause time, and to extend each Cleanup to the maximum number of non-young regions.


   In the future, JVM tuning may well need to be done against the G1 algorithm.

JVM tuning tools

   The main ones are JConsole, JProfiler, and VisualVM.

  JConsole: ships with the JDK. Simple functionality, but usable even while the system is under some load. It traces the garbage collection algorithm in great detail.

   JProfiler: commercial software requiring payment. Powerful features.

   VisualVM: comes with the JDK, powerful and similar to JProfiler. Recommended.

How to tune

   Observing memory release, checking collection classes, viewing object trees.

   All of the tuning tools above offer powerful features, but in general they fall into the following categories of functionality:

Heap Information View

Figure 2 Viewing Heap Information

   You can view heap space size allocation (young generation, old generation, persistent generation allocation).

   Provides the ability to trigger garbage collection on demand.

   Garbage collection monitoring (tracking collection over time).

Figure 3 Class and object information within the heap

   View class and object information in the heap: counts, types, etc.

Figure 4 Object reference situation

   Object reference view. With these heap-information views, we can generally solve the following problems without much trouble:

   -- Whether the size division between older and younger generations is reasonable

   -- memory leak

   -- Is the garbage collection algorithm set up properly

Thread Monitoring

Figure 5 Thread monitoring information

   Thread information monitoring: number of system threads.

   Thread state monitoring: what state the individual threads are in.

Figure 6 Thread dump information

Dump thread details: view the thread's internal operation. Deadlock check.

Hot Spot Analysis

Figure 7 Hot Spot Analysis

   CPU Hotspot: check which methods of the system are taking up a lot of CPU time.

   Memory hotspots: check which objects have the largest number in the system (live objects and destroyed objects are counted together for a certain time).

   These two capabilities are very helpful for system optimization. Instead of aimlessly optimizing all the code, we can find bottlenecks in a targeted way and optimize the system based on the hot spots we find.


Snapshot

   A snapshot is a capture of the system's state at a certain moment of its run. When tuning, it is impossible to track all system changes by eye; with the snapshot feature, we can compare the objects (or classes, threads, etc.) of the system at two different running moments in order to find problems quickly.

   For example, to check whether, after garbage collection, any objects that should have been collected were missed, I can take heap snapshots before and after the garbage collection and then compare the object populations of the two snapshots.

Memory leak check

   Memory leaks are a relatively common problem, and the approaches to solving them are relatively standard, so they deserve focus here; thread and hot-spot issues, by contrast, must be analyzed case by case.

   Memory leaks can generally be understood as system resources (all aspects of resources, heap, stack, threads, etc.) that are used incorrectly, resulting in used resources that cannot be reclaimed (or are not reclaimed) and thus new resource allocation requests cannot be completed, causing system errors.

   Memory leaks are quite harmful to a system because they can directly cause the system to crash.

   A distinction needs to be made, there is a difference between a memory leak and a system overload, although the end result that may result is the same. A memory leak is when a used up resource is not reclaimed causing an error, while a system overload is when the system really doesn't have that many resources left to allocate (other resources are being used).

Old generation heap space exhausted

   Exception: java.lang.OutOfMemoryError: Java heap space


Figure 8 Heap space is slowly running out

   This is the most typical form of memory leak, which simply means that all heap space is filled up with garbage objects that cannot be reclaimed and the virtual machine cannot allocate new space.

   As shown above, this is a very typical garbage collection graph for a memory leak. All the peaks are garbage collection points, and all the troughs show the memory remaining after a garbage collection. Connecting the trough points reveals a line rising from low to high, which indicates that the system's heap space is being occupied continuously over time and will eventually fill the entire heap. It can therefore be tentatively assumed that there may be a memory leak inside the system. (The graph above is for illustration only; in real situations data collection takes much longer, e.g. several hours or days.)


   This kind of problem is also relatively easy to solve: generally by comparing the situation before and after garbage collection, and by analyzing object references (commonly collection-object references), the leak point can basically be found.
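For reference, the classic shape of such a leak can be sketched as below: a long-lived static collection keeps every added object reachable, so each GC trough sits a little higher than the last. All names and sizes here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the classic leak pattern: objects parked in a long-lived
// (static) collection stay reachable forever, so the heap's post-GC
// "valleys" rise over time. Names and sizes are illustrative.
public class LeakSketch {

    // The leak: entries are added per request but never removed.
    static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        CACHE.add(new byte[1024]); // 1 KB retained per call, forever
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) handleRequest();
        // Retained memory grows linearly with requests served.
        System.out.println("retained entries: " + CACHE.size());
    }
}
```

In a heap snapshot diff, such a leak shows up as a collection (and its element type) whose instance count only ever grows between two snapshots.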

Persistent generation is occupied

   Exception: java.lang.OutOfMemoryError: PermGen space


  Perm space is exhausted: the exception is triggered when storage cannot be allocated for a new class. This exception used to be rare, but it is more common today, when Java reflection is heavily used. The main reason is that large numbers of dynamically generated reflection classes are continuously loaded, eventually filling the Perm area.

   What is scarier is that different classLoaders will each load the same class: even though it is the same class, with N classLoaders it will be loaded N times. Thus, in some cases, the problem is essentially unsolvable. Of course, having a huge number of classLoaders and a huge number of reflected classes at the same time is rare.


   Solutions:

   1. Increase the persistent generation size via -XX:MaxPermSize.

   2. Switch JVMs, e.g. to JRockit.

Stack overflow


   Description: not much to say here; it is usually caused by recursion that never returns, or by cyclic calls.

Thread stack full

   Exception: Fatal: Stack size too small. There is a limit to the stack space a thread can have; from JDK 5.0 onward this value is 1M. Data associated with the thread is stored in it, and when the thread's stack is full, the above exception appears.

   Solution: increase the thread stack size, e.g. -Xss2m. But this configuration does not solve the underlying problem; also check the code for the parts causing the leak.
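A minimal sketch of the failure mode above: unbounded recursion exhausts the per-thread stack and throws StackOverflowError. The class is illustrative; the reachable depth scales with -Xss:

```java
// Sketch: unbounded recursion exhausts the per-thread stack (-Xss)
// and throws StackOverflowError. Catching the error here is only for
// demonstration, to measure how deep the recursion got.
public class StackDepth {

    static int maxDepth(int depth) {
        try {
            return maxDepth(depth + 1); // never returns normally
        } catch (StackOverflowError e) {
            return depth; // depth reached when the stack ran out
        }
    }

    public static void main(String[] args) {
        // With a bigger stack (-Xss2m) the same recursion goes
        // deeper before overflowing; with -Xss128k, less deep.
        System.out.println("max recursion depth: " + maxDepth(0));
    }
}
```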

System memory is full

   Exception: java.lang.OutOfMemoryError: unable to create new native thread


   This exception is caused by the operating system not having enough resources to create the thread. When the system creates a thread, besides allocating memory in the Java heap, the operating system itself must allocate resources to create it. Therefore, once the number of threads grows large enough, there may still be room in the heap, but the operating system cannot allocate resources anymore, and this exception appears.

   The more memory allocated to the Java virtual machine, the fewer resources remain for the system; so with fixed system memory, the more memory given to the JVM, the fewer threads the system can create in total: an inverse relationship. Also, the space allocated to each thread can be reduced by lowering -Xss, which increases the total number of threads the system can produce.


   1. Redesign the system to reduce the number of threads.

   2. If the number of threads cannot be reduced, reduce the per-thread stack size with -Xss so that more threads can be created.
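The inverse relationship above can be made concrete with a rough capacity estimate. The 2 GB budget below is purely illustrative, and real operating systems impose additional per-process limits:

```java
// Sketch: with a fixed budget of native memory left outside the heap,
// rough thread capacity ~ budget / stack size. The budget figure is an
// illustrative assumption, not an OS guarantee.
public class ThreadCapacity {

    static long maxThreads(long nativeMemoryBytes, long stackBytes) {
        return nativeMemoryBytes / stackBytes;
    }

    public static void main(String[] args) {
        long budget = 2L * 1024 * 1024 * 1024; // assume 2 GB outside the heap
        System.out.println("at -Xss1m:   ~" + maxThreads(budget, 1024 * 1024) + " threads");
        System.out.println("at -Xss256k: ~" + maxThreads(budget, 256 * 1024) + " threads");
    }
}
```

The estimate shows why shrinking -Xss (or the heap) raises the thread ceiling, even though it fixes nothing about the design.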

The paradox of garbage collection

   As the saying goes, "what goes around comes around". Java's garbage collection does bring many benefits to development, but in some high-performance, high-concurrency situations it becomes a bottleneck constraining Java applications. The JDK's current garbage collection algorithms have never solved the pause during collection, and that pause seriously affects the program's response time, causing congestion or pile-ups. This was a major reason for the later addition of the G1 algorithm to the JDK.

   Of course, the above addresses the problems posed by garbage collection from a technical perspective, but from the system design side we need to ask.

1. Do we need to allocate such a large amount of memory to the application?

2. Can we design our systems to use memory efficiently rather than by expanding it?

What do we have in our memory?

   What needs to go in memory? Personally, I think memory should hold the things your application needs to use again in the near future. Think about it: if you will not use something in the future, why bother keeping it in memory? Wouldn't files or a database be better? Such things typically include:

1. Business data used while the system runs, such as sessions in web applications or sessions for instant messaging. This data generally needs to exist for the duration of a user access cycle or usage process.

2. Caches. Caches cover more ground: anything that needs fast access can be put here. In fact, the business data above can be understood as a kind of cache.

3. Threads.

   So, can we assume that if we keep business data and caches out of the JVM, or hold them separately, Java applications will need far less memory, with a correspondingly shorter garbage collection time?

   I think it's possible.


Databases, file systems

   Putting all the data into a database or file system is the simplest approach. With it, the memory of a Java application is essentially just the memory needed to handle one peak of concurrent requests. Data is fetched from the database or file system on each request; put another way, after one business request completes, all of its objects are ready for collection.

   This is one of the most memory-efficient approaches, but from the application's perspective it is inefficient.
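The per-request pattern above can be sketched as a stateless handler: every request loads what it needs from an external store and keeps no reference afterwards, so everything it allocated becomes collectible as soon as the method returns. The `store` map here is a hypothetical stand-in for a database or file system:

```java
import java.util.HashMap;
import java.util.Map;

public class StatelessHandler {
    // Hypothetical external store standing in for a database/file system.
    static final Map<String, String> store = new HashMap<>();

    // Each request fetches its data fresh and retains nothing: once this
    // method returns, all objects allocated here are ready for collection.
    static String handle(String key) {
        String data = store.get(key);   // fetched per request, never cached
        return data == null ? "miss" : "hit:" + data;
    }

    public static void main(String[] args) {
        store.put("user42", "profile");
        System.out.println(StatelessHandler.handle("user42")); // hit:profile
        System.out.println(StatelessHandler.handle("nobody")); // miss
    }
}
```

The inefficiency the text mentions is visible here too: the same data is re-fetched on every call instead of being reused from memory.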

Memory-Hard Disk Mapping

   The inefficiency above comes from our use of the file system. It would be much faster if, instead of reading and writing the hard drive, we were reading and writing memory.

   Databases and file systems are both fully persistent, but when we don't need that much persistence we can work around it: use memory as if it were a hard drive.

  Memory-disk mapping is simple and powerful: it gives us a cache while having no impact on the Java application's own memory usage. The Java application is still just a Java application; it only knows it is reading and writing files, when in fact it is reading and writing memory.

   This approach gives the benefits of both the Java application and the cache. The widespread use of memcached reflects exactly this category.

Deploying multiple JVMs on the same machine

   This is also a good approach, divided into vertical and horizontal splitting. Vertical splitting means dividing a Java application into different modules, each running in its own Java process. Horizontal splitting means deploying multiple JVMs for the same functional application.

   By deploying multiple JVMs, the garbage collection of each individual JVM can be kept within tolerable limits. But this amounts to distributed processing, and the extra complexity it brings needs to be evaluated. There is also the question of distributed support for such JVM clusters, which doesn't come for free :)

Program-controlled object life cycle

   This approach is the ideal one; it is not available in current VMs and is purely hypothetical. The idea is to let the program declare which objects can be skipped during garbage collection, reducing the time the collection thread spends traversing and marking.

   It is equivalent to telling the VM at programming time that certain objects may be collected after a certain point, or are explicitly marked by code as ready for collection (similar to C and C++); until then, even if the collector visits such an object, it must treat it as still referenced.

   If the JVM implemented this, it would, in my view, be a leap forward for Java: the advantages of garbage collection combined with the memory controllability of C and C++.

Thread allocation

   Java's blocking threading model can largely be discarded now that mature NIO frameworks exist. Blocking IO makes the number of threads grow linearly with load, whereas NIO can hold the thread count constant. For server-side applications, therefore, NIO is still the only real option. Will the AIO arriving in JDK 7 change anything? We'll see.
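The constant-thread property comes from multiplexing: one `Selector` watches many channels, and a single thread services whichever are ready. The sketch below demonstrates the mechanics with a `Pipe` instead of real sockets, so it runs self-contained; in a server the same pattern is applied to `ServerSocketChannel` and `SocketChannel`:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class NioSketch {
    static String demo() throws Exception {
        Pipe pipe = Pipe.open();
        Selector selector = Selector.open();

        // Non-blocking channels register interest with the selector;
        // one thread can watch any number of them.
        pipe.source().configureBlocking(false);
        pipe.source().register(selector, SelectionKey.OP_READ);

        pipe.sink().write(ByteBuffer.wrap("hello".getBytes()));

        selector.select();               // wakes when a channel is readable
        ByteBuffer buf = ByteBuffer.allocate(16);
        pipe.source().read(buf);
        buf.flip();
        byte[] out = new byte[buf.remaining()];
        buf.get(out);

        selector.close();
        pipe.source().close();
        pipe.sink().close();
        return new String(out);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints "hello"
    }
}
```

With blocking IO the same work would need one parked thread per connection; here the thread count stays constant no matter how many channels register.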

Other JDKs

  This article has been about Sun's JDK; BEA's JRockit and IBM's JDK are also in common use. JRockit is considerably better than Sun's in IO performance, though Sun's JDK 6.0 has since improved a great deal. JRockit also has an advantage in garbage collection: its ability to set a maximum garbage collection pause time is appealing. However, Sun's G1 implementation promises a leap forward in this regard.
