Heap space is slowly running out

(Figure 8: a graph of heap usage over time during a memory leak; image not reproduced)
This is the most typical form of memory leak: heap space gradually fills with garbage objects that cannot be reclaimed, until the virtual machine can no longer allocate new space.
As shown above, this is a very typical garbage-collection graph for a memory leak. Each peak is a garbage-collection point, and each trough shows the memory still in use after one collection. Connecting the trough points reveals a line climbing from low to high, which means the system's heap is increasingly occupied over time and will eventually fill up entirely. We can therefore tentatively suspect a memory leak inside the system. (The graph above is for illustration only; in real situations data collection takes much longer, e.g. several hours or days.)
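A minimal sketch of the kind of code that produces this pattern (the class and method names are illustrative, not from the original text): objects keep being added to a collection reachable from a static field, so no collection cycle can ever reclaim them.

```java
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // Reachable from a static field, so its contents can never be collected.
    private static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        byte[] buffer = new byte[1024];
        // The "forgotten" reference: added on every request, never removed.
        CACHE.add(buffer);
    }

    static int leakedCount() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        // Each call leaks 1 KB; over hours or days the heap fills up.
        for (int i = 0; i < 1000; i++) {
            handleRequest();
        }
        System.out.println("still reachable: " + leakedCount() + " buffers");
    }
}
```

In a heap-dump comparison before and after GC, `CACHE` is exactly the kind of growing collection reference the text suggests looking for.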
This kind of problem is also relatively easy to solve: by comparing the situation before and after garbage collection, and by analyzing object references (especially references held by collection objects), the leak can usually be found.
The permanent generation is full
Exception: java.lang.OutOfMemoryError: PermGen space
Perm space is full. This exception is thrown when the JVM cannot allocate storage for a new class. It used to be rare, but with today's heavy use of reflection in Java it has become common. The main cause is that large numbers of classes generated dynamically through reflection keep being loaded, eventually filling up the Perm area.
What is even scarier is that different ClassLoaders will each load their own copy of a class even when it is logically the same class: with N ClassLoaders, the class is loaded N times. In some cases the problem is therefore considered essentially unsolvable. Of course, situations with both a large number of ClassLoaders and a large number of reflected classes are not actually that common.
Solutions:
1. Increase the size of the permanent generation, e.g. with -XX:MaxPermSize.
2. Switch to a different JDK, such as JRockit.
Stack overflow
Exception: java.lang.StackOverflowError
Description: not much to say here; this is usually caused by recursion that never returns, or by a cycle of method calls.
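A minimal, illustrative example of the recursion case (names are hypothetical): a method that calls itself with no base case exhausts the thread's stack, and the JVM raises StackOverflowError.

```java
public class StackOverflowDemo {
    static long depth = 0;

    // Recursion with no base case: every call adds a stack frame.
    static void recurse() {
        depth++;
        recurse();
    }

    static long measureMaxDepth() {
        depth = 0;
        try {
            recurse();
        } catch (StackOverflowError expected) {
            // The thread's stack is exhausted; the JVM unwinds with this Error.
        }
        return depth;
    }

    public static void main(String[] args) {
        System.out.println("stack overflowed after " + measureMaxDepth() + " frames");
    }
}
```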
Thread stack full
Exception: Fatal: Stack size too small
Description: Java limits how much stack space each thread may have. Since JDK 5.0 the default is 1 MB. Data associated with a thread is stored on its stack, and when that space is full the above exception appears.
Solution: increase the thread stack size, e.g. -Xss2m. This setting does not address the root cause, though; you should still examine the code for the parts that cause the leak.
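As an illustrative sketch (the class and thread names are made up for the example): besides the JVM-wide -Xss flag, the standard java.lang.Thread constructor also accepts a per-thread stack-size value, useful when only a few threads need deep stacks. Note the value is only a hint, and some platforms ignore it.

```java
public class StackSizeDemo {
    public static void main(String[] args) throws InterruptedException {
        // The JVM-wide default can be raised on the command line:
        //   java -Xss2m StackSizeDemo
        // A single thread can also request its own stack size; the fourth
        // constructor argument is a size hint in bytes.
        Runnable deepWork = () ->
                System.out.println("running in " + Thread.currentThread().getName());
        Thread deepStack = new Thread(null, deepWork, "deep-stack-thread",
                4L * 1024 * 1024);
        deepStack.start();
        deepStack.join();
    }
}
```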
System memory is full
Exception: java.lang.OutOfMemoryError: unable to create new native thread
This exception is thrown when the operating system does not have enough resources to create the thread. When a thread is created, besides the memory allocated in the Java heap, the operating system itself must allocate resources for it. So once the number of threads grows large enough, there may still be room in the heap, but the OS can no longer allocate the resources, and this exception appears.
The more memory allocated to the Java virtual machine, the fewer resources are left for the system. So with a fixed amount of system memory, the more memory the JVM gets, the fewer threads the system can create in total: the two are inversely related. The per-thread stack space can also be reduced with -Xss, which increases the total number of threads the system can create.
1. Redesign the system to reduce the number of threads.
2. If the number of threads cannot be reduced, shrink the per-thread stack size with -Xss so that more threads can be created.
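One common way to apply solution 1 is to run work on a fixed-size pool instead of spawning a thread per task, which caps native-thread usage regardless of load. A minimal sketch with the standard java.util.concurrent API (class name is illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedThreadsDemo {
    public static void main(String[] args) throws InterruptedException {
        // At most 4 native threads exist, no matter how many tasks arrive.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger completed = new AtomicInteger();

        for (int i = 0; i < 100; i++) {
            pool.submit(completed::incrementAndGet);
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(completed.get() + " tasks ran on at most 4 threads");
    }
}
```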
The paradox of garbage collection
As the saying goes, every coin has two sides. Java's garbage collection brings many benefits for development, but in some high-performance, high-concurrency situations it becomes a bottleneck that constrains Java applications. The JDK's current garbage-collection algorithms have never solved the problem of pausing during collection, and those pauses seriously affect a program's response time, causing congestion or backlog. This was a major reason for later adding the G1 collector to the JDK.
Of course, the above addresses the problems posed by garbage collection from a technical perspective; from the system-design side we also need to ask:
1. Do we need to allocate such a large amount of memory space to the application?
2. Can we design our systems by using memory efficiently rather than by expanding it?
What should we keep in memory?
What needs to go in memory? Personally, I think what belongs in memory is whatever your application will need to use again in the near future. Think about it: if you are not going to use something again soon, why bother keeping it in memory? Wouldn't a file or a database do just as well? What belongs there typically includes:
1. Business data produced while the system runs, for example sessions in web applications or sessions in instant messaging. This data generally needs to exist for the duration of a user's access cycle or usage session.
2. Caches. Caches cover a lot of ground: anything you need fast access to can be put here. In fact, the business data above can be understood as a kind of cache.
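A cache that grows without bound is exactly the "heap slowly running out" leak from earlier, so in-JVM caches should be bounded. One idiomatic sketch uses the standard LinkedHashMap eviction hook (the class name is illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        // accessOrder = true: iteration runs from least- to most-recently used.
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used entry once the bound is exceeded,
        // so the cache's heap footprint stays fixed instead of leaking.
        return size() > capacity;
    }
}
```

For example, an `LruCache<>(2)` holding entries a, b will silently drop a when c is added.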
So can we assume that if we do not put business data and caches in the JVM, or keep them separate from it, Java applications will need far less memory, with a corresponding reduction in garbage-collection time?
I think it's possible.
Databases, file systems
Putting all data into a database or file system is one of the simplest approaches. With it, the memory a Java application needs is essentially just the memory required to handle peak concurrent requests: data is fetched from the database or file system on each request, and once a business request has been served, all of its objects are ready for collection.
This is one of the most memory-efficient approaches, but from the application's point of view it performs poorly.
Memory-Hard Disk Mapping
The problem above comes from the inefficiency of going through the file system. It would be much faster if, instead of reading and writing the hard drive, we were reading and writing memory.
A database and a file system are both fully persistent; when we do not need that much persistence, we can use a workaround: treat memory as a hard drive.
Memory-to-disk mapping is elegant and powerful: we get the benefits of a cache while leaving the Java application's own memory usage untouched. The Java application remains an ordinary Java application; as far as it knows it is still reading and writing files, when in fact it is accessing memory.
This approach gives us the benefits of both a normal Java application and a cache. The widespread use of memcached represents exactly this category.
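Within Java itself, a related technique (an analogue, not the RAM-disk setup described above) is a memory-mapped file: the application keeps using the file API, but reads and writes go through memory pages managed by the OS, outside the Java heap. A minimal sketch (file name is arbitrary):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapDemo {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("mmap-demo", ".bin");
        try (FileChannel channel = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // The mapped region lives outside the Java heap, so using it
            // adds no garbage-collection pressure.
            MappedByteBuffer buf =
                    channel.map(FileChannel.MapMode.READ_WRITE, 0, 1024);
            buf.put(0, (byte) 42);           // looks like a memory write...
            System.out.println(buf.get(0));  // ...but is backed by the file
        }
    }
}
```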
Deploying multiple JVMs on the same machine
This is also a good approach; it can be divided into vertical and horizontal splitting. Vertical splitting means dividing a Java application into modules, each running in a separate Java process. Horizontal splitting means deploying multiple JVMs for the same functional application.
By deploying multiple JVMs, the pause of each JVM's garbage collection can be kept within tolerable limits. But this amounts to distributed processing, and the additional complexity it brings needs to be weighed. Distributed support for such multi-JVM setups also has to be considered, and it does not come for free :)
Program-controlled object life cycle
This approach would be ideal, but it is not available in current VMs and is purely hypothetical: imagine being able to specify programmatically which objects the garbage collector may skip, reducing the time the collection thread spends traversing and marking.
It amounts to telling the VM, at programming time, that certain objects can be collected after a given time, or once the code marks them as ready for collection (similar to C and C++); until then, even if the collector visits them, it has no effect, since they must still be treated as referenced.
If the JVM implemented this, it would, in my personal view, be a leap forward for Java: the advantages of garbage collection combined with the memory control of C and C++.
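While no such facility exists, the closest hint mechanism today's JVM does offer is the java.lang.ref reference types, which let code tell the collector that an object may be reclaimed once no strong references remain. A small illustrative sketch (names are made up; System.gc is only a request, not a guarantee):

```java
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) {
        Object payload = new byte[1024];
        WeakReference<Object> ref = new WeakReference<>(payload);

        // While a strong reference exists, the referent survives collection.
        System.out.println("before: alive = " + (ref.get() != null));

        payload = null;  // drop the strong reference: the object is now
        System.gc();     // eligible for collection (System.gc is a hint only)

        System.out.println("after GC: alive = " + (ref.get() != null));
    }
}
```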
Java's blocking threading model can largely be abandoned now that mature NIO frameworks are available. Blocking IO makes the number of threads grow linearly with the number of connections, whereas NIO can keep it constant. For server-side applications, therefore, NIO is still the way to go. Will the AIO coming in JDK 7 change things further? We shall see.
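A minimal sketch of the constant-thread NIO model (class name and port choice are illustrative): one Selector thread multiplexes accept and read events for every connection, instead of one blocked thread per socket.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NioSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(0));  // any free port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        System.out.println("listening on port " + server.socket().getLocalPort());

        // Event-loop sketch (a real server would loop until shutdown):
        if (selector.selectNow() > 0) {
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    // Accept without dedicating a thread to the new client.
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                }
            }
            selector.selectedKeys().clear();
        }

        server.close();
        selector.close();
    }
}
```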
This article has been entirely about Sun's JDK; JRockit and IBM's JDK are also in common use today. Of these, JRockit is considerably faster than Sun's JDK at IO, although Sun's JDK 6.0 has improved a great deal. JRockit also has an advantage in garbage collection: its ability to set a maximum garbage-collection pause time is appealing. However, Sun's G1, once implemented, will be a quantum leap forward in this regard.