Linux Operating System
The operating system discussed here is specifically Linux/Unix; all of the commands below apply to this family of systems and will most likely not work elsewhere.
free
$ free -m
total used free shared buffers cached
Mem: 7869 7499 369 0 237 5855
-/+ buffers/cache: 1407 6462
Swap: 0 0 0
The free command shows memory usage. The two most important values to watch here are:
Item | Description |
---|---|
-/+ buffers/cache [free] | Free memory after excluding buffers and cache |
Swap [used] | Space used on the swap partition; once swap comes into use, system performance hits an inflection point: some memory accesses turn into disk accesses, load becomes very high, and the system responds slowly or not at all |
- buffers – block device I/O cache
- cached – virtual page cache
uptime
$ uptime
09:56:37 up 355 days, 6:48, 1 user, load average: 0.00, 0.01, 0.05
uptime is mainly used to check the system load; the three load averages are over 1 min, 5 min, and 15 min. Related commands include w and top.
top
top - 12:04:01 up 355 days, 8:56, 1 user, load average: 0.00, 0.01, 0.05
Tasks: 135 total, 1 running, 134 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.8%us, 0.2%sy, 0.0%ni, 98.5%id, 0.3%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8058676k total, 7690196k used, 368480k free, 243244k buffers
Swap: 0k total, 0k used, 0k free, 5996240k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
27216 haitao.j 20 0 3118m 17m 9112 S 29.7 0.2 0:00.15 java
1 root 20 0 19272 1160 872 S 0.0 0.0 0:48.81 init
2 root 20 0 0 0 0 S 0.0 0.0 0:07.60 kthreadd
The top command gives much more detailed information. In the example, the top five lines are:
- Same as uptime: how long the system has been up, number of logged-in users, load averages
- Task counts: total, running, sleeping, stopped, zombie
- CPU usage, e.g. %id shows how idle the CPU is
- Memory usage
- Swap usage, the same as what the free command shows; any swap usage means the system already has a serious memory problem
Interactive command reference:
Command | Description |
---|---|
Interactive commands: | |
1 | Toggle between aggregated and per-CPU display |
u … <user name> | Show only tasks of the given user |
k … <PID> | Kill the given task |
Sort commands: | |
M | %MEM: sort by memory usage, descending; the first thing to look at when swap is in use |
N | PID: sort by PID, descending; not sure when this is useful |
P | %CPU: sort by CPU usage, descending, to see which task uses the most CPU |
T | TIME+: sort by accumulated run time, descending; rarely needed in practice |
pstack
If your system does not have the pstack command, you can use the following command instead:
gdb -ex "set pagination 0" -ex "thread apply all bt" --batch -p <PID>
pstack output:
Thread 5 (Thread 0x7f74a4569700 (LWP 14598)):
#0 0x000000374ac0e8ec in recv () from /lib64/libpthread.so.0
#1 0x00007f74a51bc569 in NET_Read () from /usr/software/install/jdk1.6.0_25/jre/lib/amd64/libnet.so
#2 0x00007f74a51b93db in Java_java_net_SocketInputStream_socketRead0 () from /usr/software/install/jdk1.6.0_25/jre/lib/amd64/libnet.so
#3 0x00007f749d010d6e in ?? ()
#4 0x00007f7400000000 in ?? ()
#5 0x00000007806b257c in ?? ()
#6 0x00007f74a4567ee0 in ?? ()
#7 0x00000007806b40d8 in ?? ()
#8 0x0000000000000000 in ?? ()
jstack output:
"Smack Packet Reader (0)" daemon prio=10 tid=0x00007f744c003800 nid=0x3906 runnable [0x00007f74a4567000]
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at com.sun.net.ssl.internal.ssl.InputRecord.readFully(InputRecord.java:293)
at com.sun.net.ssl.internal.ssl.InputRecord.read(InputRecord.java:331)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:798)
- locked <0x000000078513f290> (a java.lang.Object)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:755)
at com.sun.net.ssl.internal.ssl.AppInputStream.read(AppInputStream.java:75)
- locked <0x0000000785140ac8> (a com.sun.net.ssl.internal.ssl.AppInputStream)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158)
- locked <0x0000000785149668> (a java.io.InputStreamReader)
at java.io.InputStreamReader.read(InputStreamReader.java:167)
at java.io.BufferedReader.fill(BufferedReader.java:136)
at java.io.BufferedReader.read1(BufferedReader.java:187)
at java.io.BufferedReader.read(BufferedReader.java:261)
- locked <0x0000000785149668> (a java.io.InputStreamReader)
at org.xmlpull.mxp1.MXParser.fillBuf(MXParser.java:2992)
at org.xmlpull.mxp1.MXParser.more(MXParser.java:3046)
at org.xmlpull.mxp1.MXParser.nextImpl(MXParser.java:1144)
at org.xmlpull.mxp1.MXParser.next(MXParser.java:1093)
at org.jivesoftware.smack.PacketReader.parsePackets(PacketReader.java:316)
at org.jivesoftware.smack.PacketReader.access$000(PacketReader.java:46)
at org.jivesoftware.smack.PacketReader$1.run(PacketReader.java:72)
Note that LWP 14598 in the pstack output and nid=0x3906 in the jstack output match, i.e. 0x3906 (hex) == 14598 (dec).
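When matching jstack's nid values against native thread ids (LWPs), the conversion is just hex to decimal; a trivial helper for illustration (the class name is made up):
public class NidToLwp {
    public static void main(String[] args) {
        // jstack prints the native thread id as a hex "nid"; pstack/top show it in decimal
        System.out.println(Integer.parseInt("3906", 16)); // 14598
        System.out.println(Integer.toHexString(14598));   // 3906
    }
}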
Network
The network is the most important component of modern Internet applications. Relative to the massive amounts of data we handle, a single machine's compute power and storage are extremely limited, so we scale by stacking large numbers of machines, and the network is what links them together. In practice it is not only a matter of compute and storage: very complex applications also force us to split an application into multiple clusters, hence web servers, service servers, session servers, cache servers, DB servers, and so on.
Timeout
In Java, the two main timeout-related parameters (used by java.net.URLConnection) are:
Parameter | Default | Description |
---|---|---|
sun.net.client.defaultConnectTimeout | -1 | Default timeout for establishing a connection |
sun.net.client.defaultReadTimeout | -1 | Default timeout for reading data |
Both timeouts are very important. If not set, they default to -1 (no timeout), and when the network misbehaves the program may hang forever on the connect or read statement. Looking at such a thread with top or jtop shows it consuming no CPU time, but it reduces the number of available threads in the thread pool, and eventually the application has no free threads left to handle user requests and refuses service.
When programming in Java, set both timeouts via setConnectTimeout and setReadTimeout, as in the following code:
URL url = new URL("http://www.example.com/index.htm");
URLConnection conn = url.openConnection();
conn.setConnectTimeout(connTimeoutInMillis);
conn.setReadTimeout(readTimeoutInMillis);
InputStream is = conn.getInputStream();
HttpClient
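When using Apache Commons HttpClient (the 3.x API that also appears in the Usage 3 stack traces later in this document), the connect and read timeouts need to be set explicitly as well, plus a third timeout for waiting on the connection pool. A minimal sketch, assuming commons-httpclient 3.1 with MultiThreadedHttpConnectionManager; the timeout values are placeholders:
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
import org.apache.commons.httpclient.methods.GetMethod;

public class HttpClientTimeoutDemo {
    public static void main(String[] args) throws Exception {
        MultiThreadedHttpConnectionManager manager = new MultiThreadedHttpConnectionManager();
        manager.getParams().setConnectionTimeout(3000); // ms to establish the TCP connection
        manager.getParams().setSoTimeout(5000);         // ms to wait for data on socket reads

        HttpClient client = new HttpClient(manager);
        client.getParams().setConnectionManagerTimeout(1000); // ms to wait for a free pooled connection

        GetMethod get = new GetMethod("http://www.example.com/index.htm");
        try {
            client.executeMethod(get);
            System.out.println(get.getResponseBodyAsString());
        } finally {
            get.releaseConnection(); // always return the connection to the pool
        }
    }
}
Without the connection-manager timeout, threads can block forever waiting for the pool; without the socket timeout, reads can hang exactly like the URLConnection case above.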
HotSpot Virtual Machine
"Performance techniques used in the Hotspot JVM" is OpenJDK's (Oracle's) official write-up of the performance techniques in HotSpot; reading it in full is a great help for both programming and performance tuning.
Viewing HotSpot's flags:
-XX:+PrintCommandLineFlags | Print the flags given on the command line |
-XX:+PrintFlagsInitial | Print all flags and their initial values |
-XX:+PrintFlagsFinal | Print all flags and their values at runtime, i.e. the values actually in effect in the running VM |
Almost all of these flags relate to performance in some way; the main performance-related ones used day to day are:
-XX:+HeapDumpOnOutOfMemoryError | Dump the heap to a file when an OutOfMemoryError occurs; equivalent to the command: jmap -dump:format=b,file=xxx.bin <PID> |
-XX:+PrintGC | Print GC information, for diagnosing GC-related problems |
For more VM flags see: Study_Java_HotSpot_Arguments
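A tiny way to see -XX:+HeapDumpOnOutOfMemoryError in action (the class below is made up for illustration; run it with a small heap):
import java.util.ArrayList;
import java.util.List;

// Run with: java -Xmx32m -XX:+HeapDumpOnOutOfMemoryError OomDemo
public class OomDemo {
    public static void main(String[] args) {
        List<byte[]> hold = new ArrayList<byte[]>();
        while (true) {
            // Keep allocating until OutOfMemoryError; the JVM then writes a java_pid<PID>.hprof dump
            hold.add(new byte[1024 * 1024]);
        }
    }
}
The resulting dump can then be analyzed with jhat or jvisualvm (both mentioned below).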
jinfo
jinfo <PID> | Show the system properties and the VM flags that have been set |
jinfo -sysprops <PID> | Show only the system properties |
jinfo -flags <PID> | Show only the VM flags that have been set |
jinfo -flag <VM FLAG> <PID> | Print the value of a VM flag, e.g. jinfo -flag MaxPermSize <PID> prints -XX:MaxPermSize=85983232 |
jinfo -flag [+/-]<VM FLAG> <PID> | Turn a boolean VM flag on or off |
jinfo -flag <VM FLAG>=<NEW VALUE> <PID> | Set the value of a non-boolean VM flag |
An example of checking the various version numbers among the system properties:
$ jinfo -sysprops 2466 | grep version
Attaching to process ID 2466, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 24.51-b03
java.vm.version = 24.51-b03
java.runtime.version = 1.7.0_51-b13
java.class.version = 51.0
os.version = 10.9.4
java.specification.version = 1.7
java.vm.specification.version = 1.7
java.version = 1.7.0_51
An example of toggling a boolean VM flag:
$ jinfo -flag HeapDumpOnOutOfMemoryError 2466
-XX:-HeapDumpOnOutOfMemoryError
$ jinfo -flag +HeapDumpOnOutOfMemoryError 2466
$ jinfo -flag HeapDumpOnOutOfMemoryError 2466
-XX:+HeapDumpOnOutOfMemoryError
jstack
jstack -m <PID>
jstat
jstat -options
Print performance counter data:
jstat -J-Djstat.showUnsupported=true -snap <PID>
jmap
jcmd
Command | Cost | Description |
---|---|---|
help | Low | Print help; example: jcmd <PID> help [<command name>] |
ManagementAgent.stop | Low | Stop the JMX agent |
ManagementAgent.start_local | Low | Start a local JMX agent |
ManagementAgent.start | Low | Start the JMX agent |
Thread.print | Medium | The -l option also prints java.util.concurrent lock information; equivalent to: jstack <PID> |
PerfCounter.print | Low | Equivalent to: jstat -J-Djstat.showUnsupported=true -snap <PID> |
GC.class_histogram | High | Equivalent to: jmap -histo <PID> |
GC.heap_dump | High | Equivalent to: jmap -dump:format=b,file=xxx.bin <PID> |
GC.run_finalization | Medium | Equivalent to: System.runFinalization() |
GC.run | Medium | Equivalent to: System.gc() |
VM.uptime | Low | The -date option also prints the current time; shows how long the VM has been running, in seconds |
VM.flags | Low | The -all option prints all flags; equivalent to: jinfo -flags <PID>, jinfo -flag <VM FLAG> <PID> |
VM.system_properties | Low | Equivalent to: jinfo -sysprops <PID> |
VM.command_line | Low | Equivalent to: jinfo -sysprops 2857 | grep command |
VM.version | Low | Equivalent to: jinfo -sysprops 2857 | grep version |
VM.commercial_features | Low | Commercial feature; avoid it if you can |
VM.native_memory | Medium | Native Memory Tracking; requires starting the VM with -XX:NativeMemoryTracking=summary or detail |
JFR.stop | Low | Commercial feature (Java Flight Recorder); avoid it if you can |
JFR.start | Low | Commercial feature (Java Flight Recorder); avoid it if you can |
JFR.dump | High | Commercial feature (Java Flight Recorder); avoid it if you can |
JFR.check | Low | Commercial feature (Java Flight Recorder); avoid it if you can |
jhat
jconsole
jvisualvm
BTrace
Profiling
BTrace is an indispensable tool for tracking down performance problems. I have made a web page that can generate some simple BTrace scripts: http://btrace.org/btrace/.
package com.sun.btrace.samples;
import com.sun.btrace.BTraceUtils;
import com.sun.btrace.Profiler;
import com.sun.btrace.annotations.*;
@BTrace class Profiling {
@Property
Profiler swingProfiler = BTraceUtils.Profiling.newProfiler();
@OnMethod(clazz="/javax\\.swing\\..*/", method="/.*/")
void entry(@ProbeMethodName(fqn=true) String probeMethod) {
BTraceUtils.Profiling.recordEntry(swingProfiler, probeMethod);
}
@OnMethod(clazz="/javax\\.swing\\..*/", method="/.*/", location=@Location(value=Kind.RETURN))
void exit(@ProbeMethodName(fqn=true) String probeMethod, @Duration long duration) {
BTraceUtils.Profiling.recordExit(swingProfiler, probeMethod, duration);
}
@OnTimer(5000)
void timer() {
BTraceUtils.Profiling.printSnapshot("Performance profile", swingProfiler);
}
}
The header line of the printed output is:
Block Invocations SelfTime.Total SelfTime.Avg SelfTime.Min SelfTime.Max WallTime.Total WallTime.Avg WallTime.Min WallTime.Max
The columns are explained below:
Block | Fully qualified class name + method name |
---|---|
Invocations | Number of times the method was invoked |
SelfTime.Total | Total time spent in the method itself (times are in nanoseconds, so divide by 1,000,000 to get milliseconds; same below) |
SelfTime.Avg | Average self time |
SelfTime.Min | Minimum self time |
SelfTime.Max | Maximum self time |
WallTime.Total | Total running time of the method |
WallTime.Avg | Average running time of the method |
WallTime.Min | Minimum running time of the method |
WallTime.Max | Maximum running time of the method |
The default output is quite hard to work with. One simple trick is to use Sublime Text (with regular expressions enabled in find & replace) to replace " +" (a space followed by a plus sign, i.e. one or more spaces) with "\t" (a tab); the result can then be pasted into Excel/OpenOffice or any other spreadsheet, where further processing becomes straightforward.
JITWatch
https://github.com/AdoptOpenJDK/jitwatch
Other Tools
TProfiler
https://github.com/alibaba/TProfiler - TProfiler is a code profiling tool
CRaSH
http://www.crashub.org/ - CRaSH the shell for the Java Platform
HouseMD
https://github.com/CSUG/HouseMD - HouseMD is an interactive command-line tool for diagnosing Java processes at runtime
JIP
http://jiprof.sourceforge.net/ - JIP is a code profiling tool much like the hprof tool that ships with the JDK
Limpid Log
http://www.acelet.org/limpidlog/index.html - Limpid Log: log for debugging without log statements
ByCounter
https://sdqweb.ipd.kit.edu/wiki/ByCounter - ByCounter is an approach for portable bytecode counting at runtime
Async-Profiler
https://github.com/jvm-profiling-tools/async-profiler - Sampling CPU and HEAP profiler for Java featuring AsyncGetCallTrace + perf_events
Performance Estimate
Performance estimation is very important for programmers. We need to pay attention to performance not only during design and coding, but even more so during testing, release, and maintenance. When an O(n^2), or worse an O(n^3), piece of code goes to production, the consequences can be catastrophic: the application may be unable to serve normally from the moment it goes live.
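A classic way an O(n^2) path sneaks into production is repeated String concatenation in a loop; a small illustrative sketch (the class name and sizes are made up), contrasted with the linear StringBuilder version:
import java.util.Arrays;

public class ConcatEstimate {
    // O(n^2): every += copies the whole string built so far
    static String joinNaive(String[] parts) {
        String result = "";
        for (String p : parts) {
            result += p;
        }
        return result;
    }

    // O(n): StringBuilder appends amortize to linear total work
    static String joinLinear(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            sb.append(p);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] parts = new String[20000];
        Arrays.fill(parts, "0123456789");

        long t0 = System.currentTimeMillis();
        joinNaive(parts);
        long t1 = System.currentTimeMillis();
        joinLinear(parts);
        long t2 = System.currentTimeMillis();

        System.out.println("O(n^2) concat:  " + (t1 - t0) + " ms");
        System.out.println("O(n)   builder: " + (t2 - t1) + " ms");
    }
}
Both versions look fine on a 100-element test; the difference only becomes painful at production sizes, which is exactly why estimating complexity up front matters.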
Micro Benchmark
Micro-benchmarking is a fairly simple way to understand the performance of code (classes/methods): it lets you compare the performance of different classes/methods and get a feel for the performance class of commonly used ones. caliper is a micro-benchmarking framework from Google that we can use to help with this.
Some rules to follow and understand when writing micro-benchmarks (a small hand-rolled harness illustrating a few of them follows the list):
- Rule 0: Read a reputable paper on JVMs and micro-benchmarking. A good one is Brian Goetz, 2005. Do not expect too much from micro-benchmarks; they measure only a limited range of JVM performance characteristics.
- Rule 1: Always include a warmup phase which runs your test kernel all the way through, enough to trigger all initializations and compilations before timing phase(s). (Fewer iterations is OK on the warmup phase. The rule of thumb is several tens of thousands of inner loop iterations.)
- Rule 2: Always run with -XX:+PrintCompilation, -verbose:gc, etc., so you can verify that the compiler and other parts of the JVM are not doing unexpected work during your timing phase.
- Rule 2.1: Print messages at the beginning and end of timing and warmup phases, so you can verify that there is no output from Rule 2 during the timing phase.
- Rule 3: Be aware of the difference between -client and -server, and OSR and regular compilations. The -XX:+PrintCompilation flag reports OSR compilations with an at-sign to denote the non-initial entry point, for example: Trouble$1::run @ 2 (41 bytes). Prefer server to client, and regular to OSR, if you are after best performance. Also be aware of the effects of -XX:+TieredCompilation, which mixes client and server modes together.
- Rule 4: Be aware of initialization effects. Do not print for the first time during your timing phase, since printing loads and initializes classes. Do not load new classes outside of the warmup phase (or final reporting phase), unless you are testing class loading specifically (and in that case load only the test classes). Rule 2 is your first line of defense against such effects.
- Rule 5: Be aware of deoptimization and recompilation effects. Do not take any code path for the first time in the timing phase, because the compiler may junk and recompile the code, based on an earlier optimistic assumption that the path was not going to be used at all. Rule 2 is your first line of defense against such effects.
- Rule 6: Use appropriate tools to read the compiler's mind, and expect to be surprised by the code it produces. Inspect the code yourself before forming theories about what makes something faster or slower.
- Rule 7: Reduce noise in your measurements. Run your benchmark on a quiet machine, and run it several times, discarding outliers. Use -Xbatch to serialize the compiler with the application, and consider setting -XX:CICompilerCount=1 to prevent the compiler from running in parallel with itself.
Quoted from: https://wiki.openjdk.java.net/display/HotSpot/MicroBenchmarks
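A minimal hand-rolled harness in the spirit of Rule 1 (warm up), Rule 2.1 (mark phase boundaries), and Rule 7 (repeat and discard outliers); the class name, iteration counts, and the placeholder kernel are assumptions for illustration only:
import java.util.Arrays;

public class TinyBench {
    static volatile double sink; // results go here so the JIT cannot eliminate the work

    static long measure(Runnable task) {
        // Rule 1: warm up long enough to trigger initialization and JIT compilation
        for (int i = 0; i < 20000; i++) {
            task.run();
        }
        // Rule 7: several timed runs, then take the median to discard outliers
        long[] samples = new long[15];
        for (int r = 0; r < samples.length; r++) {
            long start = System.nanoTime();
            for (int i = 0; i < 10000; i++) {
                task.run();
            }
            samples[r] = System.nanoTime() - start;
        }
        Arrays.sort(samples);
        return samples[samples.length / 2];
    }

    public static void main(String[] args) {
        System.out.println("timing phase start"); // Rule 2.1: mark phase boundaries in the output
        long ns = measure(new Runnable() {
            public void run() {
                sink = Math.sqrt(sink + 1.0); // placeholder kernel under test
            }
        });
        System.out.println("timing phase end, median: " + ns + " ns per 10000 calls");
    }
}
Frameworks such as caliper do this bookkeeping for you and should be preferred for anything serious.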
Benchmark:Reflect
Many people assume reflection performs badly. Is reflection really that slow? Not necessarily; let's look at a test (ReflectTest.java):
import java.lang.reflect.Method;

public class ReflectTest {
public static int doAdd(int i) {
return ++i;
}
static interface Invoker {
Object invoke(Object[] objs);
}
static class Invoker0 implements Invoker {
@Override
public Object invoke(Object[] objs) {
return doAdd((Integer) objs[0]);
}
}
static final long COUNT = 1000000000L;
public static void main(String[] args) throws Exception {
// warm up
main1(args);
main1_2(args);
main2(args);
main3(args);
System.out.println();
System.out.println();
System.out.println("TEST1:");
main1(args);
System.out.println("TEST1_2:");
main1_2(args);
System.out.println("TEST2:");
main2(args);
System.out.println("TEST3:");
main3(args);
}
public static void main1(String[] args) throws Exception {
long start = System.currentTimeMillis();
int x = 0;
for (long i = 0; i < COUNT; i++) {
x = doAdd(x);
}
long end = System.currentTimeMillis();
System.out.println((end - start) + " ms cost");
}
public static void main1_2(String[] args) throws Exception {
long start = System.currentTimeMillis();
int x = 0;
for (long i = 0; i < COUNT; i++) {
x = (++x);
}
long end = System.currentTimeMillis();
System.out.println((end - start) + " ms cost");
}
public static void main2(String[] args) throws Exception {
long start = System.currentTimeMillis();
Method doAddMethod = ReflectTest.class.getDeclaredMethod("doAdd", new Class[]{int.class});
int x = 0;
for (long i = 0; i < COUNT; i++) {
x = (int) (Integer) doAddMethod.invoke(null, new Object[]{x});
}
long end = System.currentTimeMillis();
System.out.println((end - start) + " ms cost");
}
public static void main3(String[] args) throws Exception {
long start = System.currentTimeMillis();
int x = 0;
Invoker invoker = new Invoker0();
for (long i = 0; i < COUNT; i++) {
x = (Integer) invoker.invoke(new Object[]{x});
}
long end = System.currentTimeMillis();
System.out.println((end - start) + " ms cost");
}
}
The test results are as follows:
TEST1:
352 ms cost
TEST1_2:
339 ms cost
TEST2:
3642 ms cost
TEST3:
6193 ms cost
With the JVM startup flag -XX:+TraceClassLoading added, the log contains the line below, which shows the JVM generating a bytecode accessor class for the reflective call:
[Loaded sun.reflect.GeneratedMethodAccessor1 from __JVM_DefineClass__]
That generated accessor turns the reflective invocation into an ordinary method call that the JIT can inline. Add the -XX:-Inline flag, however, and the measured numbers become far worse, because inlining is disabled.
Usage 1
"http-/0.0.0.0:8080-38" daemon prio=10 tid=0x00007f4c35e7f800 nid=0x317fe runnable [0x000000004158d000]
java.lang.Thread.State: RUNNABLE
at java.util.HashMap.getEntry(HashMap.java:469)
at java.util.HashMap.get(HashMap.java:421)
at <......>
"http-/0.0.0.0:8080-13" daemon prio=10 tid=0x00007f4c35e4b000 nid=0x3150f runnable [0x0000000056100000]
java.lang.Thread.State: RUNNABLE
at java.util.HashMap.getEntry(HashMap.java:469)
at java.util.HashMap.get(HashMap.java:421)
at <......>
From the stack traces above, two threads happen to be executing inside HashMap's get method. Under normal circumstances this method is O(1), so seeing it in a stack trace at all should be extremely rare, and catching two threads stopped there at the same time is about as likely as winning the lottery. There are, however, two exceptions:
- a hash collision attack (the Hash Collision DoS problem)
- an infinite loop caused by concurrent use of the non-thread-safe HashMap (see "疫苗:Java HashMap的死循环")
Judging from the class and method names in the stack, this is not the first case, so it is very likely the second. First look at the HashMap source code:
461. final Entry<K,V> getEntry(Object key) {
462. if (size == 0) {
463. return null;
464. }
465.
466. int hash = (key == null) ? 0 : hash(key);
467. for (Entry<K,V> e = table[indexFor(hash, table.length)];
468. e != null;
469. e = e.next) {
470. Object k;
471. if (e.hash == hash &&
472. ((k = e.key) == key || (key != null && key.equals(k))))
473. return e;
474. }
475. return null;
476. }
This can also be verified with top or jtop: both tools show the problematic threads at close to 100% CPU, and repeated stack dumps keep landing inside this for loop.
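For reference, the kind of usage that can drive a pre-JDK 8 HashMap into this state is simply unsynchronized concurrent writes. A minimal sketch (the class name and sizes are made up, and the hang is not guaranteed on every run):
import java.util.HashMap;
import java.util.Map;

public class HashMapRaceDemo {
    // Shared, non-thread-safe map written by several threads without synchronization
    static final Map<Integer, Integer> MAP = new HashMap<Integer, Integer>();

    public static void main(String[] args) {
        for (int t = 0; t < 4; t++) {
            new Thread(new Runnable() {
                public void run() {
                    // Concurrent puts can corrupt a bucket's linked list during resize
                    // (JDK 7 and earlier), after which get()/put() may spin forever.
                    for (int i = 0; i < 1000000; i++) {
                        MAP.put(i, i);
                    }
                }
            }).start();
        }
    }
}
The fix is to use java.util.concurrent.ConcurrentHashMap (or external synchronization) instead of sharing a bare HashMap across threads.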
Usage 2
The second example is exactly like the first, except that HashMap is replaced by WeakHashMap. The system information seen through top:
top - 03:07:48 up 232 days, 5:14, 2 users, load average: 3.77, 3.49, 3.43
Tasks: 127 total, 1 running, 126 sleeping, 0 stopped, 0 zombie
Cpu0 : 10.6%us, 0.3%sy, 0.0%ni, 89.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 0.0%us, 0.0%sy, 0.0%ni, 98.7%id, 1.3%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu4 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu5 : 0.0%us, 0.3%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu6 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu7 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 10485760k total, 9227556k used, 1258204k free, 201184k buffers
Swap: 2097144k total, 116k used, 2097028k free, 1643884k cached
From the above, user-mode CPU utilization on Cpu3, Cpu4, and Cpu6 is 100%, which by itself strongly suggests an infinite loop.
Use the jtop command to look at per-thread CPU consumption and stacks:
java -jar jtop.jar -thread 3 -stack 8 <PID> 2000 100
The relevant output:
DefaultQuartzScheduler_Worker-8 TID=86 STATE=RUNNABLE CPU_TIME=2110 (99.90%) USER_TIME=2110 (99.90%) Allocted: 0
java.util.WeakHashMap.put(WeakHashMap.java:405)
org.aspectj.weaver.Dump.registerNode(Dump.java:253)
org.aspectj.weaver.World.<init>(World.java:150)
org.aspectj.weaver.reflect.ReflectionWorld.<init>(ReflectionWorld.java:50)
org.aspectj.weaver.tools.PointcutParser.setClassLoader(PointcutParser.java:221)
org.aspectj.weaver.tools.PointcutParser.<init>(PointcutParser.java:207)
org.aspectj.weaver.tools.PointcutParser.getPointcutParserSupportingSpecifiedPrimitivesAndUsingContextClassloaderForResolution(PointcutParser.java:128)
org.springframework.aop.aspectj.AspectJExpressionPointcut.<init>(AspectJExpressionPointcut.java:100)
DefaultQuartzScheduler_Worker-10 TID=88 STATE=RUNNABLE CPU_TIME=2110 (99.90%) USER_TIME=2110 (99.90%) Allocted: 0
java.util.WeakHashMap.put(WeakHashMap.java:405)
org.aspectj.weaver.Dump.registerNode(Dump.java:253)
org.aspectj.weaver.World.<init>(World.java:150)
org.aspectj.weaver.reflect.ReflectionWorld.<init>(ReflectionWorld.java:50)
org.aspectj.weaver.tools.PointcutParser.setClassLoader(PointcutParser.java:221)
org.aspectj.weaver.tools.PointcutParser.<init>(PointcutParser.java:183)
org.aspectj.weaver.reflect.InternalUseOnlyPointcutParser.<init>(InternalUseOnlyPointcutParser.java:22)
org.aspectj.weaver.reflect.Java15ReflectionBasedReferenceTypeDelegate.getDeclaredPointcuts(Java15ReflectionBasedReferenceTypeDelegate.java:243)
DefaultQuartzScheduler_Worker-6 TID=84 STATE=RUNNABLE CPU_TIME=2100 (99.43%) USER_TIME=2100 (99.43%) Allocted: 0
java.util.WeakHashMap.put(WeakHashMap.java:405)
org.aspectj.weaver.Dump.registerNode(Dump.java:253)
org.aspectj.weaver.World.<init>(World.java:150)
org.aspectj.weaver.reflect.ReflectionWorld.<init>(ReflectionWorld.java:50)
org.aspectj.weaver.tools.PointcutParser.setClassLoader(PointcutParser.java:221)
org.aspectj.weaver.tools.PointcutParser.getPointcutParserSupportingSpecifiedPrimitivesAndUsingContextClassloaderForResolution(PointcutParser.java:129)
org.springframework.aop.aspectj.AspectJExpressionPointcut.<init>(AspectJExpressionPointcut.java:100)
org.springframework.aop.aspectj.AspectJExpressionPointcut.<init>(AspectJExpressionPointcut.java:109)
From jtop's output, user-mode CPU utilization is close to 100% while the allocation count is 0; in other words, the threads burn 100% CPU without creating a single object, which can only be an infinite loop.
The relevant WeakHashMap source code is below; as in the first example, the root cause is concurrent, unsynchronized puts into a map that is not thread-safe:
399. public V put(K key, V value) {
400. K k = (K) maskNull(key);
401. int h = HashMap.hash(k.hashCode());
402. Entry[] tab = getTable();
403. int i = indexFor(h, tab.length);
404.
405. for (Entry<K,V> e = tab[i]; e != null; e = e.next) {
406. if (h == e.hash && eq(k, e.get())) {
407. V oldValue = e.value;
408. if (value != oldValue)
409. e.value = value;
410. return oldValue;
411. }
412. }
413.
414. modCount++;
415. Entry<K,V> e = tab[i];
416. tab[i] = new Entry<K,V>(k, value, queue, h, e);
417. if (++size >= threshold)
418. resize(tab.length * 2);
419. return null;
420. }
Usage 3
"http-/0.0.0.0:8080-22" daemon prio=10 tid=0x00007f544b24a800 nid=0x23384 in Object.wait() [0x000000004f731000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x00000007600088a0> (a org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$ConnectionPool)
at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager.doGetConnection(MultiThreadedHttpConnectionManager.java:509)
- locked <0x00000007600088a0> (a org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$ConnectionPool)
at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager.getConnectionWithTimeout(MultiThreadedHttpConnectionManager.java:394)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:152)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:396)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:324)
at <......>
<<this line missing...>>
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
- locked <0x00000007bd7e9c78> (a java.io.BufferedInputStream)
at org.apache.commons.httpclient.ContentLengthInputStream.read(ContentLengthInputStream.java:169)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at org.apache.commons.httpclient.AutoCloseInputStream.read(AutoCloseInputStream.java:107)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
- locked <0x00000007bd7c58e0> (a java.io.InputStreamReader)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.Reader.read(Reader.java:140)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1128)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:1104)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:1078)
at org.apache.commons.io.IOUtils.toString(IOUtils.java:382)
at <......>
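What these two traces show, most likely, is the pattern described in the Timeout and HttpClient sections above: the first thread is blocked in MultiThreadedHttpConnectionManager.doGetConnection waiting for a free connection from the pool, while threads like the second one hold pooled connections and sit in socketRead0 with no read timeout set, so the pool never frees up. Setting the connection, socket-read, and connection-manager timeouts (see the HttpClient example earlier) prevents both symptoms.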
Reference
- https://wiki.openjdk.java.net/display/HotSpot/PerformanceTechniques
- https://wiki.openjdk.java.net/display/HotSpot/MicroBenchmarks
- http://www.slideshare.net/ConstantineNosovsky/nosovsky-java-microbenchmarking
- https://code.google.com/p/caliper/
- https://code.google.com/p/caliper/wiki/Links
- http://stackoverflow.com/questions/504103/how-do-i-write-a-correct-micro-benchmark-in-java
- http://www.herongyang.com/JVM/Micro-Benchmark-What-Is-Micro-Benchmark.html
- http://hatter-source-code.googlecode.com/svn/trunk/attachments/wiki/java/microbenchmarks.pdf
- http://hatter-source-code.googlecode.com/svn/trunk/attachments/wiki/java/2009_J1_Benchmark.pdf
- http://www.slideshare.net/atthakorn/java-performance-tuning
- https://www.ibm.com/developerworks/java/library/j-jtp02225/ (Java theory and practice: Anatomy of a flawed microbenchmark)
- http://cseweb.ucsd.edu/~wgg/JavaProf/javaprof.html (Bill and Paul's Excellent UCSD Benchmarks for Java)
- http://upload.wikimedia.org/wikipedia/commons/c/c4/Understanding_HotSpot_Logs.pdf
- https://github.com/AdoptOpenJDK/jitwatch
- http://www.chrisnewland.com/images/jitwatch/HotSpot_Profiling_Using_JITWatch.pdf
- http://coolshell.cn/articles/6424.html (Hash Collision DoS 问题)
- http://coolshell.cn/articles/9606.html (疫苗:Java HashMap的死循环)
- https://code.google.com/p/hatter-source-code/wiki/Study_Java_HotSpot_Arguments
- https://code.google.com/p/hatter-source-code/wiki/Study_OS_Linux_GDB
- https://code.google.com/p/hatter-source-code/wiki/jtop
- https://code.google.com/p/hatter-source-code/wiki/Apps_jtop_Case01
- http://www.slideshare.net/TsunenagaHanyuda/jcmd-16803399
- http://www.coppermine.jp/docs/programming/2012/08/jdk7u4.html
- http://openjdk.java.net/jeps/137 (JEP 137: Diagnostic-Command Framework)
- http://openjdk.java.net/jeps/158 (JEP 158: Unified JVM Logging)
- http://marxsoftware.blogspot.com/2012/10/javaone-2012-diagnosing-application-jcmd.html
- http://wiki.li3huo.com/JDK_Tools_and_Utilities
- http://hllvm.group.iteye.com/group/topic/27945 ([讨论] JVM调优的"标准参数"的各种陷阱)
- http://docs.oracle.com/javase/6/docs/technotes/guides/net/properties.html (Networking Properties)
- http://www.slideshare.net/brendangregg/linux-performance-tools