No commentary, just data. Test platform: a painfully slow laptop with 2 GB of RAM.
Scenario:
1000 clients, each issuing 100 requests. The server responds with "Hello world!", for a total of 100,000 requests handled and 1.14 MB transferred.
Test tool:
Siege (http_load was dropped after testing because it has no real-time output; also, don't trust wrk)
Results:
Fibjs: 71.76 secs elapsed, 1393.53 transactions/sec
Nodejs: 76.46 secs elapsed, 1307.87 transactions/sec
Resource usage during the run (from top):
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
26903 king 20 0 692276 67580 8380 R 49.4 3.4 0:20.51 node
29937 king 20 0 1617672 177236 5672 R 36.1 9.0 0:12.75 fibjs
Nodejs $ siege -c 1000 -r 100 http://127.0.0.1:8100
Transactions: 100000 hits // total transactions completed
Availability: 100.00 % // success rate
Elapsed time: 76.46 secs // total elapsed time
Data transferred: 1.14 MB // total data transferred
Response time: 0.13 secs // average response time
Transaction rate: 1307.87 trans/sec // average transactions per second
Throughput: 0.01 MB/sec // average data transferred per second
Concurrency: 167.48 // average number of simultaneous connections
Successful transactions: 100000 // successful transactions
Failed transactions: 0 // failed transactions
Longest transaction: 7.13 // longest single transaction (secs)
Shortest transaction: 0.00 // shortest single transaction (secs)
Fibjs $ siege -c 1000 -r 100 http://127.0.0.1:8200
Transactions: 100000 hits // total transactions completed
Availability: 100.00 % // success rate
Elapsed time: 71.76 secs // total elapsed time
Data transferred: 1.14 MB // total data transferred
Response time: 0.02 secs // average response time
Transaction rate: 1393.53 trans/sec // average transactions per second
Throughput: 0.02 MB/sec // average data transferred per second
Concurrency: 34.66 // average number of simultaneous connections
Successful transactions: 100000 // successful transactions
Failed transactions: 0 // failed transactions
Longest transaction: 1.59 // longest single transaction (secs)
Shortest transaction: 0.00 // shortest single transaction (secs)
Samples captured from Siege's real-time client output:
Nodejs
HTTP/1.1 200 0.10 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.09 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.09 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.09 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.09 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.09 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.10 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.09 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.10 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.10 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.10 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.09 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.09 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.10 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.09 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.09 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.09 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.09 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.09 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.09 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.09 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.09 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.09 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.09 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.08 secs: 12 bytes ==> GET /
Fibjs
HTTP/1.1 200 0.02 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.03 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.04 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.05 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.02 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.02 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.00 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.02 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.00 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.00 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.00 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.02 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.00 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.03 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.00 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.02 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 12 bytes ==> GET /
Finally, I think a diagram laying out the multi-process, single-threaded, asynchronous event-driven model is quite interesting:
libev -> epoll -> linux
I. Nodejs server source:
require('http').createServer(function(req, res) {
    res.end("Hello world!");
}).listen(8100);
II. Fibjs server source:
var http = require('http');
var svr = new http.Server(8200, function(r) {
    r.response.write('Hello world!');
});
svr.run();
My guess is that the difference in peak concurrent connections is what produced the gap in the averages. Since nodejs ran at higher concurrency, it was answering more connections at once, but responses go out one at a time, everyone queues up, and that is probably where the response-time fluctuation comes from. fibjs's model is actually more advanced than nodejs's, but the key question is stability; looking at the code, most of it was contributed by 响马, and community participation is still thin. OP, could you also test under higher concurrency, as a further reference for everyone?
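The serialization point in this reply can be made concrete with a toy calculation (my own sketch, not from the thread; the service time is a made-up round number): in a single serialized queue, a request that arrives behind k others waits roughly k times the per-request service time before it even starts, which is one plausible reading of the long tail on the nodejs side.

```javascript
// Toy model of head-of-line waiting in a single serialized queue.
function queueWaitMs(queuedAhead, serviceMs) {
  // a request behind `queuedAhead` others must wait for all of them first
  return queuedAhead * serviceMs;
}

// With 1000 clients firing at once and a hypothetical ~1 ms of service
// per request, the last request in line waits about a second:
console.log(queueWaitMs(999, 1), 'ms'); // 999 ms
```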
Nice test.
From the test data, neither fibjs nor nodejs maxed out the CPU, so the bottleneck isn't on the server side; most likely the test machine's specs caused the Siege client itself to hit its limit. Take a look at the overall CPU load and the test client's CPU load to confirm.
Also, nodejs measuring higher concurrency than fibjs doesn't mean it has more processing power; it's slower, so more requests overlap in flight. A simple analogy: on a one-kilometer road, a hundred cars at 100 km/h pass through with a concurrency of only one, while a hundred pedestrians walking at 1 km/h give a concurrency of 100.
Next, fibjs's multi-threaded worker model was borrowed from nodejs, so on this point fibjs and nodejs work on exactly the same principle. The only difference is that fibjs moves more of the work onto worker threads.
At the libev and JS level, fibjs is likewise single-threaded and event-driven, exactly like nodejs.
So on the question of side effects, fibjs introduces no more than nodejs does.
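The road analogy above is essentially Little's law: average concurrency ≈ transaction rate × average response time. A quick sketch (my own addition) checks it against the Siege reports; the figures are copied from the results above, and the small gap on the fibjs side is plausibly just rounding in the 0.02 s response time.

```javascript
// Little's law: average concurrency = arrival rate * average time in system.
function concurrency(transPerSec, responseSecs) {
  return transPerSec * responseSecs;
}

// nodejs: 1307.87 trans/sec * 0.13 s ≈ 170 (Siege reported 167.48)
console.log(concurrency(1307.87, 0.13).toFixed(1)); // 170.0
// fibjs: 1393.53 trans/sec * 0.02 s ≈ 27.9 (Siege reported 34.66)
console.log(concurrency(1393.53, 0.02).toFixed(1)); // 27.9
```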