<p>Hi Alexey<br>
</p>
On 03/21/2018 06:50 PM, Alexey Kodanev wrote:
> On 03/20/2018 03:18 AM, sunlianwen wrote:
>> Hi Alexey
>> You are right, I think it is wrong. I debugged this case again
>> and found that the virtio_net driver does not support busy poll.
> There is support in virtio_net... maybe the problem is in the underlying
> configuration/driver, or in the latency between the guest and the other
> host? You could also try netperf -H remote_host -t TCP_RR with/without
> busy polling:
>
> # sysctl net.core.busy_read=50
> # sysctl net.core.busy_poll=50
Thanks for your advice. I found a patch, "virtio_net: remove custom
busy_poll":
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/net/virtio_net.c?id=ceef438d613f6d
I am not sure whether this patch means that virtio_net no longer
supports busy poll.
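
By the way, if per-socket control turns out useful for debugging, my
understanding is that busy polling can also be enabled per socket with
the SO_BUSY_POLL socket option (since Linux 3.11), instead of the
global sysctls. A minimal sketch, assuming a connected TCP socket fd
(the helper name is mine, untested):

#include <stdio.h>
#include <sys/socket.h>

/* enable_busy_poll() is just my name for this helper; note that raising
 * the value above net.core.busy_read requires CAP_NET_ADMIN. */
static int enable_busy_poll(int fd)
{
        int usecs = 50; /* busy-poll the device queue for up to 50 us */

        if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL, &usecs,
                       sizeof(usecs)) < 0) {
                perror("setsockopt(SO_BUSY_POLL)");
                return -1;
        }
        return 0;
}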

Below is the debug info, following your advice.
# sysctl net.core.busy_read=0
net.core.busy_read = 0

# sysctl net.core.busy_poll=0
net.core.busy_poll = 0

# netperf -H 192.168.122.248 -t TCP_RR
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.122.248 () port 0 AF_INET : first burst 0
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  bytes    bytes   secs.    per sec

16384  87380  1        1       10.00    30101.63
16384  87380
# sysctl net.core.busy_read=50
net.core.busy_read = 50
# sysctl net.core.busy_poll=50
net.core.busy_poll = 50

# netperf -H 192.168.122.248 -t TCP_RR
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.122.248 () port 0 AF_INET : first burst 0
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  bytes    bytes   secs.    per sec

16384  87380  1        1       10.00    37968.90
16384  87380
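With busy_read/busy_poll set to 50, the TCP_RR rate goes from about
30102 to about 37969 transactions per second, roughly a 26%
improvement, so busy polling does seem to take effect with netperf
here.
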
-----------------------------------------------------------------------
<<<test_output>>>
incrementing stop
busy_poll01 1 TINFO: Network config (local -- remote):
busy_poll01 1 TINFO: eth1 -- eth1
busy_poll01 1 TINFO: 192.168.1.41/24 -- 192.168.1.20/24
busy_poll01 1 TINFO: fd00:1:1:1::1/64 -- fd00:1:1:1::2/64
busy_poll01 1 TINFO: set low latency busy poll to 50
busy_poll01 1 TINFO: run server 'netstress -R 500000 -B /tmp/ltp-EmybkMxKgu/busy_poll01.IIOgfKYQ6P'
busy_poll01 1 TINFO: run client 'netstress -l -H 192.168.1.20 -a 2 -r 500000 -d res_50 -g 44175'
busy_poll01 1 TPASS: netstress passed, time spent '53265' ms
busy_poll01 2 TINFO: set low latency busy poll to 0
busy_poll01 2 TINFO: run server 'netstress -R 500000 -B /tmp/ltp-EmybkMxKgu/busy_poll01.IIOgfKYQ6P'
busy_poll01 2 TINFO: run client 'netstress -l -H 192.168.1.20 -a 2 -r 500000 -d res_0 -g 46767'
busy_poll01 2 TPASS: netstress passed, time spent '23393' ms
busy_poll01 3 TFAIL: busy poll result is '-127' %
<<<execution_status>>>
initiation_status="ok"
duration=79 termination_type=exited termination_id=1 corefile=no
cutime=148 cstime=6930
<<<test_end>>>
INFO: ltp-pan reported some tests FAIL
LTP Version: 20180118

###############################################################

Done executing testcases.
LTP Version: 20180118
###############################################################
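
If I read busy_poll01 right, the '-127' % comes from comparing the two
netstress run times: (23393 - 53265) * 100 / 23393 is about -127, i.e.
the run with busy poll set to 50 (53265 ms) took more than twice as
long as the baseline run (23393 ms).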

Thanks,
Lianwen Sun