The Art of Network Device Testing (2) – Application-Layer Testing

The acceptance-testing rogue is back, and this time we'll walk through how to test at the application layer.

Once network devices get into the application layer, every vendor becomes extremely reluctant to expose its system's defects. At the same time, when it comes to performance testing of application-layer firewalls, traffic-control devices and the like, the whole industry basically plays dumb: packet reordering goes unchecked, spec sheets quote the large-packet forwarding bandwidth of the physical interfaces, and a device may boast a sky-high concurrent-connection count while its new-connection rate is garbage. Or the new-connection rate is fine, but connections are torn down slowly, elephant flows are mishandled, and so on.

Test Environment

Still the same ASR1000 with the ESP100 engine tested last time. The impressive thing about this multi-core platform is that it is not just a router: it also delivers tens of gigabits of IPsec VPN throughput, supports 100G of firewall and NAT, plus tens of gigabits of traffic control and application recognition. Back in the day I even entered it directly in a carrier's centralized firewall procurement and won the bid.

ASR1009X 

Basic NAT Configuration

This time we use NAT as the example. The configuration is as follows:

interface HundredGigE0/1/0
 ip address 1.1.1.1 255.255.255.0
 ip nat inside
!
interface HundredGigE1/1/0
 ip address 2.2.2.1 255.255.255.0
 ip nat outside
!
ip nat translation finrst-timeout 1
ip nat translation max-entries 10000000
ip nat pool AAA 17.0.0.1 17.0.0.254 prefix-length 24
ip nat inside source list 10 pool AAA overload
!
ip route 16.0.0.0 255.0.0.0 1.1.1.2
ip route 48.0.0.0 255.0.0.0 2.2.2.2
!
ip access-list standard 10
 10 permit 16.0.0.0 0.255.255.255
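As a quick sanity check on the configuration above (a hypothetical back-of-the-envelope calculation, not part of the original config): pool AAA holds 254 public addresses, and with `overload` (PAT) each address can multiplex roughly the ports above 1023, so the theoretical translation capacity comfortably covers the configured `max-entries 10000000` limit.

```python
# Back-of-the-envelope PAT capacity for pool AAA (17.0.0.1 - 17.0.0.254).
# Assumes ports 1024-65535 are usable per address; real platforms reserve more.
addresses = 254
ports_per_address = 65535 - 1024 + 1      # 64512 usable ports per address
capacity = addresses * ports_per_address
print(capacity)                           # 16386048 theoretical translations
print(capacity > 10_000_000)              # True: covers the max-entries limit
```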

Trex Platform Configuration

Traffic tests usually need scenarios with well over ten million sessions, which is itself a challenge for many platforms, so the memory settings in the Trex configuration file have to be increased:

[root@Trex v2.86]
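The original listing was lost in the repost. As a sketch of the relevant fragment of /etc/trex_cfg.yaml (the PCI addresses and the exact flow count are placeholders; check the TRex manual for your version):

```yaml
- port_limit: 2
  version: 2
  interfaces: ["03:00.0", "03:00.1"]   # placeholder PCI addresses
  memory:
    dp_flows: 10485760                 # enlarge the flow-object pool for >10M sessions
```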

How New-Connection (CPS) Tests Get Gamed

The usual test method measures only connection setup, never tearing connections down during the run, and the vendor then typically suggests that, by the way, this same test case also covers the concurrent-connection number.

Customers should insist on measuring the maximum session-setup rate while connections are being established and torn down at the same time.

The configuration for the new-connection test is shown below. We shorten Delay, i.e. the session duration, so that the system tears each connection down immediately after establishing it:

 ./t-rex-64-o -f astf/http_high_active_flows.py -m 200000  -t delay=1  --astf --no-ofed-check -c 20

The contents of astf/http_high_active_flows.py:

 
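The original listing did not survive the repost. As a rough sketch of what such an ASTF profile looks like (based on the example profiles bundled with TRex; the pcap path, address ranges, and class name here are assumptions, not the original file):

```python
from trex.astf.api import *


class Prof1():
    def get_profile(self, **kwargs):
        # Client/server address ranges matching the NAT test topology above
        ip_gen_c = ASTFIPGenDist(ip_range=["16.0.0.1", "16.0.255.255"],
                                 distribution="seq")
        ip_gen_s = ASTFIPGenDist(ip_range=["48.0.0.1", "48.0.255.255"],
                                 distribution="seq")
        ip_gen = ASTFIPGen(glob=ASTFIPGenGlobal(ip_offset="1.0.0.0"),
                           dist_client=ip_gen_c,
                           dist_server=ip_gen_s)
        # Replay one short HTTP transaction per flow; the session lifetime is
        # what the -t delay=... tunable on the command line stretches or shrinks
        return ASTFProfile(default_ip_gen=ip_gen,
                           cap_list=[ASTFCapInfo(file="../avl/delay_10_http_browsing_0.pcap",
                                                 cps=1)])


def register():
    return Prof1()
```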

Here -m 200000 sets the new-connection rate to 200K per second. Once traffic is up, the statistics look like this:

-Per port stats table
ports | 0 | 1
-----------------------------------------------------------------------------------------
opackets | 6924918 | 6180533
obytes | 887754867 | 628870502
ipackets | 6179461 | 6180499
ibytes | 628765569 | 829747853
ierrors | 0 | 0
oerrors | 0 | 0
Tx Bw | 617.56 Mbps | 434.81 Mbps

-Global stats enabled
Cpu Utilization : 5.0 % 2.1 Gb/core
Platform_factor : 1.0
Total-Tx : 1.05 Gbps
Total-Rx : 1.01 Gbps
Total-PPS : 1.14 Mpps
Total-CPS : 199.23 Kcps

Expected-PPS : 0.00 pps
Expected-CPS : 0.00 cps
Expected-L7-BPS : 0.00 bps

Active-flows : 1827860 Clients : 0 Socket-util : 0.0000 %
Open-flows : 2291151 Servers : 0 Socket : 0 Socket/Clients : -nan
drop-rate : 0.00 bps
current time : 50.9 sec
test duration : 3549.1 sec

Pressing t in this screen shows the detailed session statistics:

                      |          client  |           server  |
-----------------------------------------------------------------------------------------
m_active_flows | 1663411 | 1604109 | active open flows
m_est_flows | 1602288 | 1602276 | active established flows
m_tx_bw_l7_r | 398.33 Mbps | 190.36 Mbps | tx L7 bw acked
m_tx_bw_l7_total_r | 398.31 Mbps | 190.36 Mbps | tx L7 bw total
m_rx_bw_l7_r | 190.37 Mbps | 398.31 Mbps | rx L7 bw acked
m_tx_pps_r | 799.93 Kpps | 799.83 Kpps | tx pps
m_rx_pps_r | 799.84 Kpps | 999.79 Kpps | rx pps
m_avg_size | 46.00 B | 40.89 B | average pkt size
m_tx_ratio | 100.00 % | 100.00 % | Tx acked/sent ratio
- | --- | --- |
m_traffic_duration | 75.51 sec | 75.40 sec | measured traffic duration
- | --- | --- |
TCP | --- | --- |
- | --- | --- |
tcps_connattempt | 15078482 | 0 | connections initiated
tcps_accepts | 0 | 15018249 | connections accepted
tcps_connects | 15017359 | 15016416 | connections established
tcps_closed | 13415071 | 13414140 | conn. closed (includes drops)
tcps_segstimed | 45111355 | 45049260 | segs where we tried to get rtt
tcps_rttupdated | 45046569 | 45043726 | times we succeeded
tcps_sndtotal | 60125024 | 60065678 | total packets sent
tcps_sndpack | 15017359 | 15016416 | data packets sent
tcps_sndbyte | 3754542018 | 1786953504 | data bytes sent by application
tcps_sndbyte_ok | 3739322391 | 1786953504 | data bytes sent by tcp
tcps_sndctrl | 15078482 | 0 | control (SYN|FIN|RST) packets sent
tcps_sndacks | 30029183 | 45049262 | ack-only packets sent
tcps_rcvpack | 30029179 | 30031011 | packets received in sequence
tcps_rcvbyte | 1786846166 | 3739087584 | bytes received in sequence
tcps_rcvackpack | 30029210 | 45043726 | rcvd ack packets
tcps_rcvackbyte | 3738870705 | 1786736805 | tx bytes acked by rcvd acks
tcps_rcvackbyte_of | 15013665 | 30029131 | tx bytes acked by rcvd acks - overflow acked
tcps_preddat | 15015512 | 0 | times hdr predict ok for data
- | --- | --- |
UDP | --- | --- |
- | --- | --- |
- | --- | --- |
Flow Table | --- | --- |
- | --- | --- |
err_cwf | 40 | 0 | *client pkt without flow
redirect_rx_ok | 2 | 0 | redirect to rx OK

When you push the load past capacity, you should see errors like the following. On one hand, the device's concurrent-connection count spikes because it is too busy to tear connections down; on the other, the t statistics page starts logging errors. For example, at 1M CPS, note the counters marked with * in the table below: all kinds of errors begin to appear.


                       |          client  |           server  |
 -----------------------------------------------------------------------------------------
       m_active_flows  |        20971518  |          1109319  |  active open flows
          m_est_flows  |         1065539  |          1070179  |  active established flows
         m_tx_bw_l7_r  |     301.40 Mbps  |      136.45 Mbps  |  tx L7 bw acked
   m_tx_bw_l7_total_r  |     325.77 Mbps  |      144.00 Mbps  |  tx L7 bw total
         m_rx_bw_l7_r  |     144.04 Mbps  |      301.31 Mbps  |  rx L7 bw acked
           m_tx_pps_r  |       1.48 Mpps  |      609.74 Kpps  |  tx pps
           m_rx_pps_r  |     589.02 Kpps  |      726.25 Kpps  |  rx pps
           m_avg_size  |        26.96  B  |         40.96  B  |  average pkt size
           m_tx_ratio  |        92.52  %  |         94.76  %  |  Tx acked/sent ratio
                    -  |             ---  |              ---  |
   m_traffic_duration  |      58.09  sec  |       57.98  sec  |  measured traffic duration
                    -  |             ---  |              ---  |
                  TCP  |             ---  |              ---  |
                    -  |             ---  |              ---  |
     tcps_connattempt  |        27575473  |                0  |  connections initiated
         tcps_accepts  |               0  |          7670600  |  connections accepted
        tcps_connects  |         7669494  |          7631460  |  connections established
          tcps_closed  |         6603955  |          6561281  |  conn. closed (includes drops)
       tcps_segstimed  |        42875900  |         22903061  |  segs where we tried to get rtt
      tcps_rttupdated  |        20041731  |         22790403  |  times we succeeded
        tcps_sndtotal  |        78011576  |         30534522  |  total packets sent
         tcps_sndpack  |         7669494  |          7631457  |  data packets sent
         tcps_sndbyte  |      6866292777  |        908143383  |  data bytes sent by application
      tcps_sndbyte_ok  |      1909704006  |        908143383  |  data bytes sent by tcp
         tcps_sndctrl  |        55110893  |                1  |  control (SYN|FIN|RST) packets sent
         tcps_sndacks  |        15231189  |         22903064  |  ack-only packets sent
         tcps_rcvpack  |        15231189  |         15232464  |  packets received in sequence
         tcps_rcvbyte  |       908081027  |       1900233540  |  bytes received in sequence
      tcps_rcvackpack  |        15231190  |         22790403  |  rcvd ack packets
      tcps_rcvackbyte  |      1900102566  |        904519476  |  tx bytes acked by rcvd acks
   tcps_rcvackbyte_of  |         7600256  |         15189399  |  tx bytes acked by rcvd acks - overflow acked
         tcps_preddat  |         7630933  |                0  |  times hdr predict ok for data pkts
      tcps_rexmttimeo  |               0  |                1  | *retransmit timeouts
  tcps_rexmttimeo_syn  |        27535420  |                0  | *retransmit SYN timeouts
 tcps_rcvpackafterwin  |               0  |                2  | *packets with data after window
                    -  |             ---  |              ---  |
                  UDP  |             ---  |              ---  |
                    -  |             ---  |              ---  |
                    -  |             ---  |              ---  |
           Flow Table  |             ---  |              ---  |
                    -  |             ---  |              ---  |
              err_cwf  |              27  |                0  | *client pkt without flow
           err_no_syn  |               0  |             4400  | *server first flow packet with no SYN
       err_no_tcp_udp  |               0  |                1  |  no tcp/udp packet
       redirect_rx_ok  |               0  |                1  |  redirect to rx OK
   err_c_nf_throttled  |        30378990  |                0  | *client new flow throttled
    err_flow_overflow  |          960449  |                0  | *too many flows errors

Concurrent-Connection Testing

For concurrent-connection testing, beware that many vendors quietly swap in extra memory on the device under test to inflate the concurrent-connection number. In practice, the maximum useful concurrent-connection count is roughly the number of attached endpoints × 200. Meanwhile, the average flow on the Internet lasts about 16 s, so the new-connection rate × 20–32 is roughly the concurrent-connection capacity the platform can actually use. For example, although the ESP100 on the Cisco ASR1000 has enough memory for 10M concurrent connections, we normally advise customers to plan on about 4M. The test itself is like the CPS test, just with a much longer Delay parameter:

 ./t-rex-64-o -f astf/http_high_active_flows.py -m 200000  -t delay=1000000000  --astf --no-ofed-check -c 20
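The ×20–32 multiplier above is just Little's law: steady-state concurrent flows equal the new-flow rate times the average flow lifetime. A minimal illustration with the numbers from the text (the helper function is hypothetical, not TRex API):

```python
def concurrent_flows(cps, avg_flow_lifetime_s):
    """Little's law: steady-state concurrent flows = arrival rate x mean lifetime."""
    return cps * avg_flow_lifetime_s

# 200K new connections/s at the ~16 s average Internet flow lifetime
print(concurrent_flows(200_000, 16))   # 3200000 -> ~3.2M concurrent flows
```

This is why a 200K CPS device with ~3–4M usable concurrent connections is a balanced design, while a 10M concurrent-connection claim paired with weak CPS is marketing.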

Throughput Testing

The throughput test script:

from trex.astf.api import *

Start the traffic and watch the throughput:

 ./t-rex-64-o -f astf/http.py -m 200000  --astf   --no-ofed-check -c 20

Throughput test results:

-Per port stats table
ports | 0 | 1
-----------------------------------------------------------------------------------------
opackets | 19756560 | 91901021
obytes | 1721689878 | 127458559534
ipackets | 89897230 | 19761521
ibytes | 124679395555 | 1722113091
ierrors | 1806494 | 0
oerrors | 0 | 0
Tx Bw | 1.27 Gbps | 92.73 Gbps

-Global stats enabled
Cpu Utilization : 73.1 % 12.9 Gb/core
Platform_factor : 1.0
Total-Tx : 94.00 Gbps
Total-Rx : 92.12 Gbps
Total-PPS : 10.21 Mpps
Total-CPS : 111.76 Kcps

Expected-PPS : 0.00 pps
Expected-CPS : 0.00 cps
Expected-L7-BPS : 0.00 bps

Active-flows : 355026 Clients : 0 Socket-util : 0.0000 %
Open-flows : 1317126 Servers : 0 Socket : 0 Socket/Clients : -nan
Total_queue_full : 58711424
drop-rate : 0.00 bps
current time : 50.6 sec
test duration : 3549.4 sec
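To keep vendors honest about the "large-packet bandwidth" trick mentioned at the start, derive the average frame size from the per-port counters: divide obytes by opackets. Using the port-1 transmit counters from the table above (a quick check by hand, not a TRex feature):

```python
# Average transmitted frame size on port 1, the direction carrying ~92 Gbps
obytes, opackets = 127_458_559_534, 91_901_021
avg_size = obytes / opackets
print(round(avg_size, 1))   # ~1387 bytes: this throughput figure rides on large packets
```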

Application Traffic-Control Testing

This requires mixing multiple flow types together, as shown below:

[root@Trex v2.86] 

Then start the traffic:

./t-rex-64-o -f astf/sfr_full.py -m 300000   --astf --no-ofed-check -c 20

Check the traffic-control results:


-Per port stats table
ports | 0 | 1
-----------------------------------------------------------------------------------------
opackets | 65889293 | 132685318
obytes | 24917530681 | 142126089015
ipackets | 112246897 | 65421150
ibytes | 119222904031 | 24565262063
ierrors | 33314 | 0
oerrors | 0 | 0
Tx Bw | 11.10 Gbps | 58.37 Gbps

-Global stats enabled
Cpu Utilization : 84.2 % 8.2 Gb/core
Platform_factor : 1.0
Total-Tx : 69.48 Gbps
Total-Rx : 63.65 Gbps
Total-PPS : 10.62 Mpps
Total-CPS : 368.60 Kcps

Expected-PPS : 0.00 pps
Expected-CPS : 0.00 cps
Expected-L7-BPS : 0.00 bps

Active-flows : 755305 Clients : 0 Socket-util : 0.0000 %
Open-flows : 7592138 Servers : 0 Socket : 0 Socket/Clients : -nan
Total_queue_full : 768940
drop-rate : 0.00 bps
current time : 64.8 sec
test duration : 3535.2 sec

Then check the device's application-recognition rate:

ASR1009X

That's all for today's dose of technical charity.

"The Art of Network Device Testing (2) – Application-Layer Testing" was reposted from the Internet for study purposes only; contact us for removal if it infringes. Source URL: https://www.bookhoes.com/4844.html