network performance open source toolkit
c1 defining network perf
c2 watching network traffic # libpcap/winpcap, tcpdump, windump, analyzer, ethereal
c3 network device util # net-snmp
c4 netperf # send data streams across net, and monitor
c5 dbs # perform net tests b/w two remote hosts on network
c6 iperf # how TCP parameters affect net app perf
c7 pathrate # network stat calculations involving delays present in transferring packets
c8 nettest # secure shell for performing network tests b/w hosts
c9 netlogger # set of APIs, logging network events, such as writing data to network
c10 tcptrace # analyze data captured by tcpdump, display info about each TCP session
c11 ntop # network utilization by each device on network
c12 comparing network perf tools
c13 measuring app perf # network emulators and simulators
c14 dummynet # freebsd, emulate network delay, bandwidth limitations, packet loss
c15 nist net # simulate network behavior using Linux
c16 network traffic generator # generate specific data traffic patterns
c17 ns # network simulator
c18 SSFNet # model network behavior using C++ or Java
c19 comparing network app perf tools
c1 defining network performance
availability, ping
2 biggest causes of lost packets
- collisions
- packets dropped by network device
sometimes a network device passes packets of one size, but not another
# ping -s 1000 192.168.0.1
response time, ping
traceroute
some networks use redundant paths
packets don't always take best path
network utilization on 10Meg Ether
%util = ((datasent + datarec) * 8) / (intspeed * samptime) * 100
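the formula can be sanity-checked with awk; the byte counts and interval below are made-up sample values, not real measurements:

```shell
# hypothetical sample: byte counters collected over a 60-second interval
datasent=3000000      # bytes sent during the sample (made-up)
datarec=1500000       # bytes received during the sample (made-up)
intspeed=10000000     # interface speed in bits/sec (10Meg ethernet)
samptime=60           # sample time in seconds
awk -v s=$datasent -v r=$datarec -v sp=$intspeed -v t=$samptime \
    'BEGIN { printf "%%util = %.1f%%\n", ((s + r) * 8) / (sp * t) * 100 }'
# prints %util = 6.0%
```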
network throughput
bandwidth capacity
methods for collecting performance data
- querying, SNMP
- watching, tcpdump
- generating test traffic
watching existing traffic, watch for these:
1. packet retransmissions
2. frozen tcp window sizes
3. broadcast storms
4. network advertisements
5. chatty applications
6. quality of service applications
c2 watching network traffic
libpcap, winpcap
www.tcpdump.org/release/libpcap-0.7.1.tar.gz
winpcap.polito.it
www.windump.polito.it/install.bin/alpha.windump.exe
tcpdump
$ tcpdump -i eth0 -s 200 -x
windump
c:\>windump -s 200 -x -w testcap
filtering packets with tcpdump/windump
tcpdump ip host myhost
tcpdump ip host 192.168.0.1 and port not 23
Analyzer
windows program similar to windump, but with cool gui
shows different layers, mac, ip, tcp, telnet application data
ethereal
one nice feature of ethereal is that it decodes a lot more packet types for you
www.ethereal.com/distribution
c3 network device utilization
net-snmp package
sourceforge.net/project/showfiles.php?group_id=12694
snmpwalk/snmpdelta
MIB-2 stuff: ifNumber, ifTable
ifEntry objects: ifIndex, ifDescr
# display number of interfaces on a device
$ snmpget -c public 192.168.0.1 IF-MIB::ifNumber.0
$ snmpdelta -c public -Cp 5 -CT 192.168.0.1 IF-MIB::ifInOctets.23 IF-MIB::ifOutOctets.23
-CT # format output into tables
-Cs timestamp each entry
once you know input & output over time, calculate bandwidth
%util = ((inOctets + outOctets) * 8) / (ifSpeed * time) * 100
be careful of bandwidth calcs, full duplex allows twice the amount of traffic, so 100Mb = 200Mb # not sure about that
error rates:
error rate = (ifInErrors * 100) / (ifInUcastPkts + ifInNUcastPkts)
$ snmpdelta -c public -Cp 60 -Cs 192.168.0.1 IF-MIB::ifInErrors.23 IF-MIB::ifInUcastPkts.23 IF-MIB::ifInNUcastPkts.23
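plugging hypothetical counter deltas into the error-rate formula with awk (the deltas below are invented, not from a real device):

```shell
# made-up counter deltas from one snmpdelta sample period
inErrors=12           # ifInErrors delta (made-up)
inUcast=40000         # ifInUcastPkts delta (made-up)
inNUcast=8000         # ifInNUcastPkts delta (made-up)
awk -v e=$inErrors -v u=$inUcast -v n=$inNUcast \
    'BEGIN { printf "error rate = %.3f%%\n", (e * 100) / (u + n) }'
# prints error rate = 0.025%
```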
Using Vendor MIBS
Cisco CPU MIB
MIB uses ASN.1 syntax to define each of the objects
first step to SNMP script is to obtain MIB
ftp://ftp.cisco.com/pub/mibs/v1/OLD-CISCO-CPU-MIB.my
there is a newer MIB, but it doesn't work on everything
MIB information for router CPU util is stored in OLD-CISCO-CPU-MIB.my # not sure how they got this
http://www.cisco.com/public/sw-center/netmgmt/cmtk/mibs.shtml
busyPer, avgBusy1, avgBusy5 # choose one of three mib objects
backtrace to get full object identifier
lcpu, local, found in CISCO-SMI-V1SMI.my
putting all together for avgBusy5, 1.3.6.1.4.1.9.2.1.58.0
#!/bin/sh
datenow=`date +%x,%T`
util=`snmpget 192.168.0.1 -c public 1.3.6.1.4.1.9.2.1.58.0 | awk '{print $4}'`
echo $datenow, $util >> utilization.txt
cron every 5min
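a crontab entry for the five-minute sampling could look like this (the script path /usr/local/bin/ciscoutil.sh is a made-up example, use wherever the script above was saved):

```shell
# run the utilization script every 5 minutes
*/5 * * * * /usr/local/bin/ciscoutil.sh
```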
c4 netperf
client/server, netserver & netperf
ftp.cup.hp.com/dist/networking/benchmarks/netperf
TCP_STREAM
$ netperf -H 192.168.0.1 -l 60
Throughput in 10^6 bits/sec = 7.75 (of 10Meg)
UDP_STREAM
$ netperf -t UDP_STREAM -H 192.168.0.1 # error message too long?
$ netperf -t UDP_STREAM -H 192.168.0.1 -- -m 102 = 9.52 (of 10Meg)
Measuring Request/Response Times
TCP_RR
$ netperf -t TCP_RR -H 192.168.1.1 -l 60
Transaction Rate per sec = 1944
TCP_CRR
sets up a new TCP connection for each request, similar to HTTP
$ netperf -t TCP_CRR -H 192.168.0.1 -l 60
Transactions per sec = 17.25
UDP_RR
$ netperf -t UDP_RR -H 192.168.1.1 -l 60
Trans Rate per sec = 2151
snapshot_script