Thursday, October 12, 2006

xen config on FC5


local yum repo for the install: point a .repo file in /etc/yum.repos.d at the DVD copy with
baseurl=file:///home/pgabler/dvd_fc5
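something like this should work as the repo file (filename and repo id are my own, not from the notes):

vi /etc/yum.repos.d/dvd-fc5.repo
==============
[dvd-fc5]
name=Fedora Core 5 - local DVD copy
baseurl=file:///home/pgabler/dvd_fc5
enabled=1
gpgcheck=0
==============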





/usr/sbin/xm list   # check xen is up and list running domains
dd if=/dev/zero of=fedora1.img bs=5M count=1 seek=1024   # sparse ~5GB image for the guest
/sbin/mke2fs -F -j fedora1.img   # ext3 filesystem on the image
mount -o loop fedora1.img /mnt
mkdir /mnt/dev
for i in console null zero ; do /sbin/MAKEDEV -d /mnt/dev -x $i ; done   # minimal device nodes

mkdir /mnt/etc
vi /mnt/etc/fstab
==============
/dev/sda5 / ext3 defaults 1 1
none /dev/pts devpts gid=5,mode=620 0 0
none /dev/shm tmpfs defaults 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0
==============

mkdir /mnt/proc
mount -t proc none /mnt/proc

yum --installroot=/mnt -y groupinstall Base
yum --installroot=/mnt -y groupinstall "X Window System"
yum --installroot=/mnt -y groupinstall "GNOME Desktop Environment"
copy the repo configs into the guest image so yum works inside it:
cp /etc/yum*d/fedora-core.repo /mnt/etc/yum.repos.d/
cp /etc/yum*d/fedora-updates.repo /mnt/etc/yum.repos.d/
cp /etc/yum*d/fedora-extras.repo /mnt/etc/yum.repos.d/
yum --installroot=/mnt grouplist

cp /etc/selinux/config /mnt/etc/selinux/config
cd /
umount /mnt/proc
umount /mnt

vi /etc/xen/rawhide1
=========
kernel = "/boot/vmlinuz-2.6.15-1.2054_FC5xenU"
memory = 384
name = "rawhide1"
nics = 1
disk = ['file:/root/fedora1.img,sda5,w']
root = "/dev/sda5"
extra = "ro selinux=0 3"
=========


xm mem-max 0 512   # cap domain 0 (dom0) memory at 512MB
xm mem-set 0 512   # shrink dom0 to 512MB so the guest has RAM to use


# to stop the guest later: xm shutdown rawhide1
xm create -c rawhide1   # boot the guest and attach to its console







dd if=/dev/zero of=/swapfile bs=1M count=1024
/sbin/mkswap /swapfile
swapon /swapfile
fstab:
/swapfile swap swap defaults 0 0
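
sanity check that the swap is active:
/sbin/swapon -s
free -m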

Wednesday, October 4, 2006

book: Network Performance Open Source Toolkit



c1 defining network perf
c2 watching network traffic
c3 network device util
c4 netperf
c5 dbs
c6 iperf
c7 pathrate
c8 nettest
c9 netlogger
c10 tcptrace
c11 ntop
c12 comparing network perf tools
c13 measuring app perf
c14 dummynet
c15 nist net
c16 network traffic generator
c17 ns
c18 SSFNet
c19 comparing network app perf tools


c1 defining network perf
c2 watching network traffic # libpcap/winpcap, tcpdump, windump, analyzer, ethereal
c3 network device util # net-snmp
c4 netperf # send data streams across net, and monitor
c5 dbs # perform net tests b/w two remote hosts on network
c6 iperf # how TCP parameters affect net app perf
c7 pathrate # network stat calculations involving delays present in transferring packets
c8 nettest # secure shell for performing network tests b/w hosts
c9 netlogger # set of APIs, logging network events, such as writing data to network
c10 tcptrace # analyze data captured by tcpdump, display info about each TCP session
c11 ntop # network utilization by each device on network
c12 comparing network perf tools
c13 measuring app perf # network emulators and simulators
c14 dummynet # freebsd, emulate network delay, bandwidth limitations, packet loss
c15 nist net # simulate network behavior using Linux
c16 network traffic generator # generate specific data traffic patterns
c17 ns # network simulator
c18 SSFNet # model net behavior using C++ or Java
c19 comparing network app perf tools





c1 defining network performance
availability, ping

2 biggest causes of lost packets
- collisions
- packets dropped by network device

sometimes a network device passes packets of one size, but not another
# ping -s 1000 192.168.0.1

response time, ping

traceroute
some networks use redundant paths
packets don't always take best path

network utilization on 10Meg Ether
%util = ((datasent + datarec) * 8) / (intspeed * samptime) * 100
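worked example with made-up numbers: 2,000,000 bytes sent and 1,000,000 bytes received over a 5 second sample on a 10Mb interface:
%util = ((2000000 + 1000000) * 8) / (10000000 * 5) * 100 = 48%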

network throughput
bandwidth capacity

methods for collecting performance data
- querying, SNMP
- watching, tcpdump
- generating test traffic

watching existing traffic, watch for these:
1. packet retransmissions
2. frozen tcp window sizes
3. broadcast storms
4. network advertisements
5. chatty applications
6. quality of service applications






c2. watching network traffic

libpcap, winpcap
www.tcpdump.org/release/libpcap-0.7.1.tar.gz
winpcap.polito.it
www.windump.polito.it/install.bin/alpha.windump.exe

tcpdump
$ tcpdump -i eth0 -s 200 -x

windump
c:\>windump -s 200 -x -w testcap

filtering packets with tcpdump/windump
tcpdump ip host myhost
tcpdump ip host 192.168.0.1 and port not 23
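a couple more that are handy (interface, filename and filter are just examples):
tcpdump -i eth0 -w web.pcap tcp port 80   # capture only port 80 traffic to a file
tcpdump -r web.pcap                       # read the capture back later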

Analyzer
windows program similar to windump, but with cool gui
shows different layers, mac, ip, tcp, telnet application data

ethereal
one nice feature of ethereal is that it decodes a lot more packet types for you
www.ethereal.com/distribution



c3 network device utilization

net-snmp package
sourceforge.net/project/showfiles.php?group_id=12694

snmpwalk/snmpdelta
MIB-II stuff, ifNumber, ifTable
ifEntry objects: ifIndex, ifDescr
# display number of interfaces on a device
$ snmpget -c public 192.168.0.1 IF-MIB::ifNumber.0
$ snmpdelta -c public -Cp 5 -CT 192.168.0.1 IF-MIB::ifInOctets.23 IF-MIB::ifOutOctets.23
-CT # format output into tables
-Cs # timestamp each entry
once you know input & output over time, calculate bandwidth
%util = ((ifInOctets + ifOutOctets) * 8) / (ifSpeed * time) * 100

be careful with bandwidth calcs, full duplex allows twice the traffic, so 100Mb can carry 200Mb total # not sure about that

error rates:
error rate = (ifInErrors * 100) / (ifInUcastPkts + ifInNUcastPkts)

$ snmpdelta -c public -Cp 60 -Cs 192.168.0.1 IF-MIB::ifInErrors.23 IF-MIB::ifInUcastPkts.23 IF-MIB::ifInNUcastPkts.23
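
worked example with made-up deltas from that output: 25 input errors against 9000 unicast + 1000 non-unicast input packets:
error rate = (25 * 100) / (9000 + 1000) = 0.25%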

Using Vendor MIBS
Cisco CPU MIB
MIB uses ASN.1 syntax to define each of the objects
first step to SNMP script is to obtain MIB
ftp://ftp.cisco.com/pub/mibs/v1/OLD-CISCO-CPU-MIB.my
there is a newer MIB, but it doesn't work on everything
MIB information for router CPU util is stored in OLD-CISCO-CPU-MIB.my # not sure how they got this
http://www.cisco.com/public/sw-center/netmgmt/cmtk/mibs.shtml
busyPer, avgBusy1, avgBusy5 # choose one of three mib objects
backtrack through the MIBs to get the full object identifier
lcpu, local, found in CISCO-SMI-V1SMI.my
putting it all together for avgBusy5: 1.3.6.1.4.1.9.2.1.58.0
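breaking that OID down (my reading of the MIB tree, worth double-checking):
1.3.6.1.4.1 (enterprises) . 9 (cisco) . 2 (local) . 1 (lcpu) . 58 (avgBusy5) . 0 (instance)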

#!/bin/sh
datenow=`date +"%x,%T"`
util=`snmpget -c public 192.168.0.1 1.3.6.1.4.1.9.2.1.58.0 | awk '{print $4}'`
echo $datenow, $util >> utilization.txt

cron every 5min
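assuming the script is saved somewhere like /usr/local/bin/cpuutil.sh (path is made up), the crontab entry would be:
*/5 * * * * /usr/local/bin/cpuutil.sh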




c4 netperf
client/server, netserver & netperf
ftp.cup.hp.com/dist/networking/benchmarks/netperf

TCP_STREAM
$ netperf -H 192.168.0.1 -l 60
Throughput = 7.75 * 10^6 bits/sec (of 10Meg)

UDP_STREAM
$ netperf -t UDP_STREAM -H 192.168.0.1 # errors out: message too long?
$ netperf -t UDP_STREAM -H 192.168.0.1 -- -m 102
Throughput = 9.52 * 10^6 bits/sec (of 10Meg)

Measuring Request/Response Times
TCP_RR
$ netperf -t TCP_RR -H 192.168.1.1 -l 60
Transaction Rate per sec = 1944

TCP_CRR
sets up a new TCP connection for each request, similar to HTTP
$ netperf -t TCP_CRR -H 192.168.0.1 -l 60
Transactions per sec = 17.25

UDP_RR
$ netperf -t UDP_RR -H 192.168.1.1 -l 60
Trans Rate per sec = 2151

snapshot_script
