Traffic shaping on the WRT54G(S) with OpenWrt

zoo (member since 9 Jun 2005)
Hi, I have installed QoS for traffic shaping on my OpenWrt box. It works
very well: I can still make phone calls without problems even when my line
is fully saturated.

As a test, I saturated the downstream with wget while at the same time getting hit from outside with a flood ping using large packets. Normally that would have killed the connection, but it didn't :)

Anyone running OpenWrt usually knows enough Linux to set this up from my
files (below). Only the DSL bandwidth in /etc/ppp/ip-up.qos needs to be
adjusted. All scripts must be executable: chmod a+x
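
In practice the setup boils down to roughly this (sketch; assumes the three files below have already been copied onto the router at these paths):

```shell
# make all three scripts executable (paths as used in this post)
chmod a+x /etc/init.d/S47qos /etc/ppp/ip-up /etc/ppp/ip-up.qos
# load the modules and activate shaping right away, without waiting
# for the next PPP reconnect
/etc/init.d/S47qos
/etc/ppp/ip-up.qos
```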

My source: http://brewer123.home.comcast.net/openwrt/firewall_qos.html

/etc/init.d/S47qos
Code:
#!/bin/sh
#
# prepare QoS
#
# - loading modules
#
# - activation is done by /etc/ppp/ip-up
#

insmod ipt_TOS
insmod ipt_tos
insmod ipt_length
insmod sch_prio
insmod sch_htb
insmod sch_sfq
insmod sch_ingress
insmod cls_tcindex
insmod cls_fw
insmod cls_route
insmod cls_u32

#--- EOF ---

/etc/ppp/ip-up.qos
Code:
#!/bin/sh
#
# set up traffic control stuff
# use the hierarchical token bucket filter (HTB)
# with 3 priorities corresponding to low-latency, normal, and bulk
# traffic.  We mainly shape the outbound traffic.  Inbound traffic
# is harder to shape, but we try that to a minor extent also.
# We do our bandwidth classification based on the TOS flags in the
# IP header.  The nice thing about this is that our TOS settings
# will stay with the packets as they go out onto the internet,
# and may be helpful somewhere along the way.  Probably not,
# but we can hope.
# For most packets, we assume that the creating app
# sets the TOS to something useful.  If it is at
# the default "Normal-Service" we will set the TOS ourselves.
# For UDP and ICMP, we consider them low-latency.  Short
# TCP packets (less than 128 bytes) are low-latency.  This
# covers TCP ack packets as well as most interactive stuff.
# Bulk TCP will generally try to optimize itself by sending
# larger packets up to 1500 bytes long.  Currently we just leave
# longer TCP packets as "Normal-Service" and hope that
# the originating app will set it as bulk if needed.
# ssh is a complicating factor, since other bulk traffic is often
# forwarded over it.  Supposedly ssh doesn't handle this well, and
# still marks the packets as minimize-delay.  We use the connrate
# module to check the connection speed of traffic running to or from
# the ssh port, and if its current bandwidth usage exceeds a threshold,
# we re-mark the TOS as maximize-throughput.  Pretty slick, because
# when it is sending at a slower average rate, the packets will
# stay at minimize-delay.
#
# to check the status of the qos stuff:
#  iptables -t mangle -L
#  tc -s qdisc show dev eth1
#  tc -s class show dev eth1

. /etc/functions.sh

# interface to shape
#IFACE=$WAN
export IFACE=$(nvram get wan_ifname)
# uplink bandwidth
# specified in kbits (about 90% of actual max uplink rate)
UP_RATE=230
DOWN_RATE=1840

# modules should be already loaded...
#insmod ipt_TOS
#insmod ipt_tos
#insmod ipt_length
#insmod sch_prio
#insmod sch_htb
#insmod sch_sfq
#insmod sch_ingress
#insmod cls_tcindex
#insmod cls_fw
#insmod cls_route
#insmod cls_u32


# clear traffic control to known state
tc qdisc del dev $IFACE root

# set up HTB with 3 bands
# class 1:20 is the default class for unclassified traffic
# the ceil command in each band allows it to suck up the entire bandwidth
# if any bandwidth is not being used by the other classes
tc qdisc add dev $IFACE root handle 1: htb default 20
tc class add dev $IFACE parent 1: classid 1:1 htb rate ${UP_RATE}kbit
# 70% bandwidth to band 0 (interactive, low-latency)
tc class add dev $IFACE parent 1:1 classid 1:10 htb rate $((UP_RATE * 70 / 100))kbit ceil ${UP_RATE}kbit prio 0
# 20% bandwidth to band 1 (normal stuff, web browsing, etc.)
tc class add dev $IFACE parent 1:1 classid 1:20 htb rate $((UP_RATE * 20 / 100))kbit ceil ${UP_RATE}kbit prio 1
# 10% bandwidth to band 2 (low priority, bulk stuff, file transfers, etc.)
tc class add dev $IFACE parent 1:1 classid 1:30 htb rate $((UP_RATE * 10 / 100))kbit ceil ${UP_RATE}kbit prio 2

# now use stochastic fairness queuing everywhere
tc qdisc add dev $IFACE parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev $IFACE parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev $IFACE parent 1:30 handle 30: sfq perturb 10

# check TOS and set that to something special
iptables -t mangle -N CHKTOS
# TOS already set, leave alone
iptables -t mangle -A CHKTOS -m tos --tos ! Normal-Service -j RETURN
# udp gets high priority
iptables -t mangle -A CHKTOS -p udp  -j TOS --set-tos Minimize-Delay
# small tcp packets get high priority
iptables -t mangle -A CHKTOS -p tcp -m length --length :128 -j TOS --set-tos Minimize-Delay
# ping gets high priority
iptables -t mangle -A CHKTOS -p icmp -j TOS --set-tos Minimize-Delay

# jump to the CHKTOS chain, like a subroutine call
iptables -t mangle -A POSTROUTING -o $IFACE -j CHKTOS
# for fast-transferring ssh connections, let's change their TOS to
# something else
#iptables -t mangle -A POSTROUTING -o $IFACE -p tcp --sport 22 -m tos --tos Minimize-Delay -m connrate --connrate 20000:inf -j TOS --set-tos Maximize-Throughput
#iptables -t mangle -A POSTROUTING -o $IFACE -p tcp --dport 22 -m tos --tos Minimize-Delay -m connrate --connrate 20000:inf -j TOS --set-tos Maximize-Throughput

# now classify our packets into the HTB bands based on the TOS flags
# The "handle" field in the tc line is the mark applied by iptables
iptables -t mangle -A POSTROUTING -o $IFACE -m tos --tos Minimize-Delay -j MARK --set-mark 1
tc filter add dev $IFACE protocol ip parent 1: prio 1 handle 1 fw classid 1:10

iptables -t mangle -A POSTROUTING -o $IFACE -m tos --tos Maximize-Reliability -j MARK --set-mark 2
iptables -t mangle -A POSTROUTING -o $IFACE -m tos --tos Normal-Service -j MARK --set-mark 2
tc filter add dev $IFACE protocol ip parent 1: prio 1 handle 2 fw classid 1:20

iptables -t mangle -A POSTROUTING -o $IFACE -m tos --tos Maximize-Throughput -j MARK --set-mark 3
iptables -t mangle -A POSTROUTING -o $IFACE -m tos --tos Minimize-Cost -j MARK --set-mark 3
tc filter add dev $IFACE protocol ip parent 1: prio 1 handle 3 fw classid 1:30

# police incoming traffic
tc qdisc del dev $IFACE ingress
tc qdisc add dev $IFACE ingress

# rate limit inbound TCP to ${DOWN_RATE} kbit... any TCP traffic over that
# limit is dropped on the floor.  Note that UDP traffic will not be dropped.
# This helps ensure that there is at least a little bandwidth left over for
# my VoIP calls.  And when congestion occurs, TCP will have to back off
# instead of UDP.
tc filter add dev $IFACE parent ffff: protocol ip prio 50 u32 match ip protocol 6 0xff police rate ${DOWN_RATE}kbit burst 150k drop flowid :1

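Since UP_RATE and DOWN_RATE are supposed to be about 90% of the actual line speed, they can be derived with shell arithmetic instead of being typed in by hand. A small sketch (the measured speeds 256 and 2048 kbit/s are made-up example values, not from this post):

```shell
#!/bin/sh
# sketch: derive the shaping rates as ~90% of the measured line speed.
# MEASURED_UP / MEASURED_DOWN (in kbit/s) are assumed example values.
MEASURED_UP=256
MEASURED_DOWN=2048
UP_RATE=$((MEASURED_UP * 90 / 100))
DOWN_RATE=$((MEASURED_DOWN * 90 / 100))
echo "UP_RATE=$UP_RATE DOWN_RATE=$DOWN_RATE"
```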
/etc/ppp/ip-up
Code:
#!/bin/sh
#-------------------------------------------------------------------------------
PATH=/usr/sbin:/sbin:/usr/bin:/bin
export PATH

# These variables are for the use of the scripts
PPP_IFACE="$1"
PPP_TTY="$2"
PPP_SPEED="$3"
PPP_LOCAL="$4"
PPP_REMOTE="$5"
PPP_IPPARAM="$6"
export PPP_IFACE PPP_TTY PPP_SPEED PPP_LOCAL PPP_REMOTE PPP_IPPARAM
#-------------------------------------------------------------------------------

# update hostname
# ez-ipupdate -s XXX -S dyndns -u XXX:XXX -h router.XXX -a $PPP_LOCAL

# Set time
ntpclient -c 1 -s -h pool.ntp.org

# Activate traffic shaping
/etc/ppp/ip-up.qos

# start siproxd if not running
pidof siproxd || /usr/sbin/siproxd
 
Update: When BitTorrent is running and saturates the upload, it still doesn't work well enough. Apparently some more tuning is needed :/
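
One way to tame that (untested sketch; 6881:6889 is the assumed default BitTorrent port range, not a value from this post) would be to force outgoing BitTorrent traffic to Maximize-Throughput, so the mark rules above put it into the bulk class 1:30. The rule is inserted with -I so it runs before the CHKTOS jump:

```shell
# sketch: mark outgoing BitTorrent uploads as bulk traffic so the
# existing TOS->MARK rules classify them into HTB class 1:30
IFACE=$(nvram get wan_ifname)
iptables -t mangle -I POSTROUTING -o $IFACE -p tcp --sport 6881:6889 \
    -j TOS --set-tos Maximize-Throughput
```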
 
hi
which version of openwrt are you using? stable or experimental?
is openvpn (2.0) also available for stable?

regards
thorsten gehrig
 
openwrt experimental.

Over the next few days I'll dig deeper into QoS/ToS and optimize the shaping.
 
