Table 1. Correlation between traffic control elements and Linux components
traditional element | Linux component
---|---
enqueuing | Between the scheduler's qdisc and the network interface controller (NIC) lies the driver queue. The driver queue gives the higher layers (IP stack and traffic control subsystem) a location to queue data asynchronously from the operation of the hardware. The amount of data queued there is automatically limited by Byte Queue Limits (BQL).
shaping | The class offers shaping capabilities.
scheduling | A qdisc is a scheduler. Schedulers can be simple such as the FIFO or complex, containing classes and other qdiscs, such as HTB.
classifying | The filter object performs the classification through the agency of a classifier object. Strictly speaking, Linux classifiers cannot exist outside of a filter.
policing | A policer exists in the Linux traffic control implementation only as part of a filter.
dropping | To drop traffic requires a filter with a policer which uses "drop" as an action.
marking | The dsmark qdisc is used for marking.
Simply put, a qdisc is a scheduler (Section 3.2, “Scheduling”). Every output interface needs a scheduler of some kind, and the default scheduler is a FIFO. Other qdiscs available under Linux will rearrange the packets entering the scheduler's queue in accordance with that scheduler's rules.
The qdisc is the major building block on which all of Linux traffic control is built, and is also called a queuing discipline.
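As a minimal sketch (the interface name and packet limit are illustrative, not prescriptive), replacing the default scheduler with an explicit FIFO and inspecting it looks like this:

# Replace the root (egress) qdisc on eth0 with a packet FIFO of 100 packets.
tc qdisc add dev eth0 root pfifo limit 100

# Show the qdisc currently attached to the interface.
tc qdisc show dev eth0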
The classful qdiscs can contain classes, and provide a handle to which to attach filters. There is no prohibition on using a classful qdisc without child classes, although this will usually consume cycles and other system resources for no benefit.
The classless qdiscs can contain no classes, nor is it possible to attach a filter to a classless qdisc. Because a classless qdisc contains no children of any kind, there is no utility to classifying, so no filter can be attached to one.
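As a quick, hedged sketch (the interface and perturb value are arbitrary), attaching a classless qdisc such as SFQ as the root qdisc looks like this; nothing can be attached beneath it:

# SFQ is classless: no classes or filters may be added under it.
tc qdisc add dev eth0 root sfq perturb 10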
A source of terminology confusion is the usage of the terms root qdisc and ingress qdisc. These are not really queuing disciplines, but rather locations onto which traffic control structures can be attached for egress (outbound traffic) and ingress (inbound traffic).
Each interface contains both. The primary and more common is the egress qdisc, known as the root qdisc. It can contain any of the queuing disciplines (qdiscs) with potential classes and class structures. The overwhelming majority of documentation applies to the root qdisc and its children. Traffic transmitted on an interface traverses the egress or root qdisc.
For traffic accepted on an interface, the ingress qdisc is traversed. With its limited utility, it allows no child classes to be created, and only exists as an object onto which a filter can be attached. For practical purposes, the ingress qdisc is merely a convenient object onto which to attach a policer to limit the amount of traffic accepted on a network interface.
In short, you can do much more with an egress qdisc because it contains a real qdisc and the full power of the traffic control system. An ingress qdisc can only support a policer. The remainder of the documentation will concern itself with traffic control structures attached to the root qdisc unless otherwise specified.
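A minimal sketch of the typical ingress usage (eth0 and the 1 Mbit/s ceiling are arbitrary choices): attach the ingress qdisc, then attach a filter whose policer drops everything above the chosen rate.

# The ingress qdisc always uses the reserved handle ffff:.
tc qdisc add dev eth0 ingress

# Police all inbound IP traffic to 1 Mbit/s, dropping the excess.
tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 \
    match u32 0 0 \
    police rate 1mbit burst 10k drop flowid :1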
Classes only exist inside a classful qdisc (e.g., HTB and CBQ). Classes are immensely flexible and can always contain either multiple child classes or a single child qdisc [5]. There is no prohibition against a class containing a classful qdisc itself, which facilitates tremendously complex traffic control scenarios.
Any class can also have an arbitrary number of filters attached to it, which allows the selection of a child class or the use of a filter to reclassify or drop traffic entering a particular class.
A leaf class is a terminal class in a qdisc. It contains a qdisc (default FIFO) and will never contain a child class. Any class which contains a child class is an inner class (or root class) and not a leaf class.
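A short sketch of such a hierarchy (rates, classids and the interface are illustrative): an HTB root qdisc with an inner class 1:1 and two leaf classes.

# Root qdisc; unclassified traffic falls into class 1:20 by default.
tc qdisc add dev eth0 root handle 1: htb default 20

# One inner class and two leaf classes beneath it.
tc class add dev eth0 parent 1:  classid 1:1  htb rate 100mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 60mbit ceil 100mbit
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 40mbit ceil 100mbit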
The filter is the most complex component in the Linux traffic control system. The filter provides a convenient mechanism for gluing together several of the key elements of traffic control. The simplest and most obvious role of the filter is to classify (see Section 3.3, “Classifying”) packets. Linux filters allow the user to classify packets into an output queue with either several different filters or a single filter.
A filter must contain a classifier phrase.
A filter may contain a policer phrase.
Filters can be attached either to classful qdiscs or to classes; however, the enqueued packet always enters the root qdisc first. After the filter attached to the root qdisc has been traversed, the packet may be directed to any subclasses (which can have their own filters) where the packet may undergo further classification.
Filter objects, which can be manipulated using tc, can use several different classifying mechanisms, the most common of which is the u32 classifier. The u32 classifier allows the user to select packets based on attributes of the packet.
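A hedged example of a u32 filter (interface, classid, priority and port are illustrative), selecting packets by destination port and steering them into class 1:10:

# Send IP traffic destined for port 80 to class 1:10.
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dport 80 0xffff flowid 1:10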
The classifiers are tools which can be used as part of a filter to identify characteristics of a packet or a packet's metadata. The Linux classifier object is a direct analogue to the basic operation and elemental mechanism of traffic control classifying.
This elemental mechanism is only used in Linux traffic control as part of a filter. A policer calls one action above and another action below the specified rate. Clever use of policers can simulate a three-color meter. See also Section 10, “Diagram”.
Although both policing and shaping are basic elements of traffic control for limiting bandwidth usage, a policer will never delay traffic. It can only perform an action based on specified criteria. See also Example 5, “tc filter”.
This basic traffic control mechanism is only used in Linux traffic control as part of a policer. Any policer attached to any filter could have a drop action.
The only place in the Linux traffic control system where a packet can be explicitly dropped is a policer. A policer can limit packets enqueued at a specific rate, or it can be configured to drop all traffic matching a particular pattern [6].
There are, however, places within the traffic control system where a packet may be dropped as a side effect. For example, a packet will be dropped if the scheduler employed uses this method to control flows as the GRED does.
Also, a shaper or scheduler which runs out of its allocated buffer space may have to drop a packet during a particularly bursty or overloaded period.
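For illustration (the source network, rate and classid are arbitrary), a filter whose policer drops traffic from a given network once it exceeds 1 Mbit/s, while conforming packets continue into class 1:20:

# Drop the excess above 1 Mbit/s; conforming packets go to class 1:20.
tc filter add dev eth0 parent 1: protocol ip prio 4 u32 \
    match ip src 10.0.0.0/8 \
    police rate 1mbit burst 10k drop flowid 1:20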
Every class and classful qdisc (see also Section 7, “Classful Queuing Disciplines (qdiscs)”) requires a unique identifier within the traffic control structure. This unique identifier is known as a handle and has two constituent members, a major number and a minor number. These numbers can be assigned arbitrarily by the user in accordance with the following rules [7].
The numbering of handles for classes and qdiscs

major
    This parameter is completely free of meaning to the kernel. The user may use an arbitrary numbering scheme, however all objects in the traffic control structure with the same parent must share a major handle number. Conventional numbering schemes start at 1 for objects attached directly to the root qdisc.

minor
    This parameter unambiguously identifies the object as a qdisc if minor is 0. Any other value identifies the object as a class. All classes sharing a parent must have unique minor numbers.

The special handle ffff:0 is reserved for the ingress qdisc.
The handle is used as the target in classid and flowid phrases of tc filter statements. These handles are external identifiers for the objects, usable by userland applications. The kernel maintains internal identifiers for each object.
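To make the notation concrete (all numbers and the interface are illustrative): the qdisc below receives the handle 1: (that is, 1:0), the class beneath it receives the handle 1:10, and the filter names that class handle in its flowid phrase.

tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:10 htb rate 10mbit
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dst 192.168.0.0/24 flowid 1:10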
The current size of the transmission queue can be obtained from the ip and ifconfig commands. Confusingly, these commands name the transmission queue length differently: ifconfig reports it as txqueuelen, while ip reports it as qlen (see the output below):
$ ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:18:F3:51:44:10
inet addr:69.41.199.58 Bcast:69.41.199.63 Mask:255.255.255.248
inet6 addr: fe80::218:f3ff:fe51:4410/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:435033 errors:0 dropped:0 overruns:0 frame:0
TX packets:429919 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:65651219 (62.6 MiB) TX bytes:132143593 (126.0 MiB)
Interrupt:23
$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:18:f3:51:44:10 brd ff:ff:ff:ff:ff:ff
The length of the transmission queue in Linux defaults to 1000 packets which could represent a large amount of buffering especially at low bandwidths. (To understand why, see the discussion on latency and throughput, specifically bufferbloat).
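As a rough illustration of why: a full queue of 1,000 maximum-size Ethernet frames (1,500 bytes each) holds 1,000 × 1,500 × 8 = 12,000,000 bits, which at 1 Mbit/s takes roughly 12 seconds to drain.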
More interestingly, txqueuelen is only used as a default queue length for these queueing disciplines:

pfifo_fast (Linux default queueing discipline)
sch_fifo
sch_gred
sch_htb (only for the default queue)
sch_plug
sch_sfb
sch_teql
Looking back at Figure 1, the txqueuelen parameter controls the size of the queues in the Queueing Discipline box for the QDiscs listed above. For most of these queueing disciplines, the “limit” argument on the tc command line overrides the txqueuelen default. In summary, if you do not use one of the above queueing disciplines or if you override the queue length then the txqueuelen value is meaningless.
The length of the transmission queue is configured with the ip or ifconfig commands.
ip link set txqueuelen 500 dev eth0
Notice that the ip command uses “txqueuelen” but when displaying the interface details it uses “qlen”.
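On systems still using net-tools, the same queue length can be set with ifconfig (interface and value are again illustrative):

ifconfig eth0 txqueuelen 500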
Between the IP stack and the network interface controller (NIC) lies the driver queue. This queue is typically implemented as a first-in, first-out (FIFO) ring buffer – just think of it as a fixed-size buffer. The driver queue does not contain packet data. Instead it consists of descriptors which point to other data structures called socket kernel buffers (SKBs) which hold the packet data and are used throughout the kernel.
The input source for the driver queue is the IP stack which queues complete IP packets. The packets may be generated locally or received on one NIC to be routed out another when the device is functioning as an IP router. Packets added to the driver queue by the IP stack are dequeued by the hardware driver and sent across a data bus to the NIC hardware for transmission.
The reason the driver queue exists is to ensure that whenever the system has data to transmit, the data is available to the NIC for immediate transmission. That is, the driver queue gives the IP stack a location to queue data asynchronously from the operation of the hardware. One alternative design would be for the NIC to ask the IP stack for data whenever the physical medium is ready to transmit. Since responding to this request cannot be instantaneous this design wastes valuable transmission opportunities resulting in lower throughput. The opposite approach would be for the IP stack to wait after a packet is created until the hardware is ready to transmit. This is also not ideal because the IP stack cannot move on to other work.
For details on how to set the driver queue size, see chapter 5.5.
Byte Queue Limits (BQL) is a new feature in recent Linux kernels (> 3.3.0) which attempts to solve the problem of driver queue sizing automatically. This is accomplished by adding a layer which enables and disables queuing to the driver queue based on calculating the minimum buffer size required to avoid starvation under the current system conditions. Recall from earlier that the smaller the amount of queued data, the lower the maximum latency experienced by queued packets.
It is key to understand that the actual size of the driver queue is not changed by BQL. Rather, BQL calculates a limit of how much data (in bytes) can be queued at the current time. Any bytes over this limit must be held or dropped by the layers above the driver queue.
The BQL mechanism operates when two events occur: when packets are enqueued to the driver queue and when a transmission to the wire has completed. A simplified version of the BQL algorithm is outlined below. LIMIT refers to the value calculated by BQL.
****
** After adding packets to the queue
****

if the number of queued bytes is over the current LIMIT value then
    disable the queueing of more data to the driver queue
Notice that the amount of queued data can exceed LIMIT because data is queued before the LIMIT check occurs. Since a large number of bytes can be queued in a single operation when TSO, UFO or GSO (see chapter 2.9.1 for details) are enabled, these throughput optimizations have the side effect of allowing a higher than desirable amount of data to be queued. If you care about latency you probably want to disable these features.
The second stage of BQL is executed after the hardware has completed a transmission (simplified pseudo-code):
****
** When the hardware has completed sending a batch of packets
** (Referred to as the end of an interval)
****

if the hardware was starved in the interval
    increase LIMIT
else if the hardware was busy during the entire interval (not starved)
        and there are bytes to transmit
    decrease LIMIT by the number of bytes not transmitted in the interval

if the number of queued bytes is less than LIMIT
    enable the queueing of more data to the buffer
As you can see, BQL is based on testing whether the device was starved. If it was starved, then LIMIT is increased allowing more data to be queued which reduces the chance of starvation. If the device was busy for the entire interval and there are still bytes to be transferred in the queue then the queue is bigger than is necessary for the system under the current conditions and LIMIT is decreased to constrain the latency.
A real world example may help provide a sense of how much BQL affects the amount of data which can be queued. On one of my servers the driver queue size defaults to 256 descriptors. Since the Ethernet MTU is 1,500 bytes this means up to 256 * 1,500 = 384,000 bytes can be queued to the driver queue (TSO, GSO etc are disabled or this would be much higher). However, the limit value calculated by BQL is 3,012 bytes. As you can see, BQL greatly constrains the amount of data which can be queued.
An interesting aspect of BQL can be inferred from the first word in the name – byte. Unlike the size of the driver queue and most other packet queues, BQL operates on bytes. This is because the number of bytes has a more direct relationship with the time required to transmit to the physical medium than the number of packets or descriptors, since the latter are variably sized.
BQL reduces network latency by limiting the amount of queued data to the minimum required to avoid starvation. It also has the very important side effect of moving the point where most packets are queued from the driver queue which is a simple FIFO to the queueing discipline (QDisc) layer which is capable of implementing much more complicated queueing strategies. The next section introduces the Linux QDisc layer.
The BQL algorithm is self tuning so you probably don’t need to mess with this too much. However, if you are concerned about optimal latencies at low bitrates then you may want to override the upper limit on the calculated LIMIT value. BQL state and configuration can be found in a /sys directory based on the location and name of the NIC. On my server the directory for eth0 is:
/sys/devices/pci0000:00/0000:00:14.0/net/eth0/queues/tx-0/byte_queue_limits
The files in this directory are:
hold_time: Time between modifying LIMIT in milliseconds.
inflight: The number of queued but not yet transmitted bytes.
limit: The LIMIT value calculated by BQL. 0 if BQL is not supported in the NIC driver.
limit_max: A configurable maximum value for LIMIT. Set this value lower to optimize for latency.
limit_min: A configurable minimum value for LIMIT. Set this value higher to optimize for throughput.
To place a hard upper limit on the number of bytes which can be queued, write the new value to the limit_max file.
echo "3000" > limit_max
[5] A classful qdisc can only have children classes of its type. For example, an HTB qdisc can only have HTB classes as children. A CBQ qdisc cannot have HTB classes as children.
[6] In this case, you'll have a filter which uses a classifier to select the packets you wish to drop. Then you'll use a policer with a drop action, like this: police rate 1bps burst 1 action drop/drop.
[7] I do not know the range nor base of these numbers. I believe they are u32 hexadecimal, but need to confirm this.