- Packet scheduler, queueing discipline (qdisc), queueing algorithm and packet scheduling algorithm are all names for the same thing. The schedulers are usually contained in distinct kernel modules; one of several can be loaded into the kernel and used to make scheduling decisions.
- The packet scheduler is integral to the networking part of the kernel, sitting between the network stack and the network driver. You can find the source code in: net/sched
- The packet scheduler is configured with the tc program from the iproute2 package.
We will start with what you probably already know - that almost all network communication takes place in chunks and not as a continuous flow. If you do not, this networking article should help.
These chunks are more commonly referred to as packets, so network communication is packet-based.
These packets begin their lives as data on a computing device that needs to get from where it presently resides to another device somewhere on the local area network or a different network.
This data is first broken into packets. Then a header and sometimes a footer are added with delivery instructions and other information. This process is repeated each time the packet traverses an application or device that uses a different format for handling packets.
The opposite takes place as the packet nears its destination. The various headers and footers are removed, and the packet reaches the end of its journey as it began: the same chunk of data we started with. When combined with all the other chunks, in the correct order, we end up with the same data we started with, only in a new location.
We use QoS to ensure packets get to their destination in a timely fashion and that they are not delayed by lower priority traffic. While we don't have much control of what happens to packets outside of our own network, there are QoS options that allow us to drop or reorder packets at each of our own network interfaces. The most important of these is usually the WAN port of our router.
Well, this is where LAN traffic converges to be transferred to another network, often over a lower-capacity connection. Just as traffic backs up on a busy freeway when multiple lanes merge into one, packets are either dropped or held in the interface queue, waiting for their turn to pass through the bottleneck.
In fact, unless there is traffic congestion, there is no need for QoS. By itself, QoS does not increase bandwidth or make packets travel faster; it queues or drops packets when less bandwidth is available than needed. We could also eliminate the congestion by increasing the capacity of the connection, but this is not always possible or practical.
Network interfaces can drop, forward, queue, delay and re-order packets.
Every network interface has two queues, also referred to as buffers, where packets reside briefly before being processed. The queue for incoming packets is called the ingress queue; the queue for outgoing packets is called the egress queue.
Let us look at the egress queue of a typical network interface.
We can determine and change the size of the queue using the ifconfig command. The txqueuelen field in its output indicates the capacity of the queue.
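As a quick check, the same value can also be read from sysfs; a minimal sketch, assuming a Linux system (the loopback interface lo always exists, and eth0 stands in for a real NIC):

```shell
# Read the egress queue capacity, in packets, of the loopback interface.
cat /sys/class/net/lo/tx_queue_len

# Changing the length requires root; the interface name eth0 is an assumption:
#   ip link set dev eth0 txqueuelen 1000
```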
Queue capacity is not measured in bytes or bits as you might expect, but in the number of packets it can hold. When the queue is full, any further incoming packets "overflow": they are dropped and never reach the intended recipient.
Activating QoS is not necessary on Linux, as it is already active by default. The standard packet scheduler that manages egress queues in Linux is "pfifo_fast", a prioritized first-in, first-out queue. It sorts packets into three priority bands based on the QoS/TOS flags in the packet headers.
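You can see which scheduler is attached to an interface with the tc tool; a sketch only, where eth0 is an assumed interface name:

```shell
# List the qdisc attached to the root of eth0's egress queue.
# On an older default setup this typically reports pfifo_fast;
# newer distributions may default to fq_codel instead.
tc qdisc show dev eth0
```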
Network interfaces are serial devices. Packets leave the queue one at a time and are transmitted one after the other, single file. The task of the scheduler is to decide which packet leaves next. It does this by ordering the packets according to an algorithm and its configuration. In the case of "pfifo_fast", within each priority band the first packet to enter the buffer is the first to leave.
Unlike the egress queue, the ingress queue offers limited control over the packets it receives. Other than forwarding packets as they arrive, its only other capability is to drop them. This can still be used to advantage with the TCP protocol, which uses flow and congestion control: dropped TCP packets produce missing acknowledgements, which the transmission source interprets as congestion, so it reduces its transmission rate. There is no similar mechanism for UDP packets, however.
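With tc, such ingress policing looks roughly like the following. This is a sketch only, run as root; the interface name eth1 and the 7 Mbit/s rate are illustrative assumptions, not values from this article:

```shell
# Attach the special ingress qdisc to the WAN interface.
tc qdisc add dev eth1 handle ffff: ingress

# Police all incoming IPv4 traffic: packets arriving faster than
# 7 Mbit/s are dropped, which TCP senders interpret as congestion.
tc filter add dev eth1 parent ffff: protocol ip u32 \
    match u32 0 0 \
    police rate 7mbit burst 32k drop
```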
A basic QoS setup.
  User1 ----Line_A----\
  User2 ----Line_B----[ROUTER]·······[ISP]≡≡≡≡≡≡≡≡≡≡( Internet )
  User3 ----Line_C----/        Line_X       Line_Z

  Line_A, Line_B and Line_C are Gigabit Ethernet
  Line_X is a phone line using the ADSL2+ protocol
  Line_Z is 10 Gigabit fiber

We implement QoS at the [ROUTER] WAN interface.

  -->-->--[egress queue]-->-->--[interface output]-->-->-- Internet
               \_________ QDisc _________/
    1. Drop packets exceeding available bandwidth.
    2. Reorder packets currently in the buffer.

  -->-->--[ingress queue]-->-->--[bridge check]-->-->-- intranet
               \_________ QDisc _________/
    1. Drop packets that exceed the configured bandwidth ("policing");
       with TCP this prevents line congestion.
    2. No reordering.
- We limit outgoing traffic to a rate slightly below the capacity of the outgoing connection. This moves the traffic bottleneck upstream to the router where we can control congestion instead of downstream where we cannot.
- We drop incoming packets that exceed bandwidth. TCP recognizes this as a sign of traffic congestion and reduces the transmission rate at the source.
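The first point can be sketched with a simple token bucket filter. Run as root; eth1 as the WAN interface and the 900 kbit/s rate (just below an assumed 1 Mbit/s ADSL2+ uplink) are illustrative assumptions:

```shell
# Shape egress to slightly below the uplink capacity, so packets queue
# up here on the router, where the QDisc controls them, instead of
# downstream in the modem or at the ISP.
tc qdisc add dev eth1 root tbf rate 900kbit burst 16k latency 50ms
```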
Below are some articles about packet reordering.
doc/howto/packet.scheduler/packet.scheduler.theory.txt · Last modified: 2013/08/10 12:22 by lorema