OpenWrt Barrier Breaker and Chaos Calmer (BB & CC) have pre-built packages for controlling Bufferbloat - the undesirable latency that comes from the router buffering too much data.
Bufferbloat is most evident when the link is heavily loaded. It causes bad performance for voice and video conversations, causes gamers to lag out, and generally makes people say, "The Internet is slow today."
The "luci-app-sqm" package of modern OpenWrt solves the problem of Bufferbloat. In a three-minute installation and configuration, you'll have a much more lively network connection. Here's how:
TL;DR Install OpenWrt BB or newer, and follow the video at: https://www.youtube.com/watch?v=FvYhifdQ92Q
Before you can optimize your network, you need to know its current state. Run a speed test to find your down/upload link speeds.
Note: As an alternative, you could measure ping times yourself while running one of the other speed tests at http://speedtest.net, http://testmy.net, or http://speedof.me, but you would miss out on the automatic latency measurements of the DSLReports test.
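The manual version of this check can be sketched from any shell: record latency on an idle link, then watch it again while a large transfer saturates the link. The download URL below is a placeholder; substitute any large file.

```shell
# Baseline: latency on an idle link
ping -c 10 8.8.8.8

# Now load the link and watch the ping times climb if the router is bufferbloated
# (https://example.com/largefile is a placeholder URL, not a real test file)
wget -O /dev/null https://example.com/largefile &
ping -c 30 8.8.8.8
wait
```

If the loaded ping times are hundreds of milliseconds higher than the idle ones, you have bufferbloat worth fixing.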
Install the luci-app-sqm package in OpenWrt BB or CC. Watch the YouTube video that shows these steps:
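If you prefer the command line, the same installation can be done over SSH with the standard opkg commands:

```shell
# Refresh the package lists, then install the SQM GUI
# (the underlying sqm-scripts package is pulled in as a dependency)
opkg update
opkg install luci-app-sqm
```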
The default values described below work quite well for most situations. You may be able to improve performance by experimenting with settings, see A little about tuning SQM below.
To configure SQM, choose Network → SQM QoS to see the Smart Queue Management (SQM) GUI.
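The same settings can also be made with UCI from the command line. A minimal sketch, assuming the default single queue section that sqm-scripts creates (verify the section name with "uci show sqm") and example speeds of 5950/653 kbit/s:

```shell
# Minimal SQM setup via UCI; @queue[0] addresses the first (default) queue section
uci set sqm.@queue[0].enabled='1'
uci set sqm.@queue[0].interface='eth0'   # your WAN interface; check "ifstatus wan"
uci set sqm.@queue[0].download='5950'    # kbit/s, ~85% of the advertised downstream
uci set sqm.@queue[0].upload='653'       # kbit/s, ~85% of the advertised upstream
uci commit sqm
/etc/init.d/sqm restart
```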
Measure your latency again with the speed test. The measured ping times should be only slightly higher during the downloads and uploads. Try VoIP, Skype, Facetime, gaming, DNS, and general web browsing. They should all be much more pleasant, even if someone's uploading or downloading a lot of data.
You've reduced your connection's bufferbloat!
The steps above will control latency well without additional effort. The 80-95% figures mentioned above are good first-cut estimates, but you can often gain more speed while still controlling latency by making a couple of experiments to adjust the settings.
The most important settings are the Download and Upload speeds. While it may seem counterintuitive, it may be your upstream buffering that is slowing your downstream. Remember that each packet that is sent in a TCP connection needs to be acknowledged, so you are always doing both.
Adjust the Download speed upward until the latency begins to increase, then enter a slightly lower final value. One good test for this is the DSLReports Speed Test, because it automatically measures latency as well as speed. Then do the same for the Upload entry. It may be worthwhile to tweak the two up and down a bit to find the sweet spot for your connection and usage.
Note: If you have a DSL link, the experiments above may produce Download and Upload values that are actually higher than the original speed test results. This is OK: the ATM framing bytes of a DSL link add an average of 9% overhead, and these settings simply tell SQM how to make up for that overhead.
Note: If you use a cable modem, you should use a speed test that runs for a longer time. Cable modem makers have thoroughly gamed speed tests with "Speedboost", which typically gives you an extra 10 Mbit/s or so for the first 10 seconds (so the speed test will look good!). Don't be surprised if the "right" setting for your queue rates is significantly lower than the no-SQM speed test results. You may need to tune the speeds down from your initial settings to get the latency to the point you need for your own usage of your connection.
You can also experiment with the other settings (read the "the details" sections below for more information), but they will not make nearly as large a difference as ensuring that the Download and Upload speeds are maximized.
Smart Queue Management (SQM) is our name for an intelligent combination of better packet scheduling (flow queueing) techniques along with active queue length management (AQM).
OpenWrt is fully capable of tuning the network traffic control parameters. If you want to do the work yourself, you can read the full description in the Traffic Control HOWTO. You may still find it useful to get into all the details of classifying and prioritizing certain kinds of traffic, but the SQM algorithms and scripts (fq_codel and sqm-scripts) take only a few minutes to set up, and work as well as or better than most hand-tuned classification schemes.
Current versions of OpenWrt have SQM and fq_codel built in. These algorithms were developed as part of the CeroWrt project. They have been tested and refined over the last three years, and have been accepted back into the OpenWrt mainline (BB & CC), as well as the Linux Kernel, and in dozens of commercial offerings.
To use SQM in your OpenWrt router, use the SQM QoS tab in the web interface. This will optimize the performance of the WAN interface (generally eth0) that connects your router to the ISP/the Internet. There are three sub-tabs in the SQM QoS page that you may configure:
Set the Download and Upload speeds in the web GUI to match the speed of your Internet connection. To do this:
Example 1: If your provider boasts "7 megabit download/768 kbps upload", set Download to 5950 kbit/s and Upload to 653 kbit/s. Those numbers are 85% of the advertised speeds.
Example 2: If you have measured your bandwidth with a speed test (be sure to disable SQM first), set the Download and Upload speeds to 95% of those numbers. For example, if you have measured 6.2 megabits down and 0.67 megabits up (6200 kbps and 670 kbps, respectively), set your Download and Upload speeds to 95% of those numbers (5890 and 637 kbps, respectively).
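The percentage arithmetic in both examples can be wrapped in a tiny hypothetical shell helper (the function name is ours, not part of sqm-scripts):

```shell
# shaped_rate RATE_KBPS PERCENT: print PERCENT of RATE_KBPS, rounded to the nearest kbit/s
shaped_rate() {
  awk -v rate="$1" -v pct="$2" 'BEGIN { printf "%d\n", rate * pct / 100 + 0.5 }'
}

shaped_rate 6200 95   # prints 5890 (Example 2 download)
shaped_rate 670 95    # prints 637  (Example 2 upload)
shaped_rate 768 85    # prints 653  (Example 1 upload)
```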
Basic Settings - the details…
SQM is designed to manage the queues of packets waiting to be sent across the slowest (bottleneck) link, which is usually your connection to the Internet. OpenWrt cannot automatically detect the speed of a DSL, cable modem, or GPON link, and the majority of ISP-provided buffering configurations are broken today. You therefore need to take control of the bottleneck link away from the ISP's equipment and move it into OpenWrt, where it can be fixed. You do this by entering link speeds that are a few percent below the actual speeds.
Use a speed test program or web site like the DSL Reports Speed Test to get an estimate of the actual download and upload values. After setting the initial Download and Upload entries, you should feel free to try the suggestions at A little about tuning SQM above to see if you can further increase the speeds.
The Queue Discipline tab controls how packets are prioritized for sending and receipt. The default settings shown here work very well for nearly all circumstances. Those defaults are:
Queueing Discipline - the details…
The default fq_codel queueing discipline works well in virtually all situations. Feel free to try out other algorithms to see if they work better in your environment.
The default simple.qos script has a traffic shaper (the Queueing Discipline you select) and three classes with different priorities for traffic. This provides good defaults.
Explicit Congestion Notification (ECN) is a mechanism for notifying a sender that its packets are encountering congestion and that the sender should slow its packet delivery rate. Instead of dropping a packet, fq_codel marks the packet with a congestion notification and passes it along to the receiver. That receiver sends the congestion notification back to the sender, which can adjust its rate. This provides faster feedback than having the router drop the received packet. Note: this technique requires that the TCP stack on both sides enable ECN.
At low bandwidth, we recommend that you turn ECN off for the Upload (outbound, egress) direction, because fq_codel handles and drops packets before they reach the bottleneck, leaving room for more important packets to get out. For the Download (inbound, ingress) link, we recommend you turn ECN on so that fq_codel can inform the local receiver (that will in turn notify the remote sender) that it has detected congestion without loss of a packet.
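These ECN recommendations map onto two advanced options in the sqm-scripts UCI configuration. The option names below are as shipped in recent sqm-scripts; verify them on your build with "uci show sqm":

```shell
# ECN on for ingress (download), off for egress (upload), matching the advice above
uci set sqm.@queue[0].qdisc_advanced='1'   # advanced options take effect only when enabled
uci set sqm.@queue[0].ingress_ecn='ECN'
uci set sqm.@queue[0].egress_ecn='NOECN'
uci commit sqm
/etc/init.d/sqm restart
```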
The "Dangerous Configuration" options allow you to change other parameters. They are not heavily error checked, so be careful that they are exactly as shown when you enter them. As with other options in this tab, it is safe to leave them at their default. They include:
Set the Link Layer Adaptation options based on your connection to the Internet. The general rule for selecting the Link Layer Adaption is:
If you are not sure what kind of link you have, first try using "None", then run the Quick Test for Bufferbloat. If the results are good, you're done. If not, try the ATM choice, then the Ethernet choice, to see which performs best. Read the Details (below) to learn more about tuning the parameters for your link.
Link Layer Adaptation - the details…
It is especially important to set the Link Layer Adaptation on links that use ATM framing (almost all DSL/ADSL links do). ATM carries data in 53-byte cells, each with 5 bytes of header for 48 bytes of payload. Unless the SQM algorithm can account correctly for the ATM framing bytes, short packets will appear to take longer to send than expected, and will be penalized.
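The cell quantization is easy to see with a little arithmetic. A sketch (ignoring any extra per-packet protocol overhead, which varies by link type):

```shell
# atm_wire_bytes N: bytes actually sent on the wire for an N-byte packet over ATM,
# given 48 payload bytes per 53-byte cell (the last cell is padded to a full cell)
atm_wire_bytes() {
  awk -v n="$1" 'BEGIN { cells = int((n + 47) / 48); print cells * 53 }'
}

atm_wire_bytes 48   # prints 53: exactly one full cell
atm_wire_bytes 49   # prints 106: one extra byte forces a whole second cell
```

This is why short packets are penalized disproportionately: one byte past a cell boundary costs a full 53 bytes on the wire.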
SQM can also account for the overhead imposed by "Ethernet with overhead" (mostly VDSL) links. Cable Modem, Fiber, and direct Ethernet connections generally do not need any kind of link layer adaptation.
The "Advanced Link Layer" choices are relevant if you are sending packets larger than 1500 bytes. This would be unusual for most home setups, since ISPs generally limit traffic to 1500 byte packets.
Unless you are experimenting, you should use the tc_stab (not the htb_private) choice for the link layer adaptation mechanism.
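In UCI, the link layer settings look like this. The 44-byte overhead shown is a commonly suggested conservative value for ADSL over ATM, not a universal constant; the option names are as found in recent sqm-scripts, so check "uci show sqm" on your build:

```shell
# ADSL-over-ATM example: ATM framing with a conservative per-packet overhead
uci set sqm.@queue[0].linklayer='atm'
uci set sqm.@queue[0].overhead='44'
# tc_stab is the recommended adaptation mechanism, per the note above
uci set sqm.@queue[0].linklayer_adaptation_mechanism='tc_stab'
uci commit sqm
/etc/init.d/sqm restart
```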
The right queue setup script (simple, hfsc_lite, …) for you depends on a combination of several factors: the ISP connection's speed and latency, the router's CPU power, whether LAN clients connect by wifi or wire, etc. You will likely need to experiment with several scripts to see which performs best for you. Below is a summary of real-life testing with three different setup scripts.
This was tested with WNDR3700 running trunk with kernel 4.1.16 and SQM 1.0.7 with simple, hfsc_lite and hfsc_litest scripts with SQM speed setting 85000/10000 (intentionally lower than ISP connection speed), 110000/15000 (that should exceed the ISP connection speed and also totally burden the router's CPU), as well as 110000/15000 using Wifi.
              wired 85/10           wired 110/15          Wifi 110/15
              Down/Up/Latency       Down/Up/Latency       Down/Up/Latency
              (Mbit/s, Mbit/s, ms)  (Mbit/s, Mbit/s, ms)  (Mbit/s, Mbit/s, ms)
simple        19.5 / 2.1 / 18.5     21.2 / 2.7 / 19       11.0 / 3.0 / 21
hfsc_lite     20.7 / 2.2 / 19.5     25.0 / 2.7 / 50       19.0 / 2.9 / 35
hfsc_litest   20.7 / 2.2 / 18.7     25.0 / 2.7 / 52       18.0 / 2.8 / 35
("flent" network measurement tool reports the overview as average of the 4 different traffic classes, so the total bandwidth was 4x the figures shown in the above table that shows "per-class" speed. The maximum observed combined download+upload speed was ~110 Mbit/s.)
With wired 85/10 the experience was almost identical with all three setup scripts: approximately 20 Mbit/s download, 2.1 Mbit/s upload, and 19 ms latency shown in the flent summary graph.
With wired 110/15 there was more difference. Interestingly, "simple" kept latency low at about 19-20 ms with 21 Mbit/s download per class, while with the other two scripts latency jumped to about 50 ms after ~20 seconds (possibly a flent peculiarity, but worth mentioning) while they reached 24-25 Mbit/s download per class.
But when the LAN client connected to the router over Wifi with the 110/15 limits, "simple" lost its download speed. Latency was still low, but download speed was just half of normal. Likely the CPU cycles consumed by wifi left too little processing power for the shaper, and the router choked.
At least on the tested setup, the download speed using wifi and SQM "simple" was half of what could be achieved with hfsc_lite+wifi, or simple+wired.
The key message of this note is that the right setup script for you will depend on your connection, your router and your LAN clients. It pays off to test the various setup scripts.
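Switching setup scripts for such a test is a one-line change. Script names must match the files installed under /usr/lib/sqm on your router (e.g. simple.qos or simplest.qos in stock sqm-scripts; the hfsc_* scripts in the table above were test builds and may not ship with your release):

```shell
# List the setup scripts available on this router
ls /usr/lib/sqm/*.qos

# Try a different one, then re-run your benchmark
uci set sqm.@queue[0].script='simplest.qos'
uci commit sqm
/etc/init.d/sqm restart
```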