Bandwidth shaping only on port 80. Penalize heavy http.


I am very new to Linux, and my question may already have been asked here. Moderators, please feel free to redirect me to the thread of any already-posted question.
I have googled a lot but still could not find a tc, iptables, or iproute2 based algorithm that shapes traffic based on usage alone: not ports, not different TOS fields. Basically, what I want is to penalize heavy HTTP downloaders and favor light-browsing people from my Linux gateway. Is there a queueing algorithm or an open-source product that will help me do that? Something very similar to NetEqualizer. I have come across any number of topics on bandwidth shaping, but they all spoke about class-based, port-based, or IP-subnet-based shaping. My scenario is: I don't have different classes of traffic, only browsing and downloading people.

Please help.

Aptos, CA
Yeah, that's not typically what linux traffic shaping is going for.

How would your router know which user is which based on tcp/ip packets?

Also, note that you can't really shape/limit inbound traffic, only outbound traffic: you have no control over what the Internet sends to you, unless you can get to whatever is upstream of you and shape its outbound traffic toward your inbound side.

Here's a possible way to think about it (although this is not a solution, just something to maybe get you thinking...)

You could mark traffic from specific ip addresses on your network that you associate with various users (or at least various devices that belong to various users.)

Now, you can use the different marks you gave each of those packets to put those packets into different buckets.

From there, it's up to you to figure out how you want to divvy up your outbound pipe among them.
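The marks-into-buckets idea above can be sketched roughly as below. This is only a sketch under assumptions: eth0 as the WAN interface, the client IPs, and the rates are all placeholders, not anything from this thread. It is written as a dry-run that prints each command, so it can be reviewed before being run for real as root:

```shell
#!/bin/sh
# Dry-run sketch: mark packets per client IP, then classify into HTB buckets.
# Assumptions: eth0 = WAN side (shaping outbound), client IPs are placeholders.
run() { echo "$@"; }  # remove the echo to actually apply the commands (as root)

# 1. Mark traffic from each user/device with a distinct fwmark
run iptables -t mangle -A PREROUTING -s 192.168.1.10 -j MARK --set-mark 10
run iptables -t mangle -A PREROUTING -s 192.168.1.11 -j MARK --set-mark 11

# 2. HTB root qdisc with one class (bucket) per mark
run tc qdisc add dev eth0 root handle 1: htb default 30
run tc class add dev eth0 parent 1: classid 1:10 htb rate 1mbit ceil 5mbit
run tc class add dev eth0 parent 1: classid 1:11 htb rate 1mbit ceil 5mbit

# 3. Steer each fwmark into its class
run tc filter add dev eth0 parent 1: handle 10 fw classid 1:10
run tc filter add dev eth0 parent 1: handle 11 fw classid 1:11
```

With `rate` as the guaranteed share and `ceil` as the borrowing limit, each bucket can use the whole pipe while it is otherwise idle, but gets squeezed back to its guarantee under contention.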
My place : »www.schettino.us


reply to maxtor
Look at fair NAT.

Space Elf
Mullica Hill, NJ
reply to maxtor
First off, you have to find out what is causing the traffic. Without more data it's hard to know. Are you trying to stop people from streaming Netflix? Or do you have Steam users patching games? It helps to know the why and the cause of the need to limit downloads. If it's P2P you are tackling, that is not on port 80. In fact, I doubt Netflix is solidly on port 80, or any of the game services (Steam, Origin). Even Windows Update does not have to be on port 80.
[65 Arcanist]Filan(High Elf) Zone: Broadband Reports

Sunnyvale, CA
reply to maxtor
There are two different scenarios:

1.) you are operating one or more public servers and the users you want to control are anonymous users on the Internet (some of which may use the same proxy and therefore appear to be one user).

2.) you are operating a gateway that provides Internet access to internal users (that you are able to identify by their workstation IP address).

From your post I can't make out which scenario applies to you but possible solutions are going to be different.

It also matters whether you are only interested in instantaneous bandwidth usage or want to take usage over a certain time period into account as well.
Got some spare cpu cycles ? Join Team Helix or Team Starfire!


True, if we're talking TCP 80 only, Squid with a delay_pool in transparent mode, plus the appropriate iptables REDIRECT rule in the nat table, would work great, without needing QoS or explicit proxy configs on the clients.
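As a sketch, the intercept redirect described above might look like the following (eth1 as the LAN-facing interface is an assumption; 3128 matches squid's default port). Printed as a dry-run so it can be reviewed before running as root:

```shell
#!/bin/sh
# Dry-run sketch of the transparent-proxy redirect in the nat table.
# Assumption: eth1 is the LAN-facing interface; squid listens on 3128 in intercept mode.
run() { echo "$@"; }  # remove the echo to apply for real (requires root)

# Redirect LAN clients' port-80 traffic to the local squid intercept port
run iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128
```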


reply to JohnInSJ
Hi John,

Thanks for replying.
said by JohnInSJ:

How would your router know which user is which based on tcp/ip packets?

Wouldn't the Linux routing / NAT table maintain a list of connections? I think that's how ntop is able to detect the top talker. Then we could inject a delay for the top talker after a certain threshold, or delay the ACK packets based on the top-talker data? Well... something of that sort.

I agree with you completely about shaping on the outbound traffic. But I still need some guidance.
Thanks a lot again.


reply to PSHPSHACK

Bingo, that's a good one! Thanks a lot. But somehow my CentOS 6.2 (32-bit) doesn't support esfq, and hence I am having problems with the script. I am also looking at shaperd, to see if I can tame it a bit. Keep 'em coming...
kernel - 2.6.32-220.el6.i686


reply to leibold
Thanks Leibold,

I'll try to answer your questions as far as possible. I am speaking about scenario 2. There are around 100 users on the LAN side of the gateway.

Normally, I would be interested in instantaneous bandwidth usage only, as I don't want to implement a quota (say, after 2 GB of download or something of that sort).

Now, fingers crossed, I welcome the solutions you were pointing to. Help!


reply to Squiddy
Hi Squiddy,

Thanks for pointing that out.
Yes, I could have gone the Squid way, but my users are too non-technical to add a proxy to their browsers and remove it again when they go home. It's not impossible to implement, but too many people and too much network rearrangement would be involved. Could it be done without disturbing the existing setup? That's why I asked for something similar to NetEqualizer or Bandwidth Arbitrator.


reply to Kearnstd
Hi Kearnstd,

Thank you for replying. Quite valid questions. I have around 100 people to serve with a 5 Mbps link. What I want is for people watching YouTube in HD or doing bulk HTTP downloads to get lower priority than people who are just checking their mail (Gmail) or doing light web browsing. I have a simple NATed setup with a Linux gateway (CentOS 6.2), WAN on one side and LAN on the other; iptables does the rest. So when somebody watches a heavy YouTube video, another user suffers slow browsing speeds. I just want everyone to get a fair share of the bandwidth, so there is no single hog.

Hope this clears my intention and my current infrastructure.

Sunnyvale, CA
reply to maxtor
said by maxtor:

Yes, I could have gone the Squid way, but my users are too non-technical to add a proxy to their browsers and remove it again when they go home.

You missed an important part of Squiddy's solution. Instead of configuring each user's browser, he suggests creating a transparent proxy by intercepting the port 80 web traffic in the Linux firewall of the gateway server and redirecting it through squid.

His solution has a number of benefits for your situation:
- adding squid reduces bandwidth usage by serving popular content from its cache instead of fetching it repeatedly from the Internet.
- no client-side (workstation) configuration changes (that could be subverted by knowledgeable users).
- an application-specific (HTTP traffic) delay_pool in squid allows finer control over bandwidth usage than QoS at the network transport layer (and it appeared as if you didn't want to use QoS anyway).
- all the needed software is included with most Linux distributions, so there is no need to hunt for additional software.

Regarding squid delay pools: HOWTO .

Regarding squid as transparent proxy: HOWTO .

You can find many more examples if you google the subject.
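A minimal class-2 delay pool along the lines of those HOWTOs might look like the fragment below; the rates are placeholder values, not tuned for any particular link:

```
# One pool of class 2: an aggregate bucket plus a per-client bucket.
# delay_parameters values are restore/max in bytes per second.
delay_pools 1
delay_class 1 2
delay_parameters 1 512000/512000 64000/64000   # ~4 Mbit/s aggregate, ~512 kbit/s per client
delay_access 1 allow localnet
```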


Exactly, this is exactly what I do on my home network. Transparent mode is very useful since it is, as the name implies, completely transparent to the clients. Squid caching is amazing and tremendously increases the speed of the network as well as errata update downloads.


reply to maxtor
Hi Squiddy and Leibold,

Sweet! I think this will solve my problem. Thanks a lot for your suggestions; it never crossed my mind :P.

I should be able to do this. Let me get back to you once I implement something based on your brilliant ideas.

Expect an update from me very soon. Thanks a lot, everyone.


Awesome, and thanks for the kind words. I'll be happy to share configs with you as well if you get stuck. I've got a friend on 35 Mbit symmetric FiOS, and he said my home network was amazingly faster at browsing than his FiOS, while I'm on a paltry 6 Mbit/768 kbit DSL. Squid.

Keep us updated and thanks!


reply to maxtor
Hi All,
So, I went ahead as per Squiddy and Leibold. But the speeds are not up to the mark: somewhat jittery and slow. I also see too many TCP_MISS entries in the logs. I have a 5 Mbps leased line (1:1), and below is my squid configuration. Gurus, please let me know if you see room for improvement in this squid.conf:

acl manager proto cache_object
acl localhost src ::1
acl purge method PURGE
cache_mem 500 MB
acl to_localhost dst ::1
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7 # RFC 4193 local private network range
acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl work_time time MTWHF 09:00-18:00

delay_pools 1
delay_class 1 2
delay_parameters 1 600000/600000 200000/200000
delay_access 1 allow localnet
http_access allow manager localhost
http_access allow manager localnet
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 3128 intercept
hierarchy_stoplist cgi-bin ?
cache_dir ufs /var/spool/squid 500 16 256
coredump_dir /var/spool/squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320


Hey, great, you've made some very good progress. Another thing to consider: the Squid box is going to be performing a heavy amount of DNS lookups. By installing a local caching resolver, like BIND 9, pointed at the root hints, you'll substantially improve speeds, since you'll be caching and recursing. You'd want to point /etc/resolv.conf to localhost after doing this.

You will probably want to mount the /var partition with noatime or relatime (or /, if you're not using separate partitions).
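For example, an /etc/fstab entry along these lines (the device and filesystem type are placeholders), which `mount -o remount,noatime /var` applies without a reboot:

```
# /etc/fstab: skip atime updates on /var (device and fstype are placeholders)
/dev/sda3  /var  ext4  defaults,noatime  1 2
```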

Lots of TCP_MISS entries may be related to the maximum cache object size, or may simply mean the cache hasn't filled yet.
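If it is the object-size ceiling, raising it is a one-line change. The directives below are standard squid options, but the values are only illustrative; squid 3.1's default maximum_object_size is a modest 4 MB, which makes every larger download a guaranteed miss:

```
# Allow larger objects into the disk cache (squid's default is only 4 MB)
maximum_object_size 50 MB
# Cap what is kept in RAM per object so cache_mem isn't eaten by a few big files
maximum_object_size_in_memory 512 KB
```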

Here are snippets of my Squid 2.x setup, for the most part it should apply to the Squid 3 syntax:


Thanks a lot, Squiddy. I am indeed running a local BIND for queries. I am attaching my squidclient report:

# squidclient -h mgr:info

HTTP/1.0 200 OK
Server: squid/3.1.10
Mime-Version: 1.0
Date: Fri, 04 Jan 2013 07:58:27 GMT
Content-Type: text/plain
Expires: Fri, 04 Jan 2013 07:58:27 GMT
Last-Modified: Fri, 04 Jan 2013 07:58:27 GMT
X-Cache: MISS from ggn-10-1-1-1
X-Cache-Lookup: MISS from ggn-10-1-1-1:3128
Via: 1.0 ggn-10-1-1-1 (squid/3.1.10)
Connection: close

Squid Object Cache: Version 3.1.10
Start Time: Thu, 03 Jan 2013 07:55:23 GMT
Current Time: Fri, 04 Jan 2013 07:58:27 GMT
Connection information for squid:
Number of clients accessing cache: 122
Number of HTTP requests received: 332119
Number of ICP messages received: 0
Number of ICP messages sent: 0
Number of queued ICP replies: 0
Number of HTCP messages received: 0
Number of HTCP messages sent: 0
Request failure ratio: 0.00
Average HTTP requests per minute since start: 230.1
Average ICP messages per minute since start: 0.0
Select loop called: 61015593 times, 1.419 ms avg
Cache information for squid:
Hits as % of all requests: 5min: 4.9%, 60min: 8.6%
Hits as % of bytes sent: 5min: 4.9%, 60min: 9.4%
Memory hits as % of hit requests: 5min: 26.1%, 60min: 43.2%
Disk hits as % of hit requests: 5min: 8.0%, 60min: 5.2%
Storage Swap size: 458408 KB
Storage Swap capacity: 89.5% used, 10.5% free
Storage Mem size: 343052 KB
Storage Mem capacity: 67.4% used, 32.6% free
Mean Object Size: 17.21 KB
Requests given to unlinkd: 77431
Median Service Times (seconds) 5 min 60 min:
HTTP Requests (All): 0.33943 0.27332
Cache Misses: 0.35832 0.28853
Cache Hits: 0.00000 0.00000
Near Hits: 0.20843 0.23230
Not-Modified Replies: 0.00000 0.00000
DNS Lookups: 0.14261 0.13638
ICP Queries: 0.00000 0.00000
Resource usage for squid:
UP Time: 86583.403 seconds
CPU Time: 368.918 seconds
CPU Usage: 0.43%
CPU Usage, 5 minute avg: 0.57%
CPU Usage, 60 minute avg: 0.78%
Process Data Segment Size via sbrk(): 435808 KB
Maximum Resident Size: 1772304 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
Total space in arena: 435940 KB
Ordinary blocks: 419312 KB 2101 blks
Small blocks: 0 KB 0 blks
Holding blocks: 1012 KB 4 blks
Free Small blocks: 0 KB
Free Ordinary blocks: 16627 KB
Total in use: 420324 KB 96%
Total free: 16627 KB 4%
Total size: 436952 KB
Memory accounted for:
Total accounted: 387557 KB 89%
memPool accounted: 387557 KB 89%
memPool unaccounted: 49394 KB 11%
memPoolAlloc calls: 81220376
memPoolFree calls: 82767989
File descriptor usage for squid:
Maximum number of file descriptors: 1024
Largest file desc currently in use: 413
Number of file desc currently in use: 197
Files queued for open: 0
Available number of file descriptors: 827
Reserved number of file descriptors: 100
Store Disk files open: 1
Internal Data Structures:
26681 StoreEntries
26299 StoreEntries with MemObjects
26280 Hot Object Cache Items
26636 on-disk objects


Our values look pretty close, and mine is snappy, though I have a fraction of the clients you do. Could it be that you need to ratchet down your delay pool size to account for other circuit usage, such as RTMP and other streaming media? I have pretty good cache hit ratios compared to yours, but that could be due to my reduced number of clients.

What are you using for these arguments? Mine are below:

Oh yeah, I can't believe I missed this. You need to make sure you're not applying your delay_pool to your cache or your destination net, so you might want to ACL your local RFC 1918 range and deny it in your delay pool.
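A sketch of that exemption, using the standard RFC 1918 ranges (adjust to whatever local ranges are actually in use); delay_access rules are checked in order, so the deny must come before the allow:

```
# Don't throttle traffic toward local/RFC1918 destinations (e.g. LAN-to-LAN)
acl local_dst dst 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16
delay_access 1 deny local_dst
delay_access 1 allow localnet
```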

My stuff