

Matt3
All noise, no signal.
Premium
join:2003-07-20
Jamestown, NC
kudos:12
reply to knightmb

Re: Well worth reading on why P2P causes problems

said by knightmb:

said by FFH:

Everyone who has an axe to grind in the P2P debate and why cable companies throttle P2P should read this commentary. It is 4 pages long and technical, but it explains how the current implementation of TCP has allowed P2P software to hog bandwidth to everyone's detriment.

It also explains that the problem can be fixed without banning P2P. But to do that requires the IEEE to modify TCP protocol standards and for ISPs to develop bandwidth limiting procedures for those users who won't upgrade to the new TCP stack client software.
Maybe if they used what TCP/IP already has built in, like TOS (Type of Service) flags for lowdelay, throughput, reliability, mincost, and congestion. That stuff is already in there; adding another knob like "even more uber ultra lower" is just more from the Redundancy Department of Redundancy.
Agreed. QoS is also a viable option (it works on my $40 Buffalo router, for Christ's sake), but the big ISPs aren't interested in throttling P2P traffic. They want to either 1) kill it by making it unreliable, a la Comcast, or 2) implement byte caps and start hammering users with overages, a la Time Warner. These options are all being discussed by the companies most threatened by the advent of streaming video ... the cable companies.

There are solutions to P2P "flood" right now, but a lot of companies want it to fail because it's detrimental to their ancient and flawed business model.


justbits
More fiber than ATT can handle
Premium
join:2003-01-08
Chicago, IL

1 recommendation

QOS/TOS flags are not a solution. Anybody can set those flags. Anybody can ignore those flags. QOS/TOS is only useful between you and your first hop onto the Internet, not between you and anybody else. Otherwise, everybody who is greedy would mark all of their packets as highest priority.
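The point is easy to demonstrate: any unprivileged application can request priority treatment for its own packets, and nothing on the path is obliged to honor it. A minimal Python sketch (the constant is the classic RFC 1349 "minimize delay" TOS bit):

```python
import socket

# Any ordinary process can mark its own traffic as "low delay" --
# the kernel applies the flag without verifying any entitlement.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

IPTOS_LOWDELAY = 0x10  # RFC 1349 "minimize delay" TOS bit
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, IPTOS_LOWDELAY)

# The flag is now set on outgoing packets; routers along the path
# are free to honor it, ignore it, or rewrite it.
print(hex(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))  # → 0x10
sock.close()
```

Which is exactly why TOS only works inside a network you control: past the first hop, every greedy sender would set the same bit.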

The proposed change to TCP could result in fair sharing on major Internet backbones as well as on your home Internet router. The big win with a fair-sharing TCP stack is that the major backbones carrying GB/sec of traffic won't need to deploy excessive traffic shaping or forge RST packets. They could even detect the difference between a "greedy" TCP stack and a "fair-sharing" one, and continue to throttle "greedy" consumers without excessively screwing "fair-sharing" Internet users. In the average case, a fair-sharing stack appears to impose no excessive detriment on P2P apps that would otherwise be more severely throttled by your ISP, and it seems to yield a huge performance increase for lower-bandwidth, single-connection uses like VoIP and web surfing. However, the protocol needs to be implemented and tested before any of this can really be proven.

The key here is that people who excessively use the Internet would be fairly throttled by a change to everyone's TCP stack to help ease congestion on Internet backbones. With fair-sharing TCP stacks, ISPs wouldn't need to excessively punish people using P2P protocols that are designed to take advantage of TCP's "congestion control".

If you want to think of it another way, P2P protocols are NOT designed to be a "green" or environmentally friendly protocol. They are designed to be greedy and take advantage of known flaws in the current TCP congestion algorithm. A fair-sharing change to TCP would result in helping all traffic move more smoothly, not just on your connection to your ISP, but across the entire Internet backbone between you and your end destination.
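The difference between today's per-connection fairness and the proposed per-user fairness is easy to put in toy numbers (the link speed and connection counts below are hypothetical, purely for illustration):

```python
# Hypothetical bottleneck shared by two users: one P2P user with 40
# TCP connections open, one web user with 2.
link_mbps = 100.0
flows = {"p2p_user": 40, "web_user": 2}

# Today's TCP: capacity is split per *connection*, so whoever opens
# the most connections takes almost everything.
total_flows = sum(flows.values())
per_flow = {u: n * link_mbps / total_flows for u, n in flows.items()}
print(per_flow)   # p2p_user ≈ 95.2 Mbps, web_user ≈ 4.8 Mbps

# A fair-sharing stack: capacity split per *user*, regardless of how
# many connections each has open.
per_user = {u: link_mbps / len(flows) for u in flows}
print(per_user)   # 50 Mbps each
```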



factchecker

@cox.net

said by justbits:

QOS/TOS flags are not a solution. Anybody can set those flags. Anybody can ignore those flags. QOS/TOS is only useful between you and your first hop onto the Internet, not between you and anybody else. Otherwise, everybody who is greedy would mark all of their packets as highest priority.
The QoS issue only applies to the first mile anyway, which is where the problem is. There is no bandwidth problem at the Internet backbone level.

As well, QoS can easily be implemented in a way that prevents the "flags" issue you have mentioned.

The proposed change to TCP can result in fair-sharing of major Internet backbones as well as fair-sharing on your home Internet router. The big win with a fair-sharing TCP stack is that the major Internet backbones that carry GB/sec of traffic wont need to deploy excessive traffic shaping or deploy fake RST packets.
Major backbone providers have not even thought about using traffic shaping or forged RST packets. There is no bandwidth problem on the Internet backbones, nor will there be for a while: there is still a lot of available bandwidth, and there are many options for alleviating any problems (turning up new wavelengths, etc.). This is a problem in last-mile networks only.

And it seems to result in a huge increase in performance for lower-bandwidth single-connection users like VoIP and web surfing.
There are VERY few single-session TCP applications left. VoIP is not one of them. And web surfing is not one either (any longer).

If you want to think of it another way, P2P protocols are NOT designed to be a "green" or environmentally friendly protocol.
Then the problem is the P2P algorithm, NOT TCP. Fix the P2P algorithms and you fix the problem. Mangle TCP and the problem remains.


funchords
Hello
Premium,MVM
join:2001-03-11
Yarmouth Port, MA
kudos:6

2 edits
reply to justbits

said by justbits:

If you want to think of it another way, P2P protocols are NOT designed to be a "green" or environmentally friendly protocol. They are designed to be greedy and take advantage of known flaws in the current TCP congestion algorithm.
That is simply untrue. Please do not be fooled by George Ou's vilification of P2P!

You can't exploit anything by downloading, since you're not on the sending side and you do not get any network feedback indicating congestion. The download side is also not heavily congested on these last-mile residential networks. The side an application or user can control is the upload side.

Of the four major P2P file-sharing protocols in use, DC++, Gnutella, and ED2K all deliver a requested file to a peer with a slot. These three behave more like HTTP or FTP servers when transferring data: one slot per file request. They don't have the behavior that you (and George Ou) accuse of being an exploit.

I know developers across several of the BitTorrent teams, and they are extremely sensitive to responding to changing network conditions.

BitTorrent, the #1 protocol world-wide, does connect to 35-60 peers in a swarm, but it only uploads simultaneously to 3-4 of them! One of those upload slots is used for optimistic unchoking (looking for better uploading peers in exchange for your uploading activity).

If network congestion occurs on a connection with any of those 3-4 peers, that peer's performance will drop and then will be choked (traffic to that peer from your client is stopped). A different peer is selected to replace it. This process takes a maximum of 30 seconds. In this way, a congested link is relieved of the burden and will be retried later when the Optimistic Unchoke gets back around to that part of the peer list.

Neither FTP nor HTTP does this. They continue to draw across the congested link for the entirety of the download.

BitTorrent is far more careful about congested links than many realize!

You can verify these claims either by testing them yourself (hard for many to do) or by looking at the protocol at http://wiki.theory.org/BitTorrentSpecification#Choking_and_Optimistic_Unchoking.
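The choke/unchoke cycle described above can be sketched in a few lines of Python (a simplification of the real algorithm; the slot count, peer names, and rates are illustrative):

```python
import random

def rechoke(peers, upload_rates, slots=4):
    """One round of a (simplified) BitTorrent choke algorithm:
    keep uploading to the best-performing peers, and give one random
    slot away as an optimistic unchoke to probe for better partners."""
    ranked = sorted(peers, key=lambda p: upload_rates.get(p, 0.0),
                    reverse=True)
    unchoked = ranked[:slots - 1]                   # regular slots
    candidates = [p for p in peers if p not in unchoked]
    if candidates:
        unchoked.append(random.choice(candidates))  # optimistic slot
    return unchoked

# A peer on a congested link ("d") sees its measured rate collapse,
# so it falls out of the regular slots on the next rechoke:
rates = {"a": 50.0, "b": 40.0, "c": 30.0, "d": 0.5, "e": 20.0}
print(rechoke(["a", "b", "c", "d", "e"], rates))
```

The congested peer isn't abandoned forever: the optimistic slot may circle back to it later, just as described above.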
--
Robb Topolski -= funchords.com =- Hillsboro, Oregon
"We don't throttle any traffic," -Charlie Douglas, Comcast spokesman, on this report.


espaeth
Digital Plumber
Premium,MVM
join:2001-04-21
Minneapolis, MN
kudos:2

1 recommendation

said by funchords:

You can't exploit anything by downloading, since you're not on the sending side and you do not get any network feedback indicating congestion. The download side is also not heavily congested on these last-mile residential networks. The side an application or user can control is the upload side.
This is the argument where both you and Mr. Ou miss the boat. TCP is a connection-oriented protocol with guaranteed delivery and flow control. The sending side of the connection cannot release any data beyond a TCP window's worth of packets until you acknowledge the previous bundle's safe receipt. There is no difference between uploading and downloading in implementation; the key issue is congestion and how the protocol deals with it.

TCP's congestion avoidance algorithm tunes itself based on round-trip time calculations and packet loss. Since every TCP connection follows the exact same rules on how to deal with congestion, each TCP session is essentially equal on the network. You don't configure anything for how TCP will balance itself out; whether you are on a 56k dialup connection or have straight GigE access all the way out to the Internet, the protocol adapts itself to the conditions.

The bottom line is that in the event of congestion, every TCP session backs off in roughly the same proportional amount. As such, the people with the most TCP sessions pumping their data are going to win for throughput. The algorithm is remarkably fair for bandwidth per TCP session, where the whole thing gets hosed up is the number of TCP sessions per end user.
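The per-session fairness falls out of a toy AIMD simulation (illustrative constants, not a real TCP implementation): two flows that start wildly unequal converge to roughly equal rates, which is exactly why per-user throughput then scales with the number of sessions opened.

```python
def aimd(shares, capacity, rounds=200, incr=1.0):
    """Toy additive-increase/multiplicative-decrease: every flow adds
    `incr` per round (RTT); when the link is oversubscribed, every
    flow halves its rate, mimicking TCP's response to loss."""
    for _ in range(rounds):
        shares = [s + incr for s in shares]     # additive increase
        if sum(shares) > capacity:
            shares = [s / 2.0 for s in shares]  # multiplicative decrease
    return shares

# Two flows starting at 90 and 1 Mbps on a 100 Mbps link end up
# within a fraction of a Mbps of each other:
print(aimd([90.0, 1.0], capacity=100.0))
```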


funchords
Hello
Premium,MVM
join:2001-03-11
Yarmouth Port, MA
kudos:6

There is no difference between uploading or downloading in implementation...
Correct, but there is a rather physical difference. When you are uploading, you are sending fat packets back-to-back. When you are downloading, you are sending tiny packets with large gaps between. Given any random moment when congestion occurs, it's the uploader that is very likely to lose a packet and initiate congestion control responses. The downloader avoids losing a packet simply because his tiny "ACK" packets present a smaller target.
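Rough arithmetic behind the "smaller target" point, assuming loss on a saturated drop-tail queue is roughly proportional to bytes offered (the packet sizes are typical values, not measurements):

```python
# A full-size data segment vs. a bare ACK (IP + TCP headers, no payload).
DATA_PKT_BYTES = 1500   # typical Ethernet MTU
ACK_PKT_BYTES = 40      # 20-byte IP header + 20-byte TCP header

# If drop probability scales with bytes on the wire, the uploader's
# data packets are ~37x more exposed to loss than the downloader's ACKs.
exposure_ratio = DATA_PKT_BYTES / ACK_PKT_BYTES
print(round(exposure_ratio, 1))  # → 37.5
```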

TCP's congestion avoidance algorithm tunes itself based on round trip time calculations and packet loss. Since every TCP connection follows the exact same rules on how to deal with congestion, each TCP session is essentially equal on the network.
Only if each route is the same and only if each end-system is as responsive as all of the others. IRL, this is never the case.

The bottom line is that in the event of congestion, every TCP session backs off in roughly the same proportional amount.
If and only if all connections drop a packet (or they all provide whatever else congestion cue might be available on that route).

The people with the most TCP sessions pumping their data are going to win for throughput.
Notwithstanding my arguments above, they are only going to win during that brief interval of recovery. If the network is not congested, there is no problem for Bob Briscoe's proposal to fix. It does not avoid congestion. It does not reduce the amount of bandwidth used by P2P.

It's a bit like saying: when IRQ 3 goes up, it takes the CPU 300% more time to service than it does to service IRQ 4 or IRQ 7; therefore, when the system starts to crawl, we're going to triple the CPU time given to IRQ 4 or IRQ 7 (or skip IRQ 3 66% of the time -- just to be fair). At that point, is the real problem fairness?

And to you I ask -- should the network be running at "congestion" often enough and for periods long enough for Bob Briscoe's suggestion to actually matter very much? Is a network that runs at "congestion" for long durations a healthy one?

And to everyone I ask -- when you read George Ou's article, is his intent to solve a problem? Or is George really trying to brand BitTorrent as some kind of an exploiter -- so that any thoughts about Network Neutrality would only apply fractionally toward BitTorrent based on how many open TCP connections it has (including the idle connections)?

The algorithm is remarkably fair for bandwidth per TCP session, where the whole thing gets hosed up is the number of TCP sessions per end user.
Which only comes into play when the network capacity is exceeded -- a condition which network providers are expected to avoid. We pay ISPs to meet user demand, not to shape it into their own vision of what user demand ought to be.


justbits
More fiber than ATT can handle
Premium
join:2003-01-08
Chicago, IL

said by funchords:

And to everyone I ask -- when you read George Ou's article, is his intent to solve a problem? Or is George really trying to brand BitTorrent as some kind of an exploiter -- so that any thoughts about Network Neutrality would only apply fractionally toward BitTorrent based on how many open TCP connections it has (including the idle connections)?

The algorithm is remarkably fair for bandwidth per TCP session, where the whole thing gets hosed up is the number of TCP sessions per end user.
Which only comes into play when the network capacity is exceeded -- a condition which network providers are expected to avoid. We pay ISPs to meet user demand, not to shape it into their own vision of what user demand ought to be.
You've singled out BitTorrent, when it's not the only P2P protocol that's out there. The graphs, in particular, show several other P2P protocols. Modified BitTorrent clients and other P2P apps are designed to be the most greedy of all network users.

Yes, ideally we are paying ISPs to provide quality Internet service, but does your Terms of Service agreement say anything about quality of service? Most likely _no_. Yes, they should have been and should be upgrading their pipes continually. Yes, they have likely been shirking that responsibility in favor of providing returns to investors. So if a change to everybody's TCP stack would make it less necessary for ISPs to spend money on network traffic-control devices like Sandvine, and let them spend that money on upgrading the network instead, I'm all for it! The congestion problem that Sandvine solves so excessively is, ideally, solved for a better network experience for all users. So, modifying all TCP stacks to have better congestion control could reduce the need for traffic-control devices or policies as aggressive as Sandvine's.

So, if the network traffic becomes more self-policing/self-regulating by modifying underlying existing Internet protocols, then, maybe a paradigm shift will occur. Maybe then network neutrality will become a battle ground that's more directed at ISPs failing to provide bandwidth instead of it currently being an attack against ISPs for attempting to implement their own forms of congestion control.


funchords
Hello
Premium,MVM
join:2001-03-11
Yarmouth Port, MA
kudos:6

said by justbits:

You've singled out BitTorrent, when it's not the only P2P protocol that's out there. The graphs, in particular, show several other P2P protocols. Modified BitTorrent clients and other P2P apps are designed to be the most greedy of all network users.
I did this on purpose. Worldwide, BitTorrent is top dog by a long shot. Also, #2-#4 (DC++, ED2K, Gnutella/G2) behave more like FTP and HTTP, so they can't 'exploit' as George Ou describes. The 2006 chart is from Japan, which not only has a very different in-country architecture from North America's, but also a very different pattern of application adoption. North American ISPs don't want to show us their charts, even to researchers working under NDA.

said by justbits:

Yes, we ideally are paying ISPs to provide quality Internet service, but does your Terms Of Service Agreement say anything about quality of service? Most likely _no_.
Actually, now that you mention it, it does (-- in a way)! (And I had not thought of this before now, so if this idea is half-baked, it's because it's half-baked.)

The Comcast tier I am on is called 6Mbps. Six months ago, Verizon Wireless settled with the State of New York because it was advertising Unlimited service that was not, in fact, unlimited. Now, a settlement sets no court precedent, and a New York case may only have consultative value outside that state, but it does demonstrate that such a case would have merit.

That aside, your question has nothing to do with this topic. We're not talking about Quality of Service here (unless I missed your point)?

The problem that Sandvine excessively solves ideally results in a better network experience for all users.
All? Have you read my original report? Sandvine deteriorated my network experience (link in signature). That's how Comcast got caught! Comcast improved the experience for some at the expense of others -- even if those others were completely within the boundaries of the law and their subscription terms.

everybody's TCP stack would make it less necessary for ISPs to spend money on network traffic control devices like Sandvine and instead spend money on upgrading the network, I'm all for that!
Upgrading the network instead of wasting money on Sandvine is the simpler solution. The logic of your thinking reminds me of something from the "Bastard Operator from Hell." It goes something like this:

Help Desk: "You would like more space?"
Caller: "Yes, please!"
Help Desk (typing commands): RMDIR %USERPROFILE% /S /Q (command to delete user's files)
Caller: "AIEEEEEEEEEEEEEEEEEEEEEH!"
Help Desk: "You should have plenty of space now!"
Secondly -- and I said this above, too -- making the proposed changes does nothing to alleviate congestion. It only changes the behavior during the brief moments between congestion and recovery. Said differently, it doesn't prevent your computer from crashing, it just changes the order of the reboot process.


espaeth
Digital Plumber
Premium,MVM
join:2001-04-21
Minneapolis, MN
kudos:2

1 edit

said by funchords:

Upgrading the network instead of wasting it on Sandvine is a simpler solution.
You keep making this statement over and over again like it's an obvious binary decision. I've said this before, but that statement just keeps reading as "To solve your debt problem you should just acquire more money rather than wasting time cutting spending."

Given where MSOs are with frequency capacity and deployable DOCSIS technology (i.e., still pre-3.0), the only way to "upgrade the network" is to do node splits or ungodly expensive node additions (i.e., a poured concrete slab, utility power, a cabinet, trenching in fiber, splitting the coax plant, re-engineering the amp placement and gain structure, etc.).

You represent the options as cost-equivalent, but they're not even in the same ballpark. For the cost of a single node addition, Comcast can probably buy Sandvine appliances for an entire region. You're talking about a box with a couple of power cords, a couple of network connections, and some licensing overhead, compared to right-of-way contracts, conduit and fiber, power utility installation, cabinet/equipment fees, and a significant number of engineering hours to implement.

TV services are where the cable companies make most of their money. If TV-related services were driving the need for expansion, it might have a better chance at gaining funding. Since HSI is the only product driving the need for expansion, managing the traffic is the cheaper/faster/better approach for right now.

I don't have a problem with MSOs practicing this type of network management as long as they are up-front about it. The reality is that some level of filtering will always be present on the Internet, the same way that our "free" society still has laws. Management will always be required to curb "abuse", be it mitigating Denial of Service attacks, filtering SPAM, or restricting network traffic in accordance with "Fair Access Policies". The big issue is that the rules and methods of traffic management must be disclosed; wishing for a complete "hands-off" approach to network management is a pipe dream that will rapidly turn into a nightmare.


funchords
Hello
Premium,MVM
join:2001-03-11
Yarmouth Port, MA
kudos:6

said by espaeth:

You represent the options as cost equivalent, but they're not even in the same ball park. For the cost of a single node addition Comcast can probably buy Sandvine appliances for an entire region. You're talking about a box with a couple power cords, a couple network connections, and some licensing overhead compared to right-of-way contracts, conduit & fiber, power utility installation, cabinet/equipment fees, and a significant number of engineering hours to implement.
The Sandvine products containing the P2P policy enforcement have been Sandvine's cash cow. Comcast is believed to account for more than 50% of its business, according to numerous analysts, and Sandvine projects its annual income at $80-85 million. How much does it cost to split a node or add a node? Nobody is talking.

Don't assume that just because it's a box with two wires, it is cheap. There is a significant R&D investment in these products, and thanks to patents, Sandvine can offer certain of its technology exclusively. All of this means that the cost of raw materials has little to do with the purchase price of the device.

And I'm not sure it will cost them much at all -- take this quote from Tony Werner, CTO of Comcast:
quote:
The return bandwidth is not on the worry list right now, for a bunch of reasons. For one, we're splitting a lot of nodes based on the success of voice, high-speed Internet, and VOD. In other words, all based on downstream requirements, not upstream.

On HSD (high-speed data), I'm using two to three 3.2 MHz carriers (upstream). A lot more than that are sitting fallow in my CMTS cards. In most markets, I still have 12 MHz of bandwidth I can reclaim from circuit-switched voice, once we migrate off of those platforms. So for now, the 5-42 MHz to me seems plenty adequate.

...

As we hit 70 percent utilization, we issue a work order to split the node. But it depends on utilization. Usually we set it to split to 250 homes. And for us, 65 percent of our node splits are really decoupling of nodes at the headend."

»www.cedmagazine.com/how-sexy-is-···nty.aspx


espaeth
Digital Plumber
Premium,MVM
join:2001-04-21
Minneapolis, MN
kudos:2

said by funchords:

Comcast is believed to be more than 50% of its business, according to numerous analysts. Sandvine is predicting its annual income to be $80-85 Million.
Annual income in which calendar year? They can only sell the hardware once, and recurring subscription/maintenance fees aren't going to drive the same level of return as the initial capital outlay.

said by funchords:

How much does it cost to split a node or add a node? Nobody is talking.
Comcast talks to investors all the time; I get a nice prospectus from them every year that gives a high-level overview of the company, its operation, and its financial performance.

There are a few different methods of splitting a node. Most nodes start out combined at the head-end, with multiple HFC nodes sharing a common head-end port (at least on the downstream). Those can be split by simply breaking the nodes onto their own CMTS ports at the head-end; according to Comcast investor numbers, this costs about $7,500 per split ($2,500/year on a 3-year depreciation cycle).

The next split happens at the node itself, where there are typically north, south, east, and west strings branching out from the platform. Each segment can be broken out to have its own dedicated frequencies back to the CMTS over dedicated fiber. The investor numbers given for that are $18,000 per split (again, $6,000/year over 3 years).

If one segment of a node becomes heavy (e.g., north), you need to inject another distribution point into the coax plant to divide the infrastructure out yet again. Comcast places a buildout value of $60,000 on that split ($20,000/year over 3 years). Personally, I think that value is low, but it is possible if neighborhood platform buildout is not factored into the cost (i.e., a new development goes in and the developer establishes a concrete pad with power hookups for telco/MSO/utility equipment).

As of the investor documentation Comcast sent out in 2007, they had approximately 102,000 HFC nodes on their network.
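Those figures work out as follows (numbers taken from the investor figures quoted above, spread across the stated 3-year depreciation cycle; not independently verified):

```python
# Comcast's quoted per-split costs, annualized over 3-year depreciation.
split_costs = {
    "head-end port split": 7_500,
    "node segment split (dedicated fiber)": 18_000,
    "new coax distribution point": 60_000,
}
for name, cost in split_costs.items():
    print(f"{name}: ${cost:,} total -> ${cost // 3:,}/year over 3 years")
```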

said by funchords:

Don't assume that just because it's a box with two wires that it is cheap. There is a significant investment in R&D in these products, and due to patents, Sandvine can exclusively offer certain of their technology.
My assumption wasn't based strictly on hardware, but on pricing that I know personally from having cut purchase orders. If I use boxes from F5 Networks and 8E6 Technologies as a starting point, my guess would be that each P2P appliance probably costs about $30k.

said by funchords:

And I'm not sure it will cost them much at all -- take this quote from Tony Werner, CTO of Comcast
You have to keep in mind the audience of that article -- you don't tell investors that you're backed into a corner when it comes to your infrastructure. As far as expandability goes, he's right: there are a lot of options for long-term growth, but all that development takes time. It also glosses over the strategic value of certain upgrades. For example, if you've got open ports on your DOCSIS 1.1 CMTS line cards, then doing node splits is no big deal. Things get a lot stickier when you start talking about procuring hardware -- this gear has either a 3- or 5-year depreciation cycle, so with the company's aggressive stance on DOCSIS 3.0 deployment, the last thing they want to do is sink a bunch of capital into pre-3.0 hardware and be stuck with it past the end of the decade.


funchords
Hello
Premium,MVM
join:2001-03-11
Yarmouth Port, MA
kudos:6

said by espaeth:

said by funchords:

How much does it cost to split a node or add a node? Nobody is talking.
Comcast talks to investors all the time -- I get a nice prospectus from them every year that gives a high level overview of the company, it's operation, and it's financial performance. There are a few different methods of splitting a node. Most nodes start out combined at the head-end with multiple HFC nodes sharing a common head-end port (at least on the downstream). Those can be split by simply breaking the nodes into their own CMTS ports at the head-end. According to Comcast investor numbers this costs about $7500 per split ($2500/year on a 3 year depreciation cycle). The next split happens at the node itself where there is typically a north, south, east, and west string that branches out from the platform. Each segment can be broken out to have its own dedicated frequencies back to the CMTS with dedicated fiber. The investor numbers that were given for that are $18,000 per split (again $6k/3years). If one segment of a node becomes heavy (ie, north) you need to inject another distribution point in the coax plant to divide the infrastructure out yet again. Comcast places a buildout value of $60,000 on that split ($20k / 3years). Personally I think that value is low, but it is possible if neighborhood platform buildout is not factored into the cost. (ie, a new development goes in and the developer establishes a concrete pad with power hookups for telco / MSO / utility equipment) As of the investor documentation Comcast sent out in 2007 they had approximately 102,000 HFC nodes on their network.
Awesome! I spent about a half-hour looking for this information on Google -- and I probably can find that prospectus on the Investor Relations site. :::SHEEESH::: (BTW, there are some interesting "physical node split" alternatives being offered -- interesting by title, at least -- when searching Google.)

$60K strikes me as low, too. Quite honestly, the numbers in my head had 6 digits and they didn't start with a 1.

The depreciation thing confuses me -- does it mean they can't charge the entire expense in the same year? Or does it mean they can charge the entire expense in the purchase year and then write off the value of the capital (as a loss) over the next 3 years? (Other than taxes, does the depreciation amount have any bearing on this debate?)

the last thing they want to do is sink a bunch of capital into pre-3.0 hardware and be stuck with it past the end of the decade.
If Verizon doesn't get back on the stick, perhaps they can.


funchords
Hello
Premium,MVM
join:2001-03-11
Yarmouth Port, MA
kudos:6
reply to espaeth

said by espaeth:

My assumption wasn't based strictly on hardware, but on pricing that I know personally from having cut purchase orders. If I use boxes from F5 Networks and 8E6 Technologies as a starting point, my guess would be that each P2P appliance probably costs about $30k.
So, based on numbers like yours, and the fact that 65% of Comcast's node splits are of the "virtual" kind (either the $7.5K or the $18K type), please tell me if you think it is safe-ish to say: "Comcast spends about as much on node splits as they do on Sandvine."


espaeth
Digital Plumber
Premium,MVM
join:2001-04-21
Minneapolis, MN
kudos:2

How many Sandvine appliances do you honestly think they have? My guess would be no more than a pair per market head-end.



funchords
Hello
Premium,MVM
join:2001-03-11
Yarmouth Port, MA
kudos:6

Wow. I was thinking more than that -- some number higher than one per 10 Gbps hanging off the router at the aggregation point (that's for the PTS 14000 model). I have no idea about the 8210s, which they probably have, too.