said by freejazz_RdJ:I still think it's amazing how this whole MLPPP ecosystem has evolved. From what I believe was an accidental discovery to a 30Mbps pipe and several platforms (Tomato, Zeroshell plus commercial IOS or Mikrotik), this has come a long way. Have you applied for an R&D tax credit?
The initial MLPPP results weren't 30mbit; they came a while back from a few brave souls who messed around with it on Linux boxes with regular 5mbit links. Tomato came about to circumvent throttling, but that became a secondary concern early on; once it was done, there was nothing left to do for single-link. All the development since has obviously focused on multiple links.
ZeroShell came about when Candlelight contracted us to do that port, but it seems like they're moving in other directions, which is why we've more or less dropped development of it (unless they want us to start up again).
As for an R&D credit, well, it's not a business... We (idly) thought about trying to turn it into one, but I don't think we're convinced there's enough demand for MLPPP from ISPs to support DSL_Ricer working on it full time. I'm not even sure what the business model would be. Support contracts with various MLPPP-using ISPs? Probably not enough interest from them.
MLPPP was primarily intended for dial-up connections. The fact that Juniper told TekSavvy that they were probably the single largest MLPPP user in the entire world should give you an idea about how "new" this is to broadband.
Guspaz, is the current implementation CPU-bound, or is there some software issue that limits scalability? If it is CPU-bound, is it because of something resembling segmentation and reassembly, which I've seen kill performance in AAL5 ATM-to-Ethernet scenarios?
Well, we're CPU-bound on the WRT54GL, but that's just because you've got a 200MHz MIPS processor with barely any cache. Although, I'm not actually sure whether we're CPU-bound or memory-bandwidth-bound or what have you; it's hard to tell. Newer routers would probably not be CPU-bound, as they have much more modern CPUs and much faster RAM. Unfortunately, Tomato doesn't support many platforms.
The 8-line limitation that I mentioned isn't on our end, it's the maximum number of lines supported by the Juniper hardware that TekSavvy uses. Other implementations (software-based, or Cisco's) probably don't have such limitations; they'd likely scale up to just about as many lines as you could throw at them. With Linux/MLPPP, you can bind as many connections as you'd like; we haven't set any limit.
We don't really have any data on scalability for large numbers of lines, though. I've only got two lines to my house, and it's unlikely TekSavvy cares enough about 3+ lines to start shoving more wires into my apartment. Justin and his 6 lines have shown that it does scale quite well up that high, at least with downstream. Unfortunately, Mikrotik's poor MLPPP performance (and horrible upstream scaling) is well known and unique to them.
Inssomniak is using ZeroShell/MLPPP, which is more or less the same code as Linux/MLPPP. If we account for DSL overhead at 15%, he's seeing 94% scaling on downstream and 85% scaling on upstream. That's not half bad, if you ask me. Because it's an untested scenario on our end, we could probably get upstream scaling closer to downstream, or at least investigate enough to explain why it isn't scaling as high.
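To make that overhead arithmetic concrete, here's a back-of-the-envelope helper. The throughput figures below are hypothetical round numbers for illustration, not Inssomniak's actual measurements:

```python
def scaling_efficiency(observed_mbps, sync_mbps_per_line, lines, overhead=0.15):
    """Fraction of the theoretical bonded ceiling actually achieved,
    after deducting DSL/ATM overhead from each line's sync rate."""
    usable_per_line = sync_mbps_per_line * (1.0 - overhead)
    return observed_mbps / (usable_per_line * lines)

# Hypothetical example: six 5 Mbps lines at 15% overhead give a
# 25.5 Mbps usable ceiling, so 24 Mbps observed is about 94% scaling.
print(f"{scaling_efficiency(24.0, 5.0, 6):.0%}")  # 94%
```

The point is just that "scaling" here is measured against the usable ceiling after overhead, not against the raw sync rates.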
I'll note that downstream and upstream use different techniques; upstream splits packets, while downstream uses round-robin. It's been a long time since I last discussed this with DSL_Ricer, but if memory serves, packet splitting decreases latency and allows you to send "oversized" packets (as in, you can send a full 1500-byte packet since it's going to get split), but increases PPP and ATM overhead. To get any more technical, DSL_Ricer would have to get involved.
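For a rough illustration of why splitting costs extra overhead on ATM links, here's a sketch that counts 53-byte ATM cells (48 bytes of payload each). This is not the actual Linux/MLPPP code; the AAL5 trailer is a real fixed 8 bytes, but the 6-byte MLPPP header is an assumed round figure:

```python
import math

ATM_CELL_PAYLOAD = 48  # payload bytes per 53-byte ATM cell
AAL5_TRAILER = 8       # AAL5 CPCS trailer, added once per frame
MP_HEADER = 6          # assumed MLPPP header per frame/fragment (illustrative)

def atm_cells(frame_bytes):
    """ATM cells needed for one AAL5 frame (padded up to a cell boundary)."""
    return math.ceil((frame_bytes + AAL5_TRAILER) / ATM_CELL_PAYLOAD)

def cells_round_robin(packet_bytes):
    # Whole packet goes down one link: one header, one padding hit.
    return atm_cells(packet_bytes + MP_HEADER)

def cells_fragmented(packet_bytes, links):
    # Packet is split across all links: every fragment pays its own
    # header and its own cell padding.
    frag = math.ceil(packet_bytes / links)
    return links * atm_cells(frag + MP_HEADER)

print(cells_round_robin(1500))    # 32 cells sent whole
print(cells_fragmented(1500, 2))  # 32 cells -- a wash at 2 links
print(cells_fragmented(1500, 3))  # 33 cells -- one extra at 3 links
```

The exact penalty depends on how fragments happen to align to cell boundaries, which would be consistent with splitting helping latency while hurting overhead.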
It's possible that the optimal approach for many lines would be to use round-robin on the upstream as well, but as I said, we have no way to test that, and little incentive to put much effort into more than 2 lines. Getting UI support for 3 lines is about as far as we're willing to go.
When we started out on 2-line support, we had to do all our debugging remotely on JayMan's equipment (in another province, to boot) since we couldn't replicate a 2-line setup locally. Thankfully, TekSavvy eventually helped us out with that!
Wow, that *really* turned into one heck of a long rambling post...