
markofmayhem
Why not now?
Premium Member
join:2004-04-08
Pittsburgh, PA

markofmayhem to me1212

Re: Is pcie 3.0 really worth it?

said by me1212:

Like, all the laptops I've looked at recently that had a GPU also had PCIe 3.0, even for just a 650M. I mean, every benchmark test I have seen shows PCIe 3.0 not making more than 6 fps worth of difference until it was used in quad CF/SLI and at crazy high resolutions (multiple monitors). I can see PCIe 3.0 being worth it in 5, maybe even 4, years, but today is it really worth it for a desktop or laptop with only 1 monitor @ 1080p or less?

Check out this benchmark:

»www.overclock.net/t/1220 ··· 16915399

Turns out once you hit tri and quad SLI/Crossfire and run resolutions at/over 3600 x 1920, PCIe 2.0 becomes choked. Running 1-2 cards at resolutions below that, say 1080p? Then no, PCIe 3.0 isn't much of a gain over PCIe 2.0 beyond a few latency improvements, much like 2.0's gains over 1.0.

aurgathor
join:2002-12-01
Lynnwood, WA

aurgathor

Member

said by markofmayhem:

Turns out once you hit tri and quad SLI/Crossfire and run resolutions at/over 3600 x 1920, PCIe 2.0 becomes choked.

And just how many people (percentage of users) use tri and quad SLI or Crossfire?

markofmayhem
Why not now?
Premium Member
join:2004-04-08
Pittsburgh, PA

markofmayhem

said by aurgathor:

said by markofmayhem:

Turns out once you hit tri and quad SLI/Crossfire and run resolutions at/over 3600 x 1920, PCIe 2.0 becomes choked.

And just how many people (percentage of users) use tri and quad SLI or Crossfire?

I know of one. PCIe 3.0 was "really worth it" for them...

aurgathor
join:2002-12-01
Lynnwood, WA

aurgathor

Member

A few exceptions don't make a rule. I don't have any hard data on how many people use them, but I'm fairly certain that SLI/Crossfire with more than 2 cards is way below 0.1%.

markofmayhem
Why not now?
Premium Member
join:2004-04-08
Pittsburgh, PA

markofmayhem

said by aurgathor:

A few exceptions don't make a rule. I don't have any hard data on how many people use them, but I'm fairly certain that SLI/Crossfire with more than 2 cards is way below 0.1%.

And if you did have this "hard data", how would it bear on this topic?

Short of an absurdly extreme hardware configuration, PCIe 3.0 isn't "worth it" over PCIe 2.0, but it doesn't hurt anything if your hardware includes it, since it comes at no additional cost.

---or---

I found one! One person, out of billions, that it makes a difference for: someone with the newest, top-of-the-line gear shoved into a single configuration!

They are the same concept...

DarkLogix
Texan and Proud
Premium Member
join:2008-10-23
Baytown, TX

DarkLogix to markofmayhem
Well, I'd be interested in testing that.

Most motherboards I've seen that can do tri or quad SLI have 2 or more slots (sometimes all 4) step down to x8 to enable all 4 slots.

So I wonder: IF you had 4 full x16 slots, would it still bog down?

And for 4 x16 3.0 slots you'd have to go with a dual 2011 board, because you get 40 lanes per chip and generally only 32 of those are up for use for video.

Though a dual socket 2011 board can do a full 4x x16 3.0 system.

aurgathor
join:2002-12-01
Lynnwood, WA

aurgathor to markofmayhem

Member

said by markofmayhem:

Short of an absurdly extreme hardware configuration, PCIe 3.0 isn't "worth it" over PCIe 2.0, but it doesn't hurt anything if your hardware includes it, since it comes at no additional cost.

We're drifting far away from the original question, which was reasonably well defined and specific:
quote:
I can see PCIe 3.0 being worth it in 5, maybe even 4, years, but today is it really worth it for a desktop or laptop with only 1 monitor @ 1080p or less?

markofmayhem
Why not now?
Premium Member
join:2004-04-08
Pittsburgh, PA

markofmayhem to DarkLogix
See the test here; the board provides an x16/x8/x8/x8 PCIe 3.0 configuration (40 lanes):

»www.overclock.net/t/1220 ··· 16915399

PCIe 3.0 doubles the usable bandwidth of each lane. Boards with proper "3.0" switches instead of "2.0" switches let that doubled per-lane bandwidth travel downstream, so the x8 stepdown on every slot becomes roughly equal to x16 PCIe 2.0 on every slot. In theory, the total bandwidth of the dual socket 2011 board with 4 x16 PCIe 2.0 slots and a single socket board with 4 x8 PCIe 3.0 slots would be equal.
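
A quick back-of-envelope check of that "roughly equal" claim, using only the standard per-lane line rates and encodings (PCIe 2.0 at 5 GT/s with 8b/10b, PCIe 3.0 at 8 GT/s with 128b/130b), not anything measured in the linked test:

def lane_gbps(gt_per_s, payload_bits, total_bits):
    # Effective one-direction data rate of one lane in Gbit/s after encoding overhead.
    return gt_per_s * payload_bits / total_bits

gen2 = lane_gbps(5.0, 8, 10)       # ~4.0 Gbit/s (~500 MB/s) per lane
gen3 = lane_gbps(8.0, 128, 130)    # ~7.88 Gbit/s (~985 MB/s) per lane

print(f"x16 Gen2 slot: {16 * gen2:.1f} Gbit/s")    # ~64.0
print(f"x8  Gen3 slot: {8 * gen3:.1f} Gbit/s")     # ~63.0
print(f"4x x16 Gen2 vs 4x x8 Gen3: {4 * 16 * gen2:.0f} vs {4 * 8 * gen3:.0f} Gbit/s")

So an x8 Gen3 slot lands within a couple percent of an x16 Gen2 slot, which is why the x8 stepdown stops mattering on proper 3.0 boards.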

Though the test can't tell whether the gain comes from the added bandwidth or from the reduction in latency, since that reduction would be amplified across the switches (or, put the other way, the latency penalty is amplified on the PCIe 2.0 board). Dual socket would also add extra latency and driver overhead. That would make the single socket, 4-slot PCIe 3.0 board the simpler and cleaner design, with less latency, overhead, and chipwork to combine lanes, resulting in better performance.

The EVGA SR-X is the only dual LGA 2011 socket board with SLI that I am aware of. It is restricted to 2 x16 and 2 x8 slots in quad SLI, and it can't take dual Sandy Bridge/2nd-gen Core i processors because of QPI. I don't think this test can be done for lack of hardware support; the SR-2 with quad SLI could be compared to the SR-X, but I'm not sure how valid that would be, since it's a 1366 socket versus a 2011 socket and too many "other factors" would taint the results.

»www.evga.com/products/pd ··· W888.pdf
markofmayhem to aurgathor
said by aurgathor:

said by markofmayhem:

Short of an absurdly extreme hardware configuration, PCIe 3.0 isn't "worth it" over PCIe 2.0, but it doesn't hurt anything if your hardware includes it, since it comes at no additional cost.

We're drifting far away from the original question, which was reasonably well defined and specific:
quote:
I can see PCIe 3.0 being worth it in 5, maybe even 4, years, but today is it really worth it for a desktop or laptop with only 1 monitor @ 1080p or less?

No drift at all; you didn't read the link. It shows how much ridiculous stress must be placed on PCIe 2.0 before 3.0 produces measurable gains. I don't know what else to do to help you see that my posts agree with yours, and I'm confused why you're arguing that your assumption was correct when I posted data to back it up and you did not. You also clearly missed this:

Down to the nitty gritty; if you run a single GPU, yes; a single 16x speed PCI-E 2.0 slot will be fine. When you start to run multiple GPU's and/or run these new cards at 8x speed, especially in Surround/Eyefinity, make sure to get PCI-E 3.0.

So, in 4 to 5 years, yes, it will be worth it. Today @ 1080p, no, it is not. Why? Because four GTX 680 PCIe 3.0 cards in quad SLI running at 3600 x 1920 across three monitors is the only test showing measurable gains.
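
For a sense of scale, the raw pixel arithmetic alone (just multiplication, not benchmark data):

surround = 3600 * 1920        # 6,912,000 pixels per frame in that test
single_1080p = 1920 * 1080    # 2,073,600 pixels per frame on one monitor
print(round(surround / single_1080p, 1))   # ~3.3x the pixels to fill every frame

Every frame at that surround resolution pushes roughly 3.3x the pixels of a single 1080p screen, before you even add the extra cards.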

What is learned from this test?

That x16 PCIe 2.0 is still not capped. It only becomes hindered when reduced to x8 PCIe 2.0 lanes, as is the case with SLI and CrossfireX. Even then, it takes a third or fourth card limited to x8 PCIe 2.0 before the fill rates begin to choke GPU utilization below 100%. PCIe 3.0 lets an x8 slot run with the performance of a PCIe 2.0 x16 slot, relieving the choke when you go three or four cards deep.

DarkLogix
Texan and Proud
Premium Member
join:2008-10-23
Baytown, TX

DarkLogix to markofmayhem
said by markofmayhem:

The EVGA SR-X is the only dual LGA 2011 socket board with SLI that I am aware of. It is restricted to 2 x16 and 2 x8 slots in quad SLI, and it can't take dual Sandy Bridge/2nd-gen Core i processors because of QPI. I don't think this test can be done for lack of hardware support; the SR-2 with quad SLI could be compared to the SR-X, but I'm not sure how valid that would be, since it's a 1366 socket versus a 2011 socket and too many "other factors" would taint the results.

»www.evga.com/products/pd ··· W888.pdf

Intel has a dual 2011 workstation board

As each 2011 socket provides 40 PCIe 3.0 lanes directly from the CPU, a dual 2011 board has 80 lanes available (though you must have both CPUs to get all 4 slots working; the Intel board disables 2 slots if the 2nd CPU isn't there).

»www.newegg.com/Product/P ··· 13121589

Yes, it'd be very costly to get the board, 2x Xeons, and 4x GTX 680s to test this.

So yes, it's a doable test, but it would be very pricey.

So for most people, currently the only card I'd say needs 3.0 is the GTX 690 (a dual GPU card), because it effectively gets x8 3.0 per GPU.

Page 26 has a useful diagram:
»download.intel.com/suppo ··· r1_1.pdf

5-way SLI could in theory be done on this board if not for the fact that the slots are so close.

CPU1 feeds 2x PCIe x16 3.0
CPU2 feeds 2x PCIe x16 3.0 and 1x x8 3.0
all directly, without a PCIe switch in the way (quick lane-budget check below)
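
A tiny sanity check on that lane budget (just the arithmetic, assuming the 40-lanes-per-socket figure quoted in this thread; the slot assignments are the ones listed above):

LANES_PER_SOCKET = 40          # per LGA 2011 CPU, per the figures above
cpu1_slots = [16, 16]          # 2x x16 fed by CPU1
cpu2_slots = [16, 16, 8]       # 2x x16 plus 1x x8 fed by CPU2
for name, slots in (("CPU1", cpu1_slots), ("CPU2", cpu2_slots)):
    used = sum(slots)
    print(name, used, "of", LANES_PER_SOCKET, "lanes,",
          "fits" if used <= LANES_PER_SOCKET else "over budget")
# CPU1 32 of 40 lanes, fits
# CPU2 40 of 40 lanes, fits

So the layout uses 72 of the 80 CPU lanes and never needs a PCIe switch.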

So in theory, IF Intel redid the board to allow spacing for dual-width cards, then it would be able to do it; short of that, something like the Quadro Plex setup would be the only way to connect 4 high-end video cards. There are solutions other than the Plex units for running an external video card, but I don't know if there are any others that do external SLI.

markofmayhem
Why not now?
Premium Member
join:2004-04-08
Pittsburgh, PA

markofmayhem

The board you have linked above can't be used; it supports neither SLI nor CrossfireX. It provides the slots for parallel computing through OpenCL or CUDA.

The only dual socket LGA 2011 board that supports SLI/CrossfireX that I know of is the EVGA SR-X.

DarkLogix
Texan and Proud
Premium Member
join:2008-10-23
Baytown, TX

DarkLogix

There really isn't anything special to supporting SLI beyond providing the PCIe lanes and the ability to connect the cards.

You could very well put 8 Quadro Plex units on it, totaling 16 video cards.

Electrically it's fully capable, and physically you could do 2-card SLI without a special external setup.

So it is possible, but clearly EVGA's version was made with the intent that you could use all 4 slots with only 1 CPU.

Yeah, I know what I linked is a server board, and most likely the slots would be used for storage and/or GPU computing.

BUT if someone made a full x16 3.0 external SLI link setup (it's doable), then it could be done, and this might be the only board capable of really showing what quad SLI can do.

markofmayhem
Why not now?
Premium Member
join:2004-04-08
Pittsburgh, PA

markofmayhem

said by DarkLogix:

There really isn't anything special to supporting SLI beyond providing the PCIe lanes and the ability to connect the cards.

You could very well put 8 Quadro Plex units on it, totaling 16 video cards.

Electrically it's fully capable, and physically you could do 2-card SLI without a special external setup.

So it is possible, but clearly EVGA's version was made with the intent that you could use all 4 slots with only 1 CPU.

Yeah, I know what I linked is a server board, and most likely the slots would be used for storage and/or GPU computing.

The motherboard is not SLI compatible. That defeats any test results. The drivers would need to be "hacked" to spoof an SLI-compatible chipset, with performance degradation. It may also take the second CPU socket out of play entirely, since the driver would have to stand in for the logic the board is missing. We know it is missing because the board is not SLI compatible. The SR-X by EVGA is the only dual socket SLI-compatible board that I know of. It is more than electrical connections; these are not light bulbs. The switches and logic need to be present either in hardware or software.

DarkLogix
Texan and Proud
Premium Member
join:2008-10-23
Baytown, TX

DarkLogix

The PCIe lanes go directly back to the CPU.

This isn't the early dark ages of SLI where something had to be special.

I can just about guarantee you that someone already has SLI working on that board without having to hack it.

It's not SLI certified; all that really means is they didn't test it for SLI.
BTW, I've found reports that dual SLI has been done on the board I linked; the only thing stopping quad is space.

markofmayhem
Why not now?
Premium Member
join:2004-04-08
Pittsburgh, PA

markofmayhem

said by DarkLogix:

The PCIe lanes go directly back to the CPU

Which breaks the bridge, you can't SLI across the CPU sockets...

The SR-X has all the SLI GPU slots routed to a single CPU, 40 lanes worth.

A hacked driver could (in theory) get that Intel server board to work with SLI, as long as you used the slots wired to a single CPU (they are marked), limited to 40 lanes as well. In addition, BSODs and boot problems are highly probable, as attested by those who try HyperSLI patches.

Logic on the board is used for SLI. Nvidia provides this logic for a license fee of $5. Not all boards have this logic; some have it but are not "certified", and determining which is which is impossible until you buy and install a second card. Driver hacks to ignore the check are sometimes successful (especially on boards that have the logic but don't advertise it) and sometimes lead to non-bootable systems or BSOD crashes in full-screen SLI applications (the driver is forced to use something that isn't present). Switches are used to feed information between the cards and to "route" information from the CPU to the host GPU when instructions are issued, rather than simple fills or results to the slave GPU(s). The server board doesn't have them; they would work against the purpose of this board, which is to maximize parallel OpenCL/CUDA operations.

It is more than electrical lanes; SLI and CrossfireX have logic-based needs. These are not light bulbs.

Now we've drifted. To bring it back on topic: this is where PCIe 3.0 is "worth it". The complexity of just throwing "more sockets" and "more lanes" into the mix is less successful than beefing up the lanes and sockets already there, as is the case with the Gen 2 to Gen 3 deltas. Clearly "not worth it" for a single GPU, single monitor, 1080p setup.

DarkLogix
Texan and Proud
Premium Member
join:2008-10-23
Baytown, TX

DarkLogix

What part of "I've found evidence it has been done on that board" don't you get?

NON-hacked and cross-CPU, it works. EVGA just wanted theirs not to be limited to REQUIRING a 2nd CPU for all the slots to work.

I know they aren't light bulbs, and I know a fair amount about how these motherboards are set up. You need to stop acting like you know things you don't.
DarkLogix to markofmayhem
The reason it can drift like this is that the OP's question is answered.

It's only worth it beyond 2-card SLI.

Quad SLI still has issues on the majority of boards out there due to not really having enough lanes.

A mobo maker can't beef up the lanes; with the current Intel chips the only 3.0 lanes come from the CPU, and the multiplexing some boards offer is a crap way to cram more in where there isn't anything more.

2-card SLI works great on the majority of boards out there, as they can give a full 16 3.0 lanes per card. But the 2011 (i.e. the top-end Intel chip) only has 40 lanes TOTAL on the CPU. There are a few 2.0 lanes on the north bridge, and some mobo makers do use those to back an extra PCIe slot (which also works with SLI, even with the north bridge sitting between it and the CPU, where the 3.0 lanes end); some of the 3-slot boards use that trick to get an x16 2.0 slot alongside the x16 3.0 slots.

But with a hard limit of 40 lanes per CPU, there is a limit on what you can do with 1 CPU (see the sketch below).
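
To make that 40-lane ceiling concrete, here is a small sketch (my own illustration, not from any board manual) of which x16/x8 slot mixes fit on one CPU:

from itertools import combinations_with_replacement

BUDGET = 40    # PCIe 3.0 lanes per LGA 2011 CPU, per the figure above
for n_slots in (2, 3, 4):
    for combo in combinations_with_replacement((16, 8), n_slots):
        if sum(combo) <= BUDGET:
            print(n_slots, "slots:", combo, "=", sum(combo), "lanes")
# With 3 slots the budget maxes out at (16, 16, 8) = 40 lanes, with 4 slots at
# (16, 8, 8, 8) = 40 lanes; four full x16 slots (64 lanes) never fit on one CPU.

Which is why the quad boards either step slots down to x8 or add a switch chip.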

markofmayhem
Why not now?
Premium Member
join:2004-04-08
Pittsburgh, PA

markofmayhem to DarkLogix
said by DarkLogix:

What part of "I've found evidence it has been done on that board" don't you get?

First time you've mentioned that. I can't wait to read through the test results... post the link!

DarkLogix
Texan and Proud
Premium Member
join:2008-10-23
Baytown, TX

DarkLogix

I already mentioned it

Their testing wasn't good or detailed enough IMO, but it did show SLI worked. Sorry, I lost the link; it was fairly deep in a Google search (not many people out there with that board, let alone ones testing it with video cards like that).

BTW, your statements show a lack of understanding of the current architecture of Intel multi-CPU systems,

so I'll go over it very briefly:

1. QPI is used for core-to-core communication.

2. The same QPI has external interfaces on Xeons that join the internal QPI network to the QPI network of the 2nd CPU.

3. Each CPU (capable of dual socket or above) has 2 exposed QPI links at up to 8.0 GT/s.

4. Each CPU has its own 40 PCIe 3.0 lanes available to be connected to whatever the mobo maker wants (be it a slot, an on-board device, or nothing at all).

5. Intel has made it so the lanes DIRECTLY from the CPUs can connect DIRECTLY to video cards and do SLI.

Sure, the 1155/1156 crap chips have a closed (i.e. internal to the CPU only) QPI link, but that's not true of the 2011s.

And the 1155/1156 don't have very many PCIe lanes available, so some boards try to use the northbridge lanes (meant for on-board devices) to offer SLI. With the northbridge in the way they have to add some logic, because the northbridge adds latency that the direct lanes don't have, so they add logic to make the northbridge lanes compatible.

But on 2011s that's not the case, because of the far greater number of direct lanes and the open QPI on Xeons.

markofmayhem
Why not now?
Premium Member
join:2004-04-08
Pittsburgh, PA

markofmayhem

said by DarkLogix:

Their testing wasn't good or detailed enough IMO, but it did show SLI worked. Sorry, I lost the link; it was fairly deep in a Google search (not many people out there with that board, let alone ones testing it with video cards like that).

Convenient...

SLI support is not solely hardware based; it requires driver communication to align registers and negotiate bridge communication across the GPUs. The Nvidia driver has a white-list of allowed boards. This board is not listed, and claiming support without modifying the driver is a huge red flag. The C602 does not natively have the SLI logic. CrossfireX does not require underlying logic within the chipset; SLI does. Nvidia licenses it for $5 per use. This "logic" does not appear on all boards. Boards that are not "certified" but are cut-down versions of higher models that are can be forced. Boards severely cropped from certified higher models fall flat most of the time, due to lack of the underlying logic the driver needs to operate without access violations.

The Intel reference board is not listed, that I can see, within the drivers (which doesn't mean it isn't). It would be nice to see 4 x16 PCIe 3.0 slots driving full tilt in an SLI configuration (not just 4 GPUs on x16 PCIe 3.0 lanes; that is possible and working, but they are not SLI configured). The other boards don't offer this, and on "paper" neither does this one. It would be a beautiful thing to see; too bad that link is lost.
said by DarkLogix:

BTW, your statements show a lack of understanding of the current architecture of Intel multi-CPU systems... FAQ repost of QPI follows

QPI doesn't provide the solution; you are missing the point. The NF200 was a switch, similar to the PEX8747 used on the SR-X. (Supermicro does not use one, which is why they only offer 3 x16 and 3 x8 slots, as 80 PCIe lanes are not available.)

Here's another C602 chipset dual LGA 2011 socket board that supports SLI: Asus Z9PED8

Blue slots only, 4 x8 in quad configuration. Can't SLI across the second socket.
or
Black slots only, 2 x16 and 1 x8. Can't SLI across the second socket.

Electrically, the lanes and connections exist. It has not been implemented in a real-world solution; the 80 PCIe lanes are not left solely for PEG use. Switches are required to feed as many x16 slots as possible. The driver will not span the second socket, either because of a failure within the proprietary logic Nvidia requires on the board or because the driver is simply unable to do so.

The HyperSLI patch should get the driver to recognize C602 boards without native SLI, but it doesn't guarantee that BSOD and unbootable scenarios won't plague the user. P67 had a closed QPI, indeed, as well as 16 PCIe PEG lanes available. Boards without switches reduced SLI to 2 x8 slots. However, not all P67 boards were "SLI certified". HyperSLI ignored the board's absence from the white-list and still was unable to get SLI working on those boards. CrossfireX: no problem, no patch needed. SLI falls on its face without OS driver support, and the driver doesn't work with every board that has the proper electrical connections, due to access violations.

DarkLogix
Texan and Proud
Premium Member
join:2008-10-23
Baytown, TX

DarkLogix

And HP has the Z820, which is "SLI ready" (meant for Quadro SLI), where it'll cut the speed of the 2nd and 3rd slots to x8 if there's only 1 CPU.