
DarkLogix
Texan and Proud
Premium
join:2008-10-23
Baytown, TX
kudos:3

reply to markofmayhem

Re: Is pcie 3.0 really worth it?

The PCIe lanes go directly back to the CPU

This isn't the early dark ages of SLI where something on the board had to be special.

I can just about guarantee that someone already has that board running SLI without having to hack it.

It's not SLI-certified, but all that really means is they didn't test it for SLI.
BTW, I've found reports that dual SLI has been done on the board I linked; the only thing stopping quad is space.



markofmayhem
Why not now?
Premium
join:2004-04-08
Pittsburgh, PA
kudos:5

said by DarkLogix:

The PCIe lanes go directly back to the CPU

Which breaks the bridge; you can't SLI across the CPU sockets...

The SR-X has all the SLI GPU slots routed to a single CPU, 40 lanes worth.

A hacked driver could (in theory) get that Intel server board to work with SLI, as long as you used the slots to a single CPU which are marked, limited to 40 lanes as well. In addition, BSOD and boot problems are highly probable as attested by those who try HyperSLI patches.

Logic on the board is used for SLI; Nvidia provides this logic for a license fee of $5. Not all boards have it. Some boards do have the logic but are not "certified", and determining which is which is impossible until you buy and install a second card. Driver hacks that ignore the check are sometimes successful (especially on boards that have the logic but don't advertise it) and sometimes lead to non-bootable systems or BSOD crashes in full-screen SLI applications (the driver is forced to use something that isn't present). Switches are used to feed information between the cards and to "route" information from the CPU to the host GPU when instructions are issued, instead of simple fills or results to the slave GPU(s). The server board does not have them; they would slow the purpose of this board, which is to maximize parallel OpenCompute/CUDA operations.

It is more than electrical lanes, SLI and CrossfireX have logic based needs. These are not light bulbs.

Now we've drifted. To bring it back on topic: this is where PCIe 3.0 is "worth it". Just throwing "more sockets" and "more lanes" into the mix is less successful than beefing up the lanes and sockets already there, as is the case going from Gen 2 to Gen 3. Clearly "not worth it" for single GPU, single monitor, 1080p resolutions.
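To put numbers on "beefing up the lanes": Gen 3 roughly doubles per-lane throughput, both by raising the transfer rate and by swapping 8b/10b encoding for 128b/130b. A quick Python sketch of the per-lane math:

```python
# Approximate usable bandwidth of one PCIe lane, per generation.
# Gen 2 runs at 5 GT/s with 8b/10b encoding (20% overhead);
# Gen 3 runs at 8 GT/s with 128b/130b encoding (~1.5% overhead).

def lane_bandwidth_mb_s(gt_per_s, payload_bits, total_bits):
    """Usable one-way bandwidth of a single lane in MB/s (1 MB = 1e6 bytes)."""
    return gt_per_s * 1e9 * payload_bits / total_bits / 8 / 1e6

gen2 = lane_bandwidth_mb_s(5, 8, 10)      # 500 MB/s per lane
gen3 = lane_bandwidth_mb_s(8, 128, 130)   # ~985 MB/s per lane
print(f"Gen 2: {gen2:.0f} MB/s/lane, Gen 3: {gen3:.0f} MB/s/lane")
```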
--
Show off that hardware: join Team Discovery and Team Helix


DarkLogix
Texan and Proud
Premium
join:2008-10-23
Baytown, TX
kudos:3

What part of "I've found evidence it has been done on that board" don't you get?

Non-hacked and cross-CPU, it works; EVGA just didn't want it limited to REQUIRING a 2nd CPU to work on the board.

I know they aren't light bulbs, and I know a fair amount about the setup of these motherboards. You need to stop acting like you know things you don't.



DarkLogix
Texan and Proud
Premium
join:2008-10-23
Baytown, TX
kudos:3
reply to markofmayhem

The reason it can drift like this is that the OP's question is answered.

It's only worth it beyond 2-card SLI.

Quad SLI still has issues with the majority of boards out there due to not really having enough lanes.

A mobo maker can't beef up the lanes: with Intel's current CPUs, the only 3.0 lanes come from the CPU, and the multiplexing some boards offer is a crap way to cram more bandwidth where there isn't any more.

2-card SLI works great on the majority of boards out there, as they can get a full 16 PCIe 3.0 lanes per card. But socket 2011 (i.e. the top-end Intel chip) only has 40 lanes TOTAL on the CPU. There are a few 2.0 lanes on the north bridge, and some mobo makers use those to beef up an otherwise-absent PCIe slot, which also works with SLI, even without a bridge and with the north bridge sitting between it and the CPU (where the 3.0 lanes end). Some of the 3-slot boards use that trick to get an x16 2.0 slot alongside the x16 3.0 slots.

But with a hard limit of 40 lanes per CPU, there's a limit on what you can do with 1 CPU.
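The lane arithmetic above can be sketched in a few lines of Python (a toy budget check, ignoring switches and multiplexers):

```python
# The 40-lane CPU budget explains why quad SLI is cramped: a quick
# check of which slot layouts fit without PCIe switches.

CPU_LANES = 40  # PCIe 3.0 lanes on one LGA 2011 CPU

def fits(slot_widths):
    """True if the requested slot widths fit in one CPU's lane budget."""
    return sum(slot_widths) <= CPU_LANES

print(fits([16, 16]))          # 2-way at full x16: True  (32 of 40 lanes)
print(fits([16, 16, 8]))       # 3-way as x16/x16/x8: True (exactly 40)
print(fits([16, 16, 16, 16]))  # 4-way at full x16: False (needs 64)
```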



markofmayhem
Why not now?
Premium
join:2004-04-08
Pittsburgh, PA
kudos:5
reply to DarkLogix

said by DarkLogix:

What part of "I've found evidence it has been done on that board" don't you get?

First time you've mentioned that. I can't wait to read through the test results... post the link!
--
Show off that hardware: join Team Discovery and Team Helix


DarkLogix
Texan and Proud
Premium
join:2008-10-23
Baytown, TX
kudos:3

I already mentioned it.

Their testing wasn't good enough or detailed enough, IMO, but it did show SLI working. Sorry, I lost the link; it was fairly deep in a Google search (not many people out there with that board, let alone ones testing it with video cards like that).

BTW, your statements show a lack of understanding of the current architecture of Intel multi-CPU systems,

so I'll go over it very briefly

1. QPI is used for core-to-core communication.

2. On Xeons, the same QPI has external interfaces that join the internal QPI network to the QPI network of the 2nd CPU.

3. Each CPU (capable of dual-socket or above) has 2 exposed QPI links at up to 8.0 GT/s.

4. Each CPU has its own 40 PCIe 3.0 lanes, available to be connected to whatever the mobo maker wants (be it a slot, an on-board device, or nothing at all).

5. Intel has made it such that the lanes DIRECTLY from the CPUs can connect DIRECTLY to video cards and do SLI.

Sure, the 1155/1156 crap chips have a closed (i.e. internal to the CPU only) QPI link, but that's not true of the 2011s.

And the 1155/1156 parts don't have many PCIe lanes available, so some boards use the northbridge lanes (meant for on-board devices) to offer SLI. But with the northbridge in the way, they have to add some logic, because the northbridge adds latency that the direct lanes don't have.

But on 2011s that's not the case, because of far more direct lanes and an open QPI on Xeons.
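For scale, those 8.0 GT/s QPI links work out like this (a back-of-the-envelope sketch, assuming the usual 2 bytes of payload per transfer in each direction):

```python
# Rough QPI bandwidth math: a QPI link carries 2 bytes of payload per
# transfer in each direction, so bandwidth scales directly with GT/s.

def qpi_gb_s(gt_per_s, bidirectional=True):
    one_way = gt_per_s * 2  # GT/s x 2 bytes = GB/s in one direction
    return one_way * 2 if bidirectional else one_way

print(qpi_gb_s(6.4))  # 25.6 GB/s -- Intel's quoted figure for 6.4 GT/s links
print(qpi_gb_s(8.0))  # 32.0 GB/s for an 8.0 GT/s link
```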



markofmayhem
Why not now?
Premium
join:2004-04-08
Pittsburgh, PA
kudos:5

said by DarkLogix:

Their testing wasn't good enough or detailed enough, IMO, but it did show SLI working. Sorry, I lost the link; it was fairly deep in a Google search (not many people out there with that board, let alone ones testing it with video cards like that).

Convenient...

SLI support is not solely hardware-based; it requires driver communication to align registers and negotiate bridge communication across the GPUs. The Nvidia driver has a white-list of allowed boards. This board is not listed, and needing a hacked driver for support is a huge red flag. The C602 does not natively have the SLI logic. CrossfireX does not require underlying logic within the chipset; SLI does. Nvidia licenses it for $5 per use. This "logic" does not appear on all boards. Boards that are not "certified" but are limited crops of higher models that do have it can be forced. Boards severely cropped from certified higher models fall flat most of the time, due to lack of the underlying logic the driver needs to operate without access violations.

The Intel reference board is not listed, that I can see, within the drivers (which doesn't mean it isn't). It would be nice to see 4 x16 PCIe 3.0 slots driving full tilt in an SLI configuration (not just 4 GPUs on x16 PCIe 3.0 lanes; that is possible and working, but they are not SLI-configured). The other boards don't offer this, and on "paper", neither does this one. It would be a beautiful thing to see; too bad that link is lost.

said by DarkLogix:

BTW your statements show a lack of understanding of the current architecture of Intel multi CPU systems... FAQ repost of QPI follow

QPI doesn't provide the solution; you are missing the point. The NF200 was a switch, similar to the PEX8747 used on the SR-X. (Supermicro does not use one, which is why they only offer 3 x16 and 3 x8 slots, as 80 PCIe lanes are not available.)

Here's another C602-chipset dual LGA 2011 socket board that supports SLI: the Asus Z9PE-D8.

Blue slots only, 4 x8 in quad configuration. Can't SLI across the second socket.
or
Black slots only, 2 x16 and 1 x8. Can't SLI across the second socket.

Electrically, the lanes and connections exist, but this has not been implemented in a real-world solution; the 80 PCIe lanes are not left solely for PEG use. Switches are required to feed as many x16 slots as possible. The driver will not pass the second socket, either due to a failure within the proprietary logic Nvidia requires on the board or because the driver is simply unable to do so.

The HyperSLI patch should get the driver to recognize C602 boards without native SLI, but it doesn't ensure that BSOD and unbootable scenarios won't plague the user. P67 had a closed QPI, indeed, as well as 16 PCIe PEG lanes available; boards without switches reduced SLI down to 2 x8 slots. However, not all P67 boards were "SLI certified". HyperSLI ignored the board's absence from the white-list and still was unable to get SLI working on those boards. CrossfireX: no problem, no patch needed. SLI falls on its face without OS driver support, and the driver doesn't work with all boards that have proper electrical connections, due to access violations.
--
Show off that hardware: join Team Discovery and Team Helix


DarkLogix
Texan and Proud
Premium
join:2008-10-23
Baytown, TX
kudos:3

And HP has the Z820, which is "SLI ready" (meant for Quadro SLI), where it'll cut the speed of the 2nd and 3rd slots to x8 if there's only 1 CPU.



Oleg
Premium
join:2003-12-08
Birmingham, AL
kudos:2
reply to me1212

For USB flash drives it's not worth it. As for HDDs, eSATA will do the job by adding a bracket.



Krisnatharok
Caveat Emptor
Premium
join:2009-02-11
Earth Orbit
kudos:12

said by Oleg:

For USB flash drives it's not worth it. As for HDDs, eSATA will do the job by adding a bracket.

What do USB flash drives have to do with PCIe slots?

PCIe 3.0 =/= USB 3
--
If we lose this freedom of ours, history will record with the greatest astonishment, those who had the most to lose, did the least to prevent its happening.


Oleg
Premium
join:2003-12-08
Birmingham, AL
kudos:2

said by Krisnatharok:

said by Oleg:

For USB flash drives it's not worth it. As for HDDs, eSATA will do the job by adding a bracket.

What do USB flash drives have to do with PCIe slots?

PCIe 3.0 =/= USB 3

I am talking about the price. You can get an eSATA bracket instead of a USB 3.0 PCIe card, because it is cheaper.


Krisnatharok
Caveat Emptor
Premium
join:2009-02-11
Earth Orbit
kudos:12

You realize USB 3 and PCIe 3.0 are two very different technologies?



ImpldConsent
Under Siege
Premium
join:2001-03-04
Mcdonough, GA
Reviews:
·AT&T U-Verse
·magicjack.com
reply to me1212

Well, y'all, tell me this: is the card edge connector the same as PCIe 2.0? Reason I ask: that 660 would be a WONDERFUL upgrade to my 250, but I've only got PCIe 2.0. If the edge is identical and I'm not giving up much unless I'm doing massive SLI, then I think I'll pull the trigger. (Still in Afghanistan, so I rarely get to do massive research.)
--
That's "MISTER" Kafir to you.



DarkLogix
Texan and Proud
Premium
join:2008-10-23
Baytown, TX
kudos:3

Yes the connector is the same.



Oleg
Premium
join:2003-12-08
Birmingham, AL
kudos:2
reply to Krisnatharok

said by Krisnatharok:

You realize USB 3 and PCIe 3.0 are two very different technologies?

I know. What I am trying to say is that eSATA is a cheaper choice for external hard drives, but if you want the speed benefit for USB flash drives, then yes, USB 3.0 is the only way I know of when it comes to speed.


Krisnatharok
Caveat Emptor
Premium
join:2009-02-11
Earth Orbit
kudos:12

The OP is asking about the bandwidth of video cards. I don't know where you're getting USB flash drives and hard drives; hard drives would be over SATA II or III, not PCIe 2.0 or 3.0. Three different technologies:

USB 3: external peripherals
PCIe 3.0: video cards and expansion cards
SATA III: Hard drives, SSDs, optical drives
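The raw link rates make the distinction concrete. A rough Python comparison of the theoretical maxima (link rate minus encoding overhead; real-world throughput is lower):

```python
# Theoretical raw throughput of the three interfaces, in MB/s
# (link rate minus line-encoding overhead).

def link_mb_s(gbit_per_s, payload_bits, total_bits):
    """Usable MB/s for a serial link with the given encoding."""
    return gbit_per_s * 1e9 * payload_bits / total_bits / 8 / 1e6

print(link_mb_s(5, 8, 10))          # USB 3.0 (5 Gbit/s, 8b/10b):  500 MB/s
print(link_mb_s(6, 8, 10))          # SATA III (6 Gbit/s, 8b/10b): 600 MB/s
print(link_mb_s(16 * 8, 128, 130))  # PCIe 3.0 x16 (128b/130b): ~15,754 MB/s
```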
--
If we lose this freedom of ours, history will record with the greatest astonishment, those who had the most to lose, did the least to prevent its happening.



DarkLogix
Texan and Proud
Premium
join:2008-10-23
Baytown, TX
kudos:3
reply to Oleg

You could easily get a sata-to-esata bracket that would fit in a single expansion space.


me1212

join:2008-11-20
Pleasant Hill, MO
reply to ImpldConsent

Same connector, just more bandwidth. From what I've read, even quad-SLI 680s aren't very bottlenecked by PCIe 2.0/2.1 unless they're using PhysX.
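A rough way to see why: an x16 slot at PCIe 2.0 has about the same bandwidth as an x8 slot at 3.0, so the older generation rarely starves a single card (a sketch using approximate per-lane rates):

```python
# Approximate per-lane usable bandwidth: PCIe 2.0 ~500 MB/s,
# PCIe 3.0 ~985 MB/s (after 8b/10b vs 128b/130b encoding).
PER_LANE_MB_S = {"2.0": 500, "3.0": 985}

def slot_mb_s(gen, width):
    """Approximate usable bandwidth of a slot of the given gen and width."""
    return PER_LANE_MB_S[gen] * width

print(slot_mb_s("2.0", 16))  # 8000 MB/s for an x16 Gen 2 slot
print(slot_mb_s("3.0", 8))   # 7880 MB/s: nearly the same, at half the width
print(slot_mb_s("3.0", 16))  # 15760 MB/s for a full x16 Gen 3 slot
```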