Actually the storage card manufacturer can't add circuitry for the most part -- it would require Marvell to either a) increase the PCIe lane count the IC offers, or b) move up to PCIe 3.0. Chances are they'd go with (a), and the chip would get a different model number.
The below assumes you have a PCIe 2.0 slot that provides PCIe 2.0 x2 (or more) lanes, and
not a PCIe 2.0 slot that physically accepts x2 (or larger) cards but only offers, say, x1 worth of lanes. Anyway, to work out the math on this for those wondering:
PCIe 2.0 x1 = 500MByte/sec per direction (5GT/s raw, less 8b/10b encoding overhead)
PCIe 2.0 x2 = 1000MByte/sec per direction
SATA300 = supports up to 300MByte/sec
SATA600 = supports up to 600MByte/sec
(Note these interface rates work out in decimal units -- 1000, not 1024 -- since both PCIe 2.0 and SATA derive them from raw signalling rates with 8b/10b encoding, so I'll use 1000 throughout)
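To make the encoding-overhead arithmetic concrete, here's a minimal sketch that derives those figures from the raw signalling rates (the function name is mine, purely illustrative, not from any real API):

```python
# A minimal sketch of the encoding-overhead math above. The function name
# is illustrative, not from any real library.

def link_bandwidth_mb(gt_per_sec, lanes=1):
    """One-way bandwidth in MByte/sec for an 8b/10b-encoded serial link."""
    data_bits = gt_per_sec * 1e9 * (8 / 10) * lanes  # 10 wire bits carry 8 data bits
    return data_bits / 8 / 1e6                       # bits -> bytes -> MByte (decimal)

print(link_bandwidth_mb(5.0, lanes=1))  # PCIe 2.0 x1 -> 500.0
print(link_bandwidth_mb(5.0, lanes=2))  # PCIe 2.0 x2 -> 1000.0
print(link_bandwidth_mb(3.0))           # SATA300     -> 300.0
print(link_bandwidth_mb(6.0))           # SATA600     -> 600.0
```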
Standard MHDDs can do about 150-180MBytes/sec sequential from the platters -- note this
does not reflect reads from the disk that are being fed by the on-disk cache (those can sometimes reach the SATA PHY speed, but I've rarely seen this with MHDDs).
So let's say you have 4x MHDDs that are magical and can do 180MByte/sec constantly (from LBA 0 to end of drive), sequential or random. I said magical, right?
4 * 180MB/s = 720MBytes/sec total
A PCIe 2.0 x2 link (~1000MByte/sec per direction) can handle that, with ~280MByte/sec of headroom left over (say, for cached reads); a single x1 lane (500MByte/sec), however, could not. With x2, I think this would be fine.
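The same headroom check as a sketch, using the assumed figures from above (four drives at a "magical" 180MByte/sec each, against the x2 one-way rate):

```python
# Aggregate demand from four hypothetical MHDDs vs. a PCIe 2.0 x2 link.
# 1000 MByte/sec is the x2 one-way rate after 8b/10b encoding overhead.

link_mb = 1000
drives = 4
per_drive_mb = 180        # assumed sustained sequential rate per MHDD

total_demand = drives * per_drive_mb
headroom = link_mb - total_demand
print(total_demand, headroom)  # 720 280
```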
The situation changes dramatically if a person decides to use present-day SSDs (ex. Samsung 840 series) with that controller, and the PCIe lane count becomes a serious bottleneck.
Since I have a Samsung 840 256GB SSD, and know what my sequential numbers are, let's use it as an example: 560MBytes/sec read, 256MByte/sec write.
Let's say you have 4 of those and stick them on the aforementioned controller:
4 * 560MB/s = 2240MBytes/sec read total
4 * 256MB/s = 1024MBytes/sec write total
So for reads you would be hitting the link's capacity and potentially losing ~1240MByte/sec worth of throughput. Writes, at ~1024MByte/sec total, would essentially saturate the link as well, so there's little to no headroom on that side either.
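And the same arithmetic for the SSD case (again using my drive's numbers and the assumed x2 link rate):

```python
# Four SSDs (560 MByte/sec read, 256 MByte/sec write each) against a
# PCIe 2.0 x2 link (~1000 MByte/sec per direction after 8b/10b encoding).

link_mb = 1000
read_demand = 4 * 560     # 2240 MByte/sec
write_demand = 4 * 256    # 1024 MByte/sec

read_loss = max(0, read_demand - link_mb)    # throughput left on the table
write_loss = max(0, write_demand - link_mb)
print(read_loss, write_loss)  # 1240 24
```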
Thus -- the above controllers will do you just fine as long as you're using them predominantly for MHDDs. A mix of SSDs and MHDDs is fine too; just do the math first. And don't forget about driver and OS overhead -- the numbers above are pure math, not real-world results, and I haven't done any actual testing with such a controller to see what its limits are. But I do think using one of those exclusively for MHDDs would be perfectly fine.