The problem here is twofold: 1) this is our first data center, and no one on the net ops team has ever built one from the ground up, and 2) we have a strict deadline, so we can't change anything now.
We're learning a lot as we go, and we're actually planning to write a spec for all future data centers. We've been taking cues from the big players like Google and Facebook and trying to incorporate as many of their conventions as we can at this stage in the game, like running the cooling at higher temperatures (75-85 °F) and labeling every cable.

One of the biggest issues we've run into is cable management. I personally suck at it, so I don't have much real insight here; most of what I've been doing I've learned from the bbphotos and from this forum. What I've never really seen is how people deal with different types of cables. We have conventional Cat 6, fiber bundles, jumpers, and SFP+ cables, and they all have their own intricacies, so I haven't figured out a good way to manage them, especially since some servers have both SFP+ and Cat 6 and some switches have all three.
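For what it's worth, here's roughly the labeling convention we've been sketching out for mixed media. To be clear, the media codes and the rack/RU/port format below are my own made-up convention, not any standard; it's just a minimal sketch of the idea that every label should name the cable type and both ends of the run:

```python
# Hypothetical cable-labeling sketch -- my own convention, not a standard.
# Idea: encode media type plus both endpoints (rack / rack-unit / port)
# so reading either end of a cable tells you where the other end lands.

# Media-type prefixes (assumption: these two-letter codes are made up)
MEDIA = {"cat6": "C6", "fiber": "FB", "sfp+": "SP"}

def cable_label(media, src_rack, src_ru, src_port, dst_rack, dst_ru, dst_port):
    """Build a label like 'SP:R04U22P1>R04U48P17' for one end of a cable."""
    prefix = MEDIA[media.lower()]
    src = f"R{src_rack:02d}U{src_ru:02d}P{src_port}"
    dst = f"R{dst_rack:02d}U{dst_ru:02d}P{dst_port}"
    return f"{prefix}:{src}>{dst}"

if __name__ == "__main__":
    # SFP+ run from a server NIC (rack 4, U22, port 1) up to the ToR switch
    print(cable_label("sfp+", 4, 22, 1, 4, 48, 17))   # SP:R04U22P1>R04U48P17
    # Cat 6 from the same box over to a management switch in rack 5
    print(cable_label("cat6", 4, 22, 2, 5, 40, 3))    # C6:R04U22P2>R05U40P3
```

The appeal of something like this is that you can read either end of a run and know exactly where the far end terminates, which matters a lot when Cat 6, fiber, and SFP+ all share the same trays.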
We've also looked into things like raised floors, though I think that one is still up in the air, as well as ducting the A/C directly into the racks, but that seems pretty expensive.
It's a personal goal of mine and of my boss to be developing LEED-certified installations within five years.