Modus
I hate smartassery on forums
Premium Member
join:2005-05-02
us

converged infrastructure

I am just trying to get an idea of how good the Nutanix product is and whether it is really worth the price. I haven't gotten any official pricing from Nutanix yet, but the list price for the kit they recommended was about $182k. That was list price, but even with some discounts it's still more costly than my "classic" SAN setup with Fibre Channel.

Is their product really that good? I know they talk about the small footprint and the performance of local storage.

What makes Nutanix so much better and worth the high cost? Are any of you looking at, or have you installed, any sort of converged infrastructure in your environment?

dennismurphy
Put me on hold? I'll put YOU on hold
Premium Member
join:2002-11-19
Parsippany, NJ

said by Modus:

I am just trying to get an idea of how good the Nutanix product is and whether it is really worth the price. I haven't gotten any official pricing from Nutanix yet, but the list price for the kit they recommended was about $182k. That was list price, but even with some discounts it's still more costly than my "classic" SAN setup with Fibre Channel.

Is their product really that good? I know they talk about the small footprint and the performance of local storage.

What makes Nutanix so much better and worth the high cost? Are any of you looking at, or have you installed, any sort of converged infrastructure in your environment?

Hitachi UCP.
HP ConvergedSystem 700x.
Oracle BDA.
VCE.
FlexPod.

Lots of options, and some neat stuff in there. I especially like the UCP options for logical partitions (LPARs), but my mind is always with HP... the BladeSystem c7000 is just such a strong building block, and adding 3PAR to the mix makes it stronger.

Drex
Beer...The other white meat.
Premium Member
join:2000-02-24
Not There

to Modus
Let's just say we HAD a converged infrastructure here. We quickly determined that our high latency was due to 100% CPU utilization on the top-of-rack Cisco 5010s we were plugged into. The network team constantly denied it even though we showed them what was going on. We have a rather special group of people on our network team.
Our SAN guy said he would rather separate the storage traffic and the network traffic... you know, for those instances where the network team reboots BOTH sets of switches and brings BOTH fabrics down. Just sayin'...
HELLFIRE
MVM
join:2009-11-25

Top-of-rack 5010s? Do you know what code they're running, Drex? Wonder if you might be hitting CSCub02109, possibly?

Last account I serviced, the customer had a few N5Ks that always showed 100% CPU due to this bug but never dropped a packet in their life... no matter what ping or throughput tests we put 'em through. The customer kept hemming and hawing about upgrading the code to alleviate that particular bug, though... even after Cisco told them point blank they risked a device crash as well...
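
If you want to run the same kind of sanity check against your 5010s, something along these lines does the job (a rough sketch in Python, not the exact tooling we used back then; the hostnames are placeholders). It times TCP connects toward each target, either the switch management address or a host hanging off it, so you can compare a "pegged" CPU readout against what connections actually see:

# Rough latency spot check -- a sketch, not the exact tooling we used.
# Times TCP connects toward each target so you can compare the "100% CPU"
# readout against observed response times.  Hostnames are placeholders.
import socket
import statistics
import time

TARGETS = ["n5k-a.example.net", "n5k-b.example.net"]  # hypothetical names
PORT = 22            # anything that answers; SSH is usually open on mgmt
SAMPLES = 20

for host in TARGETS:
    rtts = []
    for _ in range(SAMPLES):
        start = time.monotonic()
        try:
            with socket.create_connection((host, PORT), timeout=2):
                pass
        except OSError:
            continue  # treat a miss as a lost sample
        rtts.append((time.monotonic() - start) * 1000.0)
    if rtts:
        print("%s: %d/%d ok, avg %.1f ms, max %.1f ms"
              % (host, len(rtts), SAMPLES, statistics.mean(rtts), max(rtts)))
    else:
        print("%s: unreachable" % host)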

Just my (side) 00000010bits

Regards
amungus
Premium Member
join:2004-11-26
America

to Modus
Their main pitch is simplicity, near as I can tell.

Regarding either them or Vblock systems, the answer as to whether it's worth it is "it depends." If either had been just slightly more mature a couple of years ago, I would have considered them more seriously for a project. We ended up going the "build it ourselves" route and have been satisfied enough with the results, though I do wish we had priced out Nutanix and Vblock systems to compare against at the time.

Having sat (directly) next to Joe Tucci before one of his talks and watched him present on Vblocks (among other topics), I find them fascinating, as I do the Nutanix stuff.

The "build it yourself" method is still also a very valid consideration if you want full control over each component. The other factor, aside from some of the abstraction/simplification of managing these things, is the support. They each have their strengths/weaknesses there, like anything else.

Here's one (yes, imperfect) analogy:
***Vblock - You're essentially getting an "OEM" machine that you just "plug in" vs. buying/assembling the parts yourself, at a much larger (and obviously more expensive) scale.
***"Build it Yourself" - You're buying each component, putting it together carefully, but there are some parts you might spend more on than others, and each one has its own warranty, etc...
***Nutanix - You get a stack of "black boxes" that practically assembles itself into a machine.

An interesting article linked on Nutanix's site:
»www.enterprisetech.com/2 ··· ems-biz/

One thing to note:
The Vblocks still use a (Cisco UCS) chassis. Whether or not that's a "good thing" depends on your perspective. The article mentions that such a thing *might* change, but I would doubt it happens anytime soon.

DadeMurphy
Premium Member
join:2002-07-25
Danvers, MA

to Modus
I have a 12-node Nutanix cluster that runs my internal IT infrastructure and a significant portion of my development environment. Honestly, I think it is one of the worst purchases the prior IT manager made (I still can't decide between the Nutanix and the Meraki everything). Each blade in our Nutanix has 128GB of RAM, but the controller VMs required to coordinate everything use 32GB each, leaving us with 96GB per node before we start overprovisioning.
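
For anyone sizing one of these, the overhead math is worth running up front. A quick sketch (Python, using the numbers from my cluster above; swap in whatever you get quoted, and note the N+1 headroom line is just the usual sizing convention, not anything Nutanix-specific):

# Back-of-envelope usable-RAM math for a hyperconverged cluster where every
# node runs a controller VM (CVM).  Numbers are the ones from the cluster
# described above; the N+1 reserve is a common sizing convention, not a
# Nutanix requirement.
NODES = 12
RAM_PER_NODE_GB = 128
CVM_RAM_GB = 32           # reserved by the controller VM on each node
FAILURES_TO_TOLERATE = 1  # nodes' worth of headroom to keep free

raw_gb = NODES * RAM_PER_NODE_GB
usable_per_node_gb = RAM_PER_NODE_GB - CVM_RAM_GB
usable_gb = NODES * usable_per_node_gb
# keep enough free to restart the VMs from a failed node elsewhere
plannable_gb = (NODES - FAILURES_TO_TOLERATE) * usable_per_node_gb

print("raw RAM:            %4d GB" % raw_gb)
print("after CVM overhead: %4d GB (%.0f%% of raw)"
      % (usable_gb, 100.0 * usable_gb / raw_gb))
print("plannable with N+1: %4d GB" % plannable_gb)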

I'm waiting until it drops out of support, at which point it will be replaced by an HP, IBM, or Cisco blade solution (fortunately I'll be able to migrate my EMC XIO bricks from prod to dev at that point, as we'll be refreshing my SaaS DC then too).

Personally, I would steer away from their solution and stick with a traditional infrastructure; there are better ways to simplify or consolidate your environment.

Modus
I hate smartassery on forums
Premium Member
join:2005-05-02
us

OK, so you have Nutanix & Meraki kit; they quoted me the same. Besides the RAM provisioning, what other pain points do you have?

DadeMurphy
Premium Member
join:2002-07-25
Danvers, MA

Other issues we've run into: inconsistent disk response times from the VMs (expected with a caching infrastructure), the fact that stopping the cluster can be annoying, and of course any maintenance on the ESXi infrastructure has the added complexity of making sure you handle the Nutanix pieces correctly first.
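
If you want to see the response-time spread for yourself, a crude way to measure it from inside a guest is to time a bunch of small synced writes and look at the tail; something like this works (a generic sketch, not a Nutanix tool; the path and sizes are placeholders):

# Crude disk-latency spread check from inside a VM -- generic sketch, not a
# Nutanix tool.  Times small fsync'd writes and prints avg vs. tail latency;
# a caching tier tends to show a wide gap between the two.
import os
import statistics
import time

TEST_FILE = "/tmp/latency_probe.bin"   # placeholder path; point at a disk you care about
BLOCK = b"\0" * 4096                   # 4 KiB writes
SAMPLES = 500

lat_ms = []
fd = os.open(TEST_FILE, os.O_CREAT | os.O_WRONLY)
try:
    for _ in range(SAMPLES):
        start = time.monotonic()
        os.write(fd, BLOCK)
        os.fsync(fd)                   # force the write through to storage
        lat_ms.append((time.monotonic() - start) * 1000.0)
finally:
    os.close(fd)
    os.remove(TEST_FILE)

lat_ms.sort()
p99 = lat_ms[int(len(lat_ms) * 0.99) - 1]
print("avg %.2f ms, median %.2f ms, p99 %.2f ms, max %.2f ms"
      % (statistics.mean(lat_ms), statistics.median(lat_ms), p99, lat_ms[-1]))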

To be honest, I don't interact with the infrastructure much, but I hear about it from my guys, and when I do have to work with it I miss having a real infrastructure.
ddrant
Member
join:2010-03-02
Womelsdorf, PA

to Modus
How many appliances were you getting for that $182k?

We just had SimpliVity in to do a demo. Honestly, if I didn't already have a Fibre Channel SAN (HP EVA 4400) that still has some life in it and an HP blade chassis, I'd seriously consider it. They seemed to imply that they could do some local appliances to replace our compute/storage, plus a smaller remote site for DR, all for less than the cost of replacing the EVA with a newer SAN.

PToN
Premium Member
join:2001-10-04
Houston, TX

to Modus
I would just go with an HP chassis and blades and pair it with a Tegile storage system. They are getting pretty big nowadays. (»www.tegile.com/)
goldfinger
Member
join:2000-12-24
00000

to Modus
We have a total of two Nutanix clusters and are very happy with them: a 3000-series cluster with 6 nodes and a separate 6000-series cluster with 3 nodes. The web UI is nice and simple, yet packed with information and reporting capabilities. Overall we are very happy and look forward to growing with them. Nutanix support is top notch as well.

I will be happy to answer any questions you may have.
ddrant
Member
join:2010-03-02
Womelsdorf, PA

to PToN
Wow PToN... another Tegile customer. We run all our VDI on an HA2100EP array. We were one of their original customers 3 years ago.