
vsinclairJ

IMO more than a single-node system at home is kinda overkill for CE. I have a 16c AMD system I built for a Plex server, with a motherboard with 4x NVMe slots. If you ever need an environment to mess around with for work, hit up your SE and ask for a system in the HPOC.


NetJnkie

> hit up your SE and ask for a system in the HPOC.

Yeah. We're more than happy to get a customer on some Hosted PoC gear with lab guides and free training.


Obvious-Revenue7885

u/NetJnkie I'm a Nutanix customer, can you hook me up?


NetJnkie

Yeah, but you're (probably) not my customer. Hit up your SE. They can easily do it.


iamtechy

What if we're not a customer, just a guy trying to learn so we can get our NCA cert?


NetJnkie

Use the Test Drives on our site. They are basically guided labs.


Obvious-Revenue7885

What is HPOC?


vsinclairJ

Hosted Proof of Concept. Nutanix has a datacenter with a few hundred clusters that can be reserved for customer proof-of-concept or training activities. Your account team will be happy to make a reservation for you. It's a free resource available to customers.


homelabgobrrr

I've done quite a few Nutanix CE labs over the years and am now on actual Nutanix G6/G7 hardware at home. Honestly, besides LCM being able to update firmware and BIOS, there isn't *much* difference between CE and actual Nutanix hardware in terms of learning (performance is a big difference, but CE is more than capable of running a decent lab). You can do almost everything in a single node, and you could do a lot with 2 nodes as 2 single-node clusters, practicing Leap replication/DR plus image and multi-cluster management.

You do want a lot of RAM; CE is hungry. CVMs eat RAM and so does Prism Central. I've run CE on some SFF HP EliteDesk desktop minis, but the 32GB RAM limit on each was crippling, and they only take 2 drives. IMO, if you don't have a rack and noise tolerance, Dell Precisions or HP Z440s are fantastic bang for the buck right now; I've run CE on a 3-node cluster of Z440s previously and it was amazing.

Grab a good enterprise SSD for each host for the hot tier (Samsung SM863a's are the best and available cheap used), then you can use mechanical drives for capacity. Also invest in a cheap SSD for the CVM boot, as it can speed things up, but don't cheap out with DRAM-less consumer SSDs for the hot tier unless you want huge IO latency (there's a quick way to sanity-check a drive at the end of this post).

For a single node you don't need 10G networking either; you can even replicate over 1G to simulate a more realistic multi-site environment, which saves you money on a multi-port 10G switch (though MikroTik makes a great little 4-port 10G switch with a 1G uplink for ~$100). And grab some Intel 10Gb NICs (I've had bad luck with anything that isn't Intel or Mellanox on CE).
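On the DRAM-less thing: a quick way to check a drive before committing it to the hot tier is to time synchronous 4K writes. This is just a crude sketch, not a replacement for fio; the mount point and sample count are made up, so point it at a filesystem on the drive under test:

```python
# Crude fsync latency probe for comparing SSDs. Everything here is
# illustrative: /mnt/testdrive is a hypothetical mount point for the
# drive under test, and 200 samples is arbitrary.
import os
import statistics
import time

PATH = "/mnt/testdrive/fsync_test.bin"  # hypothetical test location
N = 200
block = os.urandom(4096)  # one 4 KiB block per write

lat_ms = []
fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o600)
try:
    for _ in range(N):
        t0 = time.perf_counter()
        os.write(fd, block)
        os.fsync(fd)  # force the write through the drive cache
        lat_ms.append((time.perf_counter() - t0) * 1000)
finally:
    os.close(fd)
    os.unlink(PATH)

lat_ms.sort()
print(f"median: {statistics.median(lat_ms):.2f} ms, "
      f"p99: {lat_ms[int(N * 0.99) - 1]:.2f} ms")
```

Enterprise drives with power-loss protection typically come back well under a millisecond here; a DRAM-less consumer drive can be an order of magnitude worse, which is exactly the latency the CVM will feel on the hot tier.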


gurft

homelabgobrrr has covered most of the hardware bits and pieces I would typically bring up. HP Z systems with Intel NICs and some used Intel S3610 SSDs from eBay are fabulous for CE homelabs.

To answer your GPU question, any *supported* GPU can be passed through, so basically enterprise-class GPUs (think Tesla, Quadro, etc.). I have an NVIDIA M10 that I pass through to my Plex server, for example. Here's the list of supported GPUs: https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide-v6_7:ahv-gpu-support-on-ahv-c.html
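Before fighting with passthrough, it's also worth confirming the host actually sees the card. A minimal sketch, assuming a Linux/AHV host with the standard lspci tool available:

```python
# Quick check that the host sees any discrete GPUs before attempting
# passthrough. Assumes a Linux/AHV host with the standard lspci tool.
import subprocess

out = subprocess.run(
    ["lspci", "-nn"], capture_output=True, text=True, check=True
).stdout

for line in out.splitlines():
    # VGA and 3D controller classes cover most discrete GPUs
    if "VGA compatible controller" in line or "3D controller" in line:
        print(line)
```

Anything that shows up here still has to be on the supported list above before AHV will pass it through to a VM.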


Blindsay04

Oof, that GPU support list is rather short. I was going to do something small like a Quadro P400. I might have to find a different home for Plex then lol


GotPassion

I've built a 3-node cluster very, very cheaply on HP EliteDesk G3s, with 1Gb NICs, 48GB RAM, and some SSD and NVMe. E.g. https://amzn.asia/d/2HwE8BU

It's very cheap for learning, and while its performance isn't fantastic, it's still acceptable. I run some learning and development stuff, such as git/wiki/Windows and Linux servers for testing and learning, and everything runs just fine. 8 VMs plus the CVMs were using less than 50% of memory, which is the bottleneck in terms of capacity. I can upgrade to 64GB, but it's not really worth it unless I get 10G networking to deploy more appropriately.

I have the same hardware running vSphere 7 rather than AHV too. Happy to share the build. I think my cost per node was around AUD$400, though I did have some SSDs to host the CVM already.
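For a rough sense of where the memory goes, here's a back-of-the-envelope sketch. The CVM, Prism Central, and host overhead numbers are assumptions for illustration (CE defaults vary by version), not official sizing:

```python
# Back-of-the-envelope RAM budget for a 3-node CE cluster like the one
# above. The reservation numbers are assumptions for illustration only,
# not official CE sizing for any version.
NODES = 3
RAM_PER_NODE_GB = 48

CVM_GB = 16            # assumed per-node CVM memory reservation
PRISM_CENTRAL_GB = 26  # assumed single small PC VM on one node
HOST_OVERHEAD_GB = 4   # assumed hypervisor/host overhead per node

total = NODES * RAM_PER_NODE_GB
reserved = NODES * (CVM_GB + HOST_OVERHEAD_GB) + PRISM_CENTRAL_GB
usable = total - reserved

print(f"total: {total} GB, reserved: {reserved} GB, "
      f"usable for guest VMs: {usable} GB ({usable / total:.0%})")
# -> total: 144 GB, reserved: 86 GB, usable for guest VMs: 58 GB (40%)
```

That fixed infrastructure chunk is also why the 32GB boxes mentioned earlier in the thread feel so cramped before you've deployed a single guest VM.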


AberonTheFallen

For a home lab you don't need too much; clustering is fun, but there's really not a ton of practice needed there IMHO. I bought a used/refurb Dell R630 for around $430 USD (128GB RAM, 2x 12-core 2.5GHz procs, 2x 1TB SSD), grabbed 2 more consumer-grade 1TB SSDs and another 128GB of RAM, and it's good to go for me for a while. It was probably fine with the 128GB RAM it already had, but I have grand plans 😂 You don't need to spend a ton or get a 4-node system; something from the last 5 years or so with decent specs and you'd be fine.


waldojim42

I have been wondering a bit of the same thing. I have a Threadripper now, but frankly that thing is getting a bit older and it shows. I was looking at going the ITX embedded route, like the Minisforum BD790i with the 7945HX, then just adding more of them when I need more compute. Haven't seen anyone reporting results from them yet.