Building a HCI Lab with Nutanix Community Edition

It’s always handy to have access to some sort of lab gear at work or home for testing and learning. Nutanix Community Edition allows you to bring the latest HCI technology to your lab setup. I will outline some basic considerations when planning to go with Nutanix CE, show you what you need to start your lab project, and cover the possibilities and limits of the platform. My personal lab setup is based on a four-node cluster deployed on Intel NUCs.

[Image: Twitter survey on lab hypervisor choices]

According to my short Twitter survey, most folks (at least in the EUC community) use VMware ESXi for their lab setups, followed by Microsoft Hyper-V and Citrix XenServer. But hey, Nutanix Community Edition is gaining more and more traction! So let’s start with some facts about Nutanix Community Edition before jumping into the installation and setup process.

What is Nutanix Community Edition
  • free version of Nutanix AOS
  • designed for test driving its main features
  • on own hardware and infrastructure
  • intended for internal business operations and non-production use only

Nutanix Community Edition is the free version of Nutanix AOS, the operating system running on the commercial hardware from Nutanix. By the way, the commercial Nutanix solution is also available on selected Dell, Lenovo, Cisco and HPE hardware. The Community Edition is designed for test driving its main features on your own hardware and infrastructure. Please note that the Community Edition is intended for internal business operations and non-production use only!

If you don’t know the basics about the Nutanix technology, the following clip explains the main concept behind their solution from a high-level view. The clip covers the commercial solution, but the Community Edition is built on the same concept.

What’s In Community Edition
  • Hypervisor (Acropolis hypervisor with virtualization management)
  • Single pane of glass control (Prism web console to manage the cluster)
  • Command-line management (Nutanix command line – nCLI)
  • Ability to add nodes to the cluster (One, three, or four nodes can comprise a cluster)
  • Ease of installation and use (Install and boot the hypervisor and AOS environment from USB device)

So what do you get with Nutanix Community Edition? You get a hypervisor together with a distributed storage file system. You get an out-of-the-box HCI solution that can be managed from a single pane of glass, the Prism web console. For advanced configuration and management tasks you get command line access. You can deploy the Community Edition as a one-, three- or four-node cluster. And of course it’s really easy to install and use.

Recommended Hardware

If you want to build your own CE cluster, here are some recommendations about the hardware. First, from the CPU perspective it’s best to have a minimum of 4 cores. There is a way to get CE up and running with only 2 cores; I will show you later how to tweak the installer in that case.

[Image: Recommended Hardware for Nutanix Community Edition]

For RAM it’s very simple: more is always better, but you need at least 16 GB. That said, with the minimum you will not have much left to run your VMs, because the Controller VM (CVM) takes 12 GB in the default configuration, leaving only 4 GB. But there’s also a way to lower the memory consumption of the CVM to 8 GB; I will show you this later too.

For storage you need at least 2 drives per node (1 SSD + 1 HDD), plus a 3rd device as your boot device on top.

Other Recommendations and Tips
  • plan for a three-node or four-node cluster and be aware that a single-node cluster cannot be expanded
  • use static IP addresses for the hypervisor hosts and Controller VMs
    • do not use the 192.168.5.0/24 subnet, it’s reserved for internal use
  • don’t do nested deployment on ESXi for a real lab
  • spend your bucks on memory and cores, rather than on storage
  • use high quality USB 3.0 thumb drives
  • experience a live Nutanix instance for 2 hours and test drive Nutanix CE in the cloud

To experience the full power, storage features, data protection and HA of the Nutanix solution, you should plan to go for a three-node cluster or, even better, a four-node cluster. Be aware that you can’t expand a single-node cluster after the initial setup. You will have to destroy the cluster and start over to build a multi-node cluster.

Limitations

The Nutanix Community Edition is free, so of course there are some limitations compared to the commercial product.

  • your cluster requires internet connectivity (outgoing traffic on tcp/80 and tcp/8443) to send telemetry data and usage statistics to Nutanix through the «Pulse» mechanism
  • when an upgrade is available you must install it within 30 calendar days, otherwise access to your cluster will be blocked
  • your hardware (especially HBAs, NICs, NVMe) may or may not work with Nutanix CE
  • a Nutanix Next account is mandatory (registration required)
  • only community support

Preparation and creation of the USB installation media

Now let’s move on to the deployment and setup on your own hardware. First you have to register for a Next account and join the community (https://www.nutanix.com/community-edition/), where you will be able to download the initial installation image from the Nutanix Next Community forum.

[Image: Register for a Next account to get access to the Community Edition]
[Image: Rufus can burn the installation image to your boot media]

To write the installation image to your boot media on a Windows machine you need a utility. I can recommend «Rufus», a nice free tool available at https://rufus.akeo.ie.

  1. extract the initial installer image from the downloaded archive (ce-2017.02.23-stable.img.gz); 7-Zip can do this on Windows
  2. create a bootable disk using the «DD Image» option (a Linux alternative using dd is sketched after this list)
  3. select the extracted initial installer image (ce-2017.02.23-stable.img)
  4. boot your system from the USB installation media
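
If you prefer to prepare the boot media from a Linux shell instead of Rufus, the following is a minimal sketch of the same procedure; /dev/sdX is a hypothetical device path for your USB stick, so double-check it before running dd, because dd overwrites the target device without asking.

gunzip ce-2017.02.23-stable.img.gz

sudo dd if=ce-2017.02.23-stable.img of=/dev/sdX bs=4M

sync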

During the installation you will have to provide your network configuration. You need 2 IPs per node for the host and the Controller VM (CVM) plus one additional IP for your virtual cluster IP, and of course you have to configure the appropriate subnet mask and gateway address. As already mentioned, do not use the 192.168.5.0/24 subnet, it’s reserved for internal use.
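
As an example, a four-node cluster therefore needs nine IP addresses in total (4 hosts + 4 CVMs + 1 virtual cluster IP). A purely hypothetical addressing plan could look like this:

  • hypervisor hosts: 10.0.0.11 to 10.0.0.14
  • Controller VMs: 10.0.0.21 to 10.0.0.24
  • virtual cluster IP: 10.0.0.30
  • subnet mask 255.255.255.0, gateway 10.0.0.1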

Tweaking the Minimum Requirements

To lower the minimum requirements for memory and/or cores, boot the installer image, log in as root with the password nutanix/4u, and edit the appropriate values in the COMMUNITY_EDITION section of the /home/install/phx_iso/phoenix/minimum_reqs.py file.
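
A minimal sketch of that workflow (the exact variable names inside the file depend on your CE build, so have a look at the file before changing anything):

# boot from the installer USB and log in as root (password nutanix/4u)
vi /home/install/phx_iso/phoenix/minimum_reqs.py
# in the COMMUNITY_EDITION section, lower the minimum core count (e.g. from 4 to 2)
# and/or the minimum memory value, then save the file and continue with the installation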

[Image: lowering the core requirement in /home/install/phx_iso/phoenix/minimum_reqs.py]

Lowering the CVM memory

To lower the default CVM memory before the installation, boot the installer image, log in as root with the password nutanix/4u, and edit the appropriate value in the COMMUNITY_EDITION section of the /home/install/phx_iso/phoenix/sysUtil.py file. You can drop the CVM memory to 8 GB if you’re not using any data services (deduplication, compression).
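
The workflow is the same as for minimum_reqs.py; for example (again, the exact variable name depends on your CE build):

# locate the COMMUNITY_EDITION section and its CVM memory value
grep -n -A 5 COMMUNITY_EDITION /home/install/phx_iso/phoenix/sysUtil.py
# then edit the value from the 12 GB default down to 8 GB
vi /home/install/phx_iso/phoenix/sysUtil.py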

[Image: lowering the CVM memory in /home/install/phx_iso/phoenix/sysUtil.py]

To adjust the CVM memory to 8 GB after the initial installation, connect as root (password nutanix/4u) to the host running the CVM and execute the following commands.

virsh list --all

virsh shutdown <CVM-Name>

virsh setmem <CVM-Name> 8G --config

virsh setmaxmem <CVM-Name> 8G --config

virsh start <CVM-Name>

virsh list --all

After applying the new memory configuration make sure your CVM is back up and running.
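
To double-check the new values you can, for example, query the domain info of the CVM (virsh reports the memory figures in KiB):

virsh dominfo <CVM-Name>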

Setup of a 3 or 4 Node Cluster

To set up a three-node or four-node cluster, do NOT select the «Create single-node cluster» option. Just install all your nodes one by one, then connect to one of your CVMs (Controller VMs) via SSH. «PuTTY» might be your tool of choice for this (http://www.putty.org).
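
On Linux or macOS a plain SSH client works just as well; as far as I know the default CVM user is nutanix with the password nutanix/4u:

ssh nutanix@[cvm1_ip]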

[Image: do NOT select the «Create single-node cluster» option]

From the CLI we have to manually create the cluster and configure it with the appropriate DNS and NTP settings.

[Image: manually create and configure the cluster]

Execute the following commands to create, start and check your cluster.

Create the cluster

cluster -s [cvm1_ip],[cvm2_ip],[cvm3_ip],[cvm4_ip] create

Start the cluster

cluster start

Check the status of the cluster

cluster status

Configuring a Multi-Node Cluster

To configure your cluster with the appropriate settings use the following commands.

[Image: setting the DNS servers for the cluster]

Name the cluster

ncli cluster edit-params new-name=[cluster_name]

Check / add / remove name servers

ncli cluster get-name-servers

ncli cluster add-to-name-servers servers="[dns_server_ip]"

ncli cluster remove-from-name-servers servers="[dns_server_ip]"

Check / add / remove time servers

ncli cluster get-ntp-servers

ncli cluster add-to-ntp-servers servers="[ntp_server]"

ncli cluster remove-from-ntp-servers servers="[ntp_server]"

Set a virtual cluster IP address

ncli cluster set-external-ip-address external-ip-address="[cluster_ip]"

Additional Commands

Stopping a cluster

cluster stop

Destroying a cluster (deletes all cluster and guest VM data!)

cluster -f destroy

Configuring a proxy server

ncli http-proxy add name=[proxy_name] address=[proxy_ip] port=[proxy_port] proxyTypes=http,https

Check proxy configuration

ncli http-proxy ls

Intel NUC based four node cluster

My Nutanix Community Edition setup runs on the following Intel NUC hardware:

[Image: Intel NUC]
  • 6th generation Intel Core i5 (Dual Core)
  • 32 GB RAM (2 x 16 GB DDR4 SODIMMs)
  • 1 TB 2.5″ SATA-III HDD
  • 256 GB M.2 NVMe SSD
  • USB 3.0 16 GB Flash Drive
  • 5-Port Gigabit Ethernet Switch

16 thoughts on “Building a HCI Lab with Nutanix Community Edition”

  1. Hello, is there any way of configuring Nutanix CE to pass through a USB device to a VM? I have Nutanix CE running on an Intel NUC and all is working well, except I don’t see a way of presenting a USB wifi adapter to a VM.

    1. Hi Dan, I don’t think there’s a native way to map USB devices into a guest VM with Nutanix AHV. We have done this on production clusters running AHV by implementing external USB server boxes (e.g. http://www.silexeurope.com). These allow you to map USB devices over a network connection into a VM. We used this approach for USB dongles, but it should work for any USB device. Hope this helps.

  2. Can you confirm if there is a way to build and learn with a Nutanix CE lab on VMware Workstation?

  3. Hi Rene,

    Nice blog. Quick question: I take it you have 4 Intel NUCs to build a 4-node cluster?

    Thanks Stuart

  4. Hi Rene,

    Thanks for the response. I take it you never had any problems with the NVMe SSD? The support page for CE said the drives are not supported. Have you ever tried to edit the files to lower the storage requirements? Trying to build my lab as cheaply as possible.

    Thanks Stuart

    1. Hi Stuart, NVMe drives are supported as the flash tier only, so I guess you have to combine them with an HDD at this time. My cluster is built on spinning disks + NVMe. 500 GB + 200 GB is the minimum you need.

  5. Hi,

    Good post, watch the YouTube video below:

    How to install Nutanix CE nested on VMware ESXi step by step
