Approximately four years ago I wrote a two-part blog post series about Nutanix AHV Virtual Machine High Availability (VMHA), which you can find here: Acropolis Virtual Machine High Availability – Part I and Acropolis Virtual Machine High Availability – Part II. Over the years I have updated the posts every now and then and…
This blog was authored by Tuhina Goel, Product Marketing Manager, Nutanix.
This first blog in the three-part series on Metro Availability for AHV focuses on how to achieve zero data loss with Protection Policies in Prism Central at VM granularity.
For mission-critical applications in industry sectors like banking, capital markets, insurance, healthcare, and emergency life support services, 24x7x365 uptime is key for the business to function unhindered. In such cases, applications need to be shielded from underlying IT infrastructure failures, whether of hardware components (disks, network cards, power supplies), racks, clusters, or entire sites. A 2014 Gartner study puts the cost of downtime at $5,600 per minute, and it can go as high as $540,000 per hour for mission-critical applications.
One of our core principles at Nutanix is to ensure the continuous availability of data for all applications running on the platform. To achieve seamless business continuity, we have built High Availability and Data Protection right into our AOS platform, on the assumption that hardware components are prone to failure. While this shields customers from the impact of component failures within a cluster, what happens when an entire cluster goes down? The answer is Metro Availability. Metro Availability extends the realm of continuous availability to another cluster. Customers can configure their deployments to seamlessly keep mission-critical applications online even in the case of entire site failures.
Red Hat is one of the pioneers of the Open Source community with its distribution of Linux called Red Hat Enterprise Linux. While the OS software packages and the Linux kernel are completely Open Source, and anyone is free to download, use, and distribute them (some disclosures required), if you want commercial support you have to pay…
Nutanix has released version 5.11 of their Acropolis Operating System (AOS). This also means that many of the supporting components of the Nutanix ecosystem have new releases to complement AOS, namely AHV and Prism Central. Please refer to the resources section at the end for links to downloads and release notes.
After a great meeting, a current Nutanix customer asked if we had a tool that could provide them with some more background on their current cluster utilization and report on it. While Prism/Prism Pro will give you excellent reporting, I try to automate as much as possible, so I decided to alter the
Hot off the press is Nutanix Move 3.0. Now you might be asking: what is Nutanix Move 3.0? Move 3.0 is the next major release of the product formerly called Xtract for VMs.
Not so long ago, Nutanix announced a hot new feature in AOS 5.5 called Frodo (I mean, AHV Turbo Mode). With a name like that, you might wonder which key to press to enable turbo mode…
In general, AHV presents a single-queue VirtIO-SCSI controller to VMs; this controller can only submit 128 requests at a time, regardless of the number of vCPUs or disks.
In the VMware vSphere world, we used to get more IO throughput by adding additional SCSI controllers and distributing disks across them. With the implementation of AHV Turbo, Nutanix introduced Frodo on the AHV host, which bypasses QEMU and processes storage IO across multiple queues based on the number of vCPUs in the VM.
AHV Turbo mode is not a feature so much as a native capability. For Linux guest VMs, make sure the kernel supports the blk_mq (block multi-queue) option (newer kernels enable it by default); otherwise, add the kernel parameter "scsi_mod.use_blk_mq=1" to enable blk_mq, and remove the elevator=noop option.
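As a rough sketch of what that looks like inside an older Linux guest (file paths, the GRUB config location, and the grub2-mkconfig target vary by distribution, so treat these as assumptions to verify against your distro's documentation):

```shell
# Check whether blk_mq is already active for the SCSI layer.
# Prints Y (or 1) when enabled; on recent kernels this parameter
# may not exist at all because blk_mq is always on.
cat /sys/module/scsi_mod/parameters/use_blk_mq

# To enable it at boot, add scsi_mod.use_blk_mq=1 (and drop any
# elevator=noop entry) on the kernel command line, e.g. in
# /etc/default/grub:
#   GRUB_CMDLINE_LINUX="... scsi_mod.use_blk_mq=1"

# Then regenerate the GRUB configuration and reboot
# (path shown is typical for RHEL/CentOS 7):
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot

# After the reboot, confirm the parameter made it onto the command line:
grep -o 'scsi_mod.use_blk_mq=1' /proc/cmdline
```

Since this is a boot-time configuration change on a specific guest, the exact steps depend on the bootloader and kernel version in use.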
With the introduction of Nutanix VirtIO version 1.1.4, the same enhanced storage IO throughput is now available for Windows guest VMs. For VMs with more than 2 vCPUs, higher throughput may be observed after installing VirtIO 1.1.4.