Confused about my ESXi/Horizon/View related options in virtualising this small mixed setup

I’m virtualising a small setup of half a dozen mixed PCs and small servers. The aims are:

  • to consolidate hardware (one high-spec server + thin clients rather than 8 medium-to-high-spec individual devices that aren’t all in use at the same time);
  • to enhance mobility, with sessions that can be started, stopped, suspended and moved;
  • to allow resource sharing (inactive machines can be suspended and their resources used for other things rather than sitting as dedicated idle PCs, and the occasional heavy workload can be “averaged out” across VMs instead of every machine needing to handle it);
  • to allow session snapshots on all devices.

I’ve been trialling the approach on a small scale using VMware Workstation for a year or so to see whether it benefits the setup, and it very clearly does, enough to justify moving to a full VM server.

Everything is straightforward, but my confusion is about the video handling aspects and how they interact: VDI, vSGA-capable video cards, PCoIP/RDP, and the pros and cons of the more generalist ESXi versus the more specialist View/Horizon for the desktops. This area is completely confusing me and holding me back.

USAGE AND REST OF NETWORK – The active VMs at any given time would be about 3-5 Windows desktops and 3-4 small internal *nix servers (shell tinkering, a tiny RADIUS server, etc.). The desktops are mostly used for desktop and “productivity” work on Windows 8.1/10 (heavy multitasking across the Office suite, browsing, coding/development, video viewing, a small amount of Photoshop now and then), but the desktop “windowing” use can be intense and heavily multitasked. Most modern software can also use 2D hardware rendering, if available, to offload desktop GUI compositing and application controls. The servers are all light-load *nix. There’s a separate robust file server with offsite replication already in place, with enough capacity and hardware spec to support one or more VM servers, and a 10G LAN link between the file server and the VM server.

My focus for this question is the graphics handling. I’m reluctant to rely purely on software (CPU-only) desktop and graphics rendering because of the excessive CPU load it imposes even under moderate use, so I’d like to plan and spec a bit beyond that. I want some flexibility in video resource sharing, and the usage varies, so if CPU alone isn’t enough I’m really looking at a vSGA-style shared solution, not a dedicated-card-per-VM passthrough solution.

My question is borne of ignorance, openly admitted. I don’t know which options make sense to consider for graphics/desktop use. Virtualisation is usually discussed in terms of a single purpose and at a larger scale, rather than a heterogeneous mix like this one. My points of confusion are things like these –

  • Are View/Horizon so specialised that I can only run VDI on them, or can I also use them to run general-purpose VMs hosting the non-desktop servers, as I would with ESXi? Conversely, if I use standard ESXi VMs to host the desktops, how much is ESXi missing compared to View/Horizon in desktop-GUI-related optimisations that can’t be made up for in other ways?

  • I want to accelerate VDI and offload much of it from the host CPU. But does this force me down the View/Horizon route, and if not, what hardware would be relevant for ordinary ESXi + client? (A vSGA-capable card such as a Quadro 6000, or would I need GRID? I’d like to avoid GRID due to its additional cost over vSGA. Teradici thin-client cards as well, or not?)

  • Does moving to VDI, and any perceived latency (mostly LAN but occasionally remote), force my hand in choosing the remote desktop/PCoIP protocol and system, the thin-client hardware, or the VM platform and video-related hardware?

Answer

Are View/Horizon so specialised that I can only run VDI on them, or
can I also use them to run general purpose VMs hosting the non-desktop
servers, as I would with ESXi?

Well, View/Horizon runs on top of ESXi, so there’s nothing to stop you from running ‘regular’ VMs on the same hosts.

Conversely if I use standard ESXi VMs to host the desktops, how much
is ESXi missing compared to View/Horizon in desktop-GUI-related
optimisations that can’t be made up for in other ways?

There are two things I can think of. First, ESXi can handle GPU passthrough to VMs, but not as well as View/Horizon, which is particularly good at handling NVIDIA’s VDI-oriented GRID GPUs. Secondly, although it’s actually ESXi that does the underlying work, on its own ESXi gives you no easy way to create linked clones, yet that’s a cornerstone of how View/Horizon works: every VDI VM is a linked clone. Linked clones can be created through at least one of the various APIs, but it’s a scripted affair and I’m not sure it’s 100% supported.
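For what it’s worth, here’s roughly what that scripted route looks like. A linked clone is just an ordinary CloneVM_Task whose relocate spec asks for child disk backings hanging off an existing snapshot. A minimal sketch using the vSphere API via pyvmomi, assuming a parent VM “win81-master” that already has a snapshot (host, credentials and VM names are hypothetical placeholders, not anything from your environment):

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab shortcut; validate certs in production
    si = SmartConnect(host="esxi.example.local", user="root",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        parent = next(vm for vm in view.view if vm.name == "win81-master")
        view.Destroy()

        # "createNewChildDiskBacking" is what makes this a linked clone:
        # the new VM's disks become deltas chained to the parent's snapshot.
        relocate = vim.vm.RelocateSpec(diskMoveType="createNewChildDiskBacking")
        spec = vim.vm.CloneSpec(location=relocate, powerOn=False, template=False,
                                snapshot=parent.snapshot.currentSnapshot)
        WaitForTask(parent.CloneVM_Task(folder=parent.parent,
                                        name="win81-clone-01", spec=spec))
    finally:
        Disconnect(si)

It works, but as I said, you’re maintaining the script yourself; View/Horizon does all of this (plus refresh/recompose) for you.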

I want to accelerate VDI and offload much of it from the host CPU. But
does this force me down the View/Horizon route, or if not, what
hardware would be relevant for ordinary ESXi + client? (A vSGA-capable
card such as a Quadro 6000, or would I need GRID? Teradici thin-client
cards as well, or not?)

This makes total sense, and I’m 100% with you here, but I’d very specifically stick with GRID: the cards work great, and I like that there are two quite different models (the K1 and the K2) to choose from based on your needs.
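That said, if you do want to try plain vSGA on stock ESXi first, the per-VM 3D setting that the vSphere Client exposes can also be driven through the same API. A hedged sketch with pyvmomi, reusing the connection pattern above (the VRAM size is an illustrative assumption, and the host needs a supported GPU driver VIB installed):

    from pyVmomi import vim

    def enable_hw_3d(vm, vram_kb=131072):
        """Switch the VM's video card to hardware 3D (power the VM off first)."""
        video = next(d for d in vm.config.hardware.device
                     if isinstance(d, vim.vm.device.VirtualVideoCard))
        video.enable3DSupport = True
        video.use3dRenderer = "hardware"  # "automatic" can fall back to software
        video.videoRamSizeInKB = vram_kb
        change = vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
            device=video)
        return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))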

Does moving to VDI and any perceived latency (mostly LAN but
occasionally remote) force my hand in choosing the remote
desktop/PCoIP protocol and system, or thin-client hardware, and my
choice of VM system and video related hardware?

I wish I had a ‘one size fits all’ answer for this one, but it’s one of those ‘it depends’ questions; if you really want to know, you’d have to build both and benchmark. What I would say is that VDI sounds like the best fit for your scenario. I do worry a bit about the single-server approach, as if that server dies you’re dead in the water, but you can fix that with a second server when the budget is available.

A couple of things I’d strongly recommend for VDI implementations: first, don’t skimp on memory, as it’s cheap at the moment; I’d go for at least 128GB for what you’re trying to do. Second, by all means keep user data on ‘magnetic’ disks, but put your VMs/linked clones on as fast a disk as possible, certainly 2 or more SSDs in RAID 1/10, or alternatively a PCIe NVMe adapter. They make such a huge difference to VDI performance that it’s hard not to call them mandatory: you have multiple users running IO against essentially the same dataset, so low latency is your friend 🙂
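As a sanity check on that 128GB figure, here’s a back-of-envelope sum for the workload you described (the per-VM allocations are my assumptions, not measurements from your setup):

    # Rough memory budget; per-VM figures are illustrative assumptions.
    desktops, servers = 5, 4            # worst-case simultaneously active VMs
    desktop_gb, server_gb = 8, 2        # assumed per-VM allocations
    overhead_gb = 8                     # hypervisor and per-VM overheads

    need = desktops * desktop_gb + servers * server_gb + overhead_gb
    print(f"Active set ~{need} GB of a 128 GB host "
          f"({128 - need} GB headroom for caching and peaks)")
    # -> Active set ~56 GB of a 128 GB host (72 GB headroom for caching and peaks)

That leaves comfortable headroom even if a couple of the desktops turn out to need more than 8GB each.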

Attribution
Source: Link, Question Author: Stilez, Answer Author: Chopper3
