VMware Horizon 7.0: Instant Clones

With the release of Horizon 7, VMware has introduced Instant Clones (previously code-named Project Fargo) into the wild. Instant Clones utilize the vmFork technology within vSphere to fork a virtual desktop from a running virtual machine. Below are highlights about Instant Clones:

  • This is a completely separate technology from Linked Clones, and it also does not rely on View Composer at all. View Composer does not even need to be installed, and no Composer database is required.
  • Instant Clones are only available in Horizon Enterprise edition.
  • In the initial 7.0 release, only virtual desktops are supported, not RDSH servers.
  • vSphere 6.0 Update 1 or later and Virtual Machine hardware version 11 or later are required.
  • Windows 7 and 10 are supported; Windows 8.x is not.
  • Local ESXi datastores are not supported.
  • 3D rendering is not supported.
  • The customization process does not use QuickPrep, but instead a new process called ClonePrep. Unlike the traditional QuickPrep process, no reboot is required after the virtual desktop is forked from the ClonePrep Parent VM.
    • [Screenshot: Pool dashboard]
  • All Instant Clones are non-persistent. Resetting a virtual desktop is a delete and re-fork process.
    • Persistence is available through AppVolumes and UEM.
  • VMware estimates the following time differences between View Composer and Instant Clones for 1,000 desktops:
    • Pool Creation: 25 minutes (Instant Clones) vs 170 minutes (Composer)
    • Creating 1 Desktop: 1.5 seconds (Instant Clones) vs 10.2 seconds (Composer)
  • The forking/cloning process is incredibly quick, taking an average of about one second per desktop. When initially creating the pool or updating the image, it can still take a few minutes, as there are a few intermediate steps to prepare the ClonePrep Parent VMs for forking.
    • The Master VM snapshot is first cloned into a ClonePrepInternalTemplate folder. That template is cloned into a ClonePrepReplicaVmFolder and the virtual disk digest is created. The ClonePrep Replica VM is then cloned to a ClonePrep Parent VM on each host in the cluster and forking is enabled on them. Finally, the virtual desktops are forked from the ClonePrep Parent VMs.
      • If updating, the old ClonePrep Parent VMs, Replica VMs, and Template are deleted.
  • A ClonePrep Parent VM has to be created for every host in the cluster and is always running, so this must be factored into design considerations. This is the virtual machine that all virtual desktops on the host are forked from.
  • ClonePrep Parent VMs per host appear to be set to a static memory usage percentage. In my lab, all parent VMs were locked to 92% Guest Memory usage.
    • [Screenshot: ClonePrep Parent VM guest memory usage]
  • The internal Instant Clone folders are: ClonePrepInternalTemplateFolder, ClonePrepParentVmFolder, ClonePrepReplicaVmFolder, and ClonePrepResyncVmFolder.
    • The virtual machines within these folders are locked from being modified manually: ‘Edit Settings’, power operations, and other actions are grayed out within the vSphere Client. These VMs can be unprotected by using the IcUnprotect.cmd script on the View Connection Servers (see the example after this list).
    • [Screenshot: ClonePrep folders]
  • In the initial release, Instant Clones will prevent ESXi hosts from properly entering maintenance mode if the host is simply placed into maintenance mode. Instead, the IcMaint.cmd script must be run from the View Connection Servers to put the host into maintenance mode (see the example after this list).
  • AppVolumes 2.10 and prior are not supported for usage with Instant Clones.
  • Instant Clones place significantly less load on vCenter compared to View Composer. Comparatively, there is one vCenter task to create a desktop with Instant Clones versus seven with Composer.
    • Boot storms no longer occur as forked desktops do not boot but instead are forked in a running state.
  • The View Agent cannot be installed to support Composer and Instant Clones at the same time. When installing the View Agent, one or the other must be chosen.
    • [Screenshot: View Agent installation options]
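
For reference, below is a rough sketch of what the maintenance and unprotect commands mentioned above look like when run from a command prompt on a View Connection Server. The parameter names are how I recall them from the 7.0 tooling, and the vCenter, user, and host names are placeholders, so verify against your own environment before relying on this:

    rem Hedged example: parameter names as I recall them from the Horizon 7.0 release; names below are placeholders.
    rem Take an ESXi host in and out of maintenance mode for Instant Clones:
    IcMaint.cmd -vc vcenter.lab.local -uid administrator@vsphere.local -hostName esxi-01.lab.local -maintenance ON
    IcMaint.cmd -vc vcenter.lab.local -uid administrator@vsphere.local -hostName esxi-01.lab.local -maintenance OFF

    rem Unprotect the internal ClonePrep VMs so they can be managed manually:
    IcUnprotect.cmd -vc vcenter.lab.local -uid administrator@vsphere.local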

More information:

  1. Horizon 7 Instant Clone Pool Creation Demo Video
  2. What’s New with VMware Horizon 7
  3. VMware Instant Clone Technology for Just-in-Time Desktop Delivery in Horizon 7 Enterprise
  4. Horizon 7 Documentation Center: Creating Instant-Clone Desktop Pools

VMware AppVolumes: Capturing and Deploying an Application Layer

This is video #2 in the journey to record Creating and Deploying App Layers for all of the different layering products out there. This video will cover VMware AppVolumes, and will show creating and deploying the WinSCP SFTP/SCP/FTP client for Windows as a Layer. (Note: For time sensitivity, some sections have been sped up, e.g. waiting for a desktop reboot).


VMware Mirage: Capturing and Deploying an Application Layer

After scouring the internet, there did not seem to be any videos of creating AND deploying an application layer in VMware Mirage from start to finish, so here we are in video #1 of Capturing and Deploying an Application Layer, this time with VMware Mirage. For those not in the “Layering is Awesome” fan club yet, hopefully this and subsequent videos will show you the magic and help you onto the bandwagon.

While infrastructure setup and other deployment steps are critical in any layering tool deployment, they are (hopefully) one-time processes; creating and deploying App Layers will need to be done continually as new applications and new application versions need to be deployed. As a result, seeing how difficult (or easy) these processes are is important from an operational complexity standpoint.

As a result, this is video #1 in the journey to record Creating and Deploying App Layers for all of the different layering products out there. This video will cover VMware Mirage, and will show creating and deploying the WinSCP SFTP/SCP/FTP client for Windows as an App Layer. (Note: For time sensitivity, some sections have been sped up, e.g. waiting for a desktop reboot).

High-Level Overview

The following is a summary of the process completed within the video from beginning to end. This is not meant as a step by step guide on properly creating an App Layer with best practices in mind, but to provide a synopsis of what is going on at the most basic levels:

  1. Log in to a Mirage-managed Virtual Desktop.
  2. Verify that WinSCP is not installed.
  3. Open the VMware Mirage Management Console.
  4. Verify that MRG-W7APPREF is available to be used as a Reference machine for the App Layer Creation.
  5. Launch the “Capture App Layer” wizard and proceed through the settings to use MRG-W7APPREF to create the App Layer.
  6. Verify the “Capture App Layer” task shows within the “Task Monitoring” section of the Management Console.
  7. Open the VM Console for MRG-W7APPREF and verify it is “Initializing App Layer Recording” which is performing a pre-scan of the machine prior to the application install.
  8. Once the Pre-Scan is complete, MRG-W7APPREF enters the “Recording App Layer” state and is ready for application installation.
  9. Launch the installation media for WinSCP and proceed to perform a traditional install normally.
  10. After completing the application install, browse back to the Mirage Management Console.
  11. Within the Mirage Management Console, select the “Capture App Layer” task and right-click to select “Finalize App Layer Capture”.
  12. Within the “Finalize App Layer Capture” wizard, the applications installed will be displayed. Proceed through the wizard filling out the App Layer details.
  13. After completing the “Finalize App Layer Capture” wizard, the task will switch to a status of “Wait for device upload”.
  14. Opening the VM console for MRG-W7APPREF will show it enters an “App Layer Capture” phase where it is performing a post-scan of the environment and capturing the application layer itself.
  15. Browsing back to the Mirage Management Console, the “Capture App Layer” task completes with a “Done” status. At this point, we are done with MRG-W7APPREF.
  16. Browsing to the “App Layers” section of the “Image Composer” section, the WinSCP App Layer is displayed. Within this view, it will show the size of the App Layer (354MB) as well as other parameters around its creation (e.g. created for Win7x64).
  17. Moving back to the “Common Wizards” section, launch the “Update App Layers” wizard.
  18. Within the “Update App Layers” wizard, select the machines to deploy the layer to, and then select the layers to apply. The “Select App Layer” section will provide details about each layer.
  19. After completing the “Update App Layers” wizard, browse to the “Task Monitoring” section and there will be an “Update App Layers” task kicked off.
  20. Browse to the machine that will be receiving the App Layer deployment; the desktop will display a notification that “Your system is being updated.”
  21. Launching the Mirage tray will display the status of the Layer update. For time purposes, this section has been sped up as it can take several minutes; users can continue to work while Mirage is processing.
  22. Once the Layer Update has been applied, the desktop will display a notification that a reboot is needed. This can be done immediately or suppressed for a later time.
  23. After the reboot is complete, log back in to the Mirage virtual desktop. Mirage will notify that “Your system was recently restarted. VMware Mirage is completing system updates.” Mirage finalizes a few things in the background, but the desktop can continue to be used.
  24. WinSCP is now installed to the endpoint and is completely native. It shows up as installed on the C:\ drive, shows up in the Start Menu, and shows up in Add/Remove Programs. The App Layer creation and deployment is complete.

VMware Mirage: Unprotected Area / Missing Files

Recently, a client using Mirage reported an issue where only a portion of a folder was being backed up by Mirage. The folder in question was a bit unique; it was actually a backup of the Mirage USB Stick creation files. Of all of the data, roughly 40% was missing. The first question was whether Mirage had a file size limit, which seemed plausible since this folder was getting into the gigabytes. Mirage actually does not have any documented file size limits. Mirage does currently have a maximum of 1,000,000 files per CVD on 32-bit systems, with no limit on 64-bit systems, but this is usually not hit.

Mirage, however, does try to keep what it backs up lean where it can; it does this via the “Unprotected Area” within the CVD Policy. By default, this includes things like *.vmdk, *.vhd, etc. It also happens to include *.wim files, which are included in the Mirage USB Stick folder, and these were the files that were being skipped.

To fix this, we simply modified the CVD Policy by navigating to Mirage Management Console -> System Configuration -> CVD Policies -> Select the CVD Policy applied to the desktop in question -> Right-Click -> Select Properties, and then browsing to the “Unprotected Area” tab.

[Screenshot: CVD Policy properties]

From there, there are a couple options:

  1. Remove the filter in general (e.g. click the *.wim filter and ‘remove’)
  2. Create a rule exception for the folder path

The latter option is the more appropriate choice in most situations, since skipping backups of *.wim files is generally desirable. If doing the former, it may be better to create a separate CVD policy for select machines to achieve the desired result without allowing the file type for all machines.


VMworld 2014 Session Notes: EUC2035 – Horizon 6 Technical Overview

Below are my notes from VMworld Session EUC2035 by Justin Venezia and Jim Yanik entitled “Horizon 6 Technical Overview.” All credit for the material below belongs to the tremendous authors/speakers, Justin Venezia and Jim Yanik of VMware.

  • Cloud Pod Architecture: Efficiently manage desktop deployments across data centers
    • Overview and Benefits
      • Support single namespace for end users with a global URL
      • Global Entitlement layer to assign and manage desktops and users across multiple pools within or between View pods
      • Scale Horizon deployments easily
      • Support Active/Active and DR data center configurations
      • Support geo-roaming users
      • Simplifies pool entitlements in large environments
    • Brokering with Cloud Pod Architecture
      • Global load balancer sends user to the right location (e.g. London)
      • View returns desktops in all locations in pod federation
      • If user chooses a desktop in New York, desktop will be brokered via Security Server in London without having to redirect to a broker in New York
    • VIPA (View InterPod Architecture Protocol): Allows for exchanging info between pods over the WAN, and can tolerate WAN latency/hiccups
      • One broker per pod that communicates to one broker in another pod, instead of all brokers among all brokers.
      • Just stores transient information (status of desktop, etc.)
      • All static info like global entitlements and global pools is stored in the ADLDS layer, which communicates and replicates via port 22389
    • Real World
      • CPA can be used for a single View Pod implementation
      • Plan CPA sites accordingly
      • View Pods in same DC should be in single site.
      • Load balancing and global traffic management still required
      • CPA can do access, authentication, and brokering of virtual and RDS hosted desktops
      • CPA does not address redundancy of other dependencies
      • RDS hosted apps NOT supported
      • CPA Desktop Pools NOT supported within Horizon Workspace Portal
  • Hosted Apps: Deliver applications seamlessly to Horizon-enabled endpoints
    • Supports Windows Server 2008 and 2012 R2
    • Load management works on choosing the host with the most session slots available (Session Slots = Session Cap – Active Sessions)
    • With RDS, sessions start to become the bottleneck in building block sizing. Apps can have multiple sessions per user.
      • 10,000 session limit per pod.
    • RDS hosted desktops do not share sessions with RDS hosted applications.
    • Apps that need to talk to each other (with things like object linked and embedding) need to be on the same server
    • When to use Application Silos
      • Frequent application updates & app management
      • Dedicated and/or predictable compute/network/storage
      • Application criticality
      • Business and security compliance
      • Fewer conflicts & compatibility issues between apps
      • Predictable scalability
    • Do not oversubscribe CPU or memory (consider memory reservations)
    • 1 vCPU to each physical core and enable hyper-threading
    • 4 vCPU is the “sweet spot”
    • Disable DRS for RDS Hosts & View Connection/Security Servers
    • Avg IOPS: Light is 3-6, Medium is 6-10; less than virtual desktops
    • The View Optimization Tool comes with 2008/2012 Optimizations
  • Workspace: One login, one experience, any device
    • SaaS Apps, Citrix XenApp, Web Links, Office 365, Google Apps, Packaged ThinApps, Horizon Hosted Apps, Virtual Desktops
    • Aggregation of multiple Horizon pods
    • Multiple forest AD support
    • XenDesktop Virtual Desktops are not supported
    • Workspace does not proxy any XenApp traffic
    • Cloud Pod Architecture not currently supported on Workspace
  • Client Updates: 
    • Uniform look and feel of Windows & Mac desktop clients
    • Roaming IP Capabilities: maintain the session through endpoint IP address changes without losing the connection
    • Can use old versions of clients with 6.0 unless you want to do apps
  • Automation, Self Service, and Monitoring
    • VMware View Admin API Integration
    • Many common admin tasks will be able to be automated with vCO
    • Limited API functionality available in initial release
    • Focused on automating basic administrative functions

VMworld 2014 Session Notes: EUC1653 – Horizon 6 Hosted Applications Technical Deep Dive

Below are my notes from VMworld Session EUC1653 by Andrew Johnson and Warren Ponder entitled “Horizon 6 Hosted Applications Technical Deep Dive.” All credit for the material below belongs to the tremendous authors/speakers, Andrew Johnson and Warren Ponder of VMware.

  • View AppTap: Enumerates installed/removed/changed apps on the RDS hosts, and reports apps installed on the RDS host to View including PNG icons, product name and version, and path.
  • VMWProtocolManager: Interface that lets VMware plug into RDS, very well integrated and supported by Microsoft.
  • VMware took the Unity code from VMware Workstation for RDS Integration for View.
  • In 1.0, RDS is missing: USB Redirection, Printing (in next release), 3D Graphics, Multi-Site Routing/Failover
  • Can launch apps via View Client or Horizon Workspace
  • If you just want apps, there is no requirement to do vCenter integration, Composer, desktop functionality/setup, etc.
  • Client App Request Basic Load Balancing
    • New client requests an app from a farm
    • Farm consists of two hosts
      • Host #1, Max Session Cap = 100, 50 Active Sessions
      • Host #2, Max Session Cap = 50, 25 Active Sessions
    • Client gets sent to Host #1 as it has the most available session slots (Session Slots = Session Cap – Active Sessions).
  • Same PCoIP tuning and metrics are still valid.
  • Host Sizing and Common Practices
    • Each RDS host is configured with a maximum session limit (150 is default, but configurable)
    • Size each RDS Farm with Identical Sized Hosts
    • All RDS hosts within a farm should contain the same applications with the same configuration (Executable paths must be the same!)
    • Less powerful RDS hosts can be configured with a lower session limit
  • Common RDS Design Considerations
    • Use Group Policies to secure and harden the RDS hosts.
    • ‘Empty Session’ timeout is 1 minute by default
      • Configure using Group Policy to log off disconnected sessions after a specified time (e.g. 1 day)
    • Both App Pools & Desktop Pools can use the same RDS Farm
      • If an RDS hosted desktop and remote app are launched, two sessions will be established to the same host.
  • Lessons Learned for RDSH Sizing
    • On a 4-socket system with 8 cores per socket, testing with View Planner 3.5:
      • 8 x 8 vCPU VMs provide the best performance
      • 2:1 CPU over-commitment provided best performance
    • Scale out, not up
    • Allocate CPU/memory that will fit in the NUMA node
    • Resources per session (conservative estimates on medium workload)
      • CPU: 300-500MHz per session
      • Memory: 400-500MB per user
      • Disk space: 200-300MB per user in OS disk for profiles, temp files, etc.
      • Network: 50 kbps per session
  • Common Best Practices
    • Don’t overcommit CPU (e.g. more vCPU than logical cores)
    • Enable Hyper-Threading
    • Do not Disable ASLR
    • Enable Transparent Page Sharing
    • Enable Fixed Memory Allocation
    • Disable BIOS level CPU Power Saving

VMworld 2014 Session Notes: EUC1476 – What’s New with View and PCoIP in Horizon 6

Below are my notes from VMworld Session EUC1476 by Tony Huynh and Simon Long entitled “What’s New with View and PCoIP in Horizon 6.” All credit for the material below belongs to the tremendous authors/speakers, Tony Huynh and Simon Long of VMware.

  • Roaming IP Profile: PCoIP session persists through IP changes (e.g. moving from conference room to conference room)
  • Windows 2012 RDSH Enhancements:
    • Full support for transparent windows
    • DirectX 11.1 support
    • Fairshare of resources
    • PowerShell support
    • Centralized Resource Pooling
    • Dynamic Monitor and Resolution Changes
    • Improved User Experience
  • Bandwidth Management Improvements
    • Testing with 480p video @50ms RTT & 5 Mbps:
      • 24% frame rate improvement in general with no packet loss
      • 82% frame rate improvement with 0.5% packet loss
    • Lossy WAN performance shows an average 150% frame rate improvement from 5.3 to 6.0 without any configuration changes
    • Default improvements:
      • Disable Build-to-Lossless
      • Maximum Initial Image Quality decreased to 80
      • Minimum Image Quality decreased to 40
      • These can lead to up to a 10% increase in consolidation ratio and between 24-32% bandwidth savings.
  • Upgrade vs Net New
    • Automatically get new default settings with net new or upgrade if no PCoIP.adm template was used
    • Previous settings only kept if a customized PCoIP.adm was used in older version
  • Optimize your Environment for PCoIP
    • Use QoS/CoS
      • Put it right below VoIP
    • Congestion Control
    • Minimize Latency
      • Avoid deep buffers
      • Minimize routing/hops
      • Avoid in-line IDS/IPS
    • Beware of burstable circuits
    • Understand use cases: Tab Jumper vs Word Warrior
      • Volume of screen change, Pixel Perfect quality, Video, Audio, VoIP, 3D
    • Not all Clients support all features
    • Tera1 Zero Clients do not support Text CODEC or Client-side Cache
      • Only Soft Client (Win, Mac, Linux) support RTAV
      • Only Windows Soft Client supports MMR
      • RDS Apps do not work with Zero Clients currently
    • Teradici Apex Card reduces CPU overhead by offloading the 100 most active virtual displays.
      • This is NOT a GPU, and does not reduce bandwidth; just offloads CPU.
    • Improving Video Performance in Browsers:
      • Browsers think a GPU is present and use an inefficient API
        • IE: Internet Options -> Advanced -> Accelerated Graphics -> “Use software rendering instead of GPU rendering”
        • Firefox: Options -> Advanced -> General -> Browsing -> Uncheck “Use hardware acceleration when available”
        • Chrome: Settings -> System -> Uncheck “Use hardware acceleration when available”
  • Optimize PCoIP for your Environment
    • Tuning Examples (a hedged registry-based sketch of applying these follows after this list)
      • Reduce PCoIP Session BW (Kbps)
        • Default: 90,000
        • New: 1,024
        • Experience Change: Minimal – if sized correctly
      • Maximum Initial Image Quality (%)
        • Default: 80
        • New: 60
        • Experience Change: Slightly blurry
      • Reduce Frame Rate (fps)
        • Default: 30
        • New: 12
        • Experience Change: Minimal – videos may not play as smoothly, though.
      • Reduce Audio Bandwidth (Kbps)
        • Default: 500
        • New: 250
        • Experience Change: Audio will sound more monotone
      • Increase Client-Side Cache (MB)
        • Default: 250
        • New: 350
        • Experience Change: None
  • In the future, they are working on allowing profiles that are contextually aware, unlike current PCoIP.adm templates (e.g. if on the WAN, get profile X; if on the LAN, get profile Y)
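
As a hedged illustration of the Tuning Examples above, the same PCoIP session variables can also be written directly into the registry on the agent rather than through the PCoIP.adm GPO template. The key path and value names below are from the PCoIP session variable template as I remember it and may differ by release, so treat this as a sketch and verify against the ADM template in your View GPO bundle before using it:

    rem Hedged sketch: key path and value names as I recall them; verify before use.
    rem Sets overridable PCoIP defaults mirroring the "New" values from the tuning examples above.
    reg add "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin_defaults" /v pcoip.max_link_rate /t REG_DWORD /d 1024 /f
    reg add "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin_defaults" /v pcoip.maximum_initial_image_quality /t REG_DWORD /d 60 /f
    reg add "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin_defaults" /v pcoip.maximum_frame_rate /t REG_DWORD /d 12 /f
    reg add "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin_defaults" /v pcoip.audio_bandwidth_limit /t REG_DWORD /d 250 /f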

View Cloud Pod in the Home Lab

In this post, we’ll go over setting up Cloud Pod in my home lab. Due to limited hardware, network, and sites, we’ll have to use a bit of imagination and pretend the two sites we create aren’t on the same physical hardware. The end goal is two independent View environments (e.g. Production & DR) that present one entitlement to users, so the assignment is transparent to them (whether they go to Pod A or Pod B).

Overview

Two independent View environments will exist on the same vCenter and on the same LAN. Each pod will have one (1) manual, floating pool called “Standard” with one (1) desktop. The pods will be joined to the same Pod Federation, but will be moved to their own sites (Primary_Site and Secondary_Site) to emulate Production & Disaster Recovery sites. “Domain Admins” will be entitled to the Global Entitlement “Standard” that will have the “Standard” pools of each pod behind it.

Setup Individual Pods

Install and configure View normally as one would in a regular install. In this example, we’ll configure as follows:

Primary Site

  1. VCS6-01
  2. VCS6-02

Secondary Site

  1. VCS6-03

Initialize Cloud Pod on First Pod

  1. On any connection server in the pod, run 'lmvutil --authAs AdminAccount --authDomain DomainName --authPassword "*" --initialize'. This initializes the functionality and can take several minutes: it sets up the Global Data Layer on each Connection Server instance in the pod, configures the VIPA interpod communication channel, and establishes a replication agreement between each Connection Server instance.
  2. After the Cloud Pod is initialized, the pod federation contains a single pod that is named after the host name of the Connection Server that the pod was initialized from (e.g. Cluster-VCS6-01).
  3. System Health on the Connection Servers shows Pods after initializing, and 'vdmadmin -X -lpinfo' should show all connection servers.
  4. Rename the Pod to an appropriate name instead of the default name: 'lmvutil --authAs user --authDomain domain --authPassword "*" --updatePod --podName ExistingName --newPodName NewPodName'
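
Filled in with this lab’s names, that initialize-and-rename sequence looks roughly like the following (the admin account is a placeholder, and it is run from a regular command prompt rather than PowerShell, per the Misc Notes below):

    rem Hedged example using this lab's names; substitute your own admin account and domain.
    lmvutil --authAs admin --authDomain eeg3 --authPassword "*" --initialize
    lmvutil --authAs admin --authDomain eeg3 --authPassword "*" --updatePod --podName "Cluster-VCS6-01" --newPodName "PodA"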

Join Second Pod to Pod Federation

  1. Join a second Pod to the Pod Federation. Execute the following on the pod that will be joining the federation: 'lmvutil --authAs user --authDomain domain --authPassword "*" --join --joinServer ServerAddress --username domain\user --password "*"'
  2. Rename the newly created second Pod to an appropriate name instead of the default name: 'lmvutil --authAs user --authDomain domain --authPassword "*" --updatePod --podName ExistingName --newPodName NewPodName'
  3. After joining the second Pod to the first Pod, there will now be a “Remote Pods” item under the Dashboard’s System Health.
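
Again with this lab’s names filled in (the admin account, the join server address form, and the second pod’s default name are assumptions on my part), joining and renaming the second pod would look roughly like:

    rem Hedged example: run on a Connection Server in the second pod (VCS6-03), pointing --joinServer at a server in the first pod.
    rem The default pod name "Cluster-VCS6-03" is assumed based on the naming pattern seen on the first pod.
    lmvutil --authAs admin --authDomain eeg3 --authPassword "*" --join --joinServer vcs6-01 --username eeg3\admin --password "*"
    lmvutil --authAs admin --authDomain eeg3 --authPassword "*" --updatePod --podName "Cluster-VCS6-03" --newPodName "PodB"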

Configure Sites

By default, both Pods are in the same site: “Default First Site”. For this example, we’ll put the Pods in different sites. If this wasn’t desired, this could be skipped.

  1. Create two new sites (Primary_Site and Secondary_Site): 'lmvutil --authAs user --authDomain domain --authPassword "*" --createSite --siteName NewSiteName'
  2. Place the pods into their respective sites: 'lmvutil --authAs user --authDomain domain --authPassword "*" --assignPodToSite --podName PodName --siteName SiteName'
  3. To see the pod and site info, execute: 'vdmadmin -X -lpinfo'
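
Put together with this lab’s site and pod names (again, the admin account is a placeholder), the sequence looks roughly like:

    rem Hedged example: create both sites, then assign each pod to its site.
    lmvutil --authAs admin --authDomain eeg3 --authPassword "*" --createSite --siteName "Primary_Site"
    lmvutil --authAs admin --authDomain eeg3 --authPassword "*" --createSite --siteName "Secondary_Site"
    lmvutil --authAs admin --authDomain eeg3 --authPassword "*" --assignPodToSite --podName "PodA" --siteName "Primary_Site"
    lmvutil --authAs admin --authDomain eeg3 --authPassword "*" --assignPodToSite --podName "PodB" --siteName "Secondary_Site"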

Create Desktop / Pool in Each Pod/Site

For simplicity, in this example we’ll just use manual, floating pools. This skips setting up Composer. Obviously, in production, go through the normal pool setup procedures.

  1. Create two new manual desktops: W7-PodA-01 & W7-PodB-01
  2. Create a manual, floating desktop pool “Standard” within each pod using the aforementioned manual desktops.  (Note: Do not set local entitlements. Best practice is not to use both local & global entitlements.)

Create Global Entitlement

There will be one global entitlement called “Standard” for users. We’ll place both pools (one from each Pod) into this entitlement, and add “Domain Admins” to it.

  1. Create a Global Entitlement called “Standard”: 'lmvutil --authAs user --authDomain eeg3 --authPassword "*" --createGlobalEntitlement --entitlementName "EntitlementName" --isFloating --scope ANY'
  2. Add the pool from each pod to the Global Entitlement (this must be run on each pod): 'lmvutil --authAs user --authDomain domain --authPassword "*" --addPoolAssociation --entitlementName "EntitlementName" --poolId "PoolName"'
  3. Add an AD group to the entitlement: 'lmvutil --authAs user --authDomain domain --authPassword "*" --addGroupEntitlement --entitlementName "EntitlementName" --groupName "AD\GroupName"'
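
For this lab, that works out to roughly the following (the admin account is a placeholder; the pool association command is run once on each pod):

    rem Hedged example: create the "Standard" global entitlement, associate each pod's "Standard" pool, and entitle Domain Admins.
    lmvutil --authAs admin --authDomain eeg3 --authPassword "*" --createGlobalEntitlement --entitlementName "Standard" --isFloating --scope ANY
    lmvutil --authAs admin --authDomain eeg3 --authPassword "*" --addPoolAssociation --entitlementName "Standard" --poolId "Standard"
    lmvutil --authAs admin --authDomain eeg3 --authPassword "*" --addGroupEntitlement --entitlementName "Standard" --groupName "eeg3\Domain Admins"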

Testing Access

Connecting in points the user to a desktop in the local pod as long as one is available. For example, connecting through vcs6-01 or vcs6-02 via the View Client will connect to a desktop in PodA, and connecting through vcs6-03 will connect to a desktop in PodB; this is true unless there is an existing connection in another pod. If the desktops in PodA are all down and a user attempts to connect through PodA, the user will be sent to PodB. This is because, by default, when allocating a desktop, Cloud Pod gives preference to desktops in the local pod, then the local site, and then pods in other sites, in that specific order. Searching sessions will show which Pod and Site the user is connected through.

Test Local Pod Affinity of Entitlement
  1. Connect through PodA Connection Servers. Select “Standard”, receive W7-PodA-01. Logoff.
  2. Connect through PodB Connection Servers. Select “Standard”, receive W7-PodB-01. Logoff.
Test Connections return to Pod with Existing Connection
  1. Connect through PodA Connection Servers. Select “Standard”, receive W7-PodA-01. Disconnect but do not logoff.
  2. Connect through PodB Connection Servers. Select “Standard”, receive W7-PodA-01. This is because an existing session to that desktop already existed. Logoff.
Test Connections Failover to Secondary Site if Primary Site Desktops are Unavailable
  1. Log in to admin portal for PodA. Place all desktops in “Standard” pool into Maintenance Mode.
  2. Connect through PodA Connection Servers. Select “Standard”, receive W7-PodB-01. This is because there are no desktops available in the local pod or local site. Logoff.

Manage Entitlements

  1. List Global Entitlements: 'lmvutil --authAs user --authDomain domain --authPassword "*" --listGlobalEntitlements'
  2. List Desktop Pools in a Global Entitlement: 'lmvutil --authAs user --authDomain domain --authPassword "*" --listAssociatedPools --entitlementName "EntitlementName"'
  3. List Users or Groups in a Global Entitlement: 'lmvutil --authAs user --authDomain domain --authPassword "*" --listEntitlements --entitlementName "EntitlementName"' (Can also use --userName or --groupName instead of --entitlementName to search similarly based on those criteria.)

Misc Notes

  1. Cannot use CloudPod with RDS or HTML Access
  2. Configuration is all done via the command line with the lmvutil tool
  3. Interpod communication protocol is called “View InterPod API” (VIPA). Used to launch new desktops, find existing desktops, and share health status data. View configures this when initializing CloudPod feature.
  4. A site is a collection of well-connected pods in the same physical location, treated like they’re on the same LAN. All pods in same site are treated equally. All pods are placed into “Default First Site” when initializing Cloud Pod feature. Create and move pods if applicable.
  5. Pods in different sites are assumed to be on different LANs, so Cloud Pod gives preference to desktop in the same local pod or site when allocating desktops to users.
  6. Sites are useful for DR. Assign pods in different data centers to different sites then entitle users to desktop pools that span those sites.
  7. When using Cloud Pod, you create global entitlements instead of local entitlements. View stores global entitlements in the Global Data Layer, which is replicated among all connection servers in a pod federation.
  8. Best practice is NOT to use both local and global entitlements.
  9. Each global entitlement contains list of users, list of pools, and a scope policy. A scope policy specifies where View looks for desktops.
  10. By default, when allocating a desktop, Cloud Pod gives preference to desktops in the local pod, the local site, and other pods in other sites, in that order. This can be modified by changing the scope policy and configuring home sites.
  11. For dedicated desktops, CloudPod only uses the search behavior the first time; any subsequent connections always go back to the same desktop.
  12. A home site is affinity between user and site. There are global home sites and per-global-entitlement home sites.
  13. Global entitlements do not recognize home sites by default; they must be configured with the --fromHome option when creating or modifying the entitlement.
  14. A Global Home Site is a home site assigned to users and groups.
  15. A Per-global-entitlement home site overrides Global Home Sites.
  16. Per-global-entitlement home sites do not support nested groups.
  17. You do not have to use home sites. They are optional.
  18. Cloud Pod limits: 20,000 desktops, 4 pods, 2 sites, 20 connection servers.
  19. Cloud Pod ports: 22389/TCP for Global Data Layer LDAP; 8472/TCP for View Interpod API (VIPA)
  20. Don’t try to run 'lmvutil --authAs admin --authDomain domain --authPassword "*" --initialize' in PowerShell. It’ll throw an error, “Unexpected additional parameter @OpenWithToastLogo.png”, because PowerShell expands the wildcard. Use a regular command prompt.
  21. Run “vdmadmin -X -lpinfo” to see Site, Pod, and Endpoint info for the connection server the command is run on.
  22. Even if connection servers are in different domains, they cannot have the same name (e.g. vcs6-01.primary.lab and vcs6-01.secondary.lab).
  23. The amount of time it takes to initialize a pod depends on the number of connection servers.
  24. You have to log out and log back in to the Admin page for the Dashboard’s System Health to show the Pod correctly.
  25. Event databases are not shared across pods.
  26. Cloud Pod uses SSL certificates to protect and validate the VIPA interpod communication channel. These are replaced every seven days automatically. They can also be forced to change earlier than that via the lmvutil command.
  27. Make sure Global Entitlement options (i.e. Prevent Protocol Override, Default Protocol, Allow Reset, Floating, Dedicated) match the Pool Settings or it won’t let the pool be added to the entitlement
  28. When using Global Entitlements, they do not show in View Admin. View Admin will show “No users or groups are entitled to this pool.” even if a Global Entitlement exists.
  29. Pools within Pods are managed independently. It’s up to the admin to keep images, etc. similar. Cloud Pod does nothing to assist with that.

Adjust Display Size within a Horizon View Virtual Desktop

By default, modifying the settings under Display within the Control Panel (e.g. Screen Resolution, etc.) is disabled. This is because View inherits the resolution of the underlying system, whether it is a Windows system or even a zero client. This is perfect for most folks, as they want to run at the native resolution with the regular size of items on the screen; however, individuals who have a harder time seeing smaller items on the screen may prefer to increase the size of fonts, windows, etc. in order to make them easier to see.

The way to do this in Horizon View takes only two steps. First, use the View Agent ADM template (vdm_agent.adm) to create a GPO and set “Toggle Display Settings Control” to “Disabled.” This allows users to edit display settings directly.

[Screenshot: View Agent Display GPO setting]

Second, have the users right-click their desktop and click “Screen Resolution” then select “Make text and other items larger or smaller” and then select either “Medium” or “Large” (depending on how big they want items to appear). This will require the user to log off and log back on, so it does require Persona Management or a similar profile management solution to be in place; the setting would be lost at log off without profile management.

[Screenshot: Original resolution]

[Screenshot: Change display settings size]

Once the user logs back in, items should be larger even though screen resolution stays the same.

[Screenshot: New display size]

The reason we modify the size of text and other items instead of modifying the screen resolution is because the latter does not save within the user’s profile and therefore would be lost whenever the desktop is reset. In addition, modifying screen resolution can cause unexpected display results depending on the resolution being set.

In the image above, we can see that the resolution stayed at 1300×800 even though the viewable sizes increased. This persisted across non-persistent desktops, as well.


VMware View Client shows Red X on Server with Valid SSL Certificate

I was in the process of replacing an SSL certificate on the View Connection Servers, and after doing so, I noticed a red X on the server’s icon indicating SSL certificate validity. IE, Firefox, Chrome, etc. all confirmed it was a valid, green, working SSL certificate.

[Screenshot: Invalid certificate icon]

Turns out, it had been a red X the whole time, even before the cert change (I had just never paid attention to it), and it was due to a setting on the client itself. If the View Client is configured not to verify server identity certificates, it shows a red X even if the cert is valid. This makes sense, of course, since the client is not verifying the certificate at all, including whether it is valid.

Change the client setting back to either “Never connect to untrusted servers” or “Warn before connecting to untrusted servers” and the certificate icon should now show green properly.

[Screenshot: SSL client settings]

[Screenshot: Valid certificate icon]
