To start:
The best websites to review Meltdown and Spectre:
http://frankdenneman.nl/2018/01/05/explainer-spectre-meltdown-graham-sutherland/
https://meltdownattack.com/

Intel Announcement:
https://security-center.intel.com/advisory.aspx?intelid=INTEL-SA-00088&languageid=en-fr

Cisco Announcement:
https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20180104-cpusidechannel

Edit:12Jan2018 — added some more research URLs:
https://newsroom.intel.com/news/intel-security-issue-update-addressing-reboot-issues/
http://www.businessinsider.com/intels-telling-customers-not-to-install-its-fix-for-spectre-2018-1
https://www.theregister.co.uk/2018/01/12/intel_warns_meltdown_spectre_fixes_make_broadwells_haswells_unstable/

This leads to a conversation about patches to the OS:
MS Windows KB:
https://support.microsoft.com/en-us/help/4072698/windows-server-guidance-to-protect-against-the-speculative-execution

MS SQL KB:
https://support.microsoft.com/en-us/help/4073225/guidance-for-sql-server

VMware KB:
https://kb.vmware.com/s/article/52245
https://kb.vmware.com/s/article/52264
https://www.vmware.com/us/security/advisories/VMSA-2018-0002.html
https://www.vmware.com/security/advisories/VMSA-2018-0004.html

Red Hat KB:
https://access.redhat.com/security/vulnerabilities/speculativeexecution (click on Resolve tab)
https://access.redhat.com/articles/3311301


The patches lead to a conversation about “CPU performance” impact.
Linux-based testing (KAISER patch):
https://medium.com/implodinggradients/meltdown-c24a9d5e254e
https://www.phoronix.com/scan.php?page=article&item=linux-415-x86pti&num=2

Red Hat specific testing:
https://access.redhat.com/node/3307751

Microsoft SQL Server specific:
https://www.brentozar.com/archive/2018/01/sql-server-patches-meltdown-spectre-attacks/

Cloud impact:
AWS:
https://www.theregister.co.uk/2018/01/04/amazon_ec2_intel_meltdown_performance_hit/

Azure:
https://azure.microsoft.com/en-us/blog/securing-azure-customers-from-cpu-vulnerability/
*** Interesting note here: the article talks about a network impact.

A video game company that uses the cloud reports a CPU impact:
https://www.theverge.com/2018/1/6/16857878/meltdown-cpu-performance-issues-epic-games-fortnite

VMware performance impact:
No online testing found yet. (as of 08Jan2018)

General hardware Testing from Blog Sites:  (Be sure to check out all four pages.)
https://www.techspot.com/article/1556-meltdown-and-spectre-cpu-performance-windows/


Actual Performance KBs or Blogs from:
Microsoft:
https://cloudblogs.microsoft.com/microsoftsecure/2018/01/09/understanding-the-performance-impact-of-spectre-and-meltdown-mitigations-on-windows-systems/


*********************
Just to be 100% clear here,
As of this blog post (08Jan2018), not all research has been completed by every application, OS, hardware, and software company. We are still learning from hardware vendors (Dell, HP, Cisco, Intel, Apple, etc.) that we need microcode updates and firmware updates. I’m sure cell phone companies will push carrier updates and antivirus companies will patch to identify bad behavior.
Once we get a full vetting of our hardware, then a vetting of our hypervisor stack, we can start to work on every OS type and patch level, and then the application patches. All it takes is a single rogue machine to thwart all of this patching and testing.

With all of this said (and knowing that not ALL patches have been released for the VMware vSphere suite as of today), I wanted to do some simple testing to see what the current patch and build differences would do to some simple Windows OS builds. We hear a ton of hype on the internet about massive 30% CPU impacts, and while that may be true for some workloads, I needed to make sure it wasn’t a global, easy-to-reproduce impact.
*********************

I did complete some quick lab testing:
I used four Cisco B200M4 blades with the same E5-2699 @ 2.2GHz CPU to test.
Host1 – vSphere 6.0 – 5050593 – No Patch
Host2 – vSphere 6.0 – 6921384 – Security Patch
Host3 – vSphere 6.5 – 5969303 – No Patch
Host4 – vSphere 6.5 – 7388607 – Security Patch

I deployed four VMs to each host – sixteen VMs in total.

Windows 7 – 2vCPU – 6GB-MEM
Windows 2008 – 4vCPU – 16GB-MEM
Windows 2012 – 4vCPU – 16GB-MEM
Windows 2016 – 4vCPU – 16GB-MEM

I used DRS pinning rules to keep each VM set pinned to its host; a scripted version is sketched below.
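A minimal PowerCLI sketch of that pinning, assuming PowerCLI 6.5 R1 or newer for the DRS group cmdlets; the cluster, VM, and host names here are made up. Repeat per host for the other three VM sets.

# Pin the four VMs for Host1 to Host1 with a DRS VM/Host "must run" rule
$cluster   = Get-Cluster "TestCluster"
$vmGroup   = New-DrsClusterGroup -Name "vms-host1" -Cluster $cluster -VM (Get-VM "win7-h1","w2k8-h1","w2k12-h1","w2k16-h1")
$hostGroup = New-DrsClusterGroup -Name "grp-host1" -Cluster $cluster -VMHost (Get-VMHost "host1.lab.local")
New-DrsVMHostRule -Name "pin-to-host1" -Cluster $cluster -VMGroup $vmGroup -VMHostGroup $hostGroup -Type "MustRunOn"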
I used two “quick” testing tools that are able to run under all four Windows OSs.

1 – CPU Mark from the PassMark suite.
– This tool runs a suite of tests (Integer Math, Compression, Floating Point, Extended Instructions, Encryption, Sorting, and Single Thread testing).
– I configured all sixteen servers to run all tests in “long test” mode with thirty iterations. The test ran the VM CPU at near 100% for two hours.
2 – 7-Zip benchmark tool.
– Under the Tools menu in 7-Zip is a tool called Benchmark. This tool will aggressively max out the CPU and measure MIPS (millions of instructions per second) while attempting to compress and decompress data.
– The test runs until stopped. I ran the test across all sixteen servers for a minimum of two hours and recorded the difference in MIPS per OS and ESXi host. (A command-line version is sketched below.)
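For what it’s worth, the same benchmark can be driven from the 7-Zip command line instead of the GUI; a hedged one-liner (default install path assumed, thread count matched to the VM’s vCPUs):

# b = benchmark, -md25 = 32MB (2^25) dictionary, -mmt2 = two threads
& "C:\Program Files\7-Zip\7z.exe" b -md25 -mmt2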

VMware has a testing suite called VMmark, but it is mostly Linux-appliance based and difficult to configure. After spending a few hours, I was able to get it to fully deploy, but it would never successfully complete a load test with valid results. The product even suggests sending the results to VMware for proper analysis.

I ran both tools over several hours across all four Windows OS and across all four vSphere ESXi patch versions.
An extremely summarized report of my findings:

First Application (CPU Mark):
– No patterns found between OS and ESXi version or patch level that would show an easily identifiable performance impact for this OS.

Data numbers: (higher is better)
Key : OS – 6.0 No Patch – 6.0 Patched – 6.5 No Patch – 6.5 Patched

Windows 7 – 3763 – 3768 – 3761 – 3760
Windows 2008 – 7017 – 7026 – 7016 – 7011
Windows 2012 – 6987 – 6982 – 6984 – 6984
Windows 2016 – 7004 – 6997 – 6994 – 6990

*** Quick review of the CPU Mark load testing:
The only test to show different results (out of the nine CPU tests in the single score) was Floating Point Math.
The other eight load tests show similar results across all ESXi hosts within the same Windows OS.
Results do show a slight decrease in performance between unpatched 6.0 and patched 6.5.
In some cases, the patched vSphere 6.0 version performed better than the unpatched 6.0 version.
In no case did patched 6.5 perform better than unpatched 6.5.
I would suggest a roughly 1% reduction in performance between current unpatched 6.0 and patched 6.5 for any application that relies on “Floating Point Math” operations.

Second Application (7-zip):
– No patterns found between OS and ESXi version or patch level that would show an easily identifiable performance impact.

I ran two tests:
Test 1:
7-Zip 32MB dictionary setting, 1:1 core to thread:
Host1 – vSphere 6.0 – 5050593 – No Patch
Host2 – vSphere 6.0 – 6921384 – Security Patch
Host3 – vSphere 6.5 – 5969303 – No Patch
Host4 – vSphere 6.5 – 7388607 – Security Patch

Win7-Host1 – compress = 7238 – decompress = 6182
Win7-Host2 – compress = 7325 – decompress = 6182
Win7-Host3 – compress = 7238 – decompress = 5780
Win7-Host4 – compress = 7130 – decompress = 6222

Win2008-Host1 – compress = 13971 – decompress = 12241
Win2008-Host2 – compress = 14092 – decompress = 12282
Win2008-Host3 – compress = 13930 – decompress = 12282
Win2008-Host4 – compress = 13731 – decompress = 12323

Win2012-Host1 – compress = 14071 – decompress = 12341
Win2012-Host2 – compress = 14111 – decompress = 12303
Win2012-Host3 – compress = 14029 – decompress = 12344
Win2012-Host4 – compress = 14196 – decompress = 12303

Win2016-Host1 – compress = 13906 – decompress = 12261
Win2016-Host2 – compress = 13865 – decompress = 12303
Win2016-Host3 – compress = 13708 – decompress = 12223
Win2016-Host4 – compress = 13906 – decompress = 12144

Test 2:
7-Zip 192MB dictionary setting, 1:1 core to thread, 10 passes minimum:
Win7-Host1 – Total score = 189% – 3477/6561 MIPS
Win7-Host2 – Total score = 189% – 3493/6569 MIPS
Win7-Host3 – Total score = 187% – 3444/6432 MIPS
Win7-Host4 – Total score = 189% – 3438/6467 MIPS

Win2008-Host1 – Total score = 370% – 3436/12653 MIPS
Win2008-Host2 – Total score = 370% – 3452/12712 MIPS
Win2008-Host3 – Total score = 371% – 3453/12733 MIPS
Win2008-Host4 – Total score = 370% – 3437/12644 MIPS

Win2012-Host1 – Total score = 375% – 3504/13058 MIPS
Win2012-Host2 – Total score = 373% – 3457/12821 MIPS
Win2012-Host3 – Total score = 370% – 3477/12818 MIPS
Win2012-Host4 – Total score = 377% – 3483/13060 MIPS

Win2016-Host1 – Total score = 372% – 3473/12828 MIPS
Win2016-Host2 – Total score = 371% – 3441/12674 MIPS
Win2016-Host3 – Total score = 371% – 3423/12638 MIPS
Win2016-Host4 – Total score = 369% – 3444/12654 MIPS

Load test summary:
No patterns found compared to the data from the first load-gen tool (CPU Mark).
For Windows 7, the patched 6.0 version performed better.
For Windows 2008, the unpatched 6.5 performed better, but not by much.
For Windows 2012, the patched 6.5 version performed better.
For Windows 2016, the unpatched 6.0 performed better.

Overall notes and my thoughts:
I was unable to find a performance impact from vSphere version and patch level alone, based only on MS Windows OS benchmark testing.
I would like to configure VMMark and find better results. Maybe for another day.

Windows 2008 and 2012 will not get a patch (for now)
Older versions of MSSQL will not get patched.

I think every application and operating system will be impacted in different ways.
We have no way to know what application-specific impacts will exist, because configurations differ per application instance.
We have no way to know what OS-level impacts will exist, because we cannot test every possible application configuration across all versions and patch levels of MS Windows.
If an increase in CPU usage is identified, the extra electricity used per system, plus the electricity used to cool the datacenter, will be large (globally).

The folks on the VMware API team, along with some community support (@butch7903 – Russell Hamker),
have created an amazing backup script for your vCenter Appliance. Give it a once-over.
https://github.com/butch7903/PowerCLI/blob/master/Backup_VCSA_DB_to_File.ps1

Some quick notes:
– Only works on vSphere 6.5 and higher.
– Requires PowerCLI version 6.5.3 or greater.
– Needs the PSFTP or WinSCP PowerShell module if you want to use FTP or WinSCP to copy the backup from vCenter to a storage location.
* (Optional) – I deployed a Photon OS VM and use it as my SCP target to save my backups.

If you want to skip all of the command-line GUI in the 772-line file, these lines alone will complete the backup.
Import-Module -Name VMware.VimAutomation.Core
Import-Module -Name VMware.VimAutomation.Cis.Core   # provides Connect-CisServer / Get-CisService
Import-Module -Name WinSCP
Connect-CisServer "vcenter01" -User "administrator@vsphere.local" -Password "myPass"
$BackupAPI = Get-CisService com.vmware.appliance.recovery.backup.job
$CreateSpec = $BackupAPI.Help.create.piece.Create()
$CreateSpec.parts = @("common","seat")
$CreateSpec.backup_password = ""
$CreateSpec.location_type = "SCP"
$CreateSpec.location = "10.10.10.10/storage"
$CreateSpec.location_user = "root"   # username of your SCP location
$CreateSpec.location_password = [VMware.VimAutomation.Cis.Core.Types.V1.Secret]"backup location password"   # the API's Secret type, not a PSCredential
$CreateSpec.comment = "Scripted VCSA backup"   # any comment string you like
$BackupJob = $BackupAPI.create($CreateSpec)

Then you can use the following command to check on its status:
$BackupJob | select id, progress, state
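If you want the script to block until the backup finishes, a small polling loop works. A hedged sketch — it assumes the job service’s get() call and the INPROGRESS state string, which is what the community scripts use:

do {
    Start-Sleep -Seconds 10
    $BackupJob = $BackupAPI.get("$($BackupJob.id)")   # re-read the job status by id
    $BackupJob | Select-Object id, progress, state
} while ($BackupJob.state -eq "INPROGRESS")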

Over the holidays, I got a little free time to get back into the entertainment side of computers and electronics. Last week, I noticed some websites starting to load slower than normal. I jumped into the firewall logs and noticed the advertisements were doing massive DNS queries. Some websites were making 50-200 connections every second in an attempt to load as much click-through and advertisement content as possible. I took all of the DNS names from my firewall log, ran my quick DNS lookup PowerShell script to look up all of the possible IP addresses hidden behind each DNS name, then blocked all of it in my firewall. These few websites forced me to block a few class C subnets and forty-five single IPs. After this firewall config update, pages loaded instantly again. This is a lot of work to do every few months to keep up with the changing advertisement landscape.
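A minimal sketch of that kind of bulk lookup (Resolve-DnsName ships with Windows 8/Server 2012 and newer; the input and output file names here are made up):

# Resolve every advertising hostname to its A-record IPs for firewall blocking
Get-Content ".\ad-hosts.txt" | ForEach-Object {
    Resolve-DnsName -Name $_ -Type A -ErrorAction SilentlyContinue
} | Where-Object { $_.IPAddress } |
    Select-Object Name, IPAddress |
    Sort-Object IPAddress -Unique |
    Export-Csv ".\block-list.csv" -NoTypeInformation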
I know a lot of pay-for appliances exist for the enterprise to block advertisements and “bad sites,” but I never gave much thought to researching the Raspberry Pi offerings. I needed something quick, automatic, and very low cost + low power.

My bro @EuroBrew told me about this software for the Raspberry Pi called “Pi-Hole” — Website here https://pi-hole.net/
After installing it, I am impressed. The web-based admin GUI is great. You can edit, add, or remove items from your black/white lists on the fly. It even has fancy graphs to show you how many items were blocked. Pi-Hole had been running for two minutes, and with a few seconds of Facebook traffic from the wife, it had already started to show some big numbers. Summary: you don’t know what you’re accessing until you see how much is blocked.

The Dashboard:

pihole1

Even though you are accessing a single website, it’s amazing how many other sites are accessed without you even noticing. Again, within those two minutes of activity, and expecting Facebook to be the top site accessed, here are the top URLs.

pihole2

Now, let me help you get this installed in your home lab!

######
Here are my quick steps to get this going:
1 – Download the updated version of Raspbian — I recommend the lite version https://www.raspberrypi.org/downloads/raspbian/
2 – Use their website if you need assistance getting a clean Raspberry Pi setup.
— Use an image tool to image your SD Card
— Boot up your RaspPi with the new image and configure the timezone + keyboard.
—- It is important to make sure your “|” key works…. the UK keyboard layout has issues, so change it to US.
—- You can do this with the “sudo raspi-config” command + menu system.
3 – Now configure your RaspPi with a static IP. The Pi-Hole installer will help you with that, but it’s way faster to do it now. (A sample config follows these steps.)
— Edit the /etc/dhcpcd.conf file with “sudo nano /etc/dhcpcd.conf” and change the static settings for your eth0 interface.
4 – Now you can run the pi-hole installer with “curl -L https://install.pi-hole.net | bash”
— The capital L and the vertical pipe are a must for that command.
— This will fail if DNS isn’t working so check your network settings with an ifconfig if you are having issues.
5 – When the installer is complete, it will give you a password to write down….. Don’t forget to do this!
— You can also change the password after the installer is finished with the command “pihole -a -p mynewpassword123”
6 – From a browser, go to your new Pi-Hole install via http://x.x.x.x/admin and configure it.
— Click Login on the left window.
—- From here you have access to Settings where you can correct any missed settings, or change your Temperature Units from Celsius to Fahrenheit.
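For step 3, the static block in /etc/dhcpcd.conf looks something like this (example addresses only; match your own network, and point DNS at your router until Pi-Hole is up):

interface eth0
static ip_address=192.168.1.53/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1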

Now, update your PC, wireless, and router settings to use your RaspPi as your DNS server and you are done.
You can go into great detail adding additional blocks, so hit the Google search box for that.

With a special thanks to @VirtualDominic and my local VMUG PDX leaders and friends, I was able to play around with the new Intel i7 NUC.
Link to the Intel NUC website if interested —>  here
This new i7 NUC packs an i7-6770HQ, which translates to a quad-core, eight-thread processor at 2.6GHz base and 3.5GHz turbo.
Since this is part of the 6th gen Intel processors, it pairs with up to 64GB of DDR4-2133 memory, and the Intel Iris Pro 580 GPU which can push 4K @ 60fps over Display Port.
The NUC has room for two M.2 SSDs (SATA3 or NVMe). This sounds like an amazing chance to test an all-NVMe-flash vSAN.

This mini PC uses under 20 watts at idle and up to 80 watts at full load. The 80-watt high-water mark includes use of the GPU, and since we only care about CPU cycles for our virtual workloads, we should float around 30-40 watts for normal test lab usage.

We now have the opportunity to use two physical servers and a third as a witness VM for our three-node vSAN. This means we can float eight cores, sixteen threads, and 64-128GB of RAM across two i7 NUCs and have all of it running at around 50-80 watts for both nodes. A high-performance test lab running on less power than an old incandescent light bulb. Though some day that analogy will change as LED bulbs take over our homes. So, let’s say: a high-performance test lab running on less than ten LED bulbs….. that your kids left on….all day….

Now to the Lego build. For this build, we selected a tall tower. Since these i7 NUCs pull air in on the side and blast the heat out the back, I had to make a building that could pull cool air up and then freely throw it out the back. Since some reviews show the NUC running at 110F (43C) at the heat sink exhaust, I needed to add more cooling to prevent total Lego meltdown.
I started with a solid base and built in a 120mm case fan that can run off of a 5v USB connector.

i7nuc1

Next was the addition of the side wall and support columns:

i7nuc2

Time to add in the fancy front columns, some flair on the wall, and ensuring that all of the NUC ports are available.

i7nuc3

Here is a snap of the completed tower with the power button and USB ports showing in the front.

i7nuc4

And here is the final build.  A fully built tower with a throne for our Megaman Hero.

Final

Mega-Man-2-start-screen

Time to top it off with a little VMware branding.

i7nuc5

If you have any questions about the build or NUCs, please send me a message on twitter @vmnick0

MSI has announced they will be releasing a new mini PC called the “MSI Cubi 2 Plus” and the “MSI Cubi 2 Plus vPro.” While the case is a bit unappealing, it’s the hardware that makes it amazing. In the same form factor as the Intel NUC, MSI is able to pack in an i7-6700T and a full 32GB of DDR4 RAM. Then, to make it more exciting, they intend to allow for CPU swaps via a small “ZIF” CPU socket. This speaks loudly to me, simply because anything modular means better access to the internal parts and a more dynamic CPU cooler.

Here are some quick links you can find via a google search about the MSI Cubi 2:

https://eu.msi.com/news/emm/?List=125&N=4104
http://www.guru3d.com/news-story/msi-cubi-2-plus-is-out.html
http://www.techpowerup.com/220293/msi-announces-cubi-2-plus-and-cubi-2-plus-vpro-mini-pcs.html

In terms of performance, here are the simple CPU specs comparing the MSI Cubi 2’s i7-6700T and the Intel NUC’s i7-5557U. I understand that one is 6th gen and the other is 5th gen, but I don’t see a 6th gen Intel NUC with an i7 CPU announced yet….

http://ark.intel.com/compare/88200,84993

I’m thinking this build will require an all flash VSAN configuration and a special Lego build where the MSI parts can exist outside of the mangled stock case.

More to come!

Just a quick heads-up for anyone using vDSwitches. I’ve run into two issues and I would like to share them with those I’ve spoken with.

#1 – “load based” load balance/teaming policy.
#2 – vDS health check and physical switch Mac Address Table issues.

Here is the KB about the current bug fixes in a patch release (which will also be rolled up into 6.0 update 1).
http://kb.vmware.com/kb/2124725
Here is the text from the KB:
“When using load balancing based on physical NIC load on VDS 6.0, if one of the uplinks is disconnected or shut down, failover is not initiated.”
— This means that if you have a vDS with the “load based” teaming policy set on your 6.x ESXi host, and you remove a network adapter/uplink (or that link fails), the VMs will not fail over or start using the other uplinks. This can and will cause an outage. The simple fix is to set the vDS teaming policy to the default “Route based on originating virtual port” or something other than physical NIC load; a scripted version is sketched below.
Just to clarify, this is not a vSphere 6 vDS issue; this is a host-level ESXi v6.0 issue. It can still happen if you have a 5.5 vDS and your host is running ESXi v6.
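If you have a lot of port groups to flip, here is a hedged PowerCLI sketch using the VDS module (the switch name is made up):

# Move every "load based" port group on the vDS back to the default policy
Import-Module VMware.VimAutomation.Vds
Get-VDSwitch -Name "vds01" | Get-VDPortgroup |
    Get-VDUplinkTeamingPolicy |
    Where-Object { $_.LoadBalancingPolicy -eq "LoadBalanceLoadBased" } |
    Set-VDUplinkTeamingPolicy -LoadBalancingPolicy LoadBalanceSrcId   # Route based on originating virtual port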


Issue #2
Here is the KB describing the issue.
http://kb.vmware.com/kb/2034795
The issue occurs when you enable vDS health check and your vDS is large enough to overflow the MAC address tables upstream on your physical network devices. This can, and will, cause an outage on the network. I have not tested or reviewed every switch’s MAC address table limits, but anyone can reproduce this with enough effort.
So, how is this happening under the covers? When you enable vDS health check, it creates additional virtual MAC addresses for each physical network adapter attached to the vDS. It then sends out “packets” on all uplinks, on all hosts, on all VLANs, and all port groups for that vDS. The text from the KB:
“There is some scaling limitation to the network health check. The distributed switch network health check generates one MAC address for each uplink on a distributed switch for each VLAN multiplied by the number of hosts in the distributed switch to be added to the upstream physical switch MAC table. For example, for a DVS having 2 uplinks, with 35 VLANs across 60 hosts, the calculation is 2 * 35 * 60 = 4200 MAC table entries on the upstream physical switch.”

So, let’s scale that out further. If you have a 64-host cluster, each host has four uplinks attached to the vDS, all on a single vDS with 40 VLANs: 64 x 4 x 40 = 10,240 MAC address entries just slammed into your switch’s MAC address table.
This might not be an issue for small businesses with small host and NIC counts, but that really depends on the switch and router types they are using.

If you have any questions please reach out to me on twitter @vmnick0.me

When the 5.X vCenter Web Client released, you could navigate to a VM and click a link to generate a VM console URL. It looked something like this:
https://vcenter.pcli.me:9443/vsphere-client/vmrc/vmrc.jsp?vm=urn:vmomi:VirtualMachine:vm-VirtualMachine-vm-409:3a3sf62s-q41g-172d-aj91-a71251658v87

This 5.X console URL is made up of three data variables:
vCenter + VM MoRef + vCenter UUID — see below in the parentheses.
https://(your vCenter server):9443/vsphere-client/vmrc/vmrc.jsp?vm=urn:vmomi:VirtualMachine:vm-VirtualMachine-vm-(your VM’s MoRef number):(vCenter’s UUID)

Now this URL will not work for vCenter 6.x!
If attempted, you will see this in your browser:
console1

The formatting was changed to this:

https://vcenter.pcli.me:9443/vsphere-client/webconsole.html?vmId=vm-VirtualMachine-vm-409&vmName=vm01&serverGuid=3a3sf62s-q41g-172d-aj91-a71251658v87&host=vCenterPSC01.pcli.me:443&sessionTicket=cst-VCT

Let’s break down this URL:
https://vcenter.pcli.me:9443 (— vCenter again, with port)
/vsphere-client/webconsole.html?vmId=vm-VirtualMachine-vm- (— the string changed from vmrc to webconsole…)
409 (— VM MoRef ID)
&vmName=vm01 (— the name of the VM you want the console of)
&serverGuid=3a3sf62s-q41g-172d-aj91-a71251658v87 (— vCenter’s UUID)
&host=vCenterPSC01.pcli.me:443 (— the FQDN of your vCenter Platform Services Controller)
&sessionTicket=cst-VCT (— and a final string)

This last string (&sessionTicket=cst-VCT) was something I had to play with. Using just these characters allowed the browser to prompt for my username/password and then give me a VM console.

Here are the PowerCLI commands to find your VM MoRef ID and vCenter UUID.
— Launch a PowerCLI session and run these one-liners:
—- $global:DefaultVIServers.InstanceUuid = this will give you your vCenter UUID
— ((Get-VM "vm01").Id).Split("-")[2] = this will get your VM MoRef ID
—- You can use this one too if you like Views: (Get-View -ViewType VirtualMachine -Filter @{"Name" = "vm01"}).MoRef.Value
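Putting the pieces together, here is a sketch that builds the 6.x console URL (the vCenter and PSC names are the examples from above; it assumes an active Connect-VIServer session):

$vm    = Get-VM "vm01"
$moRef = $vm.Id.Split("-")[-1]                        # e.g. 409
$guid  = $global:DefaultVIServers[0].InstanceUuid     # vCenter UUID
"https://vcenter.pcli.me:9443/vsphere-client/webconsole.html?vmId=vm-VirtualMachine-vm-$moRef&vmName=$($vm.Name)&serverGuid=$guid&host=vCenterPSC01.pcli.me:443&sessionTicket=cst-VCT"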

Here are the 5.X and 6.X VM consoles, if you’re interested in the differences.
5.X
console2

6.X

console3

Twitter has exploded with this new VMware fling.
https://labs.vmware.com/flings/esxi-embedded-host-client
Here is a quick snip from @lamw about it – http://www.virtuallyghetto.com/2015/08/new-html5-embedded-host-client-for-esxi.html

Normally I would throw this in the home lab, play with it for a few minutes, brag to some friends, then get bored and let it absorb into long-term brain matter.
But this is really cool. Cool enough to log into the ole’ blog site and type some words about it!

Steps to awesomeness:
1 – Download the VIB @ http://download3.vmware.com/software/vmw-tools/esxui/esxui-2976804.vib
2 – SCP or upload it to a datastore on the host you want to test with.
3 – Install the VIB – “esxcli software vib install -v esxui-2976804.vib”
– You may need to throw in a “cp esxui-2976804.vib /var/log/VMware” first or use the full path in the above command, since esxcli wants an absolute path. Whichever is your favorite flavor.
4 – Then navigate to the new client – https://hostname/ui (example : https://esx01.vmnick0.me/ui)
5 – Log in with your host user/pass  – either root or AD if you have AD integration working with your hosts.
6 – Magic…..

Some cool things to note:
– It has performance graphs… that load instantly…
– You can review almost all things related to that host on a single page or, at most, one click away from that information.
– The simple potential for this GUI is amazing.  I hope this continues to be developed.

Below is a snap of the Host management page:
HostClient1

Below is a snap of the VM management page:
The awesome part is the working console which you can view inside the same HTML5 window or have it break out into a new window.

HostClient2

It has very little in terms of storage eye candy, but it looks like it’s coming.
I was sad to see VSAN health monitoring not available when I installed the VIB on my VSAN cluster/host. Maybe I need to check my configs again.
HostClient3

It has some other very useful items, and some with an “under construction” logo. Let’s hope this keeps going!
HostClient4

You really need to throw it on a host to check it out and see its full colors and potential.
If I were to ask for one feature to prioritize, it would be more host stats and graphs. The ability to see current network and storage IO/latency would be amazing.

Two updates for this one. This is mostly to cover the large number of questions I’ve received about the new 5th gen Intel NUC and whether it will support the VSAN setup I’ve detailed in previous posts.

First: the 5th gen Intel NUC is coming/released, and it has all the same stats as the 4th gen but with an updated processor+GPU and another evil but performance-positive change.
Web Link : http://www.intel.com/content/www/us/en/nuc/products-overview.html

The 5th gen NUC is still limited to 16GB of RAM even though they are releasing a new i7 version. Now here is the catch: they replaced the half-length and full-length mPCIe slots with an M.2 slot. This removes the ability to fit in the second wired network adapter and makes it harder to find or reuse your existing mSATA SSDs. The M.2 slot does increase the “SATA” SSD speed from 3Gbps to 6Gbps if you are looking for more NUC IOPS/bandwidth. So, if you do not care about the second wired NIC, the 5th gen NUC may be an upgrade to your home lab’s SSD performance.

Here is a quick snip to show the difference in the mSATA and M.2 SSDs:

msatassd m2ssd

Second: as small form factor compute gets popular in the home lab space, more fun tools and toys are released.
As an example, Lian-Li has created an Intel NUC replacement case. It helps with cooling and makes the NUC look a little more fancy.
Here is a Link to a review of the case : http://www.legitreviews.com/lian-li-pc-n1-intel-nuc-replacement-case-review_2141/2
I tried to find an actual “Buy Now” link or a spec page on the Lian-Li website, but I couldn’t find one.
Also, it looks like Intel held a “NUC Case Mod” competition:
http://www.bit-tech.net/modding/2015/01/05/intel-nuc-case-design-competition-2014/1
http://www.bit-tech.net/modding/2014/12/01/intel-nuc-design-competition-2014/1
So, if you are interested in more things “NUC,” give those URLs a quick once-over.

Find me on twitter if you have any questions or comments!

Current assumptions:
– You followed the Part 1 and Part 2 walkthrough.
– – Part 1 > http://vmnick0.me/?p=7
– – Part 2 > http://vmnick0.me/?p=25
– You have all of your NUCs powered on, fully booted into the ESXi hypervisor and configured with a valid IP address. Throw out a few Pings just to make sure.
– You have your (flash based) vCenter Web Client launched and ready.
Hopefully you see a screen like this: (if not, click the house icon at the top)

vCenterWebClient

Let’s start by loading up our inventory.

Click the “Hosts and Clusters” icon and then {right click} on the vCenter object that loads in the left menu. Select “New Datacenter,” wait for the popup to load, give it a name like Rambo or GIJoe, and click OK. When your new “Rambo” datacenter is created, you should see it pop into the left window. {Right click} on your new datacenter object and select “New Cluster.” When the popup loads, enter a name like CaddyShack or ShopSMart, check the box marked “Turn On” for DRS, HA, and Virtual SAN, and click OK. As you click the checkboxes, lots of settings will load in. We shouldn’t need to change these for a fully functional vSAN cluster, so the defaults will work for now. (If you prefer PowerCLI, see the sketch below.)
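A hedged PowerCLI equivalent of this step, using the names from the walkthrough (vCenter name and credentials are examples):

Import-Module VMware.VimAutomation.Core
Connect-VIServer -Server "vcenter01" -User "administrator@vsphere.local" -Password "myPass"
$dc = New-Datacenter -Location (Get-Folder -NoRecursion) -Name "Rambo"
New-Cluster -Location $dc -Name "CaddyShack" -DrsEnabled -HAEnabled -VsanEnabled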

Now it’s time to add our ESXi hosts to the cluster object. If you do not see the cluster you created above, click the little arrow to the left of your Rambo datacenter object. {Right click} on your CaddyShack cluster object and select “Add Host.” When the wizard starts, enter the IP address (or fully qualified DNS name) of your first NUC (the one you created your vSAN datastore on) and click Next. Enter the user and password you configured (the default username is root, and the password is the one you created during the install), then click Next. You may get prompted about a cert error; click Yes to move on, then Next, Next, Next, Next, Next, Finish. Give it a minute to install some bits on the host and load it into your cluster object. If you don’t see it load into the listing on the left, be sure to click the arrow to the left of your CaddyShack cluster object to show the host. If you added the host where you created your vCenter Server VM, you should see that VM listed under the host. Let’s add the other two NUCs (hosts) to our vCenter. It’s the same steps as above but with the other IP addresses or DNS names: {right click} on your CaddyShack cluster, select “Add Host,” enter the IP or DNS name, Next, type the username (root by default) and password, Next, click Yes, Next, Next, Next, Finish. Then again for your third NUC (host). A scripted version follows below.
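The scripted version of the host adds, if you want it (host names and password are made up; -Force skips the certificate prompt):

$cluster = Get-Cluster "CaddyShack"
"esx01.lab.local","esx02.lab.local","esx03.lab.local" | ForEach-Object {
    Add-VMHost -Name $_ -Location $cluster -User "root" -Password "yourPassword" -Force
}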

Here is our Inventory ready to go!
WebClientInventorySS

Now it’s time to create our “VSAN” vSphere Distributed Switch and assign a VMkernel NIC from each host to the vDS. We will need this for the VSAN replication traffic. I’m going to go quick here, so you may want to copy and paste this part into Notepad and break it up.
In your vSphere Web Client, click on the home (house icon) at the top and select “Networking” below. {Right click} your datacenter object (Rambo) and select “New Distributed Switch.” In the wizard, type a name for your Distributed Switch (“vds-VSAN” for example), then Next, Next, Next, Finish. Once your Distributed Switch is created, {right click} on it and select “Add and Manage Hosts.” In the wizard, select the bullet for “Add host and manage host networking” and click Next. Click “New Hosts” next to the big green plus sign, check the checkbox next to all three of your ESXi hosts, and click Next. Make sure that “Manage physical adapters” and “Manage VMkernel adapters” are checked and click Next. Click on “vmnic1,” click “Assign uplink” at the top, and click OK in the popup. You will need to do this for each of the three hosts in the list, then click Next when done.

It should look something like this when all three hosts have vmnic1 assigned to the vDSwitch.
vDSSS1

In the next window we will create and assign a VMkernel interface for each host. Click on each host’s name and click the “New Adapter” button at the top. A new wizard will pop up and ask for IP settings. At the first window, click Browse next to “Select an existing distributed port group,” select your lonely vds-VSAN default port group, then click OK. Back at the main wizard, click Next. Check the checkbox for “Virtual SAN Traffic” and click Next. Here we need to enter our VMkernel IP information. If you are using DHCP you can leave it as-is and click Next, but I suggest clicking “Use static IPv4” and entering an IP address. I used IPs from the same class C as my management network and just plugged all of the NICs into the same VLAN/switch. All of the IP addresses you used for host management and these vmknic interfaces need to be unique. Enter your IP and subnet mask, then click Next and Finish to return to the main wizard. Do the same for the remaining two hosts to create a vmk1 on each one: click host, click New Adapter, Browse, click, OK, Next, select the VSAN traffic checkbox, Next, enter the IP address, and click OK. Once back at the main wizard with a vmk1 created on all three hosts, click Next, Next, Finish.

Example of the wizard window with a vmk1 interface on each host:
vDSSS2
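If you would rather script this whole networking section, here is a hedged PowerCLI sketch (host names and IPs are made up; it assumes the vds-VSAN switch and default port group created above):

Import-Module VMware.VimAutomation.Vds
$vds = Get-VDSwitch "vds-VSAN"
$pg  = Get-VDPortgroup -VDSwitch $vds | Select-Object -First 1   # the lonely default port group
$ips = @{ "esx01.lab.local" = "192.168.1.31"; "esx02.lab.local" = "192.168.1.32"; "esx03.lab.local" = "192.168.1.33" }
foreach ($name in $ips.Keys) {
    $vmhost = Get-VMHost $name
    Add-VDSwitchVMHost -VDSwitch $vds -VMHost $vmhost             # join the host to the vDS
    $pnic = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic1"
    Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $pnic -Confirm:$false
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vds -PortGroup $pg.Name -IP $ips[$name] -SubnetMask "255.255.255.0" -VsanTrafficEnabled $true   # vmk for VSAN traffic
}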

There is one last thing to do before we are done configuring our vSphere cluster. Let’s configure vMotion on our management network so VMs can float around as needed. Click on the house icon at the top to get the main menu and select Hosts and Clusters. Click an ESXi host on the left and select the Manage tab at the top. Then select Networking, the “VMkernel adapters” submenu, and click on vmk0 (vmk-zero, the management network). Click the pencil icon above it (Edit Settings), check the checkbox for “vMotion traffic,” and click OK. Be sure to do this on all three of your hosts: click host, Manage, Networking, VMkernel adapters, vmk0, Edit Settings, checkbox, OK.
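The PowerCLI version is a one-liner against the whole cluster:

# Enable vMotion on vmk0 for every host in the cluster
Get-Cluster "CaddyShack" | Get-VMHost |
    Get-VMHostNetworkAdapter -VMKernel -Name "vmk0" |
    Set-VMHostNetworkAdapter -VMotionEnabled $true -Confirm:$false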

You are now configured! You can select each host and see the “vsanDatastore” we created in part 2 http://vmnick0.me/?p=25 via CLI commands. As a test, to make sure everything is working, you can select the vCenter Appliance VM you created and “migrate/vmotion” it to another host in the cluster. To do this, {right click} on your vCenter VM and select Migrate. In the Wizard, click Next, check the checkbox for “Allow Host selection within cluster” at the bottom and click Next, select a host that your VM “is not” running on (second or third in the list maybe), and click Next, Next, Finish. You should see a task start on the right hand side. If it completes successfully then you are set! You can even migrate it to the third host just to make sure that host is happy too. If it fails, throw the error message into Google and you will find a ton of information on what happened and how to fix it.
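The same test migration from PowerCLI, if you prefer (VM and host names are made up):

Get-VM "vcenter01" | Move-VM -Destination (Get-VMHost "esx02.lab.local")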

Congrats on your new VSAN cluster!
Here is a quick snip of the resources you can expect for each host in this cluster (keep in mind that total available storage will depend on how you configure your vSAN cluster).
VSANResrouces
If you have any questions or run into any issues while running through this three part walkthrough, feel free to send me a message on Twitter.

My next write-up will be about the performance we can expect from our Intel NUC vSAN cluster. This will include some storage IO, network throughput, and overall latency metrics for our virtual machines.