Just a quick post about vCenter 7.x vs vCenter 6.7.
There is a new ‘permission’ requirement you will need to add to your user’s role if you are deploying OVAs. You will hit it at the ‘Select Storage’ step of an OVA deployment.
In vCenter 7.x, you will need to add the permission ‘Profile-driven storage > Profile-driven storage view’ to your user’s ‘Role.’
Log into your vCenter with credentials that allow editing of user roles. Navigate to Administration and select ‘Access Control > Roles.’ Select your role from the menu and click the edit icon. Scroll down to Profile-driven storage and then select the check box for ‘Profile-driven storage view.’ Click Next, then Finish.
Users impacted by the added permission will need to log out and back into vCenter before the new permission takes effect.
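If you manage roles with PowerCLI, a hedged sketch of the same change would look like this. The role name is a placeholder and “StorageProfile.View” is my best guess at the privilege ID, so confirm it with Get-VIPrivilege first:
#Add the Profile-driven storage view privilege to an existing role (role name is a placeholder)
Connect-VIServer -Server "vcenter.vmnick0.me"
$priv = Get-VIPrivilege -Id "StorageProfile.View"
Set-VIRole -Role (Get-VIRole -Name "OVA-Deployers") -AddPrivilege $priv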
It’s been a while since I needed to update a VIB by hand because VUM does a great job with upgrades/updates. Again (as with all my blog posts), you can trim the code down, but I’m leaving it verbose so folks can see the action lines and edit/add as needed.
*** BIG gotcha with the script below. If you would like to loop through all hosts in a cluster, you will need to upload the VIB to a datastore that all the hosts can see/access. If you do not have shared storage, you can run the script serially (without the foreach loop) and edit the $vibpath var for each host.
#Update all hosts!
foreach($a in (get-vmhost | sort name)){
    $vibpath = "/vmfs/volumes/datastore/YourNewVIB.vib"
    $q = get-esxcli -vmhost $a
    $dowork = $q.software.vib.install($null,$null,$null,$null,$null,$true,$null,$null,$vibpath)
    $a.name
    $dowork
}
## OR update a single ESXi host
$vibpath = "/vmfs/volumes/datastore/YourNewVIB.vib"
$q = get-esxcli -vmhost "YourHostName"
$dowork = $q.software.vib.install($null,$null,$null,$null,$null,$true,$null,$null,$vibpath)
$dowork
$a (in the foreach loop) will dump the hostname and $dowork will dump the 'success' message for the VIB install.
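If you want a quick sanity check after the install, something like this should list the VIB on each host. It is a hedged add-on to the script above and assumes the VIB name roughly matches the file name, so adjust the filter as needed:
#Optional: confirm the VIB landed on every host
foreach($a in (get-vmhost | sort name)){
    $q = get-esxcli -vmhost $a
    $a.name
    $q.software.vib.list() | where { $_.Name -like "*YourNewVIB*" } | select Name, Version, InstallDate
}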
Scenario:
You have a few vCenter servers connected via “linked mode” or within the same SSO domain.
If you are on vSphere 6.5, you now have two GUI clients for everyday VM administration.
Now, let’s say you want to direct users to a single URL, have that URL send them to a random vCenter in your SSO domain, and even force users to either the HTML5 client or the Flash client.
Well, doing this is easy with a simple 500MB Photon OS VM and a small docker NGINX container.
Once you work through these steps, you can have a single URL, like “http://vcenter.vmnick0.me” and it will redirect you to one of your many vCenters and to the client of choice.
Quick disclaimer:
– I do not own or maintain any of the downloads, images, or commands you will use below. You are free to accept and download software as you see fit and assume all responsibility and risk… if any risk exists.
Requirements:
– A single VM, with a single IP address that can reach the internet (to download updates and the needed docker image.)
– A list of URLs you want to be “redirected” to.
Steps:
1 – Go to GitHub and download the Photon OS appliance “OVA”
https://github.com/vmware/photon/wiki/Downloading-Photon-OS
2 – Import that OVA to your vCenter or ESXi host using the client of choice.
— The VM will be about 500MB in size when complete so using thin provisioning might save you 15.5GB on this OVA install.
3 – Once imported, Power on the VM and launch a console.
4 – On the console you will need to complete three things.
– a. Change the root password.
– b. Enable SSH and restart the SSH daemon.
– c. Configure the IP address of the VM and restart the network service
Steps are as follows:
Log into the VM via Console as root/changeme. You will be prompted to change the root password. You can do that now or leave it as is….
Enable SSH so you can log in as root:
– Run command “vi /etc/ssh/sshd_config”, find the line “#PermitRootLogin prohibit-password” and edit it so it says “PermitRootLogin yes” (remove the comment character too!)
— It’s vi, so remember to press “i” to insert and then Escape followed by “:wq” to save and quit.
– Run command “systemctl restart sshd” to restart the SSH daemon.
Configure the IP address of the VM and restart the service:
– Edit the IP, gateway, and DNS addresses below, then run this multi-line command:
cat > /etc/systemd/network/10-static-en.network << "EOF"
[Match]
Name=eth0
[Network]
Address=10.10.10.5/24
Gateway=10.10.10.1
DNS=8.8.8.8
EOF
cd /etc/systemd/network
chmod 644 10-static-en.network
rm 99-dhcp-en.network
systemctl restart systemd-networkd
— Some notes about the above commands
— You created a file named 10-static-en.network and put some ip config into it.
— You then went to the folder where the file was created, changed the permissions of the file, removed the DHCP config file, then restarted the service.
From here, your VM should be reachable from your desktop. Try to ping or simply SSH to the VM and log in. If you can’t SSH, recheck the steps above. If the config is correct, the only thing preventing you from SSHing to your VM is something on your desktop or your network blocking the connection.
Now we have a clean Photon appliance deployed. Let’s run some updates, get Docker running, then start building our Docker image.
5 – Run the following commands. These commands will update your OS and packages, install git, then reboot the appliance. This reboot is important because some updates will make Docker behave oddly when started.
– tdnf update -y
– tdnf install -y git
(only when the above is complete – reboot the VM)
– reboot
Once the update is complete and your VM is back online (it should only take a few seconds to reboot), SSH back into your VM and let’s prep things for Docker and our nginx image.
6 – Run these commands to open up the IP Tables firewall for port 80 and 443.
echo 'iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT' >> /etc/systemd/scripts/iptables
echo 'iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT' >> /etc/systemd/scripts/iptables
systemctl restart iptables
7 – Run these commands to enable and start docker, get our nginx “openresty” image, build the image, then run our first Container!
systemctl enable docker
systemctl start docker
git clone https://github.com/openresty/docker-openresty.git
cd docker-openresty
docker build -t myopenresty -f trusty/Dockerfile .
docker run --name nginx -p 80:80 -it myopenresty /bin/bash
** The “docker build” command can take two to ten minutes to complete. I suggest pasting that command by itself, waiting for it to complete (refill your coffee), then running the docker run command after the build process is 100% complete. The build command will pull down all of the needed images and build your docker container image. This is where having a working internet connection for your Photon VM is most important. If you are running this install inside a corporate network, you may be blocked from some of the download sites used by the image. Watch the SSH session for any download issues. You will know it is complete and successful if you are back at your Photon root prompt and see “Successfully built ############”.
You can also run the “docker image ls” command to see if the image is in your local repo.
8 – You should now have a running docker container on your Photon OS appliance and be inside the shell for that Container. Your prompt will change from Photon root to the HASH of your container image.
From here, you have one last step: configure your nginx server’s nginx.conf file, restart the service, and test!
– You should be at the console of your docker container within your SSH session. Edit the code below to reflect your URLs, then paste this long multi-line block into your SSH session to delete the old config files, create your nginx.conf file, and restart nginx. Notice below that I specified “/ui/” in my vCenter URLs. If you are running vSphere 6.5, this will force your users to the HTML5 client. If you want to force users to the Flash client…. {pause for thought here…} then add “/vsphere-client/” to the end of your vCenter URLs instead of the /ui/. Or simply leave the tail off to direct them to the page where you can select your GUI flavor.
mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.confCOPY
cp /usr/local/openresty/nginx/conf/nginx.conf /usr/local/openresty/nginx/conf/nginx.confCOPY
cd /usr/local/openresty/nginx/conf/
rm nginx.conf
cat > nginx.conf << "EOF"
events {
    worker_connections 1024;
}
http {
    server {
        set_random $rnd 1 4;
        listen 80;
        location / {
            if ($rnd = 1){
                return 301 https://vcenter-prod.vmnick0.me/ui/;
            }
            if ($rnd = 2){
                return 301 https://vcenter-dr.vmnick0.me/ui/;
            }
            if ($rnd = 3){
                return 301 https://vcenter-test.vmnick0.me/ui/;
            }
            if ($rnd = 4){
                return 301 https://vcenter-dev.vmnick0.me/ui/;
            }
        } #location
    } #server
} #http
EOF
nginx -s reload
*** If the “nginx -s reload” command fails with some text about “error….PID…failed” nginx might not be running, so a reload isn’t going to work. You can test this by simply starting nginx with command “nginx”
Now you can test your setup. Go to a browser and type in the IP address (or DNS name/FQDN) of the VM you just created. If everything is working correctly you should get redirected to one of the URLs you entered in the code above. If you didn’t edit the config file, you might have trouble getting to my vCenter servers.
**BIG NOTE! You have two options to exit now:
1 – Within your SSH session, press “Control+P+Q” to exit without killing your docker container. Doing this should exit back to your Photon root prompt.
2 – Kill the Session by closing the window. Do not type exit or press Control+c.
If you accidentally typed exit and killed your container, you will need to start the container, restart another interactive session with that container, start nginx then exit gracefully. Commands for that will be different for you because your container ID will be different than mine. Edit the below as needed:
docker container list -a
docker container start 1b359d921f60
docker container exec -it 1b359d921f60 /bin/bash
nginx
Control+P+Q (Pressing this key combo is what created the “read escape sequence” text in the below screen shot. That is not a command you can type.)
Some things to consider:
1 – Some browsers will cache the parent URL and may not be “redirected” to a different URL after the first visit.
2 – Using more than one browser or different machines should result in other users/browsers receiving a different redirected URL.
3 – This can be used for any URLs or even IP addresses! If you have some very long URLs, you can use this to auto-complete them for you. NGINX is so flexible you can redirect based on a trailing folder (/folder/) in the URL using location blocks. You can even redirect based on server port: listen 8080 could send you to google.com and listen 8081 could send you to bing.com, for example!
4 – If you have any questions, find me on twitter @vmnick0
To start:
The best websites to review Meltdown and Spectre:
http://frankdenneman.nl/2018/01/05/explainer-spectre-meltdown-graham-sutherland/
https://meltdownattack.com/
Intel Announcement:
https://security-center.intel.com/advisory.aspx?intelid=INTEL-SA-00088&languageid=en-fr
Cisco Announcement:
https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20180104-cpusidechannel
Edit:12Jan2018 — added some more research URLs:
https://newsroom.intel.com/news/intel-security-issue-update-addressing-reboot-issues/
http://www.businessinsider.com/intels-telling-customers-not-to-install-its-fix-for-spectre-2018-1
https://www.theregister.co.uk/2018/01/12/intel_warns_meltdown_spectre_fixes_make_broadwells_haswells_unstable/
https://arstechnica.com/gadgets/2018/01/heres-how-and-why-the-spectre-and-meltdown-patches-will-hurt-performance/
This leads to a conversation about patches to the OS:
MS Windows KB:
https://support.microsoft.com/en-us/help/4072698/windows-server-guidance-to-protect-against-the-speculative-execution
MS SQL KB:
https://support.microsoft.com/en-us/help/4073225/guidance-for-sql-server
VMware KB:
https://kb.vmware.com/s/article/52264
https://www.vmware.com/us/security/advisories/VMSA-2018-0002.html
(VMware URL Updates – 20March2018)
https://blogs.vmware.com/security/2018/03/vmsa-2018-0004-3.html
https://www.vmware.com/security/advisories/VMSA-2018-0004.html
https://kb.vmware.com/s/article/52085
CVE-2017-5753, CVE-2017-5715, CVE-2017-5754
https://kb.vmware.com/s/article/52245
VMware Performance Impact for CVE-2017-5753, CVE-2017-5715, CVE-2017-5754
https://kb.vmware.com/s/article/52337
Red Hat KB:
https://access.redhat.com/security/vulnerabilities/speculativeexecution (click on Resolve tab)
https://access.redhat.com/articles/3311301
https://access.redhat.com/articles/3307751
The patch leads to a conversation about “CPU Performance” impact.
Linux based testing, (Kaiser patch):
https://medium.com/implodinggradients/meltdown-c24a9d5e254e
https://www.phoronix.com/scan.php?page=article&item=linux-415-x86pti&num=2
Red Hat specific testing:
https://access.redhat.com/node/3307751
Microsoft SQL Server specific:
https://www.brentozar.com/archive/2018/01/sql-server-patches-meltdown-spectre-attacks/
Cloud impact:
AWS:
https://www.theregister.co.uk/2018/01/04/amazon_ec2_intel_meltdown_performance_hit/
Azure:
https://azure.microsoft.com/en-us/blog/securing-azure-customers-from-cpu-vulnerability/
*** interesting note here, the article talks about a network impact.
Video Game Company that uses cloud reports CPU impact:
https://www.theverge.com/2018/1/6/16857878/meltdown-cpu-performance-issues-epic-games-fortnite
VMware Performance impact:
https://blogs.vmware.com/management/2018/01/assess-performance-impact-spectre-meltdown-patches-using-vrealize-operations-manager.html
https://kb.vmware.com/s/article/52337
General hardware Testing from Blog Sites: (Be sure to check out all four pages.)
https://www.techspot.com/article/1556-meltdown-and-spectre-cpu-performance-windows/
http://www.datacenterknowledge.com/security/spectre-meltdown-hit-prem-windows-servers-hardest
Actual Performance KBs or Blogs from:
Microsoft:
https://cloudblogs.microsoft.com/microsoftsecure/2018/01/09/understanding-the-performance-impact-of-spectre-and-meltdown-mitigations-on-windows-systems
VMware:
Assess performance impact of Spectre & Meltdown patches using vRealize Operations Manager
Some scripting from VMware blogs to assist with validation:
https://vinfrastructure.it/2018/01/meltdown-spectre-check-vsphere-environment/
https://www.ivobeerens.nl/2018/01/10/know-spectre-meltdown-vmware-environments/
https://www.virtuallyghetto.com/2018/01/verify-hypervisor-assisted-guest-mitigation-spectre-patches-using-powercli.html
https://github.com/lamw/vghetto-scripts/blob/master/powershell/VerifyESXiMicrocodePatch.ps1
*********************
Just to be 100% clear here,
As of this blog post (08Jan2018) not all research has been completed by all Application, OS, Hardware, and Software companies. We are still learning from Hardware vendors (Dell, HP, Cisco, Intel, Apple, etc…) that we need microcode updates, and firmware updates. I’m sure Cell phone companies will push carrier updates and Antivirus companies will patch to identify bad behavior.
Once we get a full vetting of our Hardware, then a vetting of our hypervisor stack, we can start to work on every OS type and patch level, and then the application patches. All it takes is that single rogue machine to thwart all of this patching and testing.
With all of this said, (and knowing that not ALL patches are released for the VMware vSphere suite as of today), I wanted to do some simple testing to see what the current patch + build differences would do to some simple Windows OS builds. We hear a ton of hype on the internet about massive 30% CPU impacts and while that may be true for some, I needed to make sure that wasn’t a global impact and easy to reproduce.
*********************
I did complete some quick lab testing:
I used four Cisco B200 M4 blades, each with the same E5-2699 @ 2.2GHz CPU, to test.
Host1 – vSphere 6.0 – 5050593 – No Patch
Host2 – vSphere 6.0 – 6921384 – Security Patch
Host3 – vSphere 6.5 – 5969303 – No Patch
Host4 – vSphere 6.5 – 7388607 – Security Patch
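For reference, a quick PowerCLI one-liner (a hedged aside, but these are standard Get-VMHost properties) to confirm which build each connected host is actually running:
Get-VMHost | Sort-Object Name | Select-Object Name, Version, Build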
I deployed four VMs to each host – sixteen VMs total.
Windows 7 – 2 vCPU – 6GB RAM
Windows 2008 – 4 vCPU – 16GB RAM
Windows 2012 – 4 vCPU – 16GB RAM
Windows 2016 – 4 vCPU – 16GB RAM
I used DRS pinning rules to keep each VM set pinned to its host.
I used two “quick” testing tools that are able to run under all four Windows OSs.
1 – CPU Mark from the PassMark suite.
– This tool is able to run a suite of tests (Integer Math, Compression, Floating Point, Extended Instructions, Encryption, Sorting, and Single Thread testing)
– I configured all sixteen servers to run all tests in “long test” mode with thirty iterations. The test ran the VM CPU to “near” 100% for two hours.
2 – 7-Zip benchmark tool.
– Under the Tools menu in 7-Zip is a tool called Benchmark. This tool aggressively maxes out the CPU and measures MIPS (millions of instructions per second) while compressing and decompressing data.
– The test runs until stopped. I ran the test across all sixteen servers for a minimum of two hours and recorded the difference in MIPS per OS and ESXi host.
VMware has a testing suite called VMmark, but it is mostly Linux-appliance based and difficult to configure. After spending a few hours, I was able to get it fully deployed, but it would never successfully complete a load test with valid results. The product even suggests sending the results to VMware for proper analysis.
I ran both tools over several hours across all four Windows OS and across all four vSphere ESXi patch versions.
An extremely summarized report of my findings:
First Application (CPU Mark):
– No clear pattern between OS and ESXi version or patch level that points to an easily identifiable performance impact for this OS.
Data numbers: (higher is better)
Key : OS – 6.0 No Patch – 6.0 Patched – 6.5 No Patch – 6.5 Patched
Windows 7 – 3763 – 3768 – 3761 – 3760
Windows 2008 – 7017 – 7026 – 7016 – 7011
Windows 2012 – 6987 – 6982 – 6984 – 6984
Windows 2016 – 7004 – 6997 – 6994 – 6990
*** Quick review for CPU Mark load testing:
The only test to show different results (out of the nine CPU tests in the single score) was Floating Point Math.
The other eight load tests show similar results across all ESXi hosts within the same Windows OS.
Results do show a slight decrease in performance between unpatched 6.0 and patched 6.5.
In some cases, the patched vSphere 6.0 version performed better than the unpatched 6.0 version.
In no case did patched 6.5 perform better than unpatched 6.5.
I would suggest roughly a 1% reduction in performance between current unpatched 6.0 and patched 6.5 for any application that relies heavily on floating point math operations.
Second Application (7-zip):
– No clear pattern between OS and ESXi version or patch level that points to an easily identifiable performance impact.
I ran two tests:
Test 1:
7-zip 32MB setting 1:1 Core to thread :
Host1 – vSphere 6.0 – 5050593 – No Patch
Host2 – vSphere 6.0 – 6921384 – Security Patch
Host3 – vSphere 6.5 – 5969303 – No Patch
Host4 – vSphere 6.5 – 7388607 – Security Patch
Win7-Host1 – compress = 7238 – decompress = 6182
Win7-Host2 – compress = 7325 – decompress = 6182
Win7-Host3 – compress = 7238 – decompress = 5780
Win7-Host4 – compress = 7130 – decompress = 6222
Win2008-Host1 – compress = 13971 – decompress = 12241
Win2008-Host2 – compress = 14092 – decompress = 12282
Win2008-Host3 – compress = 13930 – decompress = 12282
Win2008-Host4 – compress = 13731 – decompress = 12323
Win2012-Host1 – compress = 14071 – decompress = 12341
Win2012-Host2 – compress = 14111 – decompress = 12303
Win2012-Host3 – compress = 14029 – decompress = 12344
Win2012-Host4 – compress = 14196 – decompress = 12303
Win2016-Host1 – compress = 13906 – decompress = 12261
Win2016-Host2 – compress = 13865 – decompress = 12303
Win2016-Host3 – compress = 13708 – decompress = 12223
Win2016-Host4 – compress = 13906 – decompress = 12144
Test 2:
7-Zip 192MB setting 1:1 Core to thread : 10 passes minimum
Win7-Host1 – Total score = 189% – 3477/6561 MIPS
Win7-Host2 – Total score = 189% – 3493/6569 MIPS
Win7-Host3 – Total score = 187% – 3444/6432 MIPS
Win7-Host4 – Total score = 189% – 3438/6467 MIPS
Win2008-Host1 – Total score = 370% – 3436/12653 MIPS
Win2008-Host2 – Total score = 370% – 3452/12712 MIPS
Win2008-Host3 – Total score = 371% – 3453/12733 MIPS
Win2008-Host4 – Total score = 370% – 3437/12644 MIPS
Win2012-Host1 – Total score = 375% – 3504/13058 MIPS
Win2012-Host2 – Total score = 373% – 3457/12821 MIPS
Win2012-Host3 – Total score = 370% – 3477/12818 MIPS
Win2012-Host4 – Total score = 377% – 3483/13060 MIPS
Win2016-Host1 – Total score = 372% – 3473/12828 MIPS
Win2016-Host2 – Total score = 371% – 3441/12674 MIPS
Win2016-Host3 – Total score = 371% – 3423/12638 MIPS
Win2016-Host4 – Total score = 369% – 3444/12654 MIPS
Load test summary:
No patterns found compared to the data from the first load gen tool (CPU Mark).
For Windows 7, the 6.0 patched version performed better.
For Windows 2008, the 6.5 unpatched version performed better, but not by a lot.
For Windows 2012, the 6.5 patched version performed better.
For Windows 2016, the 6.0 unpatched version performed better.
Overall notes and my thoughts:
I was unable to find a performance impact from the vSphere patches and versions alone, based only on MS Windows OS benchmark testing.
I would like to configure VMMark and find better results. Maybe for another day.
Windows 2008 and 2012 will not get a patch (for now)
Older versions of MSSQL will not get patched.
I think every application and Operating System will be impacted in different ways.
We have no way to know what Application specific impacts will exist because of different configurations per application instance.
We have no way to know what OS-level impacts will exist because we cannot calculate all possible application configurations across all versions and patch levels of MS Windows.
If an increase in CPU usage is identified, the amount of additional electricity used per system, plus the electricity used to cool the datacenter, will be large (globally).
With a special thanks to @VirtualDominic and my local VMUG PDX leaders and friends, I was able to play around with the new Intel i7 NUC.
Link to the Intel NUC website if interested —> here
This new i7 NUC is powered by an i7-6770HQ, which translates to a quad-core, eight-thread processor at 2.6GHz base and 3.5GHz turbo.
Since this is part of the 6th-gen Intel processors, it pairs with up to 64GB of DDR4-2133 memory and the Intel Iris Pro 580 GPU, which can push 4K @ 60fps over DisplayPort.
The NUC has room for two M.2 SSDs (SATA3 or NVMe). This sounds like an amazing chance to test an all-NVMe-flash vSAN.
This mini PC uses under 20 watts at idle and up to 80 watts at full load. The 80-watt high-water mark includes the GPU, and since we only care about CPU cycles for our virtual workloads, we should float around 30-40 watts for normal test lab usage.
We now have the opportunity to use two physical servers and a third as a witness VM for our three-node vSAN. This means we can float eight cores, sixteen threads, and 64-128GB of RAM across two i7 NUCs and have all of it running at around 50-80 watts for both nodes. A high-performance test lab running on less power than an old incandescent light bulb. Though some day that analogy will change as LED bulbs take over our homes. So, let’s say, a high-performance test lab running on less than ten LED bulbs….. that your kids left on….all day….
Now to the Lego build. For this build, we selected a tall tower. Since the cooling profile of these i7 NUCs is to suck in heat on the side and blast it out the back, I had to make a building that was able to pull cool air up and then freely throw it out the back. Since some reviews show the NUC running at 110F (42C) at the heat sink exhaust, I needed to add more cooling to prevent total Lego meltdown.
I started with a solid base and built in a 120mm case fan that can run off of a 5v USB connector.
Next was the addition of the side wall and support columns:
Time to add in the fancy front columns, some flair on the wall, and ensure that all of the NUC ports remain accessible.
Here is a snap of the completed tower with the power button and USB ports showing in the front.
And here is the final build. A fully built tower with a throne for our Megaman Hero.
Time to top it off with a little VMware branding.
If you have any questions about the build or NUCs, please send me a message on twitter @vmnick0
MSI has announced they will be releasing a new mini PC called the “MSI Cubi 2 Plus” and the “MSI Cubi 2 Plus vPro.” While the case is a bit unappealing, it’s the hardware that makes it amazing. In the same form factor as the Intel NUC, MSI is able to pack in an i7-6700T and a full 32GB of DDR4 RAM. Then, to make it more exciting, they intend to allow for CPU swaps by using a small “ZIF” CPU socket. This speaks loudly to me, simply because anything that is considered modular means better access to the internal parts and a more dynamic CPU cooler.
Here are some quick links you can find via a google search about the MSI Cubi 2:
https://eu.msi.com/news/emm/?List=125&N=4104
http://www.guru3d.com/news-story/msi-cubi-2-plus-is-out.html
http://www.techpowerup.com/220293/msi-announces-cubi-2-plus-and-cubi-2-plus-vpro-mini-pcs.html
In terms of performance, here are the simple CPU specs comparing the MSI Cubi 2 i7-6700T and the Intel NUC i7-5557U. I understand that one is 6th gen and the other is 5th gen, but I don’t see a 6th-gen Intel NUC with an i7 CPU announced yet….
http://ark.intel.com/compare/88200,84993
I’m thinking this build will require an all flash VSAN configuration and a special Lego build where the MSI parts can exist outside of the mangled stock case.
More to come!
Just a quick heads up for anyone using vDSwitches. I’ve run into two issues and would like to share them with those I’ve spoken with.
#1 – “load based” load balance/teaming policy.
#2 – vDS health check and physical switch Mac Address Table issues.
Here is the KB about the current bug fixes in a patch release (which will also be rolled up into 6.0 update 1).
http://kb.vmware.com/kb/2124725
Here is the text from the KB:
“When using load balancing based on physical NIC load on VDS 6.0, if one of the uplinks is disconnected or shut down, failover is not initiated.”
— This means that if you have a vDS with the “load based” teaming policy set and a 6.x ESXi host, and you remove a network adapter/uplink (or that link fails), the VMs will not fail over or start using the other uplinks. This can and will cause an outage. The simple fix is to set the vDS teaming policy to the default “Route based on originating virtual port” or something other than physical NIC load.
Just to clarify, this is not a vSphere 6 vDS issue, this is a host level – ESXi v6.0 issue. This can still happen if you have a 5.5 vDS and your host is running ESXi v6.
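If you would rather flip the teaming policy with PowerCLI instead of the web client, a hedged sketch like this should do it. The switch and port group names are placeholders, so adjust them for your environment:
#Move a port group from "Route based on physical NIC load" back to the default
Get-VDSwitch -Name "dvSwitch01" | Get-VDPortgroup -Name "dvpg-VM-Network" |
    Get-VDUplinkTeamingPolicy |
    Set-VDUplinkTeamingPolicy -LoadBalancingPolicy LoadBalanceSrcId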
Issue #2
Here is the KB describing the issue.
http://kb.vmware.com/kb/2034795
The issue occurs when you enable vDS health check and your vDS is large enough to overflow the MAC address tables on your upstream physical network devices. This can, and will, cause an outage on the network. I have not tested or reviewed every switch’s MAC address table limits, but anyone can reproduce this with enough effort.
So, how is this happening under the covers? When you enable vDS health check, it creates additional virtual MAC addresses for each physical network adapter attached to the vDS. It then sends out “packets” on all uplinks, on all hosts, on all VLANs, and on all port groups for that vDS. The text from the KB:
“There is some scaling limitation to the network health check. The distributed switch network health check generates one MAC address for each uplink on a distributed switch for each VLAN multiplied by the number of hosts in the distributed switch to be added to the upstream physical switch MAC table. For example, for a DVS having 2 uplinks, with 35 VLANs across 60 hosts, the calculation is 2 * 35 * 60 = 4200 MAC table entries on the upstream physical switch.”
So, let’s scale that out further. If you have a 64-host cluster, each host with four uplinks attached to a single vDS with 40 port groups, that’s 64 x 4 x 40 = 10,240 MAC address entries slammed into your switch MAC address table.
This might not be an issue for small businesses with small host and NIC counts but that really depends on the switch and router types they are using.
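If you want a rough estimate of the blast radius for your own vDS before enabling health check, here is a hedged PowerCLI sketch of the KB math. The switch name is a placeholder and it assumes simple VLAN-tagged port groups:
#Uplinks per host x VLANs x hosts = MAC entries pushed to the upstream switches
$vds     = Get-VDSwitch -Name "dvSwitch01"
$hosts   = (Get-VMHost -DistributedSwitch $vds).Count
$uplinks = $vds.NumUplinkPorts
$vlans   = (Get-VDPortgroup -VDSwitch $vds | ForEach-Object { $_.VlanConfiguration.VlanId } | Sort-Object -Unique).Count
"$uplinks uplinks x $vlans VLANs x $hosts hosts = $($uplinks * $vlans * $hosts) MAC table entries"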
If you have any questions please reach out to me on twitter @vmnick0
Twitter has exploded with this new VMware fling.
https://labs.vmware.com/flings/esxi-embedded-host-client
Here is a quick snip from @lamw about it – http://www.virtuallyghetto.com/2015/08/new-html5-embedded-host-client-for-esxi.html
Normally I would throw this in the home lab, play with it for a few minutes, brag to some friends, then get bored and let it absorb into long term brain matter.
But, this is really cool. Cool enough to log into the ole’ blog site and type some words about it!
Steps to awesomeness:
1 – Download the VIB @ http://download3.vmware.com/software/vmw-tools/esxui/esxui-2976804.vib
2 – SCP or upload it to a datastore on the host you want to test with.
3 – Install the VIB – “esxcli software vib install -v esxui-2976804.vib”
– you may need to throw in a “cp esxui-2976804.vib /var/log/VMware” or use the full path in the above command. Whichever is your favorite flavor. (A PowerCLI alternative is sketched after step 6.)
4 – Then navigate to the new client – https://hostname/ui (example : https://esx01.vmnick0.me/ui)
5 – Log in with your host user/pass – either root or AD if you have AD integration working with your hosts.
6 – Magic…..
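If you would rather push the VIB from PowerCLI instead of SSH (steps 2 and 3 above), here is a hedged sketch that reuses the same Get-EsxCli pattern from my earlier VIB post. The host name and datastore path are placeholders, and the VIB still needs to be uploaded somewhere the host can read:
#Remote VIB install via PowerCLI
$q = get-esxcli -vmhost "esx01.vmnick0.me"
$q.software.vib.install($null,$null,$null,$null,$null,$true,$null,$null,"/vmfs/volumes/datastore1/esxui-2976804.vib")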
Some cool things to note:
– It has performance graphs… that load instantly…
– You can review almost all things related to that host on a single page or, at most, one click away from that information.
– The simple potential for this GUI is amazing. I hope this continues to be developed.
Below is a snap of the Host management page:
Below is a snap of the VM management page:
The awesome part is the working console which you can view inside the same HTML5 window or have it break out into a new window.
It has very little in terms of storage eye candy, but it looks like it’s coming.
I was sad to see VSAN Health Monitoring not available when I installed the VIB on my VSAN cluster/host. Maybe I need to check my configs again.
It has some other very useful items and some with an “under construction” logo. Let’s hope this keeps going!
You really need to throw it on a host to check it out to see its full colors and potential.
If I were to ask for one feature to prioritize, it would be more host stats and graphs. The ability to see current network and Storage IO/Latency would be amazing.
Current assumptions:
– You followed the Part 1 and Part 2 walkthrough.
– – Part 1 > http://vmnick0.me/?p=7
– – Part 2 > http://vmnick0.me/?p=25
– You have all of your NUCs powered on, fully booted into the ESXi hypervisor and configured with a valid IP address. Throw out a few Pings just to make sure.
– You have your (flash based) vCenter Web Client launched and ready.
Hopefully you see a screen like this: (if not, click the house icon at the top)
Let’s start by loading up our inventory.
Click the “Host and Clusters” icon and then {right click} on the vCenter object that loads in the left menu. Select “New Datacenter,” wait for the popup to load, give it a name like Rambo or GIJoe and click OK. When your new “Rambo” datacenter is created you should see it pop into the left window. {Right click} on your new datacenter object and select “New Cluster.” When the popup loads, enter a name like CaddyShack or ShopSMart, check the box marked “Turn On” for DRS, HA, and Virtual SAN and click OK. As you click the check boxes, lots of settings will load in. We shouldn’t need to change these for a fully functional vSAN cluster so the defaults will work for now.
Now it’s time to add our ESXi hosts to the cluster object. If you do not see the cluster you created above, click on the little arrow to the left of your Rambo datacenter object. {Right click} on your CaddyShack cluster object and select “Add Host.” When the wizard starts, enter the IP address (or fully qualified DNS name) of your first NUC (the one you created your vSAN datastore on) and click Next. Enter the user and password you configured (the default username is root and the password is the one you created during the install) and click Next. You may get prompted about a cert error; click Yes to move on and Next, Next, Next, Next, Next, Finish. Give it a minute to install some bits on the host and load it into your cluster object. If you don’t see it load into the listing on the left, be sure to click the arrow to the left of your CaddyShack cluster object to show the host. If the host you just added is the one running your vCenter Server VM, you should see that VM listed under it. Let’s add the other two NUCs (hosts) to our vCenter. It’s the same steps as above but with the other IP addresses or DNS names. {Right click} on your CaddyShack cluster, select “Add Host,” enter the IP or DNS name, and Next, type the username (root by default) and password, then Next, click Yes, Next, Next, Next, Finish. Then again for your third NUC (host.)
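If you prefer PowerCLI over clicking through the web client, the same inventory can be stood up with something like this. It is a hedged sketch: the host IPs and passwords are placeholders, and the -VsanEnabled switch needs a reasonably recent PowerCLI build:
#Datacenter, cluster, and hosts in a few lines
Connect-VIServer -Server "10.10.10.2" -User "administrator@vsphere.local" -Password "YourPassword"
New-Datacenter -Location (Get-Folder -NoRecursion) -Name "Rambo"
New-Cluster -Location (Get-Datacenter -Name "Rambo") -Name "CaddyShack" -DrsEnabled -HAEnabled -VsanEnabled
"10.10.10.11","10.10.10.12","10.10.10.13" | ForEach-Object {
    Add-VMHost -Name $_ -Location (Get-Cluster -Name "CaddyShack") -User "root" -Password "YourHostPassword" -Force
}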
Here is our Inventory ready to go!
Now it’s time to create our “VSAN” vSphere Distributed Switch and assign a VMkernel NIC from each host to the vDS. We will need this for the VSAN replication traffic. I’m going to go quick here, so you may want to copy and paste this part into notepad and break it up.
In your vSphere Web Client, click on the home (house icon) at the top and select “Networking” below. {Right click} your datacenter object (Rambo) and select “New Distributed Switch.” In the wizard, type a name for your Distributed Switch, (“vds-VSAN” for example) and Next, Next, Next, Finish. Once your Distributed Switch is created, {right click} on it and select “Add and Manage Hosts.” In the wizard, select the bullet for “Add host and manage host networking” and click Next. Click “New Hosts” next to the big green plus sign, click the checkbox next to all three of your ESXi hosts and click Next. Make sure that “Manage physical adapters” and “manage VMKernel adapters” are checked and click Next. Click on “vmnic1,” click “assign uplink” at the top, and click “ok” in the popup. You will need to do this for each of the three hosts in the list then click Next when done.
It should look something like this when all three hosts have vmnic1 assigned to the vDSwitch.
In the next window we will create and assign a VMkernel NIC interface for each host. Click on each host’s name and click the “New Adapter” button at the top. A new wizard will pop up and ask you for IP settings. At the first window click Browse next to “Select an existing distributed port group,” select your lonely vds-VSAN default port group, then click OK. When you are back to the main wizard click Next. Check the checkbox for “Virtual SAN Traffic” and click Next. Here, we need to enter our VMkernel IP information. If you are using DHCP you can leave it as-is and click Next, but I suggest clicking “Use static IPv4” and entering an IP address. I used IPs from the same class C as my management network and just plugged all of the NICs into the same VLAN/switch. All of the IP addresses you used for your host management and the following VMkernel interfaces will need to be unique. Enter your IP and subnet mask, then click Next and Finish to take you back to the main wizard. Do this same step for the remaining two hosts to create a VMkernel NIC for each one. Click host, click new adapter, browse, click, OK, Next, select the VSAN traffic checkbox and Next. Enter the IP address and click OK. Once back at the main wizard and all three hosts have a “vmk1” created, click Next, Next, Finish.
Example of the wizard window with a vmk1 interface on each host:
There is one last thing to do before we are done configuring our vSphere cluster. Let’s configure vMotion on our management network so VMs can float around as needed. Click on the house icon at the top to get the main menu and select Hosts and Clusters. Click on an ESXi host to the left and select the Manage tab at the top. Then select Networking, the “VMkernel adapters” submenu, and click on vmk0 (vmk zero – the management network). Click the pencil icon above it (edit settings), check the checkbox for “vMotion traffic” and click OK. Be sure to do this on all three of your hosts. Click host, Manage, Networking, VMkernel adapters, vmk0, edit settings, checkbox, OK.
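For reference, the same networking setup in PowerCLI would look roughly like this. It is a hedged sketch: the port group name and IP range are placeholders, and it assumes vmnic1 is the spare NIC on every host, just like the wizard above:
#Build the VSAN vDS, attach hosts/uplinks, and create the VSAN and vMotion kernel NICs
$vds = New-VDSwitch -Name "vds-VSAN" -Location (Get-Datacenter -Name "Rambo")
$pg  = New-VDPortgroup -VDSwitch $vds -Name "vds-VSAN-PG"
$i = 21
foreach($esx in (Get-Cluster -Name "CaddyShack" | Get-VMHost | Sort-Object Name)){
    Add-VDSwitchVMHost -VDSwitch $vds -VMHost $esx
    $nic = Get-VMHostNetworkAdapter -VMHost $esx -Physical -Name "vmnic1"
    Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $nic -Confirm:$false
    #New VSAN-enabled VMkernel NIC on the vDS port group
    New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vds -PortGroup $pg.Name -IP "10.10.10.$i" -SubnetMask "255.255.255.0" -VsanTrafficEnabled:$true
    #Enable vMotion on the existing management VMkernel NIC (vmk0)
    Get-VMHostNetworkAdapter -VMHost $esx -Name "vmk0" | Set-VMHostNetworkAdapter -VMotionEnabled:$true -Confirm:$false
    $i++
}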
You are now configured! You can select each host and see the “vsanDatastore” we created in part 2 http://vmnick0.me/?p=25 via CLI commands. As a test, to make sure everything is working, you can select the vCenter Appliance VM you created and “migrate/vmotion” it to another host in the cluster. To do this, {right click} on your vCenter VM and select Migrate. In the Wizard, click Next, check the checkbox for “Allow Host selection within cluster” at the bottom and click Next, select a host that your VM “is not” running on (second or third in the list maybe), and click Next, Next, Finish. You should see a task start on the right hand side. If it completes successfully then you are set! You can even migrate it to the third host just to make sure that host is happy too. If it fails, throw the error message into Google and you will find a ton of information on what happened and how to fix it.
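The same test migration from PowerCLI, if you prefer (the VM and host names are placeholders):
Move-VM -VM "YourVcenterVM" -Destination (Get-VMHost -Name "10.10.10.12")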
Congrats on your new VSAN cluster!
Here is a quick snip of the resources you can expect for each host in this cluster (keep in mind total available storage will depend on how you configure your vSAN cluster)
If you have any questions or run into any issues while running through this three part walkthrough, feel free to send me a message on Twitter.
My next write-up will be about the performance we can expect from our Intel NUC vSAN cluster. This will include some storage IO, network throughput, and overall latency metrics for our virtual machines.
Now that our hardware is assembled (assuming you opened all of your packages and placed the RAM, NIC, and drives into the NUC), let’s configure the hardware and start building our VSAN cluster!
Current assumptions:
– Your hardware is installed into the NUC.
– Your hardware is good and error free…. We all love electrostatic discharge!
– You downloaded the ESXi software and have it ready on a bootable USB drive or CD to boot your NUC with.
Let’s start part two by kicking off some downloads that may take a while. Download the latest vCenter Appliance from VMware. You can find it by going to http://www.vmware.com and clicking on “Downloads” then the VMware Virtual SAN “Download Product” link. While you are on the site, go ahead and download the vSphere client. The current one, as of this text, is “VMware vSphere Client 5.5 Update 2.” You will need this to connect to one of your hosts to deploy the vCenter Appliance. You will also want to check the Intel website for BIOS updates for your NUC. A quick Google search of “Intel NUC Driver Download” should get you to the page you want. I don’t know what version of NUC you purchased so I can only provide you with this: http://bit.ly/1y8GZMo. An updated BIOS is important because I did run into an issue where one of the three NUCs had an older BIOS that would not boot from a USB drive. A quick update fixed it. As of this text, the BIOS I have on my NUC is version 0033 for the D54250WYK.
Once everything is downloading, let’s jump back to the hardware and configure the BIOS.
BIOS settings and update:
Cable up your NUC; power, keyboard, mouse, monitor, network. Power it on and hit the F2 key to get into the BIOS. On the first page of the BIOS you will see the BIOS version. If you need to update it, power it down, copy the BIOS file you downloaded to a USB drive and power up your NUC. Once back into the BIOS click the wrench icon and select “Update BIOS.” If the NUC likes your drive, you will see a drive letter to the left side, click it and it will list your BIOS on the right. Select it and start the update process. It will reboot a few times and scare the crap out of you but it should be doing good stuff.
Once the BIOS version is happy, let’s check some important settings that we will need for VMware ESXi to perform at its highest potential. Click the Advanced button and start by setting your system time. Make sure the Memory Information section is correct and you are not missing RAM. Click the Devices tab and make sure all of your USB ports are enabled. Click through the sub-tabs under Devices. SATA should have AHCI mode selected and you should see your mSATA drive listed here. On the Video sub-tab, make sure the smallest “IGD Minimum Memory” value is selected (32MB). We don’t want to waste RAM on a video adapter we will never be using. On the Onboard Devices sub-tab, uncheck the boxes for Audio, Microphone, HDMI Audio, and Consumer IR. This should reduce the amount of junk ESXi needs to install and deal with. Now select the Security main tab and ensure that the “Virtualization Tech” and “VT-d” check boxes are checked. Next click the Power main tab and set Dynamic Power Technology to “Off.” You can leave this enabled if you like, but your VMs may see higher than normal CPU wait times as the CPU “powers up” from idle states. Lastly, click the Boot main tab and ensure that you can see your USB drive or USB CDROM in the list and that it’s selected as a boot device.
Hit F10 to save your new config and reboot the NUC.
Quick example of the BIOS screen:
ESXi install time!
Ensure your CD, Flash Drive, “Install media” is connected and let it boot to the ESXi installer. If your boot settings are playing nice, you can hit {F10} during the boot screen to get a boot menu. From there you can select your media of choice.
Here is the install; {enter}, {F11}, Select your USB Drive for the install and {enter}, Select your language and {enter}, type a password {tab} type the same password {enter}, {F11}. When the install is done you will get a dialog box for one more {enter} to reboot the PC. If you used a CDROM, it will auto eject the CD. If you used a flash drive for the install, be sure to remove it before the reboot to prevent it from going into the installer again.
Once the reboot is done and the NUC has fully booted into the hypervisor, it’s time to configure the IP address on the correct network adapter. Press {F2} to bring up the login prompt and enter root for the username and the password you created during the installer. Arrow down to “Configure Management Network” and press {enter}, {enter} again on Network Adapters, then make sure that “vmnic0” has an “X” next to it and press {enter}. *** The best way to know which adapter should be selected here is to connect only one adapter to your switch. That adapter will say “Connected” while the second adapter will show “Disconnected.” Now arrow down to “IP Configuration” and press {enter}. If you are connected to a router or network that offers a DHCP address then you may already have an IP address. If you want to configure a static IP, press the down arrow to highlight “Set static….” and press the {space} bar. Arrow down to the IP and type it, down arrow to the subnet mask, type it, then the gateway, and finish with {enter}. Be sure to press {enter} when you are done or it won’t save your settings. When you are back at the Configure Management Network page, press {esc}; it will ask if you want to apply changes, press {y} to accept. From here you can do the remaining host config with the vSphere client.
Do the same install steps until all of your NUCs have the Hypervisor installed and a different IP address configured. Then, install the vSphere client you downloaded earlier. The file will be named something close to “VMware-viclient-all-5.5.0-XXXXXXXX.exe.” Once installed, launch the client. You will get a prompt to enter the IP address of the NUC you are connecting to, username “root” and the password you used to install the hypervisor. Click Login and you will get a Certificate Warning dialog. You can check the box to install the cert to suppress the warning then click Ignore to continue. You will also get a popup saying you have sixty days to evaluate the software.
Time to configure our VSAN datastore so we can install the vCenter Appliance.
If you have an existing vCenter server and plan to use that for your VSAN config then you can skip a majority of what follows here. If the three NUCs are your first or only virtual setup, then follow along to configure the hosts so we can install vCenter.
Enable SSH on your host.
Connect to one of your NUC hosts using the vSphere client, (if not already connected) click on the “Configuration” Tab at the top, select “Security Profile” on the left side, then Properties at the top right under Security Profile. This should pop up a box where you can click on “SSH” and click the options button. Then another popup box where you can click the “start” button and “ok.” You can also select the button for “Start and stop with host” if you want SSH enabled all the time, even after reboots. If you do not make this setting, the next time you reboot, SSH will be disabled.
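If you already have PowerCLI installed, you can also do this by connecting straight to the host. A hedged sketch, with the host IP and password as placeholders:
Connect-VIServer -Server "10.10.10.11" -User "root" -Password "YourPassword"
$ssh = Get-VMHostService -VMHost "10.10.10.11" | Where-Object { $_.Key -eq "TSM-SSH" }
Start-VMHostService -HostService $ssh
Set-VMHostService -HostService $ssh -Policy "On"   #same as "Start and stop with host"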
Launch your favorite SSH program. Putty is most common and here is a download link : http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
If you are using PuTTY, enter the IP address of your ESXi host and click Open to start an SSH session. You may get a cert alert; click Yes to continue. Log into your host using the user root and the password you created.
*** Big Note here: I am 100% pulling the following CLI commands from William Lam -> @lamw to create the vsanDatastore. Here is the link to his blog post if you want the long version with screen shots and details. http://www.virtuallyghetto.com/2013/09/how-to-bootstrap-vcenter-server-onto_9.html
All credit for these commands go to Mr. Lam and his awesomeness for providing via his blog.
I will summarize the commands below:
Command time : (you can copy and paste most of these commands)
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
python -c 'import uuid; print str(uuid.uuid4());'
—- Copy and paste this into the next command — Mine was “4502fe3c-7b95-4386-8894-4367ef5a315f”
esxcli vsan cluster join -u 4502fe3c-7b95-4386-8894-4367ef5a315f
—- Be sure to change my UUID with yours in the above command ( I guess you could just use mine…)
esxcli storage core device list
—- Copy and paste these two lines into your next command.
—- You will need to save two things here, the Spinner HD name and the SSD name. Mine listed as follow:
—- HD (t10.ATA_____ST2000LM003_HN2DM201RAD__________________S32WJ9DF815599______)
—- SSD (t10.ATA_____Samsung_SSD_840_EVO_120GB_mSATA_________S1KTNSAF507039N_____)
esxcli vsan storage add -s t10.ATA_____Samsung_SSD_840_EVO_120GB_mSATA_________S1KTNSAF507039N_____ -d t10.ATA_____ST2000LM003_HN2DM201RAD__________________S32WJ9DF815599______
—- This single command will create your VSAN datastore so we can deploy the vCenter Appliance.
—- This is a single command so watch for the page break – esxcli vsan storage add -s SSDName -d HDDName
DONE! Go back to your vSphere client and you should see a new datastore named “vsanDatastore.”
Time to deploy the vCenter Appliance:
In the vSphere client, click on your ESXi host object on the left side of the window (disable maintenance mode if it’s enabled; right click the host and select “Exit Maintenance Mode” to do so). Now click on “File” at the very top of your client and select “Deploy OVF Template.” A wizard will launch. Click browse and select the vCenter Appliance you downloaded earlier. The name will look something like “VMware-vCenter-Server-Appliance-5.5.0.20200-2183109_OVF10.ova.” {Next}, {Next}, name your vCenter VM and {Next}, select the bullet for “Thin Provision” and {Next}, {Finish}. It will pop up a task window as it is copied to your ESXi host. When it reaches 100% and completes successfully, click {close}.
Time to Power on vCenter and configure it:
Back in your vSphere client, click on the plus sign “+” to the left of your ESXi host to show the new vCenter VM under it. Click the vCenter VM and click the Green Play button at the top of the client. You can also power it on by right clicking the vCenter VM object, selecting Power, and Power on. Now open a console to the VM. You can do this by clicking on your VM and then clicking the small monitor looking icon above it, OR you can right click your vCenter VM and select “Open Console.”
**** NOTE!! When working in the VM console, If you need to “unlock” your mouse from the console window, press {Control}+{Alt} (at the same time.)
Once in the console, you should see either a black booting screen or a blue “ready to configure” screen. When it is at the blue screen, click into the console window and press {enter} to login. The default login for a brand new vCenter Appliance is root+vmware. If entered correctly you should get a red prompt of “localhost:~ #”
Type the following command and press {enter} — (copy and paste will not work–sorry)
/opt/vmware/share/vami/vami_config_net
This will launch into a wizard to configure the IP of the vCenter Appliance.
Press {2}{enter} and type your Default Gateway then press enter when done. {10.10.10.1}{enter} as an example.
Press {6}{enter}. {enter} for no IPV6 address, {y}{enter} for IPv4 address, {n}{enter} for DHCP, Type your IP address {10.10.10.2}{enter} as an example, Type your Subnet mask {255.255.255.0}{enter} as an example. Then {y}{enter} to save your config and reload the network interface.
To finish, Press {1}{enter} to exit the wizard, type “exit” to log out of the vCenter CLI, then press {Control}+{Alt} (at the same time) to release your mouse from the console.
*** NOTE!: look at the console for your new vCenter Web Client URL. It will look like this : https://10.10.10.2:5480 You can now close the console window if you like.
Open a web browser of your favorite flavor and type the URL you saw above: https://10.10.10.2:5480. Since it’s HTTPS, you will get a cert error; click Accept, Continue, etc. until you get to the login page. Again, the default user/pass is root+vmware. Accept the EULA (after reading all of the words first….) and click Next. A wheel will spin for a bit, then click Next again. Select the “configure with default settings” bullet and click Next and Start. It will do a bunch of stuff at this point so take a break and come back…
Break’s done! If everything completed successfully then click Close. You should see the window populate with all of the “running” services. Make sure the “vSphere Web Client” is in the Running state.
From here, we need to configure two advanced settings before we move on.
1 – Click on the Admin tab at the top and create a new password for your Administrator account. Enter the old password of “vmware” and make something new for you. Optionally, you can disable the password-expires setting, but if this is a 60-day evaluation, the 90-day limit won’t matter. (Now, before you hit submit…)
2 – Click the bullet for “Yes” next to “Certificate Regeneration Enabled” – Now hit submit. If things are happy you should get a green “Operation was Successful” at the top.
Lets log into the vCenter Appliance vSphere Web Client:
Launch a web browser (that has Flash installed) and enter your vCenter URL, for example https://10.10.10.2. You will get the cert error; continue or accept as needed. Next, click the “Log in to the vSphere Web Client” link on the right side of the page. The URL will change to https://10.10.10.2:9443/vsphere-client/ and launch the Flash-based Web Client. Type the admin username “administrator@vsphere.local” and password. The default is vmware unless you changed it as listed above.
You now have a working vCenter server and we are ready to finish configuring VSAN!
The final installation (part 3) coming soon!
Again, Thank you Mr. Lam for your awesome blog : http://www.virtuallyghetto.com/2013/09/how-to-bootstrap-vcenter-server-onto_9.html