Just a quick post about vCenter 7.x vs vCenter 6.7.
There is a new permission requirement you will need to add to your users’ role if they are deploying OVAs. It surfaces in the ‘Select Storage’ step of an OVA deployment.
In vCenter 7.x, you will need to add the permission ‘Profile-driven storage > Profile-driven storage view’ to your users’ role.
Log into your vCenter with credentials that allow editing of user roles. Navigate to Administration and select ‘Access Control > Roles.’ Select your role from the menu and click the edit icon. Scroll down to Profile-driven storage and select the check box for ‘Profile-driven storage view.’ Click Next, then Finish.
Users impacted by the added permission will need to log out of and back into vCenter before the new permission takes effect.

It’s been a while since I needed to update a VIB because VUM does a great job with upgrades/updates. As with all my blog posts, you can trim the code down, but I’m leaving it verbose so folks can see the action lines and edit/add as needed.

*** BIG gotcha with the script below: if you would like to loop through all hosts in a cluster, you will need to upload the VIB to a datastore that all of the hosts can see/access. If you do not have shared storage, you can run the script serially (without the foreach loop) and edit the $vibpath variable for each host.

#Update all hosts!
foreach($a in (get-vmhost | sort name)){
    $vibpath = "/vmfs/volumes/datastore/YourNewVIB.vib"
    $q = get-esxcli -vmhost $a
    $dowork = $q.software.vib.install($null,$null,$null,$null,$null,$true,$null,$null,$vibpath)
}

## OR update a single esxi host
$vibpath = "/vmfs/volumes/datastore/YourNewVIB.vib"
$q = get-esxcli -vmhost "YourHostName"
$dowork = $q.software.vib.install($null,$null,$null,$null,$null,$true,$null,$null,$vibpath) 

$a (in the foreach loop) will dump the hostname and $dowork will dump the 'success' message for the VIB install.

As of vCenter 6.7u1, you cannot install vCenter without working DNS (forward and reverse lookup) unless you complete a scripted install. If you attempt to deploy vCenter 6.7u1 in a lab using an IP address, it will deploy the VM in stage one but then fail the install/config in stage two. When stage two fails, you will need to delete the VM to start the GUI/VAMI style install again.
VMware KB showing that DNS is required! – https://kb.vmware.com/s/article/57126

Example of a scripted install can be found here. (William Lam )
Additional information and examples here: (Seth Crosby)

While we can play with scripted installs all day, if we don’t already have working DNS in the home lab, we might as well get one deployed. There are tons of ways to complete this. I’ll cover what I did using a small Linux VM. Using SSH, Copy and Paste, you should be able to deploy the VM and configure DNS in under ten minutes. (Unless you are still running spinning media in your home lab…)

Start by downloading your flavor of Linux. I went with Ubuntu Server. You can download that here: https://www.ubuntu.com/download/server/

While the ISO is downloading, log into your ESXi host and create a VM with 1 vCPU, 2GB RAM and a 5GB (or larger, but thin..) hard drive. Be sure to select your flavor of Linux in the OS selection so the devices are correctly chosen for your OS. I selected Ubuntu 64-bit. Once the VM is created, select Edit Settings on the virtual machine, click the CD-ROM drop down and select your ISO. I always upload the ISO to my local datastore and then mount it via the datastore ISO file option within the VM settings. Be sure to select the connect check box before clicking save.

Boot your VM to the Linux ISO and install the OS using the minimum settings you feel are required. We will need network and internet connectivity to download updates and the BIND package, so be sure to configure an IP address that can reach the internet. Once the VM OS is installed and running, launch an SSH session to your VM. You can attempt the following configuration via the VM console, but you will be doing a lot of typing, which can result in errors in the BIND config.

Within your SSH session, run the following to update your OS and test internet connectivity:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade

When those complete successfully, pull down BIND.
sudo apt-get install bind9 bind9utils bind9-doc

Now we need to configure BIND for our lab DNS Server.
Here are some quick Assumptions:
– Your local network is 192.168.1.X / 24 (if yours is different, you must edit the files below for reverse lookup to work)
– The DNS server (the VM you are making now) IP address is
– The vCenter IP address is
– We will use an IP address instead of a “name” to install vCenter Server. You can edit this part of the script and use a name; I have better luck in my home lab when I use simple IP addresses.

Below I will be using Nano. Be sure to Control+X to exit and save after each file edit. If you want to use vi instead, be sure to :wq! to exit and save.
*** Edit the “vmnick0.local” parts below if you want to use your own local domain name.

sudo nano /etc/bind/named.conf
###Paste the following into the file:
include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.local";
include "/etc/bind/named.conf.default-zones";

sudo nano /etc/bind/named.conf.local
###Paste the following into the file:
zone "vmnick0.local" {
type master;
file "/etc/bind/for.vmnick0.local";
};
zone "1.168.192.in-addr.arpa" {
type master;
file "/etc/bind/rev.vmnick0.local";
};

sudo nano /etc/bind/for.vmnick0.local
###Paste the following into the file:
$TTL 86400
@ IN SOA pri.vmnick0.local. root.vmnick0.local. (
2011071001 ;Serial
3600 ;Refresh
1800 ;Retry
604800 ;Expire
86400 ;Minimum TTL
)
@ IN NS pri.vmnick0.local.
@ IN A
@ IN A
pri IN A IN A

sudo nano /etc/bind/rev.vmnick0.local
###Paste the following into the file:
$TTL 86400
@ IN SOA pri.vmnick0.local. root.vmnick0.local. (
2011071002 ;Serial
3600 ;Refresh
1800 ;Retry
604800 ;Expire
86400 ;Minimum TTL
)
@ IN NS pri.vmnick0.local.
@ IN PTR vmnick0.local.
pri IN A IN A
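A quick note on the Serial lines in both zone files: they follow the common YYYYMMDDnn convention (date plus a two-digit revision), and BIND uses them to decide whether a zone has changed. If you script your zone edits, a small sketch like this can generate one (the "01" revision suffix is just an example):

```shell
# Build a zone serial from today's date plus a two-digit revision.
# Remember to bump the serial every time you edit a zone file,
# or BIND will keep serving the old copy.
serial="$(date +%Y%m%d)01"
echo "$serial"
```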

Now we need to validate and update the permissions of the files we created:
sudo chmod -R 755 /etc/bind
sudo chown -R bind:bind /etc/bind
sudo named-checkconf /etc/bind/named.conf
sudo named-checkconf /etc/bind/named.conf.local

Next, let’s check whether our config files were created correctly. Run the following BIND commands; if each exits with an “OK” then you are good to move forward. If you have errors, be sure to correct them before moving on.

sudo named-checkzone vmnick0.local /etc/bind/for.vmnick0.local
sudo named-checkzone 1.168.192.in-addr.arpa /etc/bind/rev.vmnick0.local

Lastly, we need to restart the BIND service so everything starts working:
sudo systemctl restart bind9
*** The above service restart command may depend on the flavor of Linux you used.

Now you can test from another server/desktop on your local network to see if DNS is working. You will need to update your IP configs to point to this new server for DNS. If you want to go even further with a DNS server that can reach the internet and service DNS for other devices on your local network, you can check out this blog for detailed config that includes creating a primary and secondary DNS server.


For me, I just power on the DNS server when I need to use vCenter, then power them both off. Requiring working DNS before vCenter can power on/boot is a bit silly, but at least we have a quick workaround.

You have a few vCenter servers connected via “linked mode” or within the same SSO domain.
If you are on vSphere 6.5, you now have two GUI clients for everyday VM administration.
Now, let’s say you want to direct users to a single URL that redirects them to a random vCenter in your SSO domain; you can even force users to the HTML5 client or the Flash client.

Well, doing this is easy with a simple 500MB Photon OS VM and a small docker NGINX container.
Once you work through these steps, you can have a single URL, like “http://vcenter.vmnick0.me” and it will redirect you to one of your many vCenters and to the client of choice.
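Under the hood the trick is nothing more than “pick one URL at random from a list.” A rough shell equivalent of what the NGINX config later in this post does (the vCenter hostnames are just examples):

```shell
# Four candidate redirect targets (example hostnames).
urls="https://vcenter-prod.vmnick0.me/ui/
https://vcenter-dr.vmnick0.me/ui/
https://vcenter-test.vmnick0.me/ui/
https://vcenter-dev.vmnick0.me/ui/"

# Pick one at random, the way the redirect will for each visitor.
pick=$(printf '%s\n' "$urls" | shuf -n 1)
echo "$pick"
```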

Quick disclaimer:
– I do not own or maintain any of the downloads, images, or commands you will use below.  You are free to accept and download software as you see fit and assume all responsibility and risk… if any risk exists.

What you will need:
– A single VM, with a single IP address that can reach the internet (to download updates and the needed docker image.)
– A list of URLs you want users to be “redirected” to.

1 – Go to GitHub and download the Photon OS appliance “OVA”.
2 – Import that OVA to your vCenter or ESXi host using the client of choice.
— The VM will be about 500MB in size when complete so using thin provisioning might save you 15.5GB on this OVA install.
3 – Once imported,  Power on the VM and launch a console.
4 – On the console you will need to complete three things.
–  a. Change the root password.
–  b. Enable SSH and restart the SSH daemon.
–  c. Configure the IP address of the VM and restart the network service

Steps are as follow:
Log into the VM via Console as root/changeme.  You will be prompted to change the root password.  You can do that now or leave it as is….

Enable SSH so you can log in as root:
– Run command “vi /etc/ssh/sshd_config” and find the line “#PermitRootLogin prohibit-password” and edit it so it says “PermitRootLogin yes” (remove the comment too!)

— It’s vi so remember to press “i” to insert and “escape :wq” to save and quit.
– Run command “systemctl restart sshd” to start the SSH daemon.

Configure the IP address of the VM and restart the service:
– Edit the IP, Gateway and DNS IP addresses below, then run this multi-line command:

cat > /etc/systemd/network/10-static-en.network << "EOF"


cd /etc/systemd/network
chmod 644 10-static-en.network
rm 99-dhcp-en.network
systemctl restart systemd-networkd

— Some notes about the above commands
— You created a file named 10-static-en.network and put some ip config into it.
— You then went to the folder where the file was created, changed the permissions of the file, removed the DHCP config file, then restarted the service.
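For reference, here is a sketch of what a typical 10-static-en.network file looks like, written to a temp directory so it’s safe to run anywhere. The interface name and addresses are placeholders you would swap for your own:

```shell
# Write a sample systemd-networkd static config to a scratch dir.
dir=$(mktemp -d)
cat > "$dir/10-static-en.network" << "EOF"
[Match]
Name=eth0

[Network]
Address=192.168.1.50/24
Gateway=192.168.1.1
DNS=192.168.1.1
EOF

# Same permissions step as above.
chmod 644 "$dir/10-static-en.network"
cat "$dir/10-static-en.network"
```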

From here, your VM should be reachable from your desktop.  Try to ping or simply SSH to the VM and log in.   If you can’t SSH, recheck your steps above.  The only thing preventing you from SSHing to your VM now is something on your desktop or your network blocking the connection.

Now we have a clean Photon appliance deployed.  Let’s run some updates, get Docker running, then start building our Docker image.
5 – Run the following commands.  These commands will update your OS and packages, install Git, then reboot the appliance.  The reboot is important because some updates will make Docker act weird when started.
– tdnf update -y
– tdnf install -y git
(only when the above is complete – reboot the VM)
– reboot

Once the update is complete and your VM is back online (the reboot should only take a few seconds), SSH back into your VM and let’s prep things for Docker and our NGINX image.
6 – Run these commands to open up the IP Tables firewall for port 80 and 443.
echo 'iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT' >> /etc/systemd/scripts/iptables
echo 'iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT' >> /etc/systemd/scripts/iptables
systemctl restart iptables
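If you want to see exactly what those two echo lines do before touching the real firewall script, here is the same append against a scratch file instead of /etc/systemd/scripts/iptables:

```shell
# Append the two ACCEPT rules to a scratch copy of the iptables
# script, then show what was added.
f=$(mktemp)
echo 'iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT' >> "$f"
echo 'iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT' >> "$f"
grep -- '--dport' "$f"
```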

7 – Run these commands to enable and start docker, get our nginx “openresty” image, build the image, then run our first Container!
systemctl enable docker
systemctl start docker
git clone https://github.com/openresty/docker-openresty.git
cd docker-openresty
docker build -t myopenresty -f trusty/Dockerfile .

docker run --name nginx -p 80:80 -it myopenresty /bin/bash

** The “docker build” command can take two to ten minutes to complete.  I suggest pasting that command by itself, waiting for it to complete, refill your coffee, then running the docker run command after the build process is 100% complete.  The build command will pull down all of the needed images and build your docker container image.  This is where having a working internet connection for your Photon VM is most important.  If you are running this install inside a corporate network, you may be blocked from some of the download sites in the image.  Watch the SSH session for any download issues.  You will know it is complete and successful if you are back at your Photon root prompt and see “Successfully built ############”.
You can also run the “docker image ls” command to see if the image is in your local repo.

8 – You should now have a running docker container on your Photon OS appliance and be inside the shell for that Container.  Your prompt will change from Photon root to the HASH of your container image.

From here, you have one last thing to do, configure your nginx server nginx.conf file, restart the service, and test!
– You should be at the console of your docker container within your SSH session.  Edit the code below to reflect your URLs, then paste this long multi-line block into your SSH session; it deletes the old config files, creates your nginx.conf file, and restarts NGINX.  Notice below that I specified “/ui/” in my vCenter URLs.  If you are running vSphere 6.5, this will force your users to the HTML5 client.  If you want to force users to the Flash client…. {pause for thought here…}  then add “/vsphere-client/” to the end of your vCenter URLs instead of the /ui/.  Or simply leave the tail off to send them to the page where they can select their GUI flavor.

mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.confCOPY
cp /usr/local/openresty/nginx/conf/nginx.conf /usr/local/openresty/nginx/conf/nginx.confCOPY
cd /usr/local/openresty/nginx/conf/
rm nginx.conf
cat > nginx.conf << "EOF"
events {
worker_connections 1024;
}
http {
server {
set_random $rnd 1 4;
listen 80;
location / {
if ($rnd = 1){
return 301 https://vcenter-prod.vmnick0.me/ui/;
}
if ($rnd = 2){
return 301 https://vcenter-dr.vmnick0.me/ui/;
}
if ($rnd = 3){
return 301 https://vcenter-test.vmnick0.me/ui/;
}
if ($rnd = 4){
return 301 https://vcenter-dev.vmnick0.me/ui/;
}
}
}
}
EOF

nginx -s reload

*** If the “nginx -s reload” command fails with some text about “error….PID…failed”, nginx might not be running, so a reload isn’t going to work.   You can test this by simply starting nginx with the command “nginx”.

Now you can test your setup.   Go to a browser and type in the IP address (or DNS name/FQDN) of the VM you just created.  If everything is working correctly you should get redirected to one of the URLs you entered in the code above.  If you didn’t edit the config file, you might have trouble getting to my vCenter servers.

**BIG NOTE!   You have two options to exit now:
1 – Within your SSH session, press “Control+P+Q” to exit without killing your docker container.  Doing this should drop you back to your Photon root prompt.
2 – Kill the session by closing the window.  Do not type exit or press Control+C.
If you accidentally typed exit and killed your container, you will need to start the container, open another interactive session with it, start nginx, then exit gracefully. The commands will differ for you because your container ID will be different from mine.  Edit the below as needed:
docker container list -a
docker container start 1b359d921f60
docker container exec -it 1b359d921f60 /bin/bash
Control+P+Q     (Pressing this key combo is what created the “read escape sequence” text in the below screen shot.  That is not a command you can type.)

Some things to consider:
1 – Some browsers will cache the Parent URL and may not be “redirected” to a different URL after the first.
2 – Using more than one browser or different machines should result in other users/browsers receiving a different redirected URL.
3 – This can be used for any URLs or even IP addresses!  If you have some very long URLs, you can use this to auto-complete them for you.  NGINX is so flexible you can redirect based on a trailing folder (/folder/) in the URL using location blocks.  You can even redirect based on server port; ports 8080 and 8081, for example, could send you to different URLs based on the listen line.   Example: listen 8080 sends you to google.com and listen 8081 sends you to bing.com!
4 – If you have any questions, find me on twitter @vmnick0
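Item 3’s port-based idea would look something like this as an NGINX config fragment; the listen ports and targets here are examples, not part of the setup above:

```nginx
http {
    server {
        listen 8080;
        return 301 https://www.google.com;   # port 8080 -> google
    }
    server {
        listen 8081;
        return 301 https://www.bing.com;     # port 8081 -> bing
    }
}
```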

To start:
The best websites to review Meltdown and Spectre:

Intel Announcement:

Cisco Announcement:

Edit:12Jan2018 — added some more research URLs:


This leads to a conversation about patches to the OS:
MS Windows KB:


VMware KB:

(VMware URL Updates – 20March2018)

CVE-2017-5753, CVE-2017-5715, CVE-2017-5754

VMware Performance Impact for CVE-2017-5753, CVE-2017-5715, CVE-2017-5754


Red Hat KB:
https://access.redhat.com/security/vulnerabilities/speculativeexecution (click on Resolve tab)


The patch leads to a conversation about “CPU Performance” impact.
Linux based testing, (Kaiser patch):

Red Hat specific testing:

Microsoft SQL Server specific:

Cloud impact:

*** interesting note here, the article talks about a network impact.

Video Game Company that uses cloud reports CPU impact:

VMware Performance impact:

General hardware Testing from Blog Sites:  (Be sure to check out all four pages.)


Actual Performance KBs or Blogs from:


Assess performance impact of Spectre & Meltdown patches using vRealize Operations Manager


Some scripting from VMware blogs to assist with validation:



Just to be 100% clear here,
As of this blog post (08Jan2018) not all research has been completed by all Application, OS, Hardware, and Software companies.  We are still learning from Hardware vendors (Dell, HP, Cisco, Intel, Apple, etc…) that we need microcode updates, and firmware updates.  I’m sure Cell phone companies will push carrier updates and Antivirus companies will patch to identify bad behavior.
Once we get a full vetting of our Hardware, then a vetting of our hypervisor stack, we can start to work on every OS type and patch level, and then the application patches.   All it takes is that single rogue machine to thwart all of this patching and testing.

With all of this said (and knowing that not ALL patches are released for the VMware vSphere suite as of today), I wanted to do some simple testing to see what the current patch + build differences would do to some simple Windows OS builds.   We hear a ton of hype on the internet about massive 30% CPU impacts, and while that may be true for some, I needed to make sure that wasn’t a global, easily reproduced impact.

I did complete some quick lab testing:
I used four Cisco B200M4 blades with the same E5-2699 @ 2.2Ghz CPU to test.
Host1 – vSphere 6.0 – 5050593 – No Patch
Host2 – vSphere 6.0 – 6921384 – Security Patch
Host3 – vSphere 6.5 – 5969303 – No Patch
Host4 – vSphere 6.5 – 7388607 – Security Patch

I deployed four VMs to each host – sixteen VMs in total.

Windows 7 – 2 vCPU – 6GB MEM
Windows 2008 – 4 vCPU – 16GB MEM
Windows 2012 – 4 vCPU – 16GB MEM
Windows 2016 – 4 vCPU – 16GB MEM

I used DRS pinning rules to keep each VM set pinned to its host.
I used two “quick” testing tools that can run under all four Windows OSs.

1 – CPU Mark from the PassMark suite.
– This tool runs a suite of tests (Integer Math, Compression, Floating Point, Extended Instructions, Encryption, Sorting, and Single Thread testing).
– I configured all sixteen servers to run all tests in “long test” mode with thirty iterations. The test ran the VM CPU at “near” 100% for two hours.
2 – 7-Zip benchmark tool.
– Under the Tools menu in 7-Zip is a tool called Benchmark. This tool aggressively maxes out the CPU and measures MIPS (Millions of Instructions Per Second) while attempting to compress and decompress data.
– The test runs until stopped. I ran the test across all sixteen servers for a minimum of two hours and recorded the difference in MIPS per OS and ESXi host.

VMware has a testing suite called VMmark, but it is mostly Linux-appliance based and difficult to configure. After spending a few hours, I was able to get it to fully deploy, but it would never successfully complete a load test with valid results. The product even suggests sending the results to VMware for proper analysis.

I ran both tools over several hours across all four Windows OS and across all four vSphere ESXi patch versions.
An extremely summarized report of my findings:

First Application (CPU Mark):
– No known patterns between OS and ESXi version or patch level that show an easily identifiable performance impact for this OS.

Data numbers: (higher is better)
Key : OS – 6.0 No Patch – 6.0 Patched – 6.5 No Patch – 6.5 Patched

Windows 7 – 3763 – 3768 – 3761 – 3760
Windows 2008 – 7017 – 7026 – 7016 – 7011
Windows 2012 – 6987 – 6982 – 6984 – 6984
Windows 2016 – 7004 – 6997 – 6994 – 6990

*** Quick review for CPU Mark load testing:
The only test to show different results (out of the nine CPU tests in the single score) was Floating Point Math.
The other eight load tests show similar results across all ESXi hosts within the same Windows OS.
Results do show a slight decrease in performance between unpatched 6.0 and patched 6.5.
In some cases, the patched vSphere 6.0 performed better than the unpatched 6.0.
In no case did the patched 6.5 perform better than the unpatched 6.5.
I would suggest about a 1% reduction in performance between current unpatched 6.0 and patched 6.5 for any application that leans on “Floating Point Math” operations.
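For a sense of scale: the composite-score spreads in the table above are tiny (the ~1% estimate comes from the Floating Point subtest, not the composite). Taking the Windows 7 row as an example, unpatched 6.0 (3763) vs patched 6.5 (3760) works out to well under one percent:

```shell
# Percent drop in composite CPU Mark score, Windows 7 row:
# 6.0 unpatched (3763) vs 6.5 patched (3760).
awk 'BEGIN { printf "%.2f\n", (3763 - 3760) / 3763 * 100 }'
# prints 0.08
```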

Second Application (7-zip):
– No known patterns between OS and ESXi version or patch level that show an easily identifiable performance impact.

I ran two tests:
Test 1:
7-zip 32MB setting 1:1 Core to thread :
Host1 – vSphere 6.0 – 5050593 – No Patch
Host2 – vSphere 6.0 – 6921384 – Security Patch
Host3 – vSphere 6.5 – 5969303 – No Patch
Host4 – vSphere 6.5 – 7388607 – Security Patch

Win7-Host1 – compress = 7238 – decompress = 6182
Win7-Host2 – compress = 7325 – decompress = 6182
Win7-Host3 – compress = 7238 – decompress = 5780
Win7-Host4 – compress = 7130 – decompress = 6222

Win2008-Host1 – compress = 13971 – decompress = 12241
Win2008-Host2 – compress = 14092 – decompress = 12282
Win2008-Host3 – compress = 13930 – decompress = 12282
Win2008-Host4 – compress = 13731 – decompress = 12323

Win2012-Host1 – compress = 14071 – decompress = 12341
Win2012-Host2 – compress = 14111 – decompress = 12303
Win2012-Host3 – compress = 14029 – decompress = 12344
Win2012-Host4 – compress = 14196 – decompress = 12303

Win2016-Host1 – compress = 13906 – decompress = 12261
Win2016-Host2 – compress = 13865 – decompress = 12303
Win2016-Host3 – compress = 13708 – decompress = 12223
Win2016-Host4 – compress = 13906 – decompress = 12144

Test 2:
7-Zip 192MB setting 1:1 Core to thread : 10 passes minimum
Win7-Host1 – Total score = 189% – 3477/6561 MIPS
Win7-Host2 – Total score = 189% – 3493/6569 MIPS
Win7-Host3 – Total score = 187% – 3444/6432 MIPS
Win7-Host4 – Total score = 189% – 3438/6467 MIPS

Win2008-Host1 – Total score = 370% – 3436/12653 MIPS
Win2008-Host2 – Total score = 370% – 3452/12712 MIPS
Win2008-Host3 – Total score = 371% – 3453/12733 MIPS
Win2008-Host4 – Total score = 370% – 3437/12644 MIPS

Win2012-Host1 – Total score = 375% – 3504/13058 MIPS
Win2012-Host2 – Total score = 373% – 3457/12821 MIPS
Win2012-Host3 – Total score = 370% – 3477/12818 MIPS
Win2012-Host4 – Total score = 377% – 3483/13060 MIPS

Win2016-Host1 – Total score = 372% – 3473/12828 MIPS
Win2016-Host2 – Total score = 371% – 3441/12674 MIPS
Win2016-Host3 – Total score = 371% – 3423/12638 MIPS
Win2016-Host4 – Total score = 369% – 3444/12654 MIPS

Load test summary:
No patterns found compared to the data from the first load gen tool (CPU Mark)
For Windows 7, the patched 6.0 performed better.
For Windows 2008, the unpatched 6.5 performed better, but not by a lot.
For Windows 2012, the patched 6.5 performed better.
For Windows 2016, the unpatched 6.0 performed better.

Overall notes and my thoughts:
I was unable to find a performance impact just from patches and version of vSphere based on only using MS Windows OS benchmark testing.
I would like to configure VMMark and find better results. Maybe for another day.

Windows 2008 and 2012 will not get a patch (for now)
Older versions of MSSQL will not get patched.

I think every application and Operating System will be impacted in different ways.
We have no way to know what Application specific impacts will exist because of different configurations per application instance.
We have no way to know what OS-level impacts will exist because we cannot calculate all possible application configurations across all versions and patch levels of MS Windows.
If an increase in CPU usage is identified, the amount of electricity used per system, plus the electricity used to cool the datacenter, will be large (globally).

The folks on the VMware API team along with some Community Support ( @butch7903 – Russell Hamker)
have created an amazing backup script for your vCenter Appliance. Give it a once over.

Some quick notes:
– Only works on vSphere 6.5 and higher.
– It requires PowerCLI version 6.5.3 or greater.
– It needs the Powershell PSFTP or WinSCP Powershell module if you want to use FTP or WinSCP to copy the backup from vCenter to a storage location.
* (Optional) – I deployed a Photon OS VM and I use that as my SCP target to save my backups.

If you want to skip all of the command-line GUI in the 772-line file, these are the only lines you need to complete the backup.
Import-Module -Name VMware.VimAutomation.core
Import-Module -Name WinSCP
connect-cisserver "vcenter01" -username "administrator@vsphere.local" -pass "myPass"
$BackupAPI = Get-CisService com.vmware.appliance.recovery.backup.job
$CreateSpec = $BackupAPI.Help.create.piece.Create()
$CreateSpec.parts = @("common","seat")
$CreateSpec.backup_password = ""
$CreateSpec.location_type = "SCP"
$CreateSpec.location = ""
$CreateSpec.location_user = "root" #username of your SCP location
$CreateSpec.location_password = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "backup location username", (ConvertTo-SecureString -String "backup location password" -AsPlainText -Force)
$CreateSpec.comment = $Comment
$BackupJob = $BackupAPI.create($CreateSpec)

Then you can use the following command to check on its status:
$BackupJob | select id, progress, state

Over the Holidays, I get a little free time to get back into the entertainment side of computers and electronics. Last week, I noticed some websites starting to load slower than normal. I jumped into the firewall logs and noticed the advertisements were doing massive DNS queries. Some websites were making 50-200 connections every second in an attempt to load as much click through and advertisement as possible. I took all of the DNS names from my firewall log, ran my quick DNS Lookup Powershell Script to lookup all of the possible IP addresses hidden behind that DNS name, then blocked all of it in my firewall. These few websites forced me to block a few Class C subnets and forty-five single IPs. After this firewall config update, pages loaded instantly again. This is a lot of work to do every few months to keep up with the changing advertisement landscape.
I know a lot of pay-for appliances exist for the enterprise to block advertisements and “bad sites,” but I never gave much thought to researching the Raspberry Pi offerings. I needed something quick, automatic, and very low cost + low power.

My bro @EuroBrew told me about this software for the Raspberry Pi called “Pi-Hole” — Website here https://pi-hole.net/
After installing it, I am impressed. The web-based admin GUI is great. You can edit, add, or remove items from your black+white lists on the fly. It even has fancy graphs to show you how many items were blocked. Pi-hole had been running for two minutes, and with a few seconds of Facebook traffic from the wife, it had already started to show some big numbers. Summary: You don’t know what you’re accessing until you see how much is blocked.

The Dashboard:


Even though you are accessing a single website, it’s amazing how many other sites are accessed without you even noticing. Again, within that two minutes of activity, thinking that facebook would be the top site accessed, here are the top URLs.


Now, let me help you get this installed in your home lab!

Here are my quick steps to get this going:
1 – Download the updated version of Raspbian — I recommend the lite version https://www.raspberrypi.org/downloads/raspbian/
2 – Use their website if you need assistance getting a clean Raspberry Pi setup.
— Use an imaging tool to image your SD card.
— Boot up your RaspPi with the new image and configure the Timezone + Keyboard.
—- It is important to make sure your “|” key works…. the UK version of the keyboard has issues, so change it to US.
—- You can do this with the “sudo raspi-config” command + menu system.
3 – Now configure your RaspPi with a static IP. The Pi-hole installer will help you with that, but it’s way faster to do it now.
— Edit the /etc/dhcpcd.conf file with “sudo nano /etc/dhcpcd.conf” and change the static settings for your eth0 interface.
4 – Now you can run the pi-hole installer with “curl -L https://install.pi-hole.net | bash”
— The capital L and the vertical line (pipe) are a must for that command.
— This will fail if DNS isn’t working so check your network settings with an ifconfig if you are having issues.
5 – When the installer is complete, it will give you a Password to write down….. Don’t forget to do this!
— You can also change the password after the installer is finished with the command “pihole -a -p mynewpassword123”
6- From a browser, go to your new Pi-Hole install via http://x.x.x.x/admin and configure it.
— Click Login on the left window.
—- From here you have access to Settings where you can correct any missed settings, or change your Temperature Units from Celsius to Fahrenheit.
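The static-IP stanza from step 3 looks like the sketch below. It is appended to a temp file here so it can be tried safely; the interface name and addresses are placeholders to swap for your own network:

```shell
# Sample static-IP block that step 3 adds to /etc/dhcpcd.conf,
# appended to a scratch file instead of the real config.
f=$(mktemp)
cat >> "$f" << "EOF"
interface eth0
static ip_address=192.168.1.53/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
EOF
grep '^static' "$f"
```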

Now, update your PC, Wireless, router settings to use your RaspPi as your DNS server and you are done.
You can go into great detail to add additional blocks so hit the google search box for that.

With a special thanks to @VirtualDominic and my local VMUG PDX leaders and friends, I was able to play around with the new Intel i7 NUC.
Link to the Intel NUC website if interested —>  here
This new i7 NUC packs an i7-6770HQ, which translates to a quad-core, eight-thread 2.6GHz (3.5GHz turbo) processor.
Since this is part of the 6th-gen Intel processors, it pairs with up to 64GB of DDR4-2133 memory and the Intel Iris Pro 580 GPU, which can push 4K @ 60fps over DisplayPort.
The NUC has room for two M.2 SSDs (SATA3 or NVMe).   This sounds like an amazing chance to test an all-NVMe flash vSAN.

This Mini PC uses under 20 watts at idle and up to 80 watts at full load.  The 80-watt high-water mark includes use of the GPU, and since we only care about CPU cycles for our virtual workloads, we should float around 30-40 watts for normal test lab usage.

We now have the opportunity to use two physical servers and a third as a witness VM for our three-node vSAN.   This means we can float eight cores, sixteen threads and 64-128GB of RAM across two i7 NUCs and have all of it running around 50-80 watts for both nodes.   A high-performance test lab running on less power than the old incandescent light bulb.   Though some day that analogy will change as LED bulbs take over our homes.   So, let’s say: a high-performance test lab running on less power than ten LED bulbs….. that your kids left on….all day….

Now to the Lego build.   For this build, we selected a tall tower.  Since these i7 NUCs pull in air on the side and blast it out the back, I had to make a building that could draw cool air up and then freely throw it out the back.   Since some reviews show the NUC running at 110F (42C) at the heat-sink exhaust, I needed to add more cooling to prevent total Lego meltdown.
I started with a solid base and built in a 120mm case fan that can run off of a 5v USB connector.


Next was the addition of the side wall and support columns:


Time to add in the fancy front columns, some flair on the wall, and ensuring that all of the NUC ports are available.


Here is a snap of the completed tower with the power button and USB ports showing in the front.


And here is the final build.  A fully built tower with a throne for our Megaman Hero.



Time to top it off with a little VMware branding.


If you have any questions about the build or NUCs, please send me a message on twitter @vmnick0

MSI has announced they will be releasing a new Mini PC called the “MSI Cubi 2 Plus” and the “MSI Cubi 2 Plus vPro.”  While the case is a bit unappealing, it’s the hardware that makes it amazing.   In the same form factor as the Intel NUC, MSI is able to pack in an i7-6700T and a full 32GB of DDR4 RAM.   Then, to make it more exciting, they intend to allow for CPU swaps via a small “ZIF” CPU socket.  This speaks loudly to me, simply because anything that is considered modular means better access to the internal parts and a more dynamic CPU cooler.

Here are some quick links you can find via a Google search about the MSI Cubi 2:


In terms of performance, here are the simple CPU specs for the MSI Cubi 2’s i7-6700T versus the Intel NUC’s i7-5557U.  I understand that one is 6th gen and the other is 5th gen, but I haven’t seen a 6th gen Intel NUC with an i7 CPU announced yet….


I’m thinking this build will require an all-flash vSAN configuration and a special Lego build where the MSI parts can live outside the mangled stock case.

More to come!

Just a quick heads up for anyone using vDSwitches.  I’ve run into two issues and I would like to share them with those I’ve spoken with.

#1 – “load based” load balance/teaming policy.
#2 – vDS health check and physical switch MAC address table issues.

Here is the KB about the current bug fixes in a patch release (which will also be rolled up into 6.0 update 1).
Here is the text from the KB:
“When using load balancing based on physical NIC load on VDS 6.0, if one of the uplinks is disconnected or shut down, failover is not initiated.”
— This means that if you have a vDS with the “load based” teaming policy set on your 6.x ESXi host, and you remove a network adapter/uplink (or that link fails), the VMs will not fail over or start using the other uplinks.   This can and will cause an outage.   The simple fix is to set the vDS teaming policy to the default “Route based on originating virtual port” or anything other than physical NIC load.
Just to clarify, this is not a vSphere 6 vDS issue; this is a host-level ESXi 6.0 issue.  It can still happen if you have a 5.5 vDS and your host is running ESXi 6.0.
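If you want to audit which distributed port groups are still set to physical NIC load (and flip them back to the default), a PowerCLI sketch like the one below can help.  This is a minimal sketch, assuming the VMware PowerCLI VDS cmdlets (Get-VDSwitch, Get-VDPortgroup, Get-VDUplinkTeamingPolicy, Set-VDUplinkTeamingPolicy) and an existing Connect-VIServer session; test in a lab first and adjust names for your environment.

```powershell
# Connect first: Connect-VIServer -Server your-vcenter
# Walk every port group on every vDS and report/fix the "load based" policy
foreach($pg in (Get-VDSwitch | Get-VDPortgroup)){
    $policy = $pg | Get-VDUplinkTeamingPolicy
    if($policy.LoadBalancingPolicy -eq "LoadBalanceLoadBased"){
        Write-Host "$($pg.Name) is using physical NIC load - setting to originating virtual port"
        # LoadBalanceSrcId = "Route based on originating virtual port" (the default)
        $policy | Set-VDUplinkTeamingPolicy -LoadBalancingPolicy LoadBalanceSrcId
    }
}
```

Like my other posts, this is left bloated on purpose so you can see the action lines and trim as needed.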


Issue #2
Here is the KB describing the issue.
The issue occurs when you enable the vDS health check and your vDS is large enough to overflow the MAC address tables upstream on your physical network devices.  This can, and will, cause an outage on the network.  I have not tested or reviewed every switch’s MAC address table limits, but anyone can reproduce this with enough effort.
So, how is this happening under the covers?   When you enable the vDS health check, it creates additional virtual MAC addresses for each physical network adapter attached to the vDS.   It then sends out “packets” on all uplinks, on all hosts, on all VLANs, and all port groups for that vDS.  The text from the KB:
“There is some scaling limitation to the network health check. The distributed switch network health check generates one MAC address for each uplink on a distributed switch for each VLAN multiplied by the number of hosts in the distributed switch to be added to the upstream physical switch MAC table. For example, for a DVS having 2 uplinks, with 35 VLANs across 60 hosts, the calculation is 2 * 35 * 60 = 4200 MAC table entries on the upstream physical switch.”

So, let’s scale that out further.  Say you have a 64-host cluster, each host has four uplinks attached to the vDS, and that vDS carries 40 port groups: 64 x 4 x 40 = 10,240 MAC address entries just slammed into your switch’s MAC address table.
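The KB’s formula (uplinks per host x VLANs x hosts) makes this easy to sanity-check before you enable health check.  Here is a quick PowerShell sketch of that math; the counts are the examples from above, so plug in your own environment’s numbers (and note I’m using port group count as a stand-in for VLAN count, as in my 64-host example).

```powershell
# MAC table entries generated by vDS health check, per the KB formula:
# uplinks per host * VLANs * hosts
function Get-HealthCheckMacCount {
    param([int]$Uplinks, [int]$Vlans, [int]$Hosts)
    return $Uplinks * $Vlans * $Hosts
}

Get-HealthCheckMacCount -Uplinks 2 -Vlans 35 -Hosts 60   # KB example: 4200
Get-HealthCheckMacCount -Uplinks 4 -Vlans 40 -Hosts 64   # 64-host example: 10240
```

Compare the result against the MAC address table capacity of your upstream switches before flipping the health check on.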
This might not be an issue for small businesses with small host and NIC counts, but that really depends on the switch and router models they are using.

If you have any questions, please reach out to me on twitter @vmnick0.me