Dec 22 2016

Over the holidays, I got a little free time to get back into the entertainment side of computers and electronics. Last week, I noticed some websites starting to load slower than normal. I jumped into the firewall logs and noticed the advertisements were generating massive numbers of DNS queries. Some websites were making 50-200 connections every second in an attempt to load as much click-through and advertising content as possible. I took all of the DNS names from my firewall log, ran my quick DNS lookup PowerShell script to look up all of the possible IP addresses hidden behind each DNS name, then blocked all of it in my firewall. These few websites forced me to block a few Class C subnets and forty-five single IPs. After this firewall config update, pages loaded instantly again. That is a lot of work to repeat every few months to keep up with the changing advertisement landscape.
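In case it helps, here is a rough Python sketch of what my PowerShell lookup script does (the hostname list and function name here are just placeholders, not the actual script): resolve each DNS name to every IP address it advertises and print a deduplicated list you can feed into a firewall blocklist.

```python
import socket

def resolve_all(hostnames):
    """Collect every IPv4 address each hostname resolves to."""
    ips = set()
    for name in hostnames:
        try:
            # gethostbyname_ex returns (hostname, aliases, ip_list)
            _, _, addresses = socket.gethostbyname_ex(name)
            ips.update(addresses)
        except socket.gaierror:
            # Name did not resolve; skip it.
            pass
    return sorted(ips)

if __name__ == "__main__":
    # Replace with the DNS names pulled from your firewall logs.
    for ip in resolve_all(["localhost"]):
        print(ip)
```

Feed the output into whatever bulk-import format your firewall accepts.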
I know a lot of pay-for appliances exist in the enterprise space to block advertisements and “bad sites,” but I had never given much thought to researching the Raspberry Pi offerings. I needed something quick, automatic, and very low cost + low power.

My bro @EuroBrew told me about this software for the Raspberry Pi called “Pi-Hole” — Website here
After installing it, I am impressed. The web-based admin GUI is great. You can edit, add, or remove items on your black and white lists on the fly. It even has fancy graphs to show you how many items were blocked. Pi-Hole had been running for two minutes, and with a few seconds of Facebook traffic from the wife, it had already started to show some big numbers. Summary: you don’t know what you’re accessing until you see how much is blocked.

The Dashboard:


Even though you are accessing a single website, it’s amazing how many other sites are accessed without you even noticing. Again, within those two minutes of activity, and expecting Facebook to be the top site accessed, here are the top URLs.


Now, let me help you get this installed in your home lab!

Here are my quick steps to get this going:
1 – Download the updated version of Raspbian — I recommend the Lite version
2 – Use their website if you need assistance getting a clean Raspberry Pi setup.
— Use an image tool to image your SD card.
— Boot up your RaspPi with the new image and configure the timezone + keyboard.
—- It is important to make sure your “|” key works…. The UK version of the keyboard has issues, so change it to US.
—- You can do this with the "sudo raspi-config" command + menu system.
3 – Now configure your RaspPi with a static IP. The Pi-Hole installer will help you with that, but it’s way faster to do it now.
— Edit the /etc/dhcpcd.conf file with "sudo nano /etc/dhcpcd.conf" and change the static settings for your eth0 interface.
4 – Now you can run the Pi-Hole installer with "curl -L | bash"
— The capital L and vertical line are a must for that command.
— This will fail if DNS isn’t working, so check your network settings with an ifconfig if you are having issues.
5 – When the installer is complete, it will give you a password to write down….. Don’t forget to do this!
— You can also change the password after the install is finished with the command "pihole -a -p mynewpassword123"
6 – From a browser, go to your new Pi-Hole install via http://x.x.x.x/admin and configure it.
— Click Login on the left window.
—- From here you have access to Settings where you can correct any missed settings, or change your temperature units from Celsius to Fahrenheit.
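For step 3, a minimal static block in /etc/dhcpcd.conf looks something like this (the addresses below are examples — swap in values that fit your own network):

```conf
# Static IPv4 settings for the wired interface
interface eth0
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
```

Reboot (or restart the dhcpcd service) after saving so the new address takes effect.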

Now, update your PC, wireless, and router settings to use your RaspPi as your DNS server and you are done.
If you want to go into greater detail and add additional blocklists, hit the Google search box for that.

Nov 14 2016

With a special thanks to @VirtualDominic and my local VMUG PDX leaders and friends, I was able to play around with the new Intel i7 NUC.
Link to the Intel NUC website if interested —>  here
This new i7 NUC is powered by an i7-6770HQ, which translates to a quad-core, eight-thread processor running at 2.6GHz with a 3.5GHz turbo.
Since this is part of the 6th gen Intel processors, it pairs with up to 64GB of DDR4-2133 memory and the Intel Iris Pro 580 GPU, which can push 4K @ 60fps over DisplayPort.
The NUC has room for two M.2 SSDs (SATA3 or NVMe).   This sounds like an amazing chance to test an all-NVMe flash vSAN.

This Mini PC uses under 20 watts at idle and up to 80 watts at full load.  The 80 watt high-water mark considers the use of the GPU, and since we only care about CPU cycles for our virtual workloads, we should float around 30-40 watts for normal test lab usage.

We now have the opportunity to use two physical servers and a third as a witness VM for our three node vSAN.   This means we can float eight cores, sixteen threads, and 64-128GB of RAM across two i7 NUCs and have all of it running at around 50-80 watts for both nodes.   A high performance test lab running on less power than the old incandescent light bulb.   Though, some day that analogy will change as LED bulbs take over our homes.   So, let’s say, a high performance test lab running on less than ten LED bulbs….. that your kids left on….all day….
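For the curious, the yearly energy math is quick to sketch (the $0.12/kWh rate below is an assumed example, not a quote from anywhere):

```python
def annual_kwh(watts, hours_per_day=24):
    """Convert a steady power draw into kWh consumed per year."""
    return watts / 1000 * hours_per_day * 365

# Two NUC nodes floating around 60W total, running 24/7
lab_kwh = annual_kwh(60)
print(round(lab_kwh, 1))        # ~525.6 kWh per year
print(round(lab_kwh * 0.12, 2)) # rough cost at an assumed $0.12/kWh
```

That puts the whole two-node lab in the same ballpark as leaving one old incandescent bulb on all year.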

Now to the Lego build.   For this build, we selected a tall tower.  Since the cooling profile of these i7 NUCs is to suck in air on the side and blast heat out the back, I had to make a building that was able to pull cool air up and then freely throw it out the back.   Since some reviews show the NUC running at 110F (42C) at the heat sink exhaust, I needed to add more cooling to prevent total Lego meltdown.
I started with a solid base and built in a 120mm case fan that can run off of a 5v USB connector.


Next was the addition of the side wall and support columns:


Time to add in the fancy front columns, some flair on the wall, and ensuring that all of the NUC ports are available.


Here is a snap of the completed tower with the power button and USB ports showing in the front.


And here is the final build.  A fully built tower with a throne for our Megaman Hero.



Time to top it off with a little VMware branding.


If you have any questions about the build or NUCs, please send me a message on twitter @vmnick0

Feb 24 2016

MSI has announced they will be releasing a new Mini PC called the “MSI Cubi 2 Plus” and the “MSI Cubi 2 Plus vPro.”  While the case is a bit unappealing, it’s the hardware that makes it amazing.   In the same form factor as the Intel NUC, MSI is able to pack in an i7-6700T and a full 32GB of DDR4 RAM.   Then, to make it more exciting, they intend to allow for CPU swaps by using a small “ZIF” CPU socket.  This speaks loudly to me, simply because anything that is considered modular means better access to the internal parts and a more dynamic CPU cooler.

Here are some quick links you can find via a google search about the MSI Cubi 2:

In terms of performance, here are the simple CPU specs between the MSI Cubi 2 i7-6700T and the Intel NUC i7-5557U.  I understand that one is 6th gen and the other is 5th gen, but I don’t see a 6th gen Intel NUC with an i7 CPU announcement yet….

I’m thinking this build will require an all flash VSAN configuration and a special Lego build where the MSI parts can exist outside of the mangled stock case.

More to come!

Nov 04 2015

Just a quick heads up for anyone using vDSwitches.  I’ve run into two issues and I would like to share them with those I’ve spoken with.

#1 – “load based” load balance/teaming policy.
#2 – vDS health check and physical switch Mac Address Table issues.

Here is the KB about the current bug fixes in a patch release (which will also be rolled up into 6.0 update 1).
Here is the text from the KB:
“When using load balancing based on physical NIC load on VDS 6.0, if one of the uplinks is disconnected or shut down, failover is not initiated.”
— This means, if you have a vDS with the “load based” teaming policy set on your 6.x ESXi host, and you remove a network adapter/uplink (or that link fails), the VMs will not fail over or start using the other uplinks.   This can and will cause an outage.   The simple fix is to set the vDS teaming policy to the default “Route based on originating virtual port” or something other than physical NIC load.
Just to clarify, this is not a vSphere 6 vDS issue,  this is a host level – ESXi v6.0 issue.  This can still happen if you have a 5.5 vDS and your host is running ESXi v6.


Issue #2
Here is the KB describing the issue.
The issue occurs when you enable vDS health check and your vDS is large enough to overflow the MAC address tables upstream on your physical network devices.  This can, and will, cause an outage on the network.  I have not tested or reviewed every switch’s MAC address table limitations, but anyone can reproduce this with enough effort.
So, how is this happening under the covers?   When you enable vDS health check, it creates additional virtual MAC addresses for each physical network adapter attached to the vDS.   It then sends out “packets” on all uplinks, on all hosts, on all VLANs, and all port groups for that vDS.  The text from the KB:
“There is some scaling limitation to the network health check. The distributed switch network health check generates one MAC address for each uplink on a distributed switch for each VLAN multiplied by the number of hosts in the distributed switch to be added to the upstream physical switch MAC table. For example, for a DVS having 2 uplinks, with 35 VLANs across 60 hosts, the calculation is 2 * 35 * 60 = 4200 MAC table entries on the upstream physical switch.”

So, let’s scale that out further.  If you have a 64 host cluster, each host has four uplinks attached to the vDS, all on a single vDS with 40 port groups: 4 * 40 * 64 = 10,240 MAC address entries just slammed into your switch MAC address table.
This might not be an issue for small businesses with small host and NIC counts, but that really depends on the switch and router types they are using.
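The KB’s formula is simple enough to sanity-check in a few lines of Python (the two scenarios below are the ones from this post):

```python
def health_check_mac_entries(uplinks, vlans, hosts):
    """MAC table entries added upstream when vDS health check is on:
    one generated MAC per uplink, per VLAN, per host."""
    return uplinks * vlans * hosts

# KB example: 2 uplinks, 35 VLANs, 60 hosts
print(health_check_mac_entries(2, 35, 60))

# The larger scenario above: 4 uplinks, 40 port groups/VLANs, 64 hosts
print(health_check_mac_entries(4, 40, 64))
```

Compare the result against your switch’s documented MAC table capacity before flipping the health check on.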

If you have any questions please reach out to me on twitter



Aug 21 2015

When the 5.x vCenter Web Client was released, you could navigate to a VM and click a link to generate a VM console URL.  It looked something like this:

This 5.x console URL was made up of three data variables:
vCenter + VM MoRef + vCenter UUID — see below in the parentheses.
https://(your vcenter server):9443/vsphere-client/vmrc/vmrc.jsp?vm=urn:vmomi:VirtualMachine:vm-VirtualMachine-vm-(your VM’s MoRef number):(vCenter’s UUID)

Now this URL will not work for vCenter 6.x!
If attempted, you will see this in your browser:





The formatting was changed to this:

Let’s break down this URL:   (— vCenter again, with port)
/vsphere-client/webconsole.html?vmId=vm-VirtualMachine-vm-   (— string changed from vmrc to webconsole…)
409   (— VM MoRef ID)
&vmName=vm01   (— the name of the VM you want the console of)
&serverGuid=3a3sf62s-q41g-172d-aj91-a71251658v87   (— vCenter’s UUID)
&   (— the FQDN of your vCenter Platform Services Controller)
&sessionTicket=cst-VCT   (— and a final string)

This last string (&sessionTicket=cst-VCT) was something I had to play with.  Using just these characters allowed the browser to prompt for my username/password and then give me a VM console.

Here are the PowerCLI commands to find your VM MoRef ID and vCenter UUID.
— Launch a PowerCLI session and run these one-liners:
—- $global:DefaultVIServers.InstanceUuid  =  This will give you your vCenter UUID
—- ((Get-VM "vm01").Id).Split("-")[2]  =  This will get your VM MoRef ID
—- You can use this one too if you like Views….  (Get-View -ViewType VirtualMachine -Filter @{"Name" = "vm01"}).MoRef.Value
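To tie the pieces together, here is a hedged Python sketch that assembles a 6.x-style console URL from those variables. The host name, GUID, and parameter order are taken from the breakdown above; the PSC FQDN parameter was elided there, so it is left out here too — verify against your own environment before relying on it.

```python
def build_console_url(vcenter, moref, vm_name, server_guid):
    """Assemble a vSphere 6.x HTML5 VM console URL from its parts.

    The &sessionTicket=cst-VCT suffix is the minimal string that
    triggered a login prompt in my testing."""
    return (
        "https://{vc}:9443/vsphere-client/webconsole.html"
        "?vmId=vm-VirtualMachine-vm-{moref}"
        "&vmName={name}"
        "&serverGuid={guid}"
        "&sessionTicket=cst-VCT"
    ).format(vc=vcenter, moref=moref, name=vm_name, guid=server_guid)

# PowerCLI returns an Id like "VirtualMachine-vm-409"; splitting on
# "-" and taking index 2 yields the bare MoRef number.
vm_id = "VirtualMachine-vm-409"
moref = vm_id.split("-")[2]
print(build_console_url("vcenter.lab.local", moref, "vm01",
                        "3a3sf62s-q41g-172d-aj91-a71251658v87"))
```

Open the printed URL in a browser; it should prompt for credentials and then drop you into the VM console.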

Here are the 5.X and 6.X VM consoles… if interested in the differences.










Aug 12 2015

Twitter has exploded with this new VMware fling.
Here is a quick snip from @lamw about it –

Normally I would throw this in the home lab, play with it for a few minutes, brag to some friends, then get bored and let it absorb into long term brain matter.
But,  this is really cool.  Cool enough to log into the ole’ blog site and type some words about it!

Steps to awesomeness:
1 – Download the VIB @
2 – SCP or upload it to a datastore on the host you want to test with.
3 – Install the VIB – “esxcli software vib install -v esxui-2976804.vib”
– You may need to throw in a "cp esxui-2976804.vib /var/log/VMware" first, or use the full path in the above command. Whichever is your favorite flavor.
4 – Then navigate to the new client – https://hostname/ui (example :
5 – Log in with your host user/pass  – either root or AD if you have AD integration working with your hosts.
6 – Magic…..

Some cool things to note:
– It has performance graphs… that load instantly…
– You can review almost all things related to that host on a single page or, at most, one click away from that information.
– The simple potential for this GUI is amazing.  I hope this continues to be developed.

Below is a snap of the Host management page:












Below is a snap of the VM management page:
The awesome part is the working console which you can view inside the same HTML5 window or have it break out into a new window.















It has very little in terms of storage eye candy, but it looks like it’s coming.
I was sad to see VSAN health monitoring not available when I installed the VIB on my VSAN cluster/host.  Maybe I need to check my configs again.





It has some other very useful items and some with an “under construction” logo.  Let’s hope this keeps going!




You really need to throw it on a host and check it out to see its full colors and potential.
If I were to ask for one feature to prioritize, it would be more host stats and graphs.   The ability to see current network and storage IO/latency would be amazing.

Mar 10 2015

Two updates for this one.   This is mostly to cover the large number of questions I’ve received about the new Intel NUC 5th gen and whether it will support the VSAN setup I’ve detailed in previous posts.

First:  The Intel NUC 5th gen is coming/released, and it has all the same stats as the 4th gen but with an updated processor+GPU and another evil but performance-positive change.
Web Link :

The 5th gen NUC is still limited to 16GB of RAM even though they are releasing a new i7 version of the NUC.  Now here is the catch:  they replaced the half-length and full-length mPCIe slots with an M.2 slot.  This removes the ability to fit in the second wired network adapter and makes it harder to find or reuse your existing mSATA SSDs.  This M.2 slot does increase the “SATA” SSD speed from 3Gbps to 6Gbps if you are looking for more NUC IOPS/bandwidth.   So, if you do not care about the second wired NIC, the 5th gen NUC may be an upgrade to your SSD performance in your home lab.
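As a rough sanity check on that 3Gbps-to-6Gbps jump: SATA uses 8b/10b encoding, so usable throughput is about 80% of the line rate. This is a back-of-the-envelope sketch that ignores protocol overhead, not a benchmark:

```python
def sata_usable_mb_per_s(line_rate_gbps):
    """Approximate usable SATA throughput: with 8b/10b encoding,
    10 line bits carry 8 data bits, so MB/s = Gbps * 1000 / 10."""
    return line_rate_gbps * 1000 / 10

print(sata_usable_mb_per_s(3))  # SATA II  (mSATA on the 4th gen)
print(sata_usable_mb_per_s(6))  # SATA III (M.2 on the 5th gen)
```

So the M.2 slot roughly doubles the ceiling, from about 300 MB/s to about 600 MB/s of payload bandwidth.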

Here is a quick snip to show the difference in the mSATA and M.2 SSDs:


Second:  As small form factor compute gets popular in the home lab space, more fun tools and toys are released.
As an example, Lian-Li has created an Intel NUC replacement case.  It helps with cooling and makes it look a little fancier.
Here is a link to a review of the case :
I tried to find an actual “Buy Now” link or a spec page on the Lian-Li website, but I couldn’t find one.
Also, it looks like Intel had a “NUC Case Mod” competition.
So, if you are interested in more things “NUC,” give those URLs a quick once-over.

Find me on twitter if you have any questions or comments!

Feb 15 2015

Current assumptions:
– You followed the Part 1 and Part 2 walkthrough.
– – Part 1 >
– – Part 2 >
– You have all of your NUCs powered on, fully booted into the ESXi hypervisor and configured with a valid IP address. Throw out a few Pings just to make sure.
– You have your (flash based) vCenter Web Client launched and ready.
Hopefully you see a screen like this: (if not, click the house icon at the top)


Let’s start by loading up our inventory.

Click the “Hosts and Clusters” icon and then {right click} on the vCenter object that loads in the left menu. Select “New Datacenter,” wait for the popup to load, give it a name like Rambo or GIJoe, and click OK. When your new “Rambo” datacenter is created you should see it pop into the left window. {Right click} on your new datacenter object and select “New Cluster.” When the popup loads, enter a name like CaddyShack or ShopSMart, check the box marked “Turn On” for DRS, HA, and Virtual SAN, and click OK. As you click the check boxes, lots of settings will load in.  We shouldn’t need to change these for a fully functional vSAN cluster, so the defaults will work for now.

Now it’s time to add our ESXi hosts to the cluster object. If you do not see the cluster you created above, click on the little arrow to the left of your Rambo datacenter object. {Right click} on your CaddyShack cluster object and select “Add Host.” When the wizard starts, enter the IP address (or fully qualified DNS name) of your first NUC (the one you created your vSAN datastore on) and click Next. Enter the user and password you configured (the default username is root and the password is the one you created during the install), then click Next. You may get prompted about a cert error; click Yes to move on, then Next, Next, Next, Next, Next, Finish. Give it a minute to install some bits on the host and load it into your cluster object. If you don’t see it load into the listing on the left, be sure to click the arrow to the left of your CaddyShack cluster object to show the host. If you selected the host where you created your vCenter Server VM, you should see that VM listed under the host.  Let’s add the other two NUCs (hosts) to our vCenter. It’s the same steps as above but with the other IP addresses or DNS names. {Right click} on your CaddyShack cluster, select “Add Host,” enter the IP or DNS name, then Next, type the username (root by default) and password, then Next, click Yes, Next, Next, Next, Finish. Then do it again for your third NUC (host).

Here is our Inventory ready to go!

Now it’s time to create our “VSAN” vSphere Distributed Switch and assign some VMkernel NICs from each host to the vDS. We will need this for the VSAN replication traffic.  I’m going to go quick here, so you may want to copy and paste this part into Notepad and break it up.
In your vSphere Web Client, click on the home (house icon) at the top and select “Networking” below.  {Right click} your datacenter object (Rambo) and select “New Distributed Switch.” In the wizard, type a name for your distributed switch (“vds-VSAN” for example) and Next, Next, Next, Finish. Once your distributed switch is created, {right click} on it and select “Add and Manage Hosts.” In the wizard, select the bullet for “Add host and manage host networking” and click Next. Click “New Hosts” next to the big green plus sign, check the checkbox next to all three of your ESXi hosts, and click Next. Make sure that “Manage physical adapters” and “Manage VMkernel adapters” are checked and click Next. Click on “vmnic1,” click “Assign uplink” at the top, and click “OK” in the popup. You will need to do this for each of the three hosts in the list, then click Next when done.

It should look something like this when all three hosts have vmnic1 assigned to the vDSwitch.

In the next window we will create and assign VMkernel NIC interfaces for each host. Click on each host’s name and click the “New Adapter” button at the top. A new wizard will pop up and ask you for IP settings. At the first window, click Browse next to “Select an existing distributed port group” and select your lonely vds-VSAN default port group, then click OK. When you are back at the main wizard, click Next. Check the checkbox for “Virtual SAN Traffic” and click Next. Here, we need to enter our VMkernel IP information. If you are using DHCP you can leave it as-is and click Next, but I suggest clicking “Use static IPv4” and entering an IP address. I used IPs from the same Class C as my management network and just plugged all of the NICs into the same VLAN/switch. All of the IP addresses you used for your host management and the following VMkernel interfaces will need to be unique.  Enter your IP and subnet mask, then click Next and Finish to take you back to the main wizard. Do this same step for the remaining two hosts to create a VMkernel NIC for each one. Click host, click New Adapter, Browse, click, OK, Next, select the Virtual SAN Traffic checkbox and Next. Enter the IP address and click OK. Once back at the main wizard with a “vmk1” created on all three hosts, click Next, Next, Finish.

Example of the wizard window with a vmk1 interface on each host:

There is one last thing to do before we are done configuring our vSphere cluster. Let’s configure vMotion on our management network so VMs can float around as needed. Click on the house icon at the top to get the main menu and select Hosts and Clusters. Click on an ESXi host on the left and select the Manage tab at the top. Then select Networking, the “VMkernel adapters” submenu, and click on vmk0 (vmk-zero, the management network). Click the pencil icon above it (edit settings), check the checkbox for “vMotion traffic,” and click OK. Be sure to do this on all three of your hosts. Click host, Manage, Networking, VMkernel adapters, vmk0, edit settings, checkbox, OK.

You are now configured! You can select each host and see the “vsanDatastore” we created in part 2 via CLI commands. As a test, to make sure everything is working, you can select the vCenter Appliance VM you created and “migrate/vmotion” it to another host in the cluster. To do this, {right click} on your vCenter VM and select Migrate. In the Wizard, click Next, check the checkbox for “Allow Host selection within cluster” at the bottom and click Next, select a host that your VM “is not” running on (second or third in the list maybe), and click Next, Next, Finish. You should see a task start on the right hand side. If it completes successfully then you are set! You can even migrate it to the third host just to make sure that host is happy too. If it fails, throw the error message into Google and you will find a ton of information on what happened and how to fix it.

Congrats on your new VSAN cluster!
Here is a quick snip of the resources you can expect for each host in this cluster (keep in mind total available storage will depend on how you configure your vSAN cluster)
If you have any questions or run into any issues while running through this three part walkthrough, feel free to send me a message on Twitter.

My next write-up will be about the performance we can expect from our Intel NUC vSAN cluster.  This will include some storage IO, network throughput and over all latency metrics for our Virtual Machines.

Jan 11 2015

Now that you have opened all of your packages and placed the RAM, NIC, and drives into the NUC, let’s configure the hardware and start building our VSAN cluster!

Current assumptions:
– Your hardware is installed into the NUC.
– Your hardware is good and error free…. We all love electrostatic discharge!
– You downloaded the ESXi software and have it ready on a bootable USB drive or CD to boot your NUC with.

Let’s start part two by kicking off some downloads that may take a while. Download the latest vCenter Appliance from VMware.  You can find it by going to and clicking on “Downloads” then the VMware Virtual SAN “Download Product” link. While you are on the site, go ahead and download the vSphere Client. The current one, as of this text, is “VMware vSphere Client 5.5 Update 2.” You will need this to connect to one of your hosts to deploy the vCenter Appliance. You will also want to check the Intel website for BIOS updates for your NUC. A quick google of “Intel NUC Driver Download” should get you to the page you want. I don’t know what version of NUC you purchased, so I can only provide you with this: an updated BIOS is important because I ran into an issue where one of the three NUCs had an older BIOS that would not boot from a USB drive. A quick update fixed it. As of this text, the BIOS I have on my NUC is version 0033 for the D54250WYK.
Once everything is downloading, let’s jump back to the hardware and configure the BIOS.

BIOS settings and update:
Cable up your NUC: power, keyboard, mouse, monitor, network. Power it on and hit the F2 key to get into the BIOS. On the first page of the BIOS you will see the BIOS version. If you need to update it, power it down, copy the BIOS file you downloaded to a USB drive, and power up your NUC. Once back in the BIOS, click the wrench icon and select “Update BIOS.” If the NUC likes your drive, you will see a drive letter on the left side; click it and it will list your BIOS file on the right. Select it and start the update process. It will reboot a few times and scare the crap out of you, but it should be doing good stuff.
Once the BIOS version is happy, let’s check some important settings that we will need for VMware ESXi to perform at its highest potential. Click the Advanced button and start by setting your system time. Make sure the Memory Information section is correct and you are not missing RAM. Click the Devices tab and make sure all of your USB ports are enabled. Click through the sub-tabs under Devices. SATA should have AHCI mode selected, and you should see your mSATA drive listed here. On the Video sub-tab, make sure the least amount of “GD minimum memory” is selected (32MB). We don’t want to waste RAM on a video adapter we will never be using. On the Onboard Devices sub-tab, uncheck the boxes for Audio, Microphone, HDMI Audio, and Consumer IR. This should reduce the amount of junk ESXi needs to install and deal with. Now select the Security main tab and ensure that the “Virtualization Tech” and “VT-D” check boxes are checked. Next, click the Power main tab and set Dynamic Power Technology to “OFF.” You can leave this enabled if you like, but your VMs may see higher than normal CPU wait times as the CPU “powers up” from idle states. Lastly, click the Boot main tab and ensure that you can see your USB drive or USB CDROM in the list and that it’s selected as a boot device.
Hit F10 to save your new config and reboot the NUC.
Quick example of the BIOS screen:

ESXi install time!
Ensure your CD, flash drive, or other install media is connected and let the NUC boot into the ESXi installer. If your boot settings are playing nice, you can hit {F10} during the boot screen to get a boot menu. From there you can select your media of choice.

Here is the install: {enter}, {F11}, select your USB drive for the install and {enter}, select your language and {enter}, type a password {tab} type the same password {enter}, {F11}. When the install is done you will get a dialog box for one more {enter} to reboot the PC. If you used a CDROM, it will auto-eject the CD. If you used a flash drive for the install, be sure to remove it before the reboot to prevent booting into the installer again.

Once the reboot is done and the NUC has fully booted into the hypervisor, it’s time to configure the IP address on the correct network adapter. Press {F2} to bring up the login prompt and enter root for the username and the password you created during the installer. Arrow down to “Configure Management Network” and press {enter}, then {enter} again on Network Adapters, and make sure that “vmnic0” has an “X” next to it before pressing {enter}. *** The best way to know which adapter should be selected here is to only connect one adapter to your switch. That adapter will say “Connected” while the second adapter will show “Disconnected.” Now arrow down to “IP Configuration” and press {enter}. If you are connected to a router or network that offers a DHCP address then you may already have an IP address. If you want to configure a static IP, press the down arrow to highlight “Set Static….” and press the {space} bar. Arrow down to the IP and type it, down arrow to the subnet mask, type it, then the gateway, and finish with {enter}. Be sure to press {enter} when you are done or it won’t save your settings. When you are back at the Configure Management Network page, press {esc}; it will ask if you want to Apply Changes, press {y} to accept. From here you can do the remaining host config with the vSphere Client.
Don’t you love my MAC addresses?

Repeat the same install steps until all of your NUCs have the hypervisor installed and a different IP address configured. Then, install the vSphere Client you downloaded earlier. The file will be named something close to “VMware-viclient-all-5.5.0-XXXXXXXX.exe.” Once installed, launch the client. You will get a prompt to enter the IP address of the NUC you are connecting to, the username “root,” and the password you used to install the hypervisor. Click Login and you will get a certificate warning dialog. You can check the box to install the cert to suppress the warning, then click Ignore to continue. You will also get a popup saying you have sixty days to evaluate the software.

Time to configure our VSAN datastore so we can install the vCenter Appliance.
If you have an existing vCenter server and plan to use that for your VSAN config then you can skip a majority of what follows here. If the three NUCs are your first or only virtual setup, then follow along to configure the hosts so we can install vCenter.

Enable SSH on your host.
Connect to one of your NUC hosts using the vSphere Client (if not already connected), click on the “Configuration” tab at the top, select “Security Profile” on the left side, then Properties at the top right under Security Profile. This should pop up a box where you can click on “SSH” and then the Options button. Another popup box follows where you can click the “Start” button and “OK.” You can also select the option for “Start and stop with host” if you want SSH enabled all the time, even after reboots. If you do not change this setting, SSH will be disabled again the next time you reboot.

Launch your favorite SSH program. PuTTY is the most common, and here is a download link :
If you are using PuTTY, enter the IP address of your ESXi host and click Open to start an SSH session. You may get a cert alert; click Yes to continue. Log into your host using the user root and the password you created.

*** Big Note here: I am 100% pulling the following CLI commands from William Lam -> @lamw to create the vsanDatastore. Here is the link to his blog post if you want the long version with screen shots and details.
All credit for these commands goes to Mr. Lam and his awesomeness for providing them via his blog.
I will summarize the commands below:

Command time :  (you can copy and paste most of these commands)

esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"

python -c 'import uuid; print str(uuid.uuid4());'
—- Copy and paste the output into the next command — mine was "4502fe3c-7b95-4386-8894-4367ef5a315f"

esxcli vsan cluster join -u 4502fe3c-7b95-4386-8894-4367ef5a315f
—- Be sure to change my UUID with yours in the above command ( I guess you could just use mine…)

esxcli storage core device list
—- Copy and paste these two lines into your next command.
—- You will need to save two things here, the Spinner HD name and the SSD name. Mine listed as follow:
—- HD (t10.ATA_____ST2000LM003_HN2DM201RAD__________________S32WJ9DF815599______)
—- SSD (t10.ATA_____Samsung_SSD_840_EVO_120GB_mSATA_________S1KTNSAF507039N_____)

esxcli vsan storage add -s t10.ATA_____Samsung_SSD_840_EVO_120GB_mSATA_________S1KTNSAF507039N_____ -d t10.ATA_____ST2000LM003_HN2DM201RAD__________________S32WJ9DF815599______
—- This single command will create your VSAN datastore so we can deploy the vCenter Appliance.
—- This is a single command so watch for the page break – esxcli vsan storage add -s SSDName -d HDDName

DONE! Go back to your vSphere client and you should see a new datastore named “vsanDatastore.”

Time to deploy the vCenter Appliance:
In the vSphere client, click on your ESXi host object on the left side of the window (if Maintenance Mode is enabled, disable it by right clicking the host and selecting “Exit Maintenance Mode”). Now click “File” at the very top of your client and select “Deploy OVF Template.” A wizard will launch. Click Browse and select the vCenter Appliance you downloaded earlier; the name will look something like “VMware-vCenter-Server-Appliance-”. {Next}, {Next}, name your vCenter VM and {Next}, select the bullet for “Thin Provision” and {Next}, {Finish}. A task window will pop up as it is copied to your ESXi host. When it reaches 100% and completes successfully, click {Close}.

Time to Power on vCenter and configure it:
Back in your vSphere client, click on the plus sign “+” to the left of your ESXi host to show the new vCenter VM under it. Click the vCenter VM and click the Green Play button at the top of the client. You can also power it on by right clicking the vCenter VM object, selecting Power, and Power on. Now open a console to the VM. You can do this by clicking on your VM and then clicking the small monitor looking icon above it, OR you can right click your vCenter VM and select “Open Console.”

**** NOTE!! When working in the VM console, if you need to “unlock” your mouse from the console window, press {Control}+{Alt} (at the same time.)
Once in the console, you should see either a black booting screen or a blue “ready to configure” screen. When it is at the blue screen, click into the console window and press {enter} to login. The default login for a brand new vCenter Appliance is root+vmware. If entered correctly you should get a red prompt of “localhost:~ #”
Type the following command and press {enter}   — (copy and paste will not work–sorry)

This will launch into a wizard to configure the IP of the vCenter Appliance.
Press {2}{enter} and type your default gateway, then press enter when done ({your-gateway-IP}{enter} as an example).
Press {6}{enter}. {enter} for no IPv6 address, {y}{enter} for an IPv4 address, {n}{enter} for DHCP. Type your IP address ({your-IP-address}{enter} as an example), then type your subnet mask ({your-subnet-mask}{enter} as an example). Then {y}{enter} to save your config and reload the network interface.
To finish, Press {1}{enter} to exit the wizard, type “exit” to log out of the vCenter CLI, then press {Control}+{Alt} (at the same time) to release your mouse from the console.
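Typos in the wizard are easy to make and annoying to redo, so it can be worth sanity-checking your values before typing them in. This is a hypothetical pre-check using Python's standard-library `ipaddress` module, not part of the appliance; the address values are placeholders:

```python
# Sanity-check the IPv4 address, netmask, and gateway before typing
# them into the appliance network wizard. Values are placeholders.
import ipaddress

def check_config(ip, netmask, gateway):
    iface = ipaddress.IPv4Interface(f"{ip}/{netmask}")
    gw = ipaddress.IPv4Address(gateway)
    # the gateway should sit on the same subnet as the appliance
    return gw in iface.network

print(check_config("192.168.1.50", "255.255.255.0", "192.168.1.1"))  # True
print(check_config("192.168.1.50", "255.255.255.0", "10.0.0.1"))     # False
```

An invalid address or mask raises a `ValueError`, and a gateway off the appliance's subnet returns `False`, both of which would otherwise only show up later as an unreachable vCenter.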
*** NOTE!: look at the console for your new vCenter Web Client URL. It will look like this : You can now close the console window if you like.
Open a web browser of your favorite flavor and type the URL you saw above. Since it’s HTTPS, you will get a cert error; click Accept, Continue, etc. until you get to the login page. Again, the default user/pass is root+vmware. Accept the EULA (after reading all of the words first….) and click Next. A wheel will spin for a bit, then click Next again. Select the “configure with default settings” bullet, click Next, and then Start. It will do a bunch of stuff at this point so take a break and come back…
Break’s done! If everything completed successfully, click Close. You should see the window populate with all of the “running” services. Make sure the “vSphere Web Client” is in the Running state.

From here, we need to configure two advanced settings before we move on.
1 – Click on the Admin tab at the top and create a new password for your Administrator account. Enter the old password of “vmware” and make something new for yourself. Optionally, you can disable the password-expires bullet, but if this is a 60 day evaluation, the 90 day password limit won’t matter. (Now, before you hit Submit…)
2 – Click the bullet for “Yes” next to “Certificate Regeneration Enabled” – Now hit submit. If things are happy you should get a green “Operation was Successful” at the top.

Let’s log into the vCenter Appliance vSphere Web Client:
Launch a web browser (that has Flash installed) and enter your vCenter URL. You will get the cert error; continue or accept as needed. Next, click the “Log in to the vSphere Web Client” link on the right side of the page. The URL will change and launch the Flash-based Web Client. Type the admin username “administrator@vsphere.local” and password. The default is vmware unless you changed it as listed above.

You now have a working vCenter server and we are ready to finish configuring VSAN!
The final installation (part 3) coming soon!

Again, Thank you Mr. Lam for your awesome blog :

Jan 072015

Back in November, I started seeing the “virtual” blogs light up with virtual enthusiasts installing VMware ESXi on Mac mini hardware. This increased as Apple released their new Mac mini, even though the CPU core count was reduced. Apple also announced that the new Mac mini hardware (RAM and flash storage) would be soldered to the board, giving the user less flexibility. This pushed me to attempt the same idea but on Intel NUC hardware.
For those that are not familiar with the Intel NUC, it’s a small form factor PC that allows for user-selected RAM, drives, and even two mini PCIe slots.
Website here :
The important part is the ability to configure the mini PC any way you like and continue to change it even after purchase. You can configure the NUC with four, eight, or sixteen gigabytes of RAM, a full 2.5 inch drive (spinner or SSD), and two internal mini PCIe slots (half length and full length).

The idea for a mini VSAN cluster starts with a few requirements, mostly set by VMware.
– At least three physical PCs/Servers that contain the following:
1 – An SSD or flash device.
2 – A spinning drive.
3 – A drive to install ESXi separate from the above two drives.
4 – A Gigabit network adapter. Two is preferred and 10Gb makes it optimal.
5 – Enough CPU cores and RAM to run the ESXi software plus any Virtual Machines.
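With the default policy used later in this build (hostFailuresToTolerate=1, meaning every object is mirrored on a second host), raw capacity is roughly halved. A quick back-of-the-envelope estimate for a three-host cluster; this is a hypothetical helper that ignores metadata and slack-space overhead, so treat the result as an upper bound:

```python
# Rough usable-capacity estimate for a VSAN cluster with
# hostFailuresToTolerate (FTT) failures tolerated: FTT=1 means
# two copies of every object. Ignores metadata/slack overhead.
def usable_tb(hosts, capacity_tb_per_host, ftt=1):
    raw = hosts * capacity_tb_per_host
    return raw / (ftt + 1)  # FTT+1 copies of each object

# Three NUCs, one 2 TB spinner each (the SSD acts as cache, not capacity)
print(usable_tb(3, 2.0))  # 3.0
```

So the three 2 TB spinners in this build yield roughly 3 TB of usable mirrored capacity at most.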

If you need extra info on VMware VSAN , click this link here —>

The challenge:
How can I find three physical devices that use very little power, are stable enough to run a home lab without any unexpected downtime, and have all of the above requirements in a low-cost, easy-to-obtain package?

Let’s start with the hardware:
The Intel NUC i5 series (D54250WYKH) flavor provides two cores, four threads @ 1.3GHz with a 2.6GHz turbo. It has two SODIMM slots for up to 16GB of RAM, but it only accepts 1.35 Volt SODIMMs, so watch what you buy; 1.5 Volt SODIMMs will prevent the NUC from booting. The D54250WYKH has a taller case compared to the D54250WYK flavor, so you can slide in a 2.5 inch drive. I found a few 2TB spinner drives, but only the Samsung SpinPoint 2TB will fit into the NUC; the Western Digital 2TB 2.5 inch drive is too thick and you would need to bend metal to make it fit. If you plan to run the system 24/7, I would suggest a WD Red 1TB or an enterprise drive that is made to spin all day, every day. It must be a SATA drive though, so check your selection. The idea of the 4TB SanDisk SSD in this spot would be amazing. The NUC has a full-length and a half-length mini PCIe port. The full-length slot will take an mSATA SSD; the Samsung EVO mSATA SSDs tested great. Now the difficult part: where can I find a mini PCIe network adapter that will fit into a half-length mini PCIe slot? My SSD is hogging the full-length slot, and SSDs in the half-length form factor are very limited. I even researched the idea of a mini PCIe extension ribbon cable.
This is where the Syba Mini PCI-e Gigabit Ethernet Adapter (SD-MPE24031) comes in. This adapter, with a bit of tweaking, fits perfectly under the mSATA SSD. I modified the adapter so the Cat5 cable is wired directly to it. I lose out on an activity LED, but if you like flashy LEDs, you can watch them on your switch. The only thing left is to feed the cable outside of the NUC and your hardware is golden. You will also need a 16GB or 32GB flash drive to install your hypervisor to. I would suggest a good brand and a USB 3.0 version so the hypervisor bits do not corrupt and are accessed faster than USB 2.0 speed. 32GB or larger is suggested to handle a memory dump or provide room for logs, etc. I’ve tested with an 8GB thumb drive and it works fine, minus a few annoying messages about it being too small for persistent storage.

Quick hardware summary: (all of this times three for our VSAN cluster)
– Intel NUC D54250WYKH
– 8GB DDR3 SODIMM 1.35volt (two sticks of 8GB for 16GB)
– 2TB Samsung SpinPoint 2.5 inch spinner
– 120GB Samsung EVO msata SSD
– Syba half length mini PCIe 1Gb Network Adapter (SD-MPE24031)
– 16GB-32GB flash drive / thumb drive to install the Hypervisor to.
– Mini HDMI to HDMI cable, keyboard, mouse, monitor, tools, brain etc…..

Some Photos of the goods:

Now the software build:
Start by downloading your favorite flavor of VMware ESXi. I started with 5.5u2p3 build 2143827 because it was the current download at the time. Download ESXi Customizer. I used a copy from .

Next, download the drivers:
— Intel Network Driver :
— SATA Controller driver :
Lastly, you need the Realtek driver for your Syba mini PCIe NIC.
Now just unzip all of the above software, run the ESXi Customizer GUI, select your downloaded ISO and the VIB, and click RUN. Repeat as needed for each VIB, being sure to use the exported ISO as your new import ISO so it includes the previous VIB you injected. Once you have your ISO, you can burn it to an old fashioned CD or you can play with the many other methods: PXE, USB drive/flash drive…etc.
If you found a better or updated driver, please send me the link and I will update the page.
If you don’t trust any of the above software, you can download all of the software inside a Virtual Machine, create the custom ISO, then export it out of the VM and destroy the VM.

Now you are ready for the install:
Install ESXi to a thumb drive on each NUC, configure the IP Network settings, then load up your management GUI of choice and start playing.

Sneak Peek of LegoEVORack Version 1:   Cooling and the top of rack switch just need to be installed…..


Here is Version 2:   — Note the top of rack switch…

Quick shout out to a few websites that Google sent me to while I was learning how to do this.