Short how-to on installing NexentaStor 3.3.x or 4.0 on vSphere 5.x, including VMware Tools.
NexentaStor Community v4 is an OpenIndiana-based ZFS storage appliance. Out of the box, the free version is limited to 18 TB of raw disk storage.
This how-to supplements the installation instructions for creating a VMware vSphere guest.
This install uses an e1000 NIC for management traffic and a second vmxnet3 NIC, configured with jumbo frames, for storage traffic.
Ideally a second SAS controller should be passed through directly (DirectPath I/O) to the VM for your storage disks, but this requires VT-d support in the BIOS, which is not available on my Dell 2950 servers.
Preparation
- Download the NexentaStor v4 (beta) ISO image from Nexenta.
- Upload the ISO to your ISO (NFS) share in your vSphere environment.
Installation
- Create a new VM with OS type Solaris 10, 2 vCPUs, 4 GB RAM, and a 10 GB thick disk.
Use a single e1000 NIC.
Note: as of 3.1.3.5, select the PVSCSI disk controller prior to install.
- Attach the NexentaStor ISO to the CD drive, set to connect at power-on.
- Power on the VM and run the installer. You will get a device ID that must be entered into the registration page at http://www.nexenta.com/register-eval, which will email you the Community (free) license code.
- Set up networking with a static IP in the installer.
- Set the web interface to use HTTPS on port 2000, and set both the web and root passwords.
- Reboot and verify that web access and the passwords work.
VMware Tools
- Unmount the existing installer CD-ROM and set it to not connect at power-on.
- Reboot the VM. Once it is up, select "Install VMware Tools".
- Log in to the VM as root. ( Note: when logging in as root you get the "NMC" management prompt. This is a restricted CLI, not bash. )
- To get a bash prompt, type:
option expert_mode=1
!bash
- The CD should have auto-mounted as /media/VMware Tools. Verify with the mount command. Copy the Tools installer to /root and untar it.
cd /media/VMware\ Tools
cp vmware-solaris-tools.tar.gz /root
cd /root
tar -xzf vmware-solaris-tools.tar.gz
cd vmware-tools-distrib
- Run the installer.
./vmware-install.pl
- Use the defaults. When you get to the step asking whether to "configure" VMware Tools, answer NO.
- Run the configuration (defaults) step:
vmware-config-tools.pl -d
- This step might fail with a message that the "SUNWuiu8" package is required.
Edit the vmware-config-tools.pl script to comment out this check.
cd /usr/bin
vi vmware-config-tools.pl
/SUNWuiu   ( search for the SUNWuiu string )
- When the code is found, insert "#" comment characters in front of the entire if block:
# Be sure that the SUNWuiu8 package is installed before trying to configure
# if (vmware_product() eq 'tools-for-solaris') {
#   if (does_solaris_package_exist('SUNWuiu8') == 0) {
#     error("Package \"SUNWuiu8\" not found. " .
#           "This package must be installed in order " .
#           "for configuration to continue." . "\n\n");
#   }
# }
- Save the file with ":wq".
- Re-run the configuration with defaults.
vmware-config-tools.pl -d
- The configuration should succeed this time. After it finishes, reboot.
exit ( exit bash back to NMC)
setup appliance reboot
Note that the vmxnet3 NIC is recognized once VMware Tools is installed.
- Shut down the VM, add a vmxnet3 NIC, power the VM back up, and use the GUI to configure the NIC for the storage network: static IP, MTU=9000. ( Skip the MTU setting; it is broken, see below. )
- Reboot the appliance so that the MTU takes effect.
VMXNET3 Note: in v3.1.3.5 vmxnet3 did not seem to work, so I reverted to e1000. In v4 beta ( as of 5/25/13 ) vmxnet3 works, but setting the MTU is broken, even from bash. See this link for low-level commands to set the MTU.
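For reference, on illumos-based builds the MTU can usually be set from bash with dladm. A minimal sketch, assuming expert mode is enabled and the vmxnet3 interface shows up as vmxnet3s0 ( check the real link name first; if the link reports busy, unplumb it with ifconfig vmxnet3s0 unplumb before changing the MTU ):
dladm show-link   ( list link names; the vmxnet3 device is typically vmxnet3s0 )
dladm set-linkprop -p mtu=9000 vmxnet3s0   ( enable jumbo frames on the storage link )
dladm show-linkprop -p mtu vmxnet3s0   ( verify the new MTU )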
Add storage disks
My setup does not support DirectPath I/O to the host's SAS storage, so we will have to use VMFS volumes. This is not ideal, as ZFS hot-swap doesn't work, and you will likely have to shut down and manually replace any failed disks.
We will create a VMFS-5 volume filling the entire disk for each disk dedicated to storage. In my example, the host has six 2 TB SAS drives.
Note: the OS disk and all storage disks must be on the same host, and the VM must be disabled for vMotion.
- On the host with the storage, create a VMFS-5 datastore filling the entire disk for each of the storage disks.
- In NexentaStor, create one new disk of the maximum size for each VMFS volume. Set the disk options to Thick, Lazy-Zeroed. Set Disk Mode = Independent (Persistent).
Important: verify the disk mode to make sure all disks except the OS disk are set as above. This ensures that VMware snapshots do not affect the NexentaStor storage disks.
- Power on NexentaStor and make sure the disks show up under "Settings, Disks" ( a bash-level check is sketched below ).
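If you prefer to double-check from bash ( expert mode ), the standard Solaris tools will list the new virtual disks:
echo | format   ( prints the available disk selections, then exits )
iostat -En   ( shows size, vendor, and model for each device )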
ZFS notes
ZFS uses slightly different terminology than traditional volume managers.
In ZFS, a zPool is composed of vDevs ( each of which is a redundant set, such as a mirror ).
zPool ( many vDevs ) -> vDev ( several disks )
Folder ( quota allocated within the zPool )
However, in NexentaStor these items are often referred to by their non-ZFS terms ( rough command-line equivalents are sketched below ):
- Volume -> this is really a zPool
- Share -> this is a ZFS folder ( with a quota )
- zVol -> this is a ZFS volume ( a block device for iSCSI )
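For reference, these map to the following bash-level commands ( the disk and dataset names here are hypothetical; on NexentaStor you would normally do all of this through the GUI or NMC ):
zpool create tank mirror c1t1d0 c1t2d0 mirror c1t3d0 c1t4d0   ( Volume: a zPool built from vDevs )
zfs create -o quota=500G tank/vmstore   ( Share: a folder with a quota )
zfs create -V 100G tank/vol1   ( zVol: a block device for iSCSI )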
NFS Setup
The NFS protocol is recommended over iSCSI. The storage allocated to each 'share' is then really only a quota, and you can change it at will.
Set the NFS server as follows for ESXi compatibility:
Data Management, Shares, NFS Server, Configure:
Set "Client Version" = 3
Create Volume ( zPool )
Create a single zPool called "tank" ( the name is a play on the term pool ... )
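Once the volume is created you can confirm the layout from bash; assuming the pool is named tank as above:
zpool status tank   ( shows the vDev layout and the state of each disk )
zpool list tank   ( shows size, allocation, and free space )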
Create Folder ( share )
NOTE: I recommend not using de-duplication on any volume, as it may be difficult to turn off without issues. ( This was as of v3.3.x; the status in v4 is unknown. )
Each folder has options ( compression, block size, etc. ). These are typically used for NFS or CIFS mounts. You can set or change the quota on each folder while it is running. To create an NFS mount for ESXi hosts ( a bash-level sketch of the equivalent settings follows the steps ):
- Data Management, Shares, Create.
- Don't change de-dup. For VMs use 128K blocks, bias=latency. Create the folder.
- Enable NFS sharing on the folder.
- Edit the NFS options: leave the defaults, Anon=Y, AnonRW=Y.
Set Read/Write to @10.25.1.0/24 ( to allow the 10.25.1.x hosts write access ).
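For reference, the folder options and NFS export map to ZFS properties underneath. A rough bash-level sketch, assuming a folder named tank/vmstore and the 10.25.1.0/24 storage subnet ( adjust the names to your setup ):
zfs set recordsize=128K tank/vmstore   ( 128K blocks for VM workloads )
zfs set sharenfs='rw=@10.25.1.0/24,root=@10.25.1.0/24,anon=0' tank/vmstore   ( give the 10.25.1.x hosts read/write and root access )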
Thanks to lmarzke from: http://plone.4aero.com/Members/lmarzke/howto/nexentastore-installation-on-vsphere