Deploy ScaleIO 1.32 in virtual lab – I’m using VMware vSphere 5.5, but you can use any virtualization platform.

Server minimum hardware requirements are 2 CPU cores and 2 GB RAM – I’m using 2 cores and 4 GB of RAM

Deploy ScaleIO 1.32 Gateway on Windows 2008/2012 R2 (if you followed this, skip to the next section)
* Configure disk (I’m using 40 GB) and networking
* Update – if necessary – obviously
* Install JRE 1.7 or higher (ideally 64-bit) – if not included in image
* Turn off the Windows Firewall with Advanced Security (netsh advfirewall set allprofiles state off) – if not included in image
* Install the Gateway binary (choose 64-bit if you chose 64-bit JRE or 32-bit for 32-bit JRE)
* Done and done…you can use the gateway to do systems installs/upgrades/etc. from now on…

Deploy 4 Linux VMs (I used CentOS 6.6 x86_64 Minimal)

Deploy ScaleIO 1.32 SDC/SDS Nodes on Linux (on each server)
* Configure disk (I’m using 8 GB for OS and 100 GB for data) and networking
* Update – if necessary – obviously (wash, rinse, repeat)
* Disable the firewall (service iptables stop && service ip6tables stop && chkconfig iptables off && chkconfig ip6tables off) and SELinux (sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux)
* Install dependencies – yum -y install openssh-clients libaio numactl
* Reboot
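The prep steps above can be collected into one per-node script. A minimal sketch for CentOS 6, run as root; with DRYRUN=1 it only prints each command so you can review before running for real:

```shell
#!/bin/sh
# Per-node prep for each SDS/SDC node (CentOS 6), per the steps above.
# With DRYRUN=1 the commands are only printed; set DRYRUN=0 to execute.
DRYRUN=1
run() {
  if [ "$DRYRUN" = "1" ]; then
    echo "$*"        # dry run: show the command
  else
    "$@"             # real run: execute it
  fi
}

run service iptables stop
run service ip6tables stop
run chkconfig iptables off
run chkconfig ip6tables off
run sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
run yum -y install openssh-clients libaio numactl
run reboot
```

Once the printed commands look right, flip DRYRUN to 0 (or paste them into a root shell) on each of the four nodes.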

Use ScaleIO Installer via ScaleIO Gateway to deploy
* Connect to https://&lt;ip address&gt;, using the IP address of the Gateway (accept all the certificate warnings)
* Enter “admin” as the username and the password you provided when you installed the Gateway binaries
* Click “Get Started” under “Install using this web interface”
* Click “Browse” and select the installation packages you wish to deploy – for Linux this consists of all the rpm files under ScaleIO_1.32_RHEL6_Download, excluding the *callhome* rpm…
* Then click Open, then Upload
* Then click Proceed to Install

At this point, you can decide whether to use the installation wizard or perform a configured installation using a CSV file to provide the necessary details.  If you use the installation wizard, it will create a default protection domain and storage pool, which is fine for a demo environment, but I’d prefer a custom install, so I’m choosing the “Upload installation CSV” option and supplying the following CSV contents:

Domain,Username,Password,Operating System,Is MDM/TB,MDM Mgmt IP,MDM IPs,Is SDS,SDS Name,SDS All IPs,SDS-SDS Only IPs,SDS-SDC Only IPs,Protection Domain,Fault Set,SDS Device List,SDS Pool List,SDS Device Names,Optimize IOPS,Is SDC
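Before uploading, it can be worth a quick sanity check that your CSV’s header row matches the expected columns – a small helper of my own, not part of ScaleIO:

```shell
#!/bin/sh
# Sanity-check the first line of an installation CSV against the
# 19 expected column names before uploading it to the Gateway.
EXPECTED='Domain,Username,Password,Operating System,Is MDM/TB,MDM Mgmt IP,MDM IPs,Is SDS,SDS Name,SDS All IPs,SDS-SDS Only IPs,SDS-SDC Only IPs,Protection Domain,Fault Set,SDS Device List,SDS Pool List,SDS Device Names,Optimize IOPS,Is SDC'

check_csv_header() {
  # strip a Windows CR, if present, before comparing
  header=$(head -n 1 "$1" | tr -d '\r')
  if [ "$header" = "$EXPECTED" ]; then
    echo "header OK ($(echo "$EXPECTED" | awk -F, '{print NF}') columns)"
  else
    echo "header mismatch" >&2
    return 1
  fi
}
# usage: check_csv_header my_install.csv
```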

* To upload that CSV, click the “Browse” button, navigate to the CSV file, then click the “Upload installation CSV” button.
* Verify the MDM and LIA passwords
* Check the “I accept the terms…” check box
* Set any advanced options or syslog details
* Uncheck “Call Home” check box
* Verify the content supplied in the CSV
* Click “Start Installation” button (If you’ve done everything correctly up until this point, you should see no errors in the query, upload, install and configure phases to follow. If you do see any failures, first retry, then troubleshoot using the information provided via the “Details” button next to each of the failed tasks.)
* Click the “Monitor” button to view the status of the installation process
* Once the Query Phase completes successfully, click the “Start upload phase” button to continue
* Once the Upload Phase completes successfully, click the “Start install phase” button to continue
* Once the Install Phase completes successfully, click the “Start configure phase” button to continue
* Once the Configure Phase completes successfully, click the “Mark operation completed” button and follow the “Post installation instructions…” displayed to add and map volumes – you should already have SDS devices, if you followed my instructions.

* Install the EMC ScaleIO GUI using EMC-ScaleIO-gui-1.32-402.1.msi, found under ScaleIO_Windows_SW_Download\ScaleIO_1.32_GUI_for_Windows_Download in the downloaded installation zip.
* Connect to your primary MDM (at its management IP) with the username “admin” and the password “ScaleIO1” (if you used my IP addresses)…
* From there, you can perform some minor configuration changes – like renaming objects or configuring capacity and spare percentage…
* You’ll need to create volumes and map them to SDCs using the command line…
* For example, I’ll create 4 24GB volumes and map each of them to all 4 SDCs below:
cd /opt/emc/scaleio/mdm/bin/
scli --login --username admin --password 1nT3ll1g3nt
scli --add_volume --protection_domain_name PD1 --storage_pool_name SP1 --size_gb 24 --volume_name VOL1
scli --add_volume --protection_domain_name PD1 --storage_pool_name SP1 --size_gb 24 --volume_name VOL2
scli --add_volume --protection_domain_name PD1 --storage_pool_name SP1 --size_gb 24 --volume_name VOL3
scli --add_volume --protection_domain_name PD1 --storage_pool_name SP1 --size_gb 24 --volume_name VOL4
scli --map_volume_to_sdc --volume_name VOL1 --sdc_ip <SDC1 IP> --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL1 --sdc_ip <SDC2 IP> --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL1 --sdc_ip <SDC3 IP> --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL1 --sdc_ip <SDC4 IP> --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL2 --sdc_ip <SDC1 IP> --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL2 --sdc_ip <SDC2 IP> --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL2 --sdc_ip <SDC3 IP> --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL2 --sdc_ip <SDC4 IP> --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL3 --sdc_ip <SDC1 IP> --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL3 --sdc_ip <SDC2 IP> --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL3 --sdc_ip <SDC3 IP> --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL3 --sdc_ip <SDC4 IP> --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL4 --sdc_ip <SDC1 IP> --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL4 --sdc_ip <SDC2 IP> --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL4 --sdc_ip <SDC3 IP> --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL4 --sdc_ip <SDC4 IP> --allow_multi_map
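Typing out all 16 map commands by hand is tedious and error-prone; the same volume-to-SDC cross product can be generated with two nested shell loops. A minimal sketch – the SDC_IPS values below are placeholder addresses, not the ones from my lab, so substitute your own four SDC IPs:

```shell
#!/bin/sh
# Generate one scli map command per (volume, SDC) pair.
# The IPs below are placeholders -- replace with your own SDC addresses.
SDC_IPS="192.168.1.101 192.168.1.102 192.168.1.103 192.168.1.104"
MAP_CMDS=""
for vol in VOL1 VOL2 VOL3 VOL4; do
  for ip in $SDC_IPS; do
    MAP_CMDS="${MAP_CMDS}scli --map_volume_to_sdc --volume_name $vol --sdc_ip $ip --allow_multi_map
"
  done
done
printf '%s' "$MAP_CMDS"   # review the 16 commands, then pipe to sh to run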

* Now each volume is mapped to the SDCs as follows:
VOL1 = /dev/scinia
VOL2 = /dev/scinib
VOL3 = /dev/scinic
VOL4 = /dev/scinid
* You can now partition, create file systems and mount those file systems on the SDCs…
* I’d suggest using tune2fs to disable the automatic file system check on mount (e.g. tune2fs -c 0 /dev/scinia1) and performing those checks manually (if you feel they’re necessary) on one of the SDCs
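The last two bullets can be sketched as a small generator script run from one of the SDCs. It skips partitioning and puts ext4 directly on each scini device, and the mount points under /mnt are names of my choosing – review the printed commands before piping them to a root shell:

```shell
#!/bin/sh
# Print the mkfs/tune2fs/mount commands for each scini device.
# Printed rather than executed so you can check them first;
# run for real with:  gen_fs_cmds | sh
gen_fs_cmds() {
  for dev in scinia scinib scinic scinid; do
    echo "mkfs.ext4 /dev/$dev"        # ext4 directly on the device
    echo "tune2fs -c 0 /dev/$dev"     # no fsck based on mount count
    echo "mkdir -p /mnt/$dev"         # mount point name is my choice
    echo "mount /dev/$dev /mnt/$dev"
  done
}
gen_fs_cmds
```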
