The first step is to ensure we have enough space to provision a new volume of the desired size. Either the GUI or the CLI will suffice:
I’ve decided to provision a 512 GB volume and, as can be seen from the above screenshots, I have plenty of space. So on to it.
The following command creates a volume:
scli --add_volume --protection_domain_name <protection domain> --storage_pool_name <storage pool> --size <size in GB> --volume_name <name of volume to be created>
You need to know the protection domain, the storage pool, and the size of the volume you want, plus a friendly name to give the volume. Keep it descriptive for easier management later.
I typically SSH into the primary MDM (directly or via the virtual IP, it doesn’t matter) and run the command there; this saves me from having to add the MDM IP to the commands each time.
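Putting that together, a sketch of the command for a 512 GB volume might look like the following. Note that the protection domain, storage pool, and volume names here are hypothetical placeholders, not the actual names from my environment:

```shell
# Create a 512 GB volume (names are illustrative placeholders)
scli --add_volume --protection_domain_name pd1 --storage_pool_name pool1 --size 512 --volume_name esx-cluster-vol01
```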
Simple enough, right?
Now we have to map the new volume to one or more SDCs. I want to present it to my entire ESXi cluster, which consists of four hosts, so I will have to map it to all four respective SDCs. SCLI provides two options for mapping a volume: you can map it to an individual SDC or to all of the SDCs at once. The benefit of the latter is that it saves time, but when it says all SDCs, it means all of them. Any new SDC added to the protection domain at a later point in time will automatically be mapped to this volume, so use the “all SDCs” option with care. In this case I am going to map the volume to one SDC at a time so it stays restricted to just the SDCs I want until I choose to manually expand it. And drumroll…the syntax:
scli --map_volume_to_sdc --volume_name <volume name> --sdc_ip <IP address of SDC>
If you do not remember the IPs of your SDCs, run the following:
scli --sdc --query_all_sdc
It will list the IPs of all of your SDCs like so:
Let’s map. For my four SDCs it will take four commands:
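As a sketch, the four commands would look something like this. The volume name and three of the four IP addresses below are hypothetical placeholders; substitute the actual addresses returned by the SDC query above:

```shell
# One mapping command per SDC (volume name and most IPs are placeholders)
scli --map_volume_to_sdc --volume_name esx-cluster-vol01 --sdc_ip 10.10.82.200
scli --map_volume_to_sdc --volume_name esx-cluster-vol01 --sdc_ip 10.10.82.201
scli --map_volume_to_sdc --volume_name esx-cluster-vol01 --sdc_ip 10.10.82.202
scli --map_volume_to_sdc --volume_name esx-cluster-vol01 --sdc_ip 10.10.82.203
```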
SDCs will periodically rescan for new mapped volumes, but if you want to force the process there is a way–you can run an executable in the SDC directory to check for newly mapped volumes. From the SDC(s) run the following operation (note that this is not a scli command):
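On a Linux-based SDC the executable in question is the drv_cfg utility that ships with the SDC package. The install path below is the common default, but it may vary by version, so treat this as a sketch and confirm the location on your own SDCs:

```shell
# Force the SDC kernel driver to rescan for newly mapped volumes
# (note: this is a local utility, not an scli command)
/opt/emc/scaleio/sdc/bin/drv_cfg --rescan
```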
This will force the detection of any newly mapped volumes. New volumes seen by an SDC will show up in /proc/partitions with the prefix “scini”. The rescan followed by a cat of /proc/partitions can be seen in the image below, with the new 512 GB volume named scinib.
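If you want to narrow the output to just the ScaleIO devices rather than reading the whole partition table, a quick filter does the job:

```shell
# Show only ScaleIO block devices (scinia, scinib, ...)
grep scini /proc/partitions
```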
Once the SDC can properly see the new volume, you need to map it to the proper SCSI initiators (iSCSI in the case of VMware). I am going to map it to all four hosts in my cluster. To do this you need your volume name and one of the following:
- SCSI initiator friendly name
- iSCSI IQN
- SCSI initiator ID
I am going to use my initiator friendly names to map the volumes. If you do not remember any of this information, run this:
This will get you all the information you need. Well, maybe; in a large environment you might need to compare IQNs to the target hosts if the friendly names aren’t specific enough to tell them apart.
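I believe the query in question is the one below, though the exact flag may differ between scli versions, so check scli --help if it doesn’t match yours:

```shell
# List all registered SCSI initiators (friendly name, IQN, and ID)
scli --query_all_scsi_initiators
```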
Optionally during the mapping process you can indicate the LUN ID, otherwise it will just use the next available one.
scli --map_volume_to_scsi_initiator --volume_name <volume name> --initiator_name <initiator name>
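For my four hosts that means four more commands, one per initiator. The volume and initiator names below are placeholders for whatever friendly names you assigned when the initiators were added:

```shell
# One mapping per host initiator (all names are illustrative placeholders)
scli --map_volume_to_scsi_initiator --volume_name esx-cluster-vol01 --initiator_name esx01
scli --map_volume_to_scsi_initiator --volume_name esx-cluster-vol01 --initiator_name esx02
scli --map_volume_to_scsi_initiator --volume_name esx-cluster-vol01 --initiator_name esx03
scli --map_volume_to_scsi_initiator --volume_name esx-cluster-vol01 --initiator_name esx04
```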
So now all four of my SDCs can access the new volume and every ESXi host has an iSCSI path to the volume through each SDC. At this point rescan the ESXi cluster and the volume will appear. Note that the recommended Native Multipathing (NMP) Path Selection Policy (PSP) is Fixed, so only one path will be active at a time. Therefore you want to make sure that on each host the preferred iSCSI path is the one to its local SDC (the SDC VM running on that ESXi host). So when the device is discovered, go into the configuration of the ScaleIO device and select the proper path. The path can be identified by the IP address of the SDC toward the end of the IQN. In the below example I am configuring the device on the first ESXi server in the cluster; the IP of the SDC VM running on it is 10.10.82.200, so I selected that IQN as the preferred path. This way I/O stays internal to the ESXi server and doesn’t have to traverse the physical network unless the SDC VM becomes unavailable, at which point it will fail over to a different, but less optimal, path. Well, at least for the first hop this is the case; some data will almost certainly be hosted on a different SDS, so it will have to cross the physical network between the local SDC and a remote SDS to access those storage segments.
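If you would rather script the path selection than click through the vSphere client, the same thing can be done per host from the ESXi shell with esxcli. The device and path identifiers below are placeholders; find your real ones with `esxcli storage nmp path list` before running anything like this:

```shell
# Set the ScaleIO device's PSP to Fixed (device ID is a placeholder)
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_FIXED

# Point the Fixed PSP at the path through the local SDC (path name is a placeholder)
esxcli storage nmp psp fixed deviceconfig set --device naa.xxxxxxxxxxxxxxxx --path vmhba37:C0:T0:L1
```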
All ready to use! Make sure you configure the setup for optimal performance too; a great post on doing so can be found here: