Promise VessRAID 1840i

Thursday, January 27th, 2011

I just installed two Promise VessRAID 1840i units for a client, each one loaded with 8 x 1TB Seagate Enterprise drives. With 8 more drive bays, we can easily take each unit to 24TB without replacing drives, and you can add up to 3 more expansion enclosures, 16 bays each, for a maximum total of 128TB. Impressive, to say the least.

My major issue with purchasing these units is that there is no good review information online for any of the Promise gear. No user forums, either. So you don’t know what you’re getting into, and have to trust the word of the sales guy (did I mention these things are generally only available through reseller channels?). To perhaps help the next guy, I wanted to provide some feedback on my experiences.

Load

I got my units empty, which I hear they won’t be doing anymore. They came with all the necessary trays and screws to load drives. SATA drives work with no adapters (unlike some Dell arrays); I haven’t tried SAS drives. Easy as pie. The biggest trouble was dealing with all the trash: boxes, bubble wrap, and clamshells for the hard drives from CDW.

Power Up

The dual power supplies are rated at 450W each, 900W total, and a max draw of 9A on 100V. So I was worried about overloading my 15A circuit with two of these starting up. From experience, however, the half-loaded unit draws far less. A CyberPower UPS (very nice unit, by the way) shows a peak wattage at startup of 225W, which is only 2A at 110V. Wattage once the fans have gone to normal speed is under 150W. (This is one expensive light bulb!)
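For anyone doing the same circuit-headroom check, the arithmetic is just watts divided by volts. A quick sketch using the wattages measured above (the function is mine, and this is a simple resistive-load estimate, not an electrician’s calculation):

```python
def amps(watts: float, volts: float = 110.0) -> float:
    """Current drawn at a given wattage (simple watts / volts estimate)."""
    return watts / volts

# Two half-loaded units at the 225W startup peak observed above:
print(2 * amps(225))   # ~4.1A total, well under a 15A circuit
# And once the fans settle to under 150W each:
print(2 * amps(150))   # under 2.8A
```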

UPS Compatibility

The VessRAID has two USB ports on the back to connect the unit to an Uninterruptible Power Supply. (The second one is for an undefined support mechanism to upload config or debug files via flash drive.) Given that there’s a pretty well-developed UPS standard built on USB HID interfaces, I figured just about anything new should work. Nope.

The hardware compatibility list provided by Promise names only two compatible units: the APC Smart-UPS 1500 and the APC Smart-UPS 3000. My guess is they’re using an antiquated APC protocol. Important note: you cannot use APC’s cheaper SC line, because those units only include a serial port, not a USB port; the USB port is what roughly doubles the price on the non-SC units. So you probably don’t want to stray from the hardware compatibility list, particularly when buying a UPS.

If, however, you’re creative, you can make something work. If you have a regular server attached to any other UPS, you could use SSH or telnet scripting to log in to the VessRAID CLI and initiate a shutdown. I tested it using the Telnet Scripting Tool by Albert Yale (widely available, including at an unauthorized archive of the guy’s software).
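A minimal sketch of that approach in Python rather than the Telnet Scripting Tool. The prompt order, the `shutdown` command, and the confirmation reply are assumptions here; check them against your firmware’s CLI guide before trusting this with real hardware:

```python
import socket

def shutdown_script(user: str, password: str) -> list[str]:
    """Lines to send to the VessRAID CLI, in order, to power it down.
    The 'shutdown' command and 'y' confirmation are assumed, not verified."""
    return [user, password, "shutdown", "y"]

def send_lines(host: str, lines: list[str], port: int = 23) -> None:
    """Crude line-at-a-time telnet client; enough for a scripted shutdown."""
    with socket.create_connection((host, port), timeout=10) as s:
        for line in lines:
            s.recv(4096)                        # wait for the next prompt
            s.sendall((line + "\r\n").encode())

# e.g., triggered by your UPS software's low-battery event
# (hypothetical address and credentials):
# send_lines("192.168.1.50", shutdown_script("administrator", "password"))
```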

Initial Configuration

Initial config, particularly for the network settings, is best done via the serial console port. The units include an RJ-11 to DB-9 cable, but you’ll need a working serial port on your PC or laptop. Given that most laptops don’t have one these days, you might want to invest in a USB to Serial adapter. The Trendnet TU-S9 was cheap and seems to work well.

Management through the web interface must be done on the management port, so configure it on a subnet you can reach from your other machines. NAS and iSCSI traffic both go through the iSCSI data ports. NAS should be on the same network as the clients; you might want to isolate iSCSI traffic on a different subnet (or even a separate physical network).

Configuration (NAS)

All the “i” units of the VessRAID 1000 series (e.g., the 1840i) are mainly intended to be used as iSCSI devices, and have 4 x Gigabit ports for that very purpose. However, they also have a built-in Network Attached Storage system that can be used to provide Windows sharing, FTP, and NFS access.

The pros of this arrangement:

  • The VessRAID units operate as their own distinct servers, and need not rely on any other machine to do their storage work. Particularly useful on a smaller network, or for a very particular storage task.
  • The units will sync themselves automatically, using a customized version of rsync. You can easily configure this backup from one unit to another in the web configuration software.
  • Multiple clients can access the same file system at the same time. Remember that with iSCSI, the array is presenting the low-level data blocks to the initiator on the client (a Windows server, for instance), so there is no way that multiple clients could use one file system at the same time — unless you share it through the client.

The cons of this arrangement:

  • Active Directory support (available via a firmware upgrade) is poorly implemented. Getting the NAS connected to your domain is a touchy matter: it takes an exact, case-sensitive combination of domain and user names, and while I eventually got it to work, I never figured out which combination did the trick.
  • Active Directory permissions are even worse. Instead of letting you specify which users or groups belong in the ACL for a particular share, the NAS web configuration presents ALL of your users and groups, with full-access permissions by default. If you want to include only a few users, you have to click “Deny” on every other user. If you want to include a group, you effectively can’t, because Deny permissions on individual users override Allow permissions on a group. This implementation is absolutely useless.
  • Poor support for Windows permission lists. This is true of any Samba implementation, because the underlying Linux-based file system only supports the user/group/all permission scheme. So the NAS can’t handle fine-grained permissions on folders or files within a share.
  • Speed. Access through iSCSI is much faster.

Configuration (iSCSI)

I originally configured the units in NAS mode, but the client wanted to put some Windows user shares on the array, which require fine-grained folder permissions. So I reconfigured the logical disk for iSCSI use.

The simplest approach is to use Microsoft’s iSCSI Initiator (included in Windows Server 2008, and available as a download for Windows Server 2003). There are good instructions in the VessRAID documentation and from Microsoft on how to get this working. Microsoft’s step-by-step guide is especially helpful for best practices.

After connecting the client to the logical disk using iSCSI, you format it in the Windows Disk Management utility. To enable use of partitions larger than 2 TB, you have to convert the disk to use GUID Partition Tables (GPT). Once done, the whole space should be available to format using NTFS. I won’t discuss it here, but make sure to consider the types of files being stored, and other requirements (such as Shadow Copies or NTFS Compression), when choosing an appropriate cluster size.
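That 2 TB ceiling comes from the old MBR partition scheme, which stores sector addresses as 32-bit values; GPT uses 64-bit addresses and sidesteps the limit entirely. A quick back-of-the-envelope check:

```python
SECTOR = 512                   # bytes; the traditional disk sector size
MBR_MAX = (2**32) * SECTOR     # MBR holds sector counts as 32-bit values

# Maximum addressable size under MBR: exactly 2 TiB (~2.2 TB decimal),
# which is why an 8 x 1TB RAID 5 volume needs GPT to be used in full.
print(MBR_MAX)                 # 2199023255552 bytes
```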

Benchmarking

I used PassMark’s PerformanceTest software to obtain some basic benchmark numbers for the VessRAID’s performance. This is using a standard Broadcom Gigabit interface on a Dell PowerEdge 1900. All arrays are RAID 5.

Array (configuration)                      Seq Read     Seq Write    Random R/W
Local Array (3 x 160GB, Dell PERC 5/i)     61.8 MBps    92.5 MBps    7.2 MBps
VessRAID 1840i (8 x 1TB, Gigabit iSCSI):
  WriteThru                                105.8 MBps   27.9 MBps    12.4 MBps
  WriteBack                                105.8 MBps   90.6 MBps    31.9 MBps
ReadyNAS NV+ (4 x 500GB, SMB/Gigabit)      12.0 MBps    4.5 MBps     5.9 MBps

Note the very significant performance difference between the WriteBack and WriteThru cache settings on the VessRAID. WriteThru writes data directly to the disks as it arrives. WriteBack holds data in cache before flushing it to disk, which is far more efficient, since adjacent sectors can be written together in larger chunks. Doing this safely, however, requires a battery backup for the cache, so that a sudden power cut doesn’t lose data that hasn’t yet been written to disk. The problem is that Promise does NOT include the battery with the units. It’ll cost you an extra $100. You’d figure on larger units like this they wouldn’t nickel-and-dime you, but they do.
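To see why coalescing helps, here’s a toy model (illustrative only; real controllers are far more sophisticated) that counts disk operations under each policy:

```python
def write_thru(writes):
    """Each write goes straight to disk: one disk operation per write."""
    return len(writes)

def write_back(writes):
    """Buffer writes, then flush each run of adjacent sectors as one op."""
    ops, sectors = 0, sorted(set(writes))
    for i, s in enumerate(sectors):
        if i == 0 or s != sectors[i - 1] + 1:
            ops += 1           # a new contiguous run = one disk operation
    return ops

burst = [3, 4, 9, 5, 10, 3]    # scattered, partly sequential sector writes
print(write_thru(burst))       # 6 operations
print(write_back(burst))       # 2 operations (runs 3-5 and 9-10)
```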

Correction: I had reversed the WriteThru and WriteBack terms. I have corrected it in the text above, after the feedback from the commenter below.