It’s not often that I’m so utterly disappointed by a product that I feel the need to write about my experience with it, but a situation at work with two newly-arrived Sun StorageTek 6140 disk arrays has certainly enraged me enough, and boy do I feel the need to rail against vendors who cripple their products on purpose.
NOTE: As of 8 Feb this issue has been resolved, but read on if you want to hear about the saga.
The story begins:
Here at UMBC, much of our core storage is served by several Sun StorEdge/StorageTek 3511 arrays plus 3511 JBODs. Performance has been adequate, but managing these arrays is problematic. Their telnet-based menu system is only slightly bearable, and the included GUI management app is finicky, to say the least. Nevertheless, these arrays have served us well enough.
Recently, it was decided to break the mail storage for our ~20,000 users out of AFS and put it on its own, separate storage. To do this we would need some dedicated arrays to do the job. We worked up a budget proposal which included two 7.2TB Sun 6140 arrays and associated mail servers. Our plan was to migrate the data on the existing 3511s to the 6140s, and use the old 3511s for mail storage.
Fast forward to this week, where we took delivery of the new mail servers along with the two 6140s. I got one of the 6140s racked and hooked up to our SAN, installed the management software, and started to get to work exploring this new array and its features.
Naturally, I set up a small testing zone on our SAN and began creating volumes on the 6140. So far so good. The GUI manager has a somewhat wonky workflow, but it’s tons better than what I’m used to dealing with on the 3511s. At least with the 6140, I can give my volumes names such as “Core Oracle DB” or “Blackboard App 1” and so on, instead of the anonymous volume numbers on the 3511.
Anyway, I eventually got to the point where I wished to map a particular volume on the 6140 to the WWN of the server I wanted to mount that volume on. Survey said: BZZT. It wouldn’t let me. I thought I had done something wrong, so I retraced my steps. Nope, everything looked fine. Why would it not allow me to map this LUN/volume to only the host I wanted to see it?
I noticed the error message at the top of the screen, and read it carefully. Apparently, the array was not licensed to allow this. My mind boggled. I researched. The 6140 calls mappings “Storage Domains”. A Storage Domain can contain one or more hosts to which a volume or volumes can be mapped. By default, the 6140 comes with a single Storage Domain, and any additional ones must be enabled by purchasing licenses, up to a maximum of 64 Storage Domains. With only one Storage Domain, all initiators/hosts get lumped together, and ultimately everyone can see everyone else’s volumes. Not Cool. Very Not Cool.
I must have sat at my desk for a good 2 minutes with the most incredulous look on my face.
How in the HELL could something as pedestrian in the storage world as LUN mapping be considered a “premium” feature?? You know those Sun 3511s I mentioned? They’re half the price, and I can map and mask LUNs to my heart’s content. Why this limitation on a “midrange” array? What the heck were you thinking, Sun? According to Sun, it’s going to cost me at least $15,000 for the privilege of mapping up to 64 “Storage Domains”, and I’d need that for two 6140s.
As it stands right now, this SNAFU completely grinds our plans to replace our mail infrastructure to a halt. I’m considering returning the 6140s and getting 3511s instead. Or something from a different vendor altogether.
Sun: Don’t play these stupid crippleware games. Hiding an otherwise common and expected feature behind some new marketing name and then charging a heck of a lot of money for it is very, VERY disingenuous. You want customers to hate you? Keep pulling stuff like that. I can mask LUNs on a $14,000 Apple Xserve RAID or a Sun StorageTek 3511. Why the difference with the supposedly “higher end” 6140? Shame.
Update: I wrote a follow-up to this post here.