Tag: storage

OpenSolaris 2008.11 – A Preview For The Storage Admin

Many reviews have been written about OpenSolaris since its release, but all of them barely tread beyond the desktop aspect, with the obligatory screenshots of the GNOME environment and a high-level description of only the major features most are already familiar with, or at least have heard of.

I’d like to take a different approach with this review, one that descends below the GUI to highlight aspects that server administrators in particular would be more interested in.


Crying “FUD” doesn’t always mean you’re right

…and it sure doesn’t grant you instant vindication.

It appears that DaveM (Linux networking and SPARC port guru) has gotten seriously wound up in response to a blog post by Jeff Bonwick (Sun’s storage and kernel guru).

As one can see in Jeff’s post, the subject he wrote about was within the greater context of using Solaris as a storage appliance OS (something I have an interest in) and why Solaris/OpenSolaris can and would excel when it comes to being the kernel of a storage OS.

I’m a storage guy. In the course of my work I have to work not only with Solaris hosts on my SAN, but also Windows and Linux (and soon, AIX). So I have a front-row seat when it comes to witnessing and dealing with how these various OSes deal with storage, from the filesystem to multipathing to the HBA… and let me tell you, Linux is not quite the joy in this specific area that most people think it is on a general, all-encompassing level.

And that’s what Jeff’s angle was.

Now on to Dave’s rebuttal.

“The implication is that Linux is not rock-solid and that it does break and corrupt people’s data. Whereas on the other hand Solaris, unlike the rest of the software in this world, is without any bugs and therefore won’t ever break or corrupt your data.”

No OS comes without fault, but some OSes have faults that are more glaring than others in their analogous areas. Staying within the storage context of this discussion, I have to say, again, Linux is no shining star here.

ReiserFS is arguably the most advanced fs in terms of features when it comes to the portfolio of Linux file systems, but its issues with stability are such that you’re really walking on eggshells whenever you employ it. I have personally been told too many first-hand accounts, and read plenty more on the Internets, of its tendency to be fine and then fail spectacularly. It has been likened to a time-delayed /dev/null of sorts, and its future is in doubt given the legal troubles of its designer and the limbo at Namesys. Is any version of ReiserFS a viable Linux storage technology for a production environment? I say no. That’s sad, because I dare say at one point ReiserFS had some promise.

EXT2 and 3… tried and true. Very stable and moderately fast for most tasks. But they’re “old guard” file systems. As such, they’re not very flexible, and any flexibility they get comes from using a volume manager underneath them. In days where handing a server a 1TB LUN is nothing to blink at, this inflexibility can be suffocating in a dynamic environment. These “old guard” file systems (yes, Solaris’s UFS is one of them, too) are more like mere utility file systems than practical ones for today’s mass storage needs. They’re good for holding a machine’s OS and that’s about it.
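To make that concrete, here’s roughly what growing an EXT3 filesystem looks like when the flexibility has to come from a volume manager underneath. This is a hedged sketch, assuming a hypothetical LVM2 volume group named vg0 with free extents and a logical volume named data:

```shell
# Grow the logical volume by 100GB (assumes vg0 actually has free extents)
lvextend -L +100G /dev/vg0/data

# Then grow the filesystem to match -- a separate step with a separate tool,
# and whether it can be done online depends on your kernel and ext3 setup
resize2fs /dev/vg0/data
```

Two layers, two tools, and the filesystem itself knows nothing about the storage it sits on. That’s the inflexibility in a nutshell.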

XFS… Of all the file systems in the Linux file system portfolio, this one gets the gold star. Stable, fast, and decently scalable with the large amount of data you can stuff in it… but it still suffers the same problems EXT[23] and other “old guard” file systems do in terms of flexibility. In other words, it’s just a file system. Keep in mind that this critique is coming from a guy who worked with XFS on IRIX often and absolutely loved XLV… back in the day.

As it stands now, the mainline Linux kernel doesn’t offer anything which embodies the file system triple play: being stable && fast && flexible. Solaris’s ZFS has this. I’ve so far entrusted 30TB of spinning rust to it, and it has yet to let me down. Sure, there are projects here and there with the eventual goal of endowing Linux with a ZFS analog, but as of right now they’re nowhere near production quality and are definitely not something an admin can call Red Hat to get support for.
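For comparison, here’s roughly what that flexibility looks like in ZFS. The pool, disk, and dataset names below are made up for illustration:

```shell
# Create a raidz pool from three disks -- no separate volume manager needed
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0

# Filesystems are cheap; carve one out and cap it with a quota
zfs create tank/projects
zfs set quota=500G tank/projects

# Point-in-time snapshot, effectively instant
zfs snapshot tank/projects@before-migration

# Need more space later? Add another raidz stripe to the live pool
zpool add tank raidz c2t0d0 c2t1d0 c2t2d0
```

Pooled storage, filesystem, snapshots, and online growth, all in one coherent toolset. That’s the triple play.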

There are plenty of other aspects to the storage context… the fibre channel stack, for one, and other things such as multipath I/O implementations and the volume manager/management layers (which Linux has a host of… not necessarily a good thing: LVM, LVM2, MPIO, RDAC… it makes your head spin).

But as far as this storage-oriented discussion goes, file systems are indeed the make or break aspect. This is why Jeff said what he said. Linux has no ZFS. Windows has no ZFS. It is not that Linux or Windows need ZFS itself in order to compete, it’s that they need to develop and employ the concepts that ZFS implements and do so as clearly and concisely as ZFS has.

Anyway, enough about storage. Now, why is it that the Linux community (let alone a prominent member of it) has to react so violently to any questioning of its perceived superiority? Is it misplaced or excess pride? Have they not tried things other than Linux recently, and they’re just flying with blinders on? Is it just the social culture which prevails within it? Whatever it is, seeing posts like Dave’s makes my toes curl with embarrassed amazement.

A friendly message to Dave: Chill the ad hominems, mkay? Crying “FUD!” at the mere sight of someone you perceive as poo-poo’ing an aspect of your interest doesn’t typically translate into a well-thought-out rebuttal. You took the low road and tried to portray Jeff as the instrument of some nefarious, Mr. Burns-like figure at Sun. Is vilifying instead of cool-headed technical discourse really your desired style? Has anyone come biting at you saying “oh, he’s a Linux kernel developer, so he has an agenda”?


Sun and Qlogic to open up the source of storage software

Sun has announced that they will be releasing several of their storage products to the OpenSolaris community!

From their announcement:

  • Sun StorageTek 5800 storage system (Honeycomb) client interfaces along with the Honeycomb software developer kit (SDK) and Honeycomb emulator/server. Honeycomb is a third-generation digital repository solution for data capture and management.
  • SAM-FS (Storage Archive Manager) provides data classification, policy-based data placement, protection, migration, long-term retention, and recovery capabilities for organizations to effectively manage and utilize data according to their business requirements. SAM-FS is used extensively in security/surveillance, digital video archiving, and medical imaging data environments.
  • QFS, Sun’s shared file system software, delivers significant scalability, data management, and throughput for the most data-intensive applications. Well known today in the traditional high performance computing (HPC) arena, QFS is increasingly being used in commercial environments that require multiple-host, high-speed access to large data repositories.

Also, and even more awesome:

In addition, today QLogic is contributing their Fibre Channel HBA driver code to the OpenSolaris storage community. For the first time, developers have access to an I/O stack from the application through to the operating system.

Now how cool is that? That last sentence says it all… openness from the app to the HBA. Most excellent. Thank you to Sun, and a really surprised thank you to QLogic!


Sun Availability Suite blog

The maintainer(s) of Availability Suite (aka AVS) now have their own blog. I can’t wait to learn more about it!


The state of enterprise storage for the Little Guy

Earlier this month I spewed some vitriol over an unpleasant discovery regarding the Sun StorageTek 6140 array and its underwhelming out-of-the-box feature set (which, three weeks later, remained an unresolved issue even after contacting and working with my VAR, my Sun sales rep proper, and two Sun SEs. Sigh). (NOTE: As of 8 Feb this issue has been resolved.) This whole issue was over the sneaky renaming of a feature commonly known as LUN Masking, and the charging of beaucoup bucks for it as a license-activated add-on.

Well, I want to write some more about this with an industry-wide perspective because as of this past Thursday, Apple is now playing a similar game regarding their Xserve RAID systems. With the release of RAID Admin Tools 1.5.1 and associated firmware, Apple has removed LUN Masking as a feature of the Xserve RAID. Yep. Removed it. In a minor version release of the software, no less. Absolutely astonishing.

So, with the Sun StorageTek 6140 and its crippled features (unless you fork over beaucoup bucks for a Storage Domains license pack of adequate seat count), and Apple rather brashly removing LUN Masking with no real stated reason and, to top it off, no warning, where does this leave us? And what of the (otherwise reputable) mid-range storage vendors who are left (HP? IBM?); who’s to say they won’t pull a similar stunt down the line?

Well, I know IBM is out of the picture for me, as they OEM the same LSI Engenio system that Sun uses for the 6140. Yep, both IBM and Sun sell the exact same system, only IBM calls it the DS4700 Express and Sun calls their version the StorageTek 6140. The only appreciable difference is that one comes in IBM Black and the other in Sun Silver. You also have to buy the IBM equivalent of the 6140’s Storage Domains, which IBM calls “Partitions”. Talk about a screwed up sense of storage terminology.

Anyway, that pretty much leaves HP, and I’m pretty unfamiliar with their product line and prices. I don’t even know if I can get HP kit, since I’m not aware of any current State of Maryland purchasing contract with them for this sort of stuff.

So what’s with this apparent vendor hate of LUN Masking in mid-range systems, anyway? One either has to pay out the nose to have it (Sun and IBM) or it’s there and then disappears into the night (Apple). Crikey. Whoever does product planning at Engenio, Sun, IBM, and Apple needs a serious reality check. For those of us for whom mid-range is high-end, this behavior matters quite a bit. It just seems like feature sets are imploding rather than expanding, removing a distinct competitive advantage from these products.


OpenSolaris gets more storage capabilities

The OpenSolaris community got a huge present today by way of Sun’s open sourcing of its StorageTek Availability Suite. This announcement by Jim Dunham goes into the details, but here’s a summary:

Availability Suite is comprised of two primary components:

  • Instant Image, which siphons data on a disk device as it is written in real time and stores it on another device. A command snapshots this stream, creating a point in time “shadow copy” of what’s on the live storage device. This shadow copy can then be mounted and used as one would with a normal filesystem. In practice, this is similar to what you get with ZFS’s snapshot feature, only this is filesystem-agnostic.
  • Network Data Replicator – This is (in my opinion) the cream of this product. Like Instant Image, NDR interposes itself above a disk device and sends a copy of the data stream bound for that device somewhere else, such as across the network to another server. This is real-time remote replication.
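From what Jim’s announcement and the AVS docs show, both pieces are driven from the command line. Here’s a hedged sketch of what that looks like; the device paths are placeholders, and the exact iiadm/sndradm arguments should be checked against the AVS documentation:

```shell
# Instant Image: enable an independent point-in-time copy
# (arguments: master volume, shadow volume, bitmap volume)
iiadm -e ind /dev/rdsk/c1t0d0s0 /dev/rdsk/c1t1d0s0 /dev/rdsk/c1t2d0s0

# NDR: enable synchronous replication of a local volume to a remote host
# (local host/volume/bitmap, then remote host/volume/bitmap, protocol, mode)
sndradm -e localhost /dev/rdsk/c1t0d0s0 /dev/rdsk/c1t3d0s0 \
        remotehost /dev/rdsk/c1t0d0s0 /dev/rdsk/c1t3d0s0 ip sync
```

Note that both tools want a bitmap volume to track changed blocks, which is what lets resyncs copy only the deltas instead of the whole device.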

I applaud Sun for releasing this! OpenSolaris now has a far larger and more robust set of storage tools than any other FOSS OS out there (or any other OS, for that matter).

Hey Sun, how about releasing ESM AA next, eh? There’s Aperi, but ESM AA looks to be far more mature.


Solaris 10’s new multipath storage tool (Part 1)

Solaris 10 11/06 was released in late 2006 with a plethora of new features, and among them a new tool called mpathadm, which comes as part of the SUNWmpathadm package.

Before I delve into how this specific tool works and how it helps when managing multipathed storage in Solaris, I’ll give some background on what multipathing is and how it is implemented in Solaris.
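As a small teaser for the tool itself, the two invocations you’ll reach for first look like this (the LUN device name below is a placeholder for illustration):

```shell
# Enumerate all multipathed logical units the framework currently manages
mpathadm list lu

# Show the details for one logical unit: path count, path states,
# and the associated initiator and target ports
mpathadm show lu /dev/rdsk/<lun-device>s2
```

More on what that output means, and on multipathing in Solaris generally, below.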


Sun StorageTek 6140: Buyer beware

It’s not often that I’m so utterly disappointed by a product that I feel the need to write about my experience with it, but a situation at work with two newly-arrived Sun StorageTek 6140 disk arrays has certainly enraged me enough, and boy do I feel the need to rail against vendors who cripple their products on purpose.

NOTE: As of 8 Feb this issue has been resolved, but read on if you want to hear about the saga.


The hard drive: 50 years old

Here’s something that should be dear to any storage manager’s heart: the 50th birthday of the hard drive. CNet has posted a silent video of IBM’s first hard drive which held a total of 5MB of data. IBM announced this new technology on September 13, 1956.

Click here to view it. (Warning: a short ad precedes it)


Sun’s new storage arrays – O.K. but not great.

I read over the specs of Sun’s new mid-range storage models, the StorageTek 6140 and 6540, which were announced yesterday (8/11/2006), and the corners of my mouth must have visibly sunk after scanning through the specs and not seeing the one big feature I was hoping to see in any new arrays from Sun. That feature is iSCSI.

Don’t worry, I checked and double-checked: iSCSI is not a feature. And with the controllers sporting only 100Mb-capable Ethernet ports rather than 1Gb, there’s no holding out hope that it could be a feature-add at some point in the future.

Unless I’m overlooking some extraordinary feature that these two new arrays introduce, these are just re-skinned, logical upgrades of what Sun has always offered: 4Gb Fibre Channel and 4GB of cache. Higher IOPS. In one respect, that’s all nice and stuff, but it’s not like anyone could claim they didn’t see it coming.