Tag: ZFS

OpenSolaris 2008.11 – A Preview For The Storage Admin

Many reviews have been written about OpenSolaris since its release, but nearly all of them stop at the desktop, offering the obligatory screenshots of the GNOME environment and high-level descriptions of the major features most readers are already familiar with, or have at least heard of.

I’d like to take a different approach with this review, one that descends below the GUI to highlight the aspects server administrators in particular will find most interesting.


Server upgrade time – elemental.org gets modern

After almost 8 years of running elemental.org’s mail, mailing lists, shell accounts, many websites (such as this one), and database servers, essentially being a one-server ISP, the Sun Ultra 2 that did all of this as lithium.elemental.org was retired this past weekend and replaced with a new server. Say hello to mercury.elemental.org.

Mercury is a Dell PowerEdge 860 with an Intel Xeon X3220 (quad-core, 2.4GHz) and 8GB of 667MHz DDR2 RAM. Unlike lithium, mercury’s storage is entirely internal in the form of two mirrored 500GB SATA drives. This keeps the entire package within 1 rack unit of space, which holds colocation costs down.

What really excites me about this new server is that it is running Solaris 10 8/07 (lithium was running a very patched Solaris 8 FCS!). Solaris installed without a hitch; the 860’s onboard BCM5721 NICs are recognized by the bge driver, as is its IPMI baseboard controller by the bmc driver. The chipset on this system is the Intel ICH7, and unfortunately the Solaris ahci driver supports only the ICH6 at the moment, so the drives are running in IDE compatibility mode, which works just fine.

This upgrade wasn’t merely a refresh of hardware and OS. I also completely changed how the mail storage works and now make use of ZFS file systems for each user home directory and virtual website:

  1. Out with uw-imap, in with Cyrus. All mail is delivered to Cyrus, so there are no more maildir-style spools sitting in each person’s home directory.
  2. To take advantage of Cyrus’s features, elemental.org is now operating its own Kerberos realm, ELEMENTAL.ORG. This is my first time running my own Kerberos KDC, and I love it. Cyrus and Sendmail, via SASL, now offer GSSAPI authentication. Using Solaris’s pam_krb5_migrate.so.1 PAM module, as people log in with their UNIX passwords a Kerberos principal is created for them and they are granted tickets. Pine is configured to connect to Cyrus and authenticate with GSSAPI, so shell users don’t have to type in or save their password when accessing their email!
  3. As I mentioned, all user data is now stored on a mirrored ZFS pool. Each user and virtual website gets its own ZFS file system, which lets me keep tabs on disk usage (and easily delete a user or site if the need should arise); a sketch of the commands involved follows this list. The zpool’s net size is 442GB.
  4. All incoming email goes through greylisting, ClamAV, and finally SpamAssassin milters.
  5. I’m much more at ease with Solaris’s SMF facility now, having made a point of writing SMF manifests for the services I’m running rather than plain old init scripts; the second sketch below shows the workflow.

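Here’s a minimal sketch of the ZFS housekeeping item 3 describes. The pool, device, user, and site names are hypothetical examples, not mercury’s actual configuration:

    # Mirror the two internal SATA drives into one pool.
    zpool create tank mirror c1d0 c2d0

    # One cheap file system per user and per virtual site...
    zfs create tank/home
    zfs create tank/home/alice
    zfs create tank/sites
    zfs create tank/sites/example.org

    # ...which makes per-user accounting and quotas trivial.
    zfs set quota=10G tank/home/alice
    zfs list -o name,used,avail,quota -r tank

    # Deleting a user or site later is a one-liner.
    zfs destroy tank/home/alice
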
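And a sketch of the SMF workflow from item 5, again assuming a hypothetical manifest path and service name:

    # Check a hand-written manifest for validity before touching
    # the repository.
    svccfg validate /var/svc/manifest/site/greylist.xml

    # Import it into the SMF repository and turn the service on.
    svccfg import /var/svc/manifest/site/greylist.xml
    svcadm enable site/greylist

    # Verify, and let SMF explain anything that isn't running.
    svcs -l site/greylist
    svcs -x

Unlike a plain init script, an SMF service gets automatic restart on failure and dependency handling from the framework.
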
In addition, I’m now monitoring several aspects and services on the new system using Cacti.

Here’s to another 8 years of hopefully trouble-free operation!


My presentation at the 2007 AFS & Kerberos Workshop

This past week, the 2007 AFS & Kerberos Workshop took place at the Stanford Linear Accelerator Center (SLAC) near Palo Alto, CA. People from an array of educational, government, and commercial institutions came and presented papers and led discussions on a wide range of topics involving AFS and Kerberos.

I was lucky enough to present a slide show on how we at UMBC have been combining OpenAFS with new ZFS and Zones features of Solaris 10 to obtain a more resilient AFS server infrastructure. You can view a PDF of my presentation here.


Crying “FUD” doesn’t always mean you’re right

…and it sure doesn’t grant you instant vindication.

It appears that DaveM (Linux networking and SPARC port guru) has gotten seriously wound up in response to a blog post by Jeff Bonwick (Sun’s storage and kernel guru).

As one can see in Jeff’s post, the subject he wrote about sat within the greater context of using Solaris as a storage appliance OS (something I have an interest in): why Solaris/OpenSolaris can and does excel as the kernel of a storage OS.

I’m a storage guy. In the course of my work I have to work not only with Solaris hosts on my SAN, but also Windows and Linux (and soon, AIX). So I have a front-row seat for witnessing how these various OSes deal with storage, from the file system to multipathing to the HBA… and let me tell you, in this specific area Linux is not quite the joy that most people assume it to be at a general, all-encompassing level.

And that’s what Jeff’s angle was.

Now on to Dave’s rebuttal.

“The implication is that Linux is not rock-solid and that it does break and corrupt people’s data. Whereas on the other hand Solaris, unlike the rest of the software in this world, is without any bugs and therefore won’t ever break or corrupt your data.”

No OS comes without fault, but some OSes have faults more glaring than others’ in the same areas. Staying within the storage context of this discussion, I have to say it again: Linux is no shining star here.

ReiserFS is arguably the most feature-rich file system in the Linux portfolio, but its stability issues are such that you’re walking on eggshells whenever you employ it. I have personally been told too many first-hand accounts, and read plenty more on the Internets, of its tendency to run fine and then fail spectacularly. It has been likened to a time-delayed /dev/null of sorts, and its future is in doubt given the legal troubles of its designer and the limbo Namesys is in. Is any version of ReiserFS a viable Linux storage technology for a production environment? I say no. That’s sad, because I dare say ReiserFS at one point had some promise.

EXT2 and 3… tried and true. Very stable and moderately fast for most tasks. But they are “old guard” file systems. As such, they’re not very flexible, and any flexibility they do get comes from a volume manager layered underneath. In a day when handing a server a 1TB LUN is nothing to blink at, this inflexibility can be suffocating in a dynamic environment. These “old guard” file systems (yes, Solaris’s UFS is one of them, too) are more like mere utility file systems than practical ones for today’s mass-storage needs: good for holding a machine’s OS and that’s about it.

XFS… Of all the file systems in the Linux portfolio, this one gets the gold star. Stable, fast, and decently scalable given the large amount of data you can stuff into it… but it still suffers the same flexibility problems EXT[23] and the other “old guard” file systems do. In other words, it’s just a file system. Keep in mind that this critique comes from a guy who worked with XFS on IRIX often and absolutely loved XLV… back in the day.

As it stands now, the mainline Linux kernel doesn’t offer anything that embodies the file system triple play: stable && fast && flexible. Solaris’s ZFS does. I’ve entrusted 30TB of spinning rust to it so far, and it has yet to let me down. Sure, there are projects here and there with the eventual goal of endowing Linux with a ZFS analog, but as of right now none of them is production quality, and they’re definitely not something an admin can call Red Hat to get support for.
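
To make the “flexible” leg of that triple play concrete, here’s a minimal sketch of the day-to-day operations ZFS handles with no separate volume manager. Pool, device, and dataset names are hypothetical examples:

    # Grow the pool online by adding another mirror pair; every file
    # system in the pool sees the new space immediately, with no
    # unmount and no resize step.
    zpool add tank mirror c3t0d0 c3t1d0

    # Snapshots are instantaneous and nearly free (copy-on-write),
    # and rollback undoes a botched change just as quickly.
    zfs snapshot tank/data@before-upgrade
    zfs rollback tank/data@before-upgrade

    # End-to-end checksums: scrub the pool and have it verify (and,
    # on a mirror, self-heal) every block.
    zpool scrub tank
    zpool status -v tank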

There are plenty of other aspects to the storage context… the fibre channel stack, for one, and things such as multipath I/O implementations and volume management layers (of which Linux has a host… not necessarily a good thing… LVM, LVM2, MPIO, RDAC… it makes your head spin).

But as far as this storage-oriented discussion goes, file systems are indeed the make-or-break aspect. This is why Jeff said what he said. Linux has no ZFS. Windows has no ZFS. It’s not that Linux or Windows needs ZFS itself in order to compete; it’s that they need to develop and employ the concepts ZFS implements, and do so as clearly and concisely as ZFS has.

Anyway, enough about storage. Now, why is it that the Linux community (let alone a prominent member of it) has to react so violently to any questioning of its perceived superiority? Is it misplaced or excess pride? Have they simply not tried anything other than Linux recently, and are flying with blinders on? Is it just the social culture that prevails within it? Whatever it is, seeing posts like Dave’s makes my toes curl with embarrassed amazement.

A friendly message to Dave: chill the ad hominems, mkay? Crying “FUD!” at the mere sight of someone you perceive as poo-poo’ing an aspect of your interest doesn’t typically translate into a well-thought-out rebuttal. You took the low road and tried to portray Jeff as an instrument of some nefarious, Mr. Burns-like figure at Sun. Is vilifying instead of cool-headed technical discourse really your desired style? Has anyone come at you saying, “oh, he’s a Linux kernel developer, so he has an agenda”?