elemental.org now live on IPv6

This week, I worked with my colo provider and was allotted use of 2607:f308:7::/48 – 80 bits of IPv6 address space totaling 1,208,925,819,614,629,174,706,176 IPv6 addresses to call my own. That’s one septillion, two hundred eight sextillion, nine hundred twenty-five quintillion, eight hundred nineteen quadrillion, six hundred fourteen trillion, six hundred twenty-nine billion, one hundred seventy-four million, seven hundred six thousand, one hundred seventy-six addresses. I hope ARIN doesn’t demand proof that I use them all.

I’ve assigned daleghent.com the address of 2607:f308:7::aa as you can see here:
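The AAAA record is easy to verify; an illustrative lookup, trimmed to the answer:

```
$ dig +short AAAA daleghent.com
2607:f308:7::aa
```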

Setting this up on my server was a cinch. First, I configured my igb0 interface to have a link-local address. This unicast address lives in the v6 link-local reserved space of fe80::/10 and is in the EUI-64 format, using the MAC address of the physical igb0 interface:
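The setup amounts to a couple of commands (a sketch of the Solaris 10-era syntax; substitute your own interface name for igb0):

```
$ ifconfig igb0 inet6 plumb up   # plumb a v6 instance; the kernel derives
                                 # the EUI-64 link-local address itself
$ ifconfig igb0 inet6            # verify: shows fe80::<EUI-64>/10
```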

You can see how the MAC address is used to create the EUI-64 address:
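For the curious, the derivation can be sketched in a few lines of portable shell. The MAC address in the example is the one implied by my router's link-local address, not my own igb0's:

```shell
#!/bin/sh
# Turn a 48-bit MAC into an EUI-64 based IPv6 link-local address:
# flip the universal/local bit in the first octet, then wedge ff:fe
# between the two halves of the MAC.
mac_to_linklocal() {
    IFS=: read -r o1 o2 o3 o4 o5 o6 <<EOF
$1
EOF
    # Flip the universal/local bit (0x02) in the first octet
    o1=$(printf '%02x' $(( 0x$o1 ^ 0x02 )))
    printf 'fe80::%x:%xff:fe%02x:%x\n' \
        "$(( 0x${o1}${o2} ))" "$(( 0x$o3 ))" "$(( 0x$o4 ))" "$(( 0x${o5}${o6} ))"
}

mac_to_linklocal 00:18:74:e0:b8:c0   # -> fe80::218:74ff:fee0:b8c0
```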

Once set up, the NDP daemon (in.ndpd) went to work, emitting Neighbor Discovery packets on this interface at regular intervals. Once my colo provider brought their side up with the IPv6 configuration, their link-local and mine discovered each other, and my IPv6 default route was auto-magically set up for me:
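The result looks something like this (abridged, with my server's own link-local address elided):

```
$ netstat -rn -f inet6

Routing Table: IPv6
  Destination/Mask            Gateway                    Flags  If
  --------------------------- -------------------------- ----- -----
  fe80::/10                   fe80::<my EUI-64>          U     igb0
  default                     fe80::218:74ff:fee0:b8c0   UG    igb0
```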

Above, fe80::218:74ff:fee0:b8c0 is the link-local address of the router I’m connected to. Packets to and from my two local addresses of 2607:f308:7::2 and 2607:f308:7::aa need only traverse my link-local connection to reach the world.

Also ensure that the ipnodes line in /etc/nsswitch.conf knows how to use DNS:
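The relevant lines, for reference (a Solaris 10-era /etc/nsswitch.conf excerpt; your source ordering may differ):

```
# /etc/nsswitch.conf (excerpt)
hosts:      files dns
ipnodes:    files dns
```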

That’s all there is to it.

Traceroute to me using IPv6
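Anyone with v6 connectivity can try it; the command varies by OS:

```
$ traceroute -A inet6 daleghent.com    # Solaris
$ traceroute6 daleghent.com            # Linux and the BSDs
```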

Eventually, I will enable IPv6 for all sites and services I host. I don’t think it’ll be hard at all.

Twenty-one years ago, in December 1995, the IETF ratified RFC 1883, which represented the culmination of years of research under IPNG-WG and codified Internet Protocol version 6, better known as IPv6. Here in 2017, one can finally get the sense that IPv6 address usage is becoming a serious implementation topic. Entire cell phone networks are deployed using it now, yet we continue to see hit-and-miss implementation on the larger Internet sites and among the predominant North American ISPs such as Verizon and others. Comcast/TWC is a stand-out here, having assigned IPv6 addresses to its subscribers for a number of years. Google is an example of a provider whose large portfolio of services all implement IPv6, though most people don’t realize it, and there aren’t any Google products of note which are available only via IPv6.

My personal prediction is that the next 3-5 years will see a sudden avalanche of v6 adoption on the service provider and ISP front. People have had enough time to experiment with it, and finger-in-the-air indications seem to favor future apps and protocols built exclusively around v6. IPv6 is also approaching a quarter century of existence, so basic infrastructure now groks it (the old “our routers don’t support it” excuse is a bit unbelievable these days.) Once people finally understand the protocol, how it differs from v4 and is in many cases superior from a performance perspective, being a part of the v6 network will hopefully become a strategic rather than just an operational imperative.

The key is to not focus on the “v4 will be depleted soon!” doomsday predictions, which come and go and which everyone pretty much ignores, but rather on its built-in advantages. Make people want to use it rather than trying to scare them into it.

On the illumos community


I had been a Sun/Solaris fan all of my professional life – I started out on a Tatung SPARCstation 5 clone running SunOS 4.1.4, which served as the main server of the small mom & pop ISP I worked for immediately out of high school. SunOS 4 turned to Solaris 2, and as I progressed through my career I always found myself in good company with other people who appreciated Solaris. When OpenSolaris eventually came to be, I recall being quite excited to have the mysterious veil around Solaris lifted and to be able to explore and improve it without the requirement of a Sun Microsystems, Inc. ID badge and paycheck. When Sun became Oracle, and Oracle unceremoniously removed OpenSolaris from existence, the illumos project was born; it now serves as a nexus for several distributions, commercial platforms, and individual technologies.

One thing I would like to emphasize in the previous paragraph is the fact that I, while a life-long Solaris and Sun hardware user, was never a Sun employee. When OpenSolaris came about, much of the internal style of process and communication was laid bare to the world. “The Sun Way” was now in full view, and for the environment OpenSolaris operated in it was still relevant, albeit with some tweaks here and there to accommodate non-Sun interests. When Oracle closed OpenSolaris and forced the subsequent creation of illumos, this obviously caused some change. People left Sun/Oracle, started their own companies, and fostered the creation of illumos in ways patterned after the best way they knew how – The Sun Way.

Coming back to the community after the genesis of illumos, I did notice that a lot of these old “Sun-isms” were still in play. People still spoke of “putbacks” and still wanted to see “webrevs” – terms that are gibberish to anyone who lacks their historic context. Beyond these procedural elements, we continue to this day to rely on the Sun Studio compilers and to adhere to self-defeating standards observance, some of which was implemented to support now non-existent Sun products. While the illumos code itself is housed on GitHub, we use none of the plethora of tooling available for it which quite a number of projects have come to depend on. GitHub, in essence, is used only for the repo infrastructure aspect and not much more.

As a maintainer of OmniOS, I often have to deal with these idiosyncrasies – creating webrevs, having to use closed-source, outdated, and unmaintained compilers, posting diffs to mailing lists for other people to deliberate, and so on. For all my love of illumos and respect for the incredibly knowledgeable people who labor daily with their acumen and insight to keep it going, I have to say that there are a few things here that are just too moldy to keep around and that, I fear, contribute to dissuading interested persons or parties from fully engaging productively with illumos.

I’m going to pick a few things to go off on. These might be trivial to some, but they’re important to me for some reason or another.

Reliance on a closed-source compiler (Sun Studio). One of the features peculiar to illumos is that it supports what is called a “shadow compilation environment” – where the same C source code can be compiled serially by a compiler that is different from the main one. In this case, this “shadow compiler” is the one provided by the Sun Studio 12.1 suite – the last version of Sun Studio that was available before Oracle slammed the door shut. Running the shadow compiler is optional, but in the case of testing diffs prior to submission, its use is encouraged as the Sun Studio compiler has always been more attentive about code correctness around typecasting and pointers, among other things. This is good. What is (exceedingly) bad about this, however, is that it’s a closed-source compiler which will only bit rot over time, destined to become a living fossil of dodgy provenance and increasingly dubious utility (C standards move on while it quite obviously won’t.) Now, Sun Studio isn’t all that bad – it does provide the undeniably useful lint utility… however I feel the clock is ticking on this relic, and a suitable open source replacement should be devised.

Reliance on an aging version of a hacked compiler just to compile illumos (GCC 4.4.4-illumos). This is one I’ve never fully understood. I recall back in the OpenSolaris days the big push to get ON to build using GCC rather than it being the exclusive domain of Sun Studio, and GCC at the time was firmly in the 4.x era. The version used today is 4.4.4, altered for ~reasons~ which are lightly outlined here. Strict adherence to certain Open Group standards and such is the reason I see cited most of the time, but it does bother me that scant few in the community seem comfortably knowledgeable about these differences or have the acumen to alter GCC appropriately.

It also raises the basic question: “Why us? Why must we be unique in this way?” As far as I’m aware, FreeBSD doesn’t require an altered GCC to compile; nor does Linux or any other *BSD, for that matter. Why must we be a departure from this? Linux actually provides compatibility in the other direction – the kernel source provides the compatibility layer for each compiler it supports (currently Intel, GCC, and clang.) For example, here is its compat header for GCC. Is it not possible to do something similar in illumos? Whatever the reason, this arrangement makes me feel really uneasy, and it is one where the institutional knowledge required to maintain it could easily be lost to the sands of time.

Too many false golden cows. Within illumos-gate we still find dusty cobwebs left over from the Sun product ecosystem: a custom Kerberos stack, based on an old version of MIT Kerberos, that was part of the SEAM suite; a Sendmail based on 8.14.x that includes Sun-specific LDAP alterations; and CMU Cyrus SASL which, again, has largely languished untouched since the fork and whose place in the tree is a mystery. Undoubtedly there are other obscure corners of usr/src which contain more examples of derelict and customized externally-sourced code. These sub-systems are problems because they’re not well understood, any engineering documentation behind them is lost, and the closest we can get to any in-depth knowledge of them are the recollections of people who might not have had actual first-hand experience developing them in the first place.

I once provided a patch to get illumos-gate’s Sendmail updated to the latest version of 8.14. It was met with a bit of doubt – the sense I got was that it was regarded as just too spooky to touch. NO, I wasn’t going to sit there and test this new Sendmail against weird Sun Directory schemas to see if it behaved the same as the previous version. NO, I wasn’t going to channel John Beck. I made sure it compiled cleanly and passed the basic out-of-the-box local and remote delivery tests. But no, the topic got the bike-shed treatment. “Why don’t we just remove it?” subsequently came up, which was met with “but we need an MTA/MDA because parts of illumos-gate depend on mail,” which then devolved into another “what should -gate provide vs. demand externally” conversation which, predictably, went nowhere. Seriously? So we’d rather keep an old version of Sendmail, with at least one known CVE lodged against it, in our source tree because a newer version might not be tested enough in unarticulated ways and we would be responsible for breaking someone’s ancient Sun Directory-based delivery map?

At any rate, hand-wringy, petrified-in-fear attitudes such as this work completely against us in several ways. We lose out because we willfully allow bit rot and thus suffer its effects (security issues, and things only get spookier over time as memories fade and people move on), and we give a massive impression to potential interests external to our community that we are unwilling to let things be fixed or improved. I wouldn’t blame anyone who felt estranged before they even got heavily involved, because these are terrible things to be impressed by.

A dated participation system. I saved this one for last because it’s the topic of conversation on the developer@ list which spurred this post. Personally, I have a love-hate relationship with webrevs. I hate making them, and I hate maintaining them. I do like reading them, though, but that’s about the extent of it. I’ve tried Review Board and it’s alright, but sometimes I find myself fighting an invisible force to actually use it. Maybe the server it’s running on is a Raspberry Pi, but it’s just always tortuously slow for me.

A recent maneuver by a community member has ignited this topic, centered around the question of “Why can’t we use GitHub?” In retrospect, I think it’s a good question to bring up, although I disagree with the manner in which it was raised. My only misgiving about branch-and-merge contribution styles in Git (the default for GitHub) is that the commit logs become absolutely littered with annoying, low-value merge commit messages. GitHub now allows one to merge without those, such as by using the rebase option on a pull request, but that isn’t the default. Make a mistake when merging a PR and the merge log is indelibly etched into the memory of the project. Yuck. Perhaps someone with more GH-fu can explain this better, or change the default behavior on pull request acceptance. Regarding code review systems, FreeBSD seems to have found a comfortable spot with Phabricator. Could this be a compromise?
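To make the merge-noise point concrete, here is a small throwaway-repo sketch (everything in it – names, paths, commits – is hypothetical) of the rebase-then-fast-forward discipline that keeps a log linear:

```shell
#!/bin/sh
# Demonstrate fast-forward-only integration: no merge commits ever land,
# so `git log` stays a clean, linear record.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo && cd repo
git config user.email dev@example.com
git config user.name  dev
main=$(git symbolic-ref --short HEAD)   # 'master' or 'main', git-version dependent

echo one > file && git add file && git commit -qm 'initial commit'
git checkout -qb feature
echo two >> file && git commit -qam 'feature work'

git checkout -q "$main"
# --ff-only refuses to create a merge commit; if 'feature' had diverged,
# you would rebase it onto $main first rather than merge.
git merge --ff-only feature
git log --oneline --merges    # prints nothing: the history has no merge commits
```

GitHub’s “Rebase and merge” option on a pull request produces this same shape of history.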

In the end, I really wish to see illumos continue to succeed and – most importantly – grow. I’m really worried sometimes that the public’s perception of the project is that it’s an ex-SUNW playground and that you need an “in” in several ways to be a contributor to it. While I don’t at all believe this to be the case, I can certainly see where we (“illumos”) can easily come close to coloring impressions that way. I’ve observed several community members welcome and thank new submitters for their work and always engage them constructively – I was one such recipient on my first big contribution, after all – but we still need to work on the curb appeal quite a bit, I think: modernize some things and re-evaluate some overly-rigid stances on others.

Fruit Bat hitches ride on the Space Shuttle

I got this forwarded to me from a friend, who got it from a friend, who has a relative working at NASA: 

Although we remained hopeful he would wake up and fly away, the bat eventually became IPR 119V-0080 after the ICE team finished their walkdown.  He did change the direction he was pointing from time to time throughout countdown but ultimately never flew away.

IR imagery shows he was alive and not frozen like many would think.  The surface of the ET foam is actually generally between 60-80 degrees F on a day like yesterday.  SE&I performed a debris analysis on him and ultimately a LCC waiver to ICE-01 was written to accept the stowaway.  Lift off imagery analysis confirmed that he held on until at least the vehicle cleared to tower before we lost sight of him.

And thus is the legend of the STS-119 Bat-ronaut.

Click on the thumbnails to get the full-res photo.

Portugal vacation photos

Photos from my two week trip to Portugal to see the country and attend Boom Festival are up.

Various places in Portugal
Boom Festival 2008 photos

Mercury gets a HBA upgrade

mercury.elemental.org is the server which hosts my $HOME and this website. It’s my Solaris 10 play-box, and I guess you can say that maintaining it is something of a hobby.

Its hardware is a quad core Xeon-equipped Dell PowerEdge 860, a small 1u server. Its pair of internal drives are Seagate SATA2, and were connected to the on-board Intel ICH7-based SATA controller. But something was fishy here: the Solaris ahci SATA driver never attached to it, and the drives instead ran in IDE mode. Despite my best efforts, I couldn’t change this. I eventually found out the reason – Dell crippled the SATA controller in the system BIOS to allow only IDE mode!

So this server was sold with “SATA drives”, which would imply a fully functioning SATA controller to drive them… but not quite. IDE mode means none of the benefits of SATA NCQ and other niceties.

To fix this, I got an LSI SAS3041E-R controller – a PCIe x4 card that uses the LSISAS1064E chipset and offers 4 SATA ports. In Solaris land, this card is driven by the mpt driver, a proven driver, as the LSI SAS 1064 and 1068 chipsets drive the on-board hard drives in pretty much every current Sun x86 and Niagara-based SPARC system.

I installed this card in the single PCIe x8 slot in the PE860, ran a 24″ SATA cable from it to HDD1, and used the existing Dell cable that connected the on-board controller to HDD1 to connect HDD0 to the card. After some fiddling in /boot/solaris/bootenv.rc to tell the kernel the new device path to its boot drive, the mpt driver attached and I was good to go.
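For reference, the bootenv.rc change amounts to a single property. The device path below is purely illustrative – find your actual path with `ls -l /dev/dsk/c*s0` once the mpt driver has attached:

```
# /boot/solaris/bootenv.rc (excerpt) -- illustrative device path only
setprop bootpath '/pci@0,0/pci8086,3595@4/pci1000,3140@0/sd@0,0:a'
```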

I kicked off a SVM mirror resync as a basic test of sequential IO, and I hit 75MB/s reading from one drive and writing to the other. Not bad. A zpool scrub of my mirrored ZFS pool of 66.5GB of data (pool is 444GB in size) took just over an hour.

So if you’re thinking about a 4 or 8 port SAS/SATA card, consider the LSI SAS3041 or SAS3080/3081 cards, respectively. Both come in PCI-X and PCIe flavors and are supported by Solaris (and OpenSolaris) just fine.

/usr/X11/bin/scanpci output:

Kernel boot messages:

OpenSolaris 2008.11 – A Preview For The Storage Admin

Many reviews have been written about OpenSolaris since its release, but all of them barely tread beyond the desktop aspect, with the obligatory screenshots of the GNOME environment and a high-level description of only the major features most are already familiar with, or at least have heard of.

I’d like to take a different approach with this review, one that descends below the GUI to highlight aspects that server administrators in particular would be more interested in.

A new telescope: William Optics Megrez 90

I’ve been really quiet with the astronomy-related blog posts over the past year, but that doesn’t mean that I’ve been straying from the hobby of amateur astronomy – far from it. I’ve signed up with two local clubs and have been bringing my scopes out to star parties (or just my back yard) whenever I can.

Up until recently my only two telescopes have been an Orion XT10i, a 10″ dobsonian, and a Coronado PST for viewing the Sun in Hydrogen-alpha wavelengths. This past holiday I treated myself to a new scope, a 90mm apochromatic doublet refractor made by William Optics (WO) named the Megrez 90.


The Megrez 90, as the name implies, is a high-quality refractor telescope that uses calcium fluoride optics. The objective lens is 90mm in diameter and the scope has a focal length of 621mm, giving it a focal ratio of f/6.9. When WO brought this scope to market, it took the market by storm, quickly gaining a reputation as a high-quality instrument at an astonishingly low price, easily comparable in optical quality, fit, and finish to long-standing founts of quality such as TeleVue and Stellarvue.


I found the reviewers to be spot-on with their assessment of this telescope’s construction and features. Its dual-speed (10:1) Crayford-style focuser has made me wish I had it on my big Orion XT10i. The stars are beautiful pinpoints with no detectable (to me at least) chromatic aberration. I have only spent a few nights outside with this ‘scope so I don’t have a full feel of its capabilities… more on that later. But I will say that I have been impressed so far and would at least offer it as a suggestion to anyone who is looking for a telescope in its class.

Along with the telescope, I purchased WO’s EZTouch alt/az mount and wooden surveyor’s-style tripod to put it on, as well as their Red Dot Finder instead of a classic finder scope. I found with my XT10i and Telrad that I prefer to star-hop to my target rather than bungle around inside a restricted FOV.


I foresee many nights out under clear skies with this fine instrument.

End of an era, onward to a new one

With a bit of sadness, yesterday marked my last day of work at UMBC, where I spent the past 3½ years learning lots of new things. It’s where I developed my deep interest in mass storage and furthered my Solaris knowledge even more, and where I delved into kernel programming by participating in the OpenAFS project.

I learned a lot about people there, too, and how different sectors of the IT industry just have sometimes inexplicably different mindsets about how to do things. Coming from the .com world to the .edu world was a bit of a whiplash event for me, having grown up around profit-based and customer service-centric organizations. I know I did leave UMBC with some lasting friendships and a deeper appreciation of skill sets in other people that I’d barely given a thought to before.

Onward and upward, I transition to my new job and re-enter the .com world. On Monday, I start with Salesforce.com and will focus on storage (and Solaris) there. It’s an exciting opportunity for me and I’m sure I’ll be immersed in the technology and tasks I enjoy. I’ll work with a top-notch team in helping to keep SFDC at the forefront of its industry. Good times ahead!

Server upgrade time – elemental.org gets modern

After almost 8 years of running elemental.org mail, mailing lists, shell accounts, many websites (such as this one), database servers and essentially being a one-server ISP, the Sun Ultra 2 which ran all those things as lithium.elemental.org was retired and replaced this past weekend with a new server. Say hello to mercury.elemental.org.

Mercury is a Dell PowerEdge 860 with an Intel Xeon X3220 (quad core, 2.4GHz) and 8GB of 667MHz DDR2 RAM. Unlike lithium, mercury’s storage is entirely internal in the form of two mirrored 500GB SATA drives. This keeps the entire package in 1 rack unit of space to keep colocation costs down.

What really excites me about this new server is that it is running Solaris 10 8/07 (lithium was running a very patched Solaris 8 FCS!). Solaris installed without a hitch, and the 860’s onboard BCM5721 NICs are recognized by the bge driver, as is its IPMI baseboard controller by the bmc driver. The chipset on this system is the Intel ICH7, and unfortunately the Solaris ahci driver supports only the ICH6 at the moment, so the drives are running just fine in IDE compatibility mode.

This upgrade wasn’t just a mere update of hardware and OS. I also completely changed how the mail storage works, and I now make use of ZFS file systems for each user home directory and virtual web site:

  1. Out with uw-imap, in with Cyrus. All mail is delivered to Cyrus, so there are no more maildir-style spools sitting in each person’s home directory.
  2. To take advantage of Cyrus’s features, elemental.org is now operating its own Kerberos realm, ELEMENTAL.ORG. This is my first time running my own Kerberos KDC, and I love it. Cyrus and Sendmail, via SASL, now offer GSSAPI authentication. Using Solaris’s pam_krb5_migrate.so.1 PAM module, as people log in with their UNIX passwords, a Kerberos principal is made for them and they are granted tickets. Pine is configured to connect to Cyrus and authenticate with GSSAPI, so shell users don’t have to type in or save their password when accessing their email!
  3. As I mentioned, all user data is now stored on a mirrored ZFS pool. Each user and virtual website gets their own ZFS file system and this will allow me to keep tabs on disk usage (and easily delete a user or site if the need should arise.) The zpool’s net size is 442GB.
  4. All incoming email goes through greylisting, ClamAV, and finally SpamAssassin milters.
  5. I’m more at ease and familiar with Solaris’s SMF facility now, having made a point to write SMF manifests for the services I’m running rather than plain old init scripts.
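The per-user ZFS scheme in item 3 boils down to commands like these (pool and user names here are hypothetical):

```
$ zfs create mpool/home/alice            # one file system per user
$ zfs set quota=10G mpool/home/alice     # keep tabs on disk usage
$ zfs list -r -o name,used,quota mpool/home
```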

In addition, I’m now monitoring several aspects and services on the new system using Cacti.

Here’s to another 8 years of hopefully trouble-free operation!

Verizon FiOS with only an Apple Airport Extreme

I’ve had Verizon’s FiOS service for about a year now, and by and large I’ve enjoyed it quite a bit. One thing that has bothered me, though, is the big Actiontec router that they supply. It’s a nice router and all, and you do need it if you also have Verizon’s digital cable service. But I have just the internet service, and I already have a gaggle of Apple Airport Extreme and Airport Express base stations around the house, so this Actiontec router was just a superfluous thingy and I felt that my Airport Extreme base station could be put to better use in its place.

Now, the Actiontec router is what the VZ tech installs. It takes the 100Mb ethernet connection coming into the house from the ONT outside. According to VZ support, only it can be used to terminate the FiOS internet, but I doubted this. I wanted this thing out of the picture and was successful at doing so. What you need to do is the following:

  1. Log in to the Actiontec’s web interface (typically by going to
  2. Select Network, click on “Ethernet (Broadband)” and its edit icon. Down the page, you’ll see a button labeled “Release”. It’s important to release the IP address VZ’s network has given the Actiontec, or it’ll refuse to allot one to your Airport Extreme once you bring that up in its place.
  3. Immediately turn off the Actiontec. Remove the “WAN” ethernet cable from it and plug it into the “WAN” port of your Airport base station. Turn the Airport on.
  4. The Airport base station should boot up and request an IP from VZ’s DHCP server. Speaking of which, the “Internet Connection” setting in the Airport should be “DHCP” and not “PPPoE”. VZ no longer uses PPPoE on its FiOS lines.
  5. Configure your Airport wireless network as you see fit and you’re done. No more Actiontec.

Step 2 is muy importante. If you don’t do that, your Airport base station will be sitting there with a blinking amber light because the VZ network is refusing to give it an IP, simply because it still thinks that your (no longer operating) Actiontec has it.