After reading the article in its entirety, a few thoughts went through my head. First: why not CentOS? The article mentions that Wikimedia was running different versions of Red Hat and Fedora, which is a problem in itself that I'll also address, and that they could run 100% Red Hat Enterprise Linux. However, RHEL licensing is extremely costly. In fact, what many don't realize is that keeping updated licenses on RHEL is more expensive than keeping updated licenses on Windows. So the question arises: why not just run CentOS? They don't mention it in the article, but I can make a couple of bets as to why not. First, as fine as the CentOS community and distribution are, they walk the razor's edge. On average, CentOS is about two to four weeks behind RHEL in patching security vulnerabilities, bugs and holes. So, because CentOS is behind RHEL in fixing these bugs, they have a choice to make: either wait for Red Hat to fix the bug, then compile and ship the patch themselves, or, in a more timely manner, patch the bug on their own, risking breaking binary compatibility with RHEL. In either case, it's a lose-lose scenario in terms of security and stability. I doubt this is an attractive option for Wikimedia.
The second thought that went through my mind was: why are they running so many different versions of Red Hat and Fedora? As mentioned in the article: “Over five years, the servers were running a variety of versions of Red Hat Linux and Red Hat Fedora, making it more complicated to install applications and maintain the servers.” Yeah. You think? Doing software development, I have become familiar with different versions of development tools on different platforms causing problems. It’s probably due more to a lack of discipline and not understanding the spec on those tools on my part, but when developing, it’s a pain to keep my code working well on several different platforms. On top of that, directory structures change, standards evolve, and tools disappear and morph as platform versions increase. GNU/Linux is an evolving ecosystem, so there is no guarantee of consistency between versions.
Why not standardize on, say, Fedora 6? Again, it’s not mentioned in the article, but I can make a few bets as to why Fedora wasn’t picked either. Fedora, and its community, are rock solid. It’s a good distribution, and has a lot going for it. However, Fedora positions itself as a test bed for new technologies and changes. Some call this “the bleeding edge” with respect to the newness of the software. As such, things are going to break. That’s just a guarantee. Hang around my office long enough, and you’ll hear the weeping, wailing and gnashing of teeth about X11, wireless, sound, and other things. Fedora, although it strives for stability, never really gets that opportunity. Instead, each release is broken somehow, someway. Even Fedora 6. Further, patches and updates are only provided until one month after the second following release (which usually works out to about 13 months). After that time elapses, any instability, bugs and holes remain. Upgrading your servers every year just to keep the latest updates and patches on your system probably isn’t attractive.
So, it seems then that the Red Hat based distributions aren’t cutting it (and you have the same problems with openSUSE and SLES as described above), which really only brings us to the Debian based distributions. Ubuntu was picked as the platform on which to build their infrastructure. There are some good reasons for this. First, with Debian-derived distributions, you can upgrade to the latest release in place, without taking your box offline. For an organization like Wikimedia, that’s probably a good thing. Second, Ubuntu has one of the largest, most vibrant and diverse open source communities in the world. Its unparalleled popularity on the desktop, coupled with routine updates and a company willing to back it as the “enterprise” platform, makes it attractive indeed. You get updates, security patches and bug fixes for 5 years if you’re running the server version of an LTS release, which I’d bet they are. On top of that, if support contracts are attractive to you, then Canonical is more than willing to answer the phone.
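For the curious, that in-place upgrade is just a package-manager operation. Here’s a minimal sketch of what it looks like on an Ubuntu server — the commands are the standard apt and update-manager tools, but this is illustrative only, not Wikimedia’s actual migration procedure, and you’d want to try it on a test box first:

```shell
# Bring the current release fully up to date before attempting a jump
sudo apt-get update
sudo apt-get upgrade -y

# Ubuntu's supported path to the next release; it prompts before
# making any changes, and services stay up until packages restart
sudo apt-get install -y update-manager-core
sudo do-release-upgrade

# On plain Debian, the equivalent is pointing sources.list at the
# new release name, then:
#   sudo apt-get update && sudo apt-get dist-upgrade
```

The point is that the box keeps serving while packages are swapped out underneath it; you reboot (or restart daemons) on your own schedule rather than reinstalling from scratch.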
I guess the only question left in my mind is what sort of time and money will be spent moving over to the new platform. Any company making a big transition such as this won’t take it lightly. Everything will have to be efficient, executed smoothly, and go off without a hitch. Of course, anyone who’s been in the system administration world for some time knows that this is unlikely. There will be headaches, wasted time, and unforeseen glitches. How the company handles those problems will be key. Still, this was great news awaiting me in my RSS feeds this morning.