Dead Air America

Another notch in the belt of “Bad Technology Decisions in action”…

Air America Radio was dead in the water up through 11 AM yesterday. They went from people being able to visit the home page, click a link, and listen, to requiring “registration” in order for people to get to the stream. Around 11 AM Mountain Daylight Time, they went back to their “old” setup (heh, the network is only a few days old) because their servers crashed and burned.

Consequently, my link to their stream was broken as they went to a “private” URI that you had to log in to see. I registered for an account (or tried to) in hopes I could see the stream URI and be able to update it on my block on the right-hand side of this page, but with everything totally borked, that was just out of the question.

But the biggest mistake is this: they made, IMHO, some poor technology choices for what appears to be an incredibly high-demand site. They are using Cold Fusion running on Microsoft Internet Information Server for their content management system. While these are nominally fine choices for small-to-mid-scale configurations, you have to throw a lot of hardware at the problem to handle the massive loads they appeared to be experiencing. You’d need a hefty back-end to drive it regardless of operating system and content management system, and Cold Fusion is definitely no lightweight. There are clearly either some programmatic issues or a severe lack of capacity at work there.

Bandwidth isn’t their problem: systems management is. They didn’t provide any informative message telling users the system was under heavy load and unable to service requests, and their registration system was borked (probably due to load) during this big downtime window. My suggestion would be some dynamic load shedding of registration through a reverse-proxy-caching front-end: figure out the maximum load the system can sustain, track it at the proxy, and when load exceeds a threshold the designers know the back-end can handle, throw up a message saying “unable to process your request at this time”, rather than allowing the system to grind through requests until everything’s timing out and nothing is happening.

* Sizeable farm of servers able to handle multiple millions of hits using dynamic content and proprietary technology like Cold Fusion, Oracle, and Microsoft Windows: $2,000,000+.

* Radio personalities to populate your new radio network: $$unknown millions$$/year

* Watching your web servers and streaming audio crash and burn due to poor technology decisions:

Priceless.

10 thoughts on “Dead Air America”

  1. Be nice

    Oh, give them a break. They’ve only been up a week, and they’re doing it on a shoestring budget (despite conservative radio’s accusation that the whole thing is the pipe dream of a few “rich liberals”). I think that they simply had no idea that they’d get the web traffic that they’ve gotten, and were completely incapable of dealing with it.

    They’ve got a bigger influx of money coming in now (McDonald’s and a few other large corporations came on as advertisers this week), and I’m sure a system upgrade is high on their list of priorities.

    Dani and I listened to Air America all of yesterday evening, and enjoyed it a lot, particularly Janeane Garofalo and Stephen Colbert’s interview with the guy who runs The Daily Kos blog, regarding the use of mercenaries in the war in Iraq.

    And by the way, I doubt they’re spending millions of dollars on the radio personalities.

    — Ben Schuman Mad, Mad Tenor

    1. My job 🙂

      Bah, I can’t give them a break. I’m a professional sysadmin; it’s my job to fix systems. What I really want is for them to hire me to fix their problems 🙂

      What makes you think they aren’t forking out the big bucks? J.G. is a reasonably well-known actress, and I doubt she’s doing this for peanuts. The mid-day gal is a very well-known radio personality from Florida. I’m positive they’re not doing this for less than six figures apiece…


      Matthew P. Barnson

      1. heh

        Well, give them a call – maybe they could use you. 😉

        I don’t think they’re forking out the big bucks because I don’t think they have big bucks to spend. Janeane Garofalo may be a well-known actress, but she’s also a liberal activist, and does everything she can for the cause — she hosted MoveOn.org’s award presentation for the “Bush in 30 Seconds” commercial contest, and I’m sure she didn’t get a big check from them either.

        Neither of us can be sure. I may email the station and ask for an estimate of salaries for their on-air personalities – it’s in their interest for the public to know that they’re doing this for the message, not the money (if that is in fact true).

        — Ben Schuman Mad, Mad Tenor

  2. Bad Tech

    I am sorry, but unless you *need* to connect to a windows-only service, using IIS should be considered professional malpractice for technology officers. (Even then, it is questionable. Do you really *need* that windows-only service?)

    Unfortunately, it is mostly people in the trenches that know that. Microsoft is outstanding at selling their products to top brass, who then impose poor decisions upon those below.

    I am personally of the opinion that a multiprocess model is inherently more scalable than a multithreaded model – and it has the additional benefit that it is almost trivial to set up a server farm because the processes act independently anyway. Who cares if the new process is spawned on a different box?

    The setup described by Matt sounds like the type of thing that would be proposed by a “consultant” with only Windows experience. If you want scalability, go with Unix or Linux. If you’re really concerned about ease of administration, use XServes with Darwin Streaming Server.

    1. Unix geekery

      As far as multiprocess being inherently more scalable than multithreaded…

      It’s a tough thing to say, really. I’ve written multithreaded applications and apps that fork, and have worked with both for many years. When I look at a thread, I see it as a “lightweight fork”. A full fork logically replicates global variables, code, and stack for each process, regardless of the scope of that process (modern kernels soften the cost with copy-on-write, but the logical picture is a full copy). For instance, if your program sorts widgets into whoozits and whatzits, a second fork will include code to sort whatzits even though it’s only dealing with whoozits.

      A thread, on the other hand, shares global variables with the parent, shares code with the parent, and only creates a unique stack for itself. That thread sorting whoozits doesn’t need, and hopefully doesn’t have, code to sort whatzits, or global variables relating to whatzits. This has a memory and instantiation-time benefit, at the expense of some code complexity and debugging problems. UNIX supports both forking and threading (in modern kernels, with varying definitions of “supports”), while Windows only supports threading well. A fork() call on Win32 has to be emulated (Cygwin does this, for example) and is extremely processor-intensive; it’s voodoo how it works under the hood. I just know it’s a good idea not to do it, or suffer painfully slow execution times.
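That sharing difference is easy to see in a few lines of Python (os.fork is Unix-only; this is a sketch, not production code):

```python
import os
import threading

counter = 0  # one global, visible to every thread in this process

def bump():
    global counter
    counter += 1

# A thread shares the parent's globals: its change is visible afterwards.
t = threading.Thread(target=bump)
t.start()
t.join()
# counter is now 1

# A forked child gets its own copy: its change never reaches the parent.
pid = os.fork()
if pid == 0:      # child process
    bump()        # bumps the child's private copy to 2
    os._exit(0)   # exit without running anything else
os.waitpid(pid, 0)
# counter is still 1 in the parent
```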

      But as far as any built-in benefit to using forks over using threads, I’m not sure there is one, besides the inherent ease of not sharing the same global namespace. That alone tends to make things more “scalable”: throw more hardware and processes at the problem, and it can go faster, even if your code isn’t aware of what sibling processes are doing. When you’re running a threaded application, debugging can be a bit tougher, and locks taken when the process needs to address global concerns can make scalability “iffy”. Properly written, both work very well. Forking, with its full process and separate global namespace, is just easier to wrap your head around, IMHO. And there’s your aforementioned benefit of working well in a server “farm” (MOSIX, for example), where long-running processes can be reallocated among lots of machines.

      Threading’s king on big iron, though. When you need highly-optimized tight loops on multiprocessor machines, threads are the way to go. And you can bastardize the model, too: fork off new processes, then have each fork run multiple threads to handle data. Technology is great.
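That hybrid model (processes for isolation, threads inside each process for parallel work) might look like this in Python, using multiprocessing as a portable stand-in for a raw fork(); the data split here is made up for illustration:

```python
import multiprocessing
import threading

def add_chunk(chunk, results):
    # one thread per chunk; list.append is atomic under CPython's GIL
    results.append(sum(chunk))

def worker_process(chunks, q):
    # each forked process runs several threads over its share of the data
    results = []
    threads = [threading.Thread(target=add_chunk, args=(c, results))
               for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    q.put(sum(results))  # report this process's subtotal to the parent

q = multiprocessing.Queue()
shares = ([[1, 2], [3, 4]], [[5, 6], [7, 8]])
procs = [multiprocessing.Process(target=worker_process, args=(chunks, q))
         for chunks in shares]
for p in procs:
    p.start()
total = q.get() + q.get()  # 10 + 26
for p in procs:
    p.join()
```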

      Except IIS. Ugh. I totally agree with you on that: the people who make technology decisions should be the ones in the trenches maintaining the thing, not ivory-tower managers trying to schmooze people. Unfortunately, political reality sets in, and there are certain “names” management insists the product carry so that it is more “salable”. I should start a business: No-Bullcrap IT Consulting…


      Matthew P. Barnson

      1. You forgot locking, tho

        Yes, your comments are on point, but the thing that frequently kills threaded apps is locking. Yes, you share all that stuff; however, every single access to a shared variable or structure needs to be thread-safe, and that bottlenecks you in every critical section. (Yes, threading is more cache-friendly, but shared libraries can get you the same beneficial cache effects while allowing higher scalability on multiprocessor boxes.)
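A tiny illustration of that serialization, assuming a plain shared counter:

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        # Every access to the shared variable goes through the lock:
        # correct, but the threads serialize in this critical section.
        with lock:
            counter += 1

threads = [threading.Thread(target=bump, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter ends up at 40000, but only because every increment took the lock
```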

          1. Threaded pie

             class pieWorkerThread(QThread):
                 def __init__(self, name, hungry):
                     QThread.__init__(self)
                     self.name = name
                     self.hungry = hungry
                     self.stopped = 0

                 def run(self):
                     while not self.stopped:
                         time.sleep(rand.random() * 0.3)
                         msg = rand.random()
                         event = QCustomEvent(10000)
                         event.setData("%s: %f" % (self.name, msg))
                         QThread.postEvent(self.hungry, event)

                 def stop(self):
                     self.stopped = 1

             eatingPie = pieWorkerThread(1)
             if eatingPie.stopped == 1:
                 if eatingPie.hungry == 1:
                     eatingPie(1)

            Important note: this code probably won’t execute without a syntax check. You will remain hungry until the thread lock on mother.watching(you) expires.


            Matthew P. Barnson

        1. Global namespace

          And this is why smart programmers avoid modifications to global variables in a threaded application wherever possible, opting instead for alternative methods of signalling between threads.

          But if you are careful to keep as many variables as possible local to the thread, then you’ve just bloated your thread size and eliminated much of the potential savings of threading vs. forking 🙂

          Bit of a catch-22, yeah. It can be done right and done wrong — but now that I think of it, it’s the global locking that bites me in the butt every time on a threaded app 🙂
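One such alternative signalling method, sketched here with Python's standard Queue, which does its own locking internally:

```python
import queue
import threading

# Threads hand data to each other through a Queue instead of poking
# shared globals; the queue's internal lock is the only one needed.
inbox = queue.Queue()

def producer():
    for i in range(3):
        inbox.put(i * i)
    inbox.put(None)  # sentinel: no more work

t = threading.Thread(target=producer)
t.start()

results = []
while True:
    item = inbox.get()
    if item is None:
        break
    results.append(item)
t.join()
# results is now [0, 1, 4]
```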


          Matthew P. Barnson

Comments are closed.