Back by popular demand: GNU/Linux vs. High-End UNIX

In the “Yet Another Repost” category, here’s one of my old articles examining the differences between GNU/Linux and High-End UNIX out of insights gathered from a Slashdot discussion. Yeah, I know a lot of people slam Slashdot for the technical ignorance manifested there, but if you’re familiar with the territory, it’s not that treacherous.

Read more for the repost from March 3, 2003.

Repost follows — There was an interesting Slashdot discussion today regarding what’s missing from GNU/Linux systems that high-end UNIX systems possess. I made a comment myself (which I’ve pasted in its entirety inside, hey, it’s my web site!), and gleaned some interesting information from the discussion.

Before you get to my cut & pasted entry, here’s some of what I gathered from the discussion…

A) The Price. At first you think “hey, ok, this is a big positive”, but wait a minute: if you install GNU/Linux for free, it’s only free if your time is worth nothing. If you pay for professional installation of your operating system, you don’t lose as much time. Curiously, when I worked at Talk2 a year ago, they had HP/UX systems there that were the original factory install and had been for years. They had patched them regularly, of course, but still, they saved a TON of time by never installing a “fresh” HP/UX system. They simply created a system image when the system was new, and restored from that. It appeared to lower their management costs; at least, they decided they didn’t need admins anymore and canned the two of us 🙂 Meanwhile, Solaris is free on Sparc hardware. Guess you pay that cost up-front.

B) There’s a good summary of the difference in security features between GNU/Linux and conventional UNIX here.

C) Curiously, nobody could seem to come up with any high-end UNIX systems that were microkernel-based. They are all monolithic, with modules, similar to Linux (or, in the case of HP/UX, simply monolithic and you recompile the frigging thing if you want any new hardware). The only good examples of microkernel-based systems anybody could conjure up were GNU/Hurd and BeOS. I’m eager to see how well Hurd penetrates the OS market.

D) Man pages. GNU/Linux seems to have almost abandoned them entirely in favor of Info pages, text READMEs, HTML documentation, or the most popular choice: no documentation at all. Apparently you can get around this in the case of info pages with:

info --subnodes --output - | less

Alternatively, use Debian, where every binary is required to have a man page or is considered broken. Interesting; I didn’t know that before.

E) Great quote from Zeinfeld, and spot-on in my opinion: “The man issue points to the real limitation of Unix which isn’t really a lack of features, the problem is the quality of implementation.” Completely true. Feature-for-feature, GNU/Linux really competes well on all fronts. But the quality of the implementation on GNU/Linux systems lags considerably, particularly in the first iteration. Yet, over time, the tools evolve to a point where they are superior to the earlier ones. Witness the 10+ year old GNU tools… they are far, far better than the corporate implementations. Bash, the C shell, and ksh are all awesome compared to the POSIX sh that ships with HP/UX or Solaris. GNU’s grep, awk, sed, and other command-line utilities are so much better than the vendor-provided ones that vendors are abandoning their own in favor of GNU. It’s definitely getting there.

My original observations from the Slashdot discussion follow. —– A few things that are very nice about some commercial UNIX variants you don’t have on GNU/Linux systems:

1. Integrated systems management, a la “SAM” in HP/UX. Although I’m first in line to say that systems administration should never be handed over to imbeciles, SAM is easy enough that non-professionals can use it, yet it covers all the bases of systems administration, from your hosts file through recompiling a kernel. It seems to be what Linuxconf wants to be, but isn’t quite yet. It also does this without royally screwing up particularly hard-fought configuration files. Just use Linuxconf to configure network interfaces after you’ve set up a beautiful five-line config and see what it does to /etc/sysconfig/network-scripts/ifcfg-ethX. Red Hat’s config tools are getting there, and YaST seems to have nailed it, but it’s not free software.

2. Transparent X configuration w/3D support out of the box. When the installers get it right (about 75% of the time), Linux + X-windows is just fine. When they get it wrong, the iterations are ugly:

XFree86 -configure (blah blah blah)
XFree86 -xf86config /root/XF86Config.new (dumps out with some obscure error)
vi /etc/XF86Config.new (ad nauseam)

I miss how trivial it is to adjust X on my old Sun. Then again, there, instead of hacking a config file, you had to hack some obscure command options. And setting up dual monitors on XFree86 is much better than on Solaris (or was, back when Solaris 8 was the standard, haven’t mucked with Sun equipment much since then).

3. More on the X server: FAST X services. I’ve run XFree86 on really new, top-of-the-line Nvidia, ATI, and Matrox hardware, and not one of them can even touch the performance of X-windows on my old SGI O2. IRIX X is just amazingly faster. I’m not talking so much about 3D performance, but multi-head, full-window-drag type stuff. Watching the ghosting as I wiggle this very screen I’m typing in back & forth on my Red Hat 8.0 box at work right now, on an Nvidia GeForce4 @ 1280×1024, is just painful. I know people are going to say “it’s the configuration, stupid!” but if optimizing for decent X-windows performance isn’t easy enough for a UNIX veteran of 7 years to do without serious pain, it’s not easy enough for an admin to want to deal with it. NOTE: At home on Gentoo, I recompiled everything for i686 and installed Nvidia’s drivers. It’s considerably better, but still doesn’t compare. Then again, I don’t have an O2 anymore for a real head-to-head comparison, so maybe my memory is playing tricks on me. On the other hand, identical hardware in MS Windows gives immensely better 2D performance.

Then again, that’s just a graphics professional feature, more than a server-type feature. Comparing any other UNIX to SGI’s IRIX for graphics work is just no contest.

4. Memory fault isolation. On Solaris, I’ll actually get a message telling me which DIMM is bad and which slot it is in. Admittedly, this is a limitation not only of the operating system, but also of the hardware design. When you have 30-some-odd DIMMs in some E10K server, if you didn’t have this kind of isolation, trying to find the bad stick of RAM would be beyond time-consuming. Ditto for HP/UX when replacing faulty RAM. Once again, though, IBM seems to be addressing this with their higher-end servers, and I look forward to about a year from now when it becomes more of a common feature on GNU/Linux servers.

5. Something like “OpenBIOS” or Sun’s OpenBoot (I think that’s the name? Been a while, I forgot). This is great to work with, for instance, on Alpha systems. Fairly complete diagnostics before the OS even boots, and it all gets shucked out the serial port. You can compensate for this by installing some kind of lights-out management board in your PC, but if you ask any UNIX admin that has used the non-PC-BIOS stuff on pro UNIX systems, a PC BIOS just doesn’t compare. For instance, on the Alpha I have at home, I can hook up fibre channel and enumerate all the available partitions, flag one as bootable, mount some filesystems and make changes, force boot to HALT temporarily rather than boot to full, stop the OS, do a memory dump, sync the filesystems and reboot… a whole lot.

GNU/Linux on Alpha/Sparc inherits these benefits, so it is a non-issue there. GNU/Linux on x86 still really, really sucks in this department.

That’s about all I can think of for now. The difference between managing UNIX systems from Sun & HP, versus PC-based GNU/Linux systems, is still large but shrinking. As evidenced above, a BIG chunk of what still sucks about Linux is due to hardware & hardware integration, not the O/S itself, really. GNU/Linux is definitely getting there; I love running it on my Alpha at home, because I get many of those benefits mentioned and still use the operating system I love.

Back by Popular Demand: The Art of Tying Shoes

To skip ahead to what I’ve figured out is the best knot on the planet, go to the bottom of this blog entry.

On the bus, we had a discussion of how to properly tie a shoe. I’ve had problems with my shoes coming untied my whole life, and just assumed it would always be that way. Since I was about six I’ve double-knotted them to keep them from coming untied unexpectedly, and that’s worked great for the last twenty-three years.

Richard, my neighbor, noticed the double knot, and commented that the reason they were coming untied was that I was tying them wrong. He said this vital information came to him from a former Math major at the University of Utah, who in turn had claimed that was the most important piece of information he’d learned in college. The way Richard explained it, I was simply crossing the laces backwards in the very first step of making the knot.

Try out this much better explanation of the difference in knots, and how to keep your shoelaces from coming untied at all the worst times. This is definitely one of the most important pieces of information you’ll use in your life if you wear lace-up shoes. However, if you read to the end, you’ll find that properly tying your shoes in a square knot is not the best way to keep them from coming untied! The ultimate solution is probably cowboy boots or loafers, but until then, the Freedom Knot link at the bottom of this page comes pretty close to the perfect knot.

Do you think you know all there is to know about lacing up shoes? Then check out the mathematical proof for the most efficient lace-up patterns. Spoiler: Turns out man has already selected the strongest patterns through trial and error.

And in case you really want to build the better bow, try this shoe-tying link.

Update 9 May 2003: Since I wrote this page, I’ve come across several more useful shoe-tying links. Since this seems to be the only page on my blog people come for, here is the link list!

  • Zen Enlightenment and the Art of Tying Your Shoes. Warning! Pop-ups! My take: I don’t entirely get Zen, but if you’re into it, now there’s a Zen reason for you to appreciate shoe-tying more.
  • Shoe-tying poetry and stories. No, I’m not kidding. These are mostly poems and stories to remind you of the steps to tying your shoes.
  • Understanding Robotic difficulty tying shoes. Yep, it’s hard for robots, too, and this page will help you understand why.
  • The Freedom Knot. Now THIS is the best shoe-tying knot I’ve ever found. Period. This knot does not come undone! No lie, no joke. This is a 7-step, amazing knot. Click the little feet images to get the step-by-step procedure. It looks confusing at first, and it’s really hard to do if your laces are too short, but man, it rocks. This beats the "Build a Better Bow" knot because it is easier to do, particularly for small fingers, and it’s easy to remember which lace to pull: you just pull them both!

Unfortunately, I don’t have any links yet for one-armed shoe-tyers. I’m still looking for the perfect one-handed shoe bow. If you have suggestions, mail me at matthew at barnson dot org.

EDIT by matthew: Since the day I posted this on May 5, 2003, I finally received several links to one-handed shoe-tying information. The best one I’ve found is Jenny Stemack’s Guide to One-Armed Shoe-Tying (scroll to the bottom; she includes directions both for one-handed and one-armed shoe-tying). It’s really quite excellent, and based on the number of requests I’ve received via this site for one-armed shoe-tying information, will probably be helpful for those of us who find ourselves temporarily or permanently one-handed.

images & photos now working

Well, after a good deal of toil and head-banging, I finally got the php “gallery” package up and running. There are a few issues, read more for details.
Important note: There are issues with a few file names in the Klamath Falls folder. Apparently I had some duplicate filenames, which trashed the other files that were already there. So comments may be lost, and icons may be wrong in there right now.

Well, after a good deal of toil and head-banging, I finally got the php “gallery” package up and running. This provides some pretty cool image viewing functionality that I really like. Unfortunately, there appear to be a few problems running Gallery in “embedded” mode like we’re doing.

  • Non-authenticated users cannot add comments to pictures
  • Sometimes the photo album stuff works weirdly on some browsers. Think that’s a problem between Gallery and the browsers, though.
  • Navigation links sometimes take user out of Drupal framework. Grr.
  • Can’t use the “Goofy” theme with it. This really bugs me, because I really liked that theme! But it prints out strange “\n” characters and form elements all over the screen when you use it — ugh.

That’s about it. Hopefully it’s convenient for family and friends coming to visit. If you’re a registered user (register using the link below the Log In link on the right-hand side), know me personally, and want some gallery space, then just email me (matthew at barnson dot org) and I will happily create an area for you to post your photos or drawings, too!

Peace out! Looks like my Red Hat 8 machine is almost finished with an apt-get dist-upgrade to Red Hat 9 here at work, so I can go home! Woo! It was a slight pain; I’ll talk about it more in my next weblog, I think. However, apt-get has made it possible to easily upgrade my computer from one version of an operating system to the next without losing all my settings and files; this is a HUGE win in my book.

Shoe-tying and other old articles…

From my referrer logs, apparently my old articles on shoe-tying and other inanities were actually a bit of a hit on Google. Anyway, I was able to recover the archives from the crashed hard drive on the old site, and will be posting what I can. I only had a dozen articles or so, but I have had a LOT of people hitting the shoe-tying page! Go figure.

Postfix, Sender Address Verification, and Spammers

EDIT: This blog entry is quite old, in the world of Postfix. It’s easy to find RPMs for 2.0 experimental releases now, and the word “experimental” (as of August 2003) is starting to sound a little funny for what is becoming the mainstream release people are using day-to-day. If something doesn’t work for you, I suggest you hit the Postfix home page and see if you can figure out what’s up from the mailing list archives.


Yesterday evening (Saturday night. There are a whole lot more fun things to do on a Saturday night than this!) I upgraded the Postfix install on my firewall at work. The primary purpose of my upgrade was to enable “Sender Address Verification”. This stuff is pretty cool. Although the sample-smtpd.cf has wonderful examples of how to use this, I found other information on the ‘net totally lacking.

Now, some important information before you get excited about Sender Address Verification:

My environment

  1. postfix-2.0.9-20030424. This is a "pre-release" or "experimental" version of Postfix. That means you’re not currently going to get this functionality from some pre-packaged binary RPM.
  2. db2/3/4-devel. This build of Postfix requires db2, db3, or db4. If you don’t have one of them installed, you won’t be able to build and install Postfix.
  3. pcre. This is the "Perl Compatible Regular Expressions" library. The Postfix download page tells you the minimum pcre version needed to build Postfix. I would have a lot of trouble handling all the strange addresses I use without the pcre package; pcre is quickly becoming a requirement for using Postfix. Postfix has some standard regular-expression functionality by itself, and I think you can build it without pcre, but I didn’t try.
  4. gcc and other building tools. If you’ve never compiled anything before, you’re probably out of your depth at this point.
  5. A TEST SYSTEM. Yes, please, try this out on a test system before you get down and dirty with your production firewall. I did the install on two test systems two days before rolling it out on a production server, to make sure it works.

The basic process itself is pretty darn easy. After downloading postfix-2.0.9-20030424 (or whatever the current experimental release is), rather than the usual "./configure; make; make install" three-step you’ll find on most packages, Postfix follows its own standard.

The Upgrading Postfix Four-step

  1. tar xzvf postfix-2.0.9-20030424.tar.gz
  2. cd postfix-2.0.9-20030424
  3. make
  4. sudo make upgrade

Note that I use "sudo" for all my root-privilege work, and heartily recommend that everybody else do the same. If you’re doing all the above as root, leave the word "sudo" off the last step.

It’s unfortunate there’s no RPM for the experimental releases, but Postfix is pretty good about figuring out how your previous version is installed (by looking at your old .cf files) and putting its stuff there. One thing the "make upgrade" doesn’t do is copy over all the sample-foo.cf files. I heartily recommend figuring out where those sample files live on your system. On my FreeBSD test system, they live in /usr/local/etc/postfix/. On the RedHat 8 firewall, they live in /usr/share/doc/postfix-1.1.11/samples/.

On the FreeBSD test system, the install was flawless. Postfix worked exactly the same after "make upgrade" as it did before. Of course, that system runs barnson.org and is a slightly less complex install than the Red Hat 8 firewall at work. On my OpenBSD firewall at home, the results were similarly flawless.

However, when I reached my RedHat 8 install on the firewall, I immediately noticed some errors.

Problems with Postfix’s "make upgrade" on a complex Red Hat 8 firewall

  • Postfix has no idea whether you are running one instance of Postfix or more than one. We have two copies running at all times for historical reasons: one whose configuration files live in /etc/postfix, and the other in /etc/postfix-out. The postfix-out main.cf and master.cf were not updated with the change. I heartily recommend redirecting stdout from make upgrade to a file (put ">somefile.txt" at the end of the make upgrade line) so that you can see what changes it has made (it’s quite verbose) and make the same changes to your secondary install.
  • Postfix has changed the way it handles real-time blackhole lists. Instead of defining a "maps_rbl_domains" parameter and then calling "reject_maps_rbl" from within smtpd_recipient_restrictions, you define the rbls within smtpd_recipient_restrictions (or wherever you use them) itself. This, I think, can provide a great deal of flexibility with which real-time blackhole lists you use at different parts of your config file.
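
To illustrate the new style, here’s a sketch of how the inline form might look in main.cf. The blacklist name is a placeholder and the surrounding restrictions are illustrative, not my exact config:

```
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_rbl_client relays.example.net,
    reject_unauth_destination,
    permit
```

The nice part is that you can repeat reject_rbl_client with different lists, at different points in the restriction order, instead of maintaining one global maps_rbl_domains list.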

The reason we were running two Postfix instances on one box was that we were running SpamAssassin on mail and didn’t want mail leaving our network with SpamAssassin headers attached. We’ve since reduced that to just Anomy, and rely on Bayesian filters for those who want them within the firewall. It works better, and the firewall isn’t pegged at 100% CPU usage 70% of the time anymore. So I turned off the secondary instance in my /etc/init.d/postfix script, and also added the localhost and internal interfaces to Postfix’s main.cf. I’ll still have to see how that works.

OK, so you’ve read all this way, and you’re probably wondering about all this Sender Address Verification stuff. It’s really quite simple. In your "smtpd_recipient_restrictions" section, just add this one line:

reject_unverified_sender,

somewhere before your "permit" at the end of the section. Reload postfix, and try this sample mail session. Any line that begins with a number is a response from the remote server. Any line that begins with an all-caps command is what you type. Start by telnetting to port 25 on the remote mail server.

[matthew@localhost matthew]$ telnet some.internet.host 25
Trying 123.45.67.89...
Connected to some.internet.host.
Escape character is '^]'.
220 some.internet.host ESMTP Postfix
HELO barnson.org
250 bubba.aib.com
MAIL FROM:
250 Ok
RCPT TO:
550 : Sender address rejected: undeliverable address: host mail.barnson.org[209.237.255.54] said: 550 : User unknown in local recipient table (in reply to RCPT TO command)
RSET
250 Ok
QUIT
221 Bye
Connection closed by foreign host.
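
For reference, here’s a sketch of how the smtpd_recipient_restrictions stanza might look with that line in place; the surrounding restrictions are illustrative, not my exact config:

```
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    reject_unverified_sender,
    permit
```

The important part is simply that reject_unverified_sender comes before the final permit, so the verification check runs before mail is accepted.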

In the following list, I use the terms “client”, “server”, and “verification host”. The client is the machine attempting to send mail to your Postfix server. The server is your Postfix server. The verification host is the first available MX (mail exchanger) host for the domain in the MAIL FROM: address sent by the client. The basic process that just happened was this:

How Sender Address Validation works:

  1. Client sends HELO, MAIL FROM, and RCPT TO commands.
  2. On RCPT TO, Postfix server makes a connection to the MX host(s) for the domain listed in MAIL FROM by the client.
  3. If Postfix is able to connect to that MX (if you’re talking about spammers, that’s always a dubious possibility at best), it then sends its own MAIL FROM: declaring that it is postmaster@some.internet.host, and puts a RCPT TO: of the MAIL FROM: address of the client attempting to mail to itself.
  4. If the verification host responds with anything but an OK, Postfix sends it a RSET and QUIT, and then sends a rejection to the client saying "undeliverable address" along with the response from the verification host.
  5. If the verification host replies with OK, Postfix sends a RSET and QUIT and then sends an OK to the client. Alternatively, if you have additional rules after reject_unverified_sender, you can configure Postfix to default to DUNNO (meaning "passed this rule, but go on to the next") rather than sending an OK or a REJECT.
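
The decision logic in those five steps can be sketched in Python. This is purely illustrative (Postfix is written in C, and the real implementation adds caching, configurable reply codes, and more), but it captures the accept/reject choice:

```python
def verify_sender(probe_mx, unverified_action="reject"):
    """Decide what to tell the client. probe_mx is a callable standing in
    for the probe conversation: it performs the MAIL FROM/RCPT TO probe
    against the sender's MX and returns the SMTP reply code it received
    for RCPT TO, or None if no MX could be reached."""
    code = probe_mx()
    if code is None:
        # No reachable MX: with spammers, always a dubious possibility
        return unverified_action
    if 200 <= code < 300:
        # Verification host accepted the probe; pass the message along
        # (or DUNNO, if further restrictions should still be consulted)
        return "ok"
    # Anything else: relay the verification host's rejection
    return "reject"

# A couple of canned "probes" standing in for real MX conversations:
assert verify_sender(lambda: 250) == "ok"      # sender address deliverable
assert verify_sender(lambda: 550) == "reject"  # user unknown at sender's MX
```

The unverified_action parameter mirrors the choice you face in real configs: whether an unreachable MX should mean a hard reject or a softer deferral.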

So far, reject_unverified_sender is really mostly catching the "corner cases" for me, rather than the majority of spam. However, it has almost eliminated a class of messages that occasionally kill our mail server: dictionary attacks. Often a spammer will send thousands of messages to random addresses in our domain in hopes that someone will read their pitch. Well, our Postfix gateway is configured to blindly relay any address in our domain on back to our Groupwise server behind the firewall. Yeah, I know, it’s not the best thing in the world, but at the moment it’s easier to do that than to enter several thousand email addresses by hand into the "virtual" table for Postfix. Nasty getting those out of Groupwise. Anyway, if a spammer dictionary-attacks us using an invalid return address, when Groupwise bounces the message, those bounces bounce because the return address won’t work. Basically, we’re just checking to see that there’s a real, live email address listed in the return. This prevents those thousands of “double bounces” from ever getting onto our network in the first place, and ending up in my inbox as postmaster of the domain. And that’s really what this is all about: lowering my frustration level with double-bounces.

We’re in the process of moving our organization from an ancient install of Groupwise 5.0 to Cyrus Mail. Yeah, we’re going to lose a little functionality here and there, but my hope is that it’s totally work-around-able for our users. Once that migration is complete, we can actually easily streamline this process by using reject_unverified_recipient, which makes a connection to the ultimate destination SMTP server, validates that a mail address exists and is deliverable, and does pretty much the same thing otherwise. This would still provide us with protection for our internal mail server by using an inbound mail gateway, yet dramatically reduce our double-bounces. Alternatively, we can just maintain a list of users in two places (one on the firewall), but I’m leery of storing any non-essential user names on a firewall, you know?

Eventually, I think the spammers will get wise to this protection and begin always using valid email addresses as the return addresses of their promotions. Unfortunately, I think it’s most likely that they will use known-good mailing addresses as the forged source of their unsolicited commercial email. However, this will catch those who are slow to catch on, and at least force fraud to be exposed more quickly. If I were your average mail administrator, I’d certainly want to keep an eye on RCPT TO: requests to my mail server, and if I saw too many to a certain individual, temporarily disable their mail account until the attack was over.

OK, now the last thing. I’d planned on getting this implemented by the time I wrote this article, but I just haven’t had the time yet. The final phase in my plan for anti-spam is to create a temporary rejection table called “five-fifties”. This Postfix table would store the IP addresses of clients who do not honor 55x rejection error codes and make multiple mail delivery attempts after receiving a 550/554/whatever. I realize that it’s not really going to be much of a help defending against spam (since I’m already rejecting them with a 55x anyway), but it would be a big help in tracking connection attempts from abusive IP addresses and setting up another daemon to automatically add repeat offenders to an iptables blacklist. Looks like you’ll have to wait for another article to read more about that though, after I have implemented it!
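
For what it’s worth, the bookkeeping such a table implies might look something like this rough Python sketch. The names and threshold are hypothetical; nothing like this ships with Postfix:

```python
from collections import defaultdict

# Hypothetical: how many post-55x retries before we consider blacklisting
RETRY_THRESHOLD = 3

# The "five-fifties" table: client IP -> delivery retries seen after a 55x
five_fifties = defaultdict(int)

def note_retry_after_55x(client_ip):
    """Record that client_ip attempted delivery again after receiving a
    55x rejection; return True once it has retried enough times to be
    handed off to an iptables-blacklisting daemon."""
    five_fifties[client_ip] += 1
    return five_fifties[client_ip] >= RETRY_THRESHOLD

assert not note_retry_after_55x("192.0.2.7")  # first offense, just logged
assert not note_retry_after_55x("192.0.2.7")
assert note_retry_after_55x("192.0.2.7")      # repeat offender
```

In a real deployment the counts would come from parsing the mail log and would need to expire over time, so an attack ending doesn’t leave an IP blacklisted forever.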

Hope this helps!

***NOTE: Please be aware that I’m still getting used to my chosen blogging software, Drupal, at this point. So there may be strange typographical issues here and there. Please let me know via comment if you see something that needs fixing.***

Trying drupal CVS

I’m going to be checking out Drupal CVS over the next few days. I’m really anti-excited about the whole “being online to blog” thing. Yeah, I know, I can vi a file or something and paste it, but I’d really prefer to be able to use mozblog or something so that I don’t have to be online, and that I can really tweak the heck out of links in my post before putting it up there. Wish me luck.

Indiana Jones

So much for my plans of hacking; I’ve spent all day so far working on my in-laws’ computer. Not only did they have the usual Gator, Comet Cursor, and other spyware/adware programs installed, they were also infected by three different viruses. I F-PROT’d their computer, but I still suspect there’s something amiss there. Eh, well, I didn’t bring any of my software to be able to reinstall.

This seems to be a growing trend; I recently used the Windows XP laptop of a computer-literate (heck, awesome programmer!) friend of mine. I was appalled to find multiple “toolbar” programs installed (spyware deals that change your IE toolbars to gather marketing data), Gator (ugh, I hate that program; the programs that use it should be given the death penalty), and other assorted annoyances. And he thought it was a good, productive PC! This alarming trend toward laziness in personal PC administration appalls me as a sysadmin, yet it seems to be the norm. Far more the norm, in fact, than systems that have good pop-up blocking in place, a decent firewall, virus protection, and no spyware/adware/malware.

Well, I did my small part. I installed Proxomitron, the F-Prot eval version (with a suggestion to buy the full version), and cleaned up a bunch of nastiness on their PC, including manually uninstalling the uninstallable Gator program. They come up with so many aliases for that little thing! And then they stick it in your startup folder, registry run keys, and (I’ve heard) win.ini, though I didn’t find it there. Here’s hoping their little Win ME install holds together until they can buy a new PC. I keep trying to convince them to go with GNU/Linux, but I haven’t verified that Personal Ancestral File runs under Wine yet, and there don’t seem to be any free software competitors to PAF3.

This means, though, that I’ve gotten in no Docbook hacking. Dangit.

Next weblog, I hope to try more “linky” posting…

Heading to Idaho, musings on priorities

So I’m heading to Idaho to hang out with my in-laws in a few hours. Now I have a power inverter and can hack while Christy drives, though 🙂
(Note: Dangit, just noticed time on this system is an hour ahead of mine, in Central time. Gotta fix that, my weblogs are showing up an hour off!)

So I’m heading to Idaho to hang out with my in-laws in a few hours. The usual weekend routine is that we pull in some time around 11 PM (it’s 5 hours from home to there), unpack, sit around and talk until 2 AM, and then go to sleep. Well, usually, it’s my wife Christy who stays up and talks till 2 AM; I’m usually sacked out in a sleeping bag upstairs in the over-garage family room within 17 minutes of walking in.

Anyway, that may change a little bit this time. Christy’s planning on driving there, and I plan on hacking Drupal and my Docbook stuff on the trip up if we can. Before we left on our Spring Break vacation last week, we purchased a 75-watt power inverter for our car lighter so that we can use the laptop as long as we like. It was a life-saver on the trip up to Klamath Falls, Oregon, which is a 13-hour drive from our home in Tooele, Utah.

If I can manage to make dialup work while I’m up there (Sisna is a nationwide provider, thank goodness!), I’ll update the site with code and Guide-Goodness. If not, well, I’ll be back Sunday anyway. In case any of you ever need to access a local SISNA phone number while you’re near Idaho Falls or Rexburg, Idaho, here’s the info:

1-208-552-1843

Points to ponder while I’m gone:

  • Do I really need a new computer for home-studio recording? My 933 MHz machine does OK, but I can’t enable as many real-time effects as I’d like, and playback skips once I’m running about 12-13 stereo 16-bit, 44.1 kHz tracks. A dual-processor rig with a ton of RAM would be nice, but pricey. And am I going to spend time recording or playing games? Or even hacking PHP, which I can do on my ancient Vaio 300 MHz laptop?
  • I’m spread a bit thin with family duties, job (which it’s tough to get motivated for, knowing they’re just planning on closing the place down anyway), and my hobbies (mostly hacking together web stuff and recording music). If I spend my evenings recording music, then dishes & laundry don’t get done and I go around smelling funny with a sink full of dishes and a not-entirely-happy spouse. If I don’t record or hack, I feel as if I’ve lost a hand and am not using my natural talents to their utmost ability. Maybe I can cut recording down to twice a week, or otherwise just do two or three chores when I get home, stay up for only an hour to record or hack, then exercise and go to bed.
  • Though there’s not enough time in the day, I really need to allocate one hour for exercise every day. I’d planned on finding that time first thing in the morning, but recording, programming, or hacking keeps me up until late in the night. And seeing that it takes me 45 minutes or so to get into the “groove” with a project, only spending an hour a night seems a waste of time as well.
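Back-of-the-envelope math suggests the playback skipping above probably isn’t a disk-bandwidth problem. Here’s a rough sanity check (a sketch, not measurements from my actual rig) of the sustained data rate for uncompressed PCM playback at the track counts mentioned:

```python
# Rough sanity check: sustained data rate for uncompressed PCM playback.
# Assumes 16-bit (2-byte) samples, stereo (2 channels), 44.1 kHz sample rate.
BYTES_PER_SAMPLE = 2
CHANNELS = 2
SAMPLE_RATE = 44_100  # Hz

def playback_rate_mb_s(tracks: int) -> float:
    """Sustained data rate in MB/s for `tracks` simultaneous stereo tracks."""
    bytes_per_sec = tracks * CHANNELS * BYTES_PER_SAMPLE * SAMPLE_RATE
    return bytes_per_sec / 1_000_000

for n in (12, 13):
    print(f"{n} tracks: {playback_rate_mb_s(n):.2f} MB/s")
```

That works out to just over 2 MB/s, which even 2003-era IDE disks handle easily, so the skipping is more likely the 933 MHz CPU choking on real-time effects than the disk falling behind.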

Maybe I’ll just go back to the way I did it when I was growing up: head out to the studio in some out-of-the-way place three times a year for a 3-day weekend of 24-hour recording. You can produce an album in a year that way. Admittedly, the quality will suck because you can’t spend enough time on each track, but at least you’ll get it done.

Here’s hoping.

1 AM, Bugzilla Docs Progress, time to sleep

Progress and problems creating Bugzilla’s annotated documentation system. Not much progress, mostly butting my head against the wall. Read more for details. I’m going to bed.

I realize nobody’s reading this again yet, except for a few lonely search engines casting about for interesting stuff late at night. I’ve spent the last several hours working on my Bugzilla documentation conversion. It’s not as easy as it looks! I’m trying to model it after the way php.net does their annotated docs, but Drupal uses a totally different type of organization than I’m used to dealing with. Drupal assigns each node an “id”, which is really just a number. That means a particular node never changes its reference (which is really convenient for links sticking around forever-ish), but Bugzilla and other documentation is organized around a hierarchy of pages.

It’s a real bear to figure out, I assure you. It’s looking more and more like I should go ahead and do the reorganization work on the docs first and then put them up as editable nodes. Or else, actually put up the docs in their original, Docbook XML format, and run them through a custom converter which I’d end up writing so that they would format correctly in the Bugzilla tarball as well as on this site.

What fun! I’ll be chewing on this problem much of tomorrow while I drive up to Idaho with my family. I’ll try to enter a blog or two while I’m there.

Annotations

I’ve just added the Annotations, Title, Trackback, and htmltidy modules. Here’s what they do and why I’m using them…


  • Annotations: This module lets users add pop-up descriptions to text items. This is compelling for my vision of the Annotated Bugzilla Guide. In-place annotations, instead of after-the-article comments, can be a very powerful tool. Comments by themselves are great, too!
  • Title: This module allows one to link to other nodes by the title of the node, rather than node number. This can be very useful in migrating the Guide over, because as we go through subsequent revisions, node numbers are guaranteed to change, and it’s proving very painful to convert all the links within the document to this alternative presentation method.
  • Trackback: OK, this is purely for the geek factor. It allows me to send Trackback notifications to other blogs, and receive them as well. I just dig the functionality, and initially fell in love with it using Movable Type a few months ago. Now that the HD that held my old Movable Type install has crashed and I’m on Drupal, I’ve missed it.
  • htmltidy: Another "geek factor" toy. This one cleans up the HTML and warns the reader if the HTML is broken. I just like keeping things W3C-compliant wherever possible to keep the display clean. If you’re using IE, you’ve probably already noticed that the BARNSON.org image in the title bar has a black background because it’s a transparent PNG; I’d like to avoid as many cross-browser issues as possible.

Dang, I write a lot of nothing.