2015 Mock Sprint Tri Results

I had some issues with my Garmin 910xt, but eventually I fixed the mock tri file. Woot! Next time, I’ll disable all auto lap functionality before starting the tri, because apparently that’s what interferes with the run data & corrupts the file.

Total moving time (not stopped @ stoplights): 112 minutes (1 hr, 52 minutes). Or more or less totally in line with most average beginner times, with a slightly better bike and a considerably worse run. Not at all unexpected.

  • Mock Swim: 7:29. https://connect.garmin.com/modern/activity/715055281
  • T1: 7:26. https://connect.garmin.com/modern/activity/715055283 . I will do way better than this if I’m not DRIVING from the pool to my house for T1.
  • Mock Bike: 47:49. https://connect.garmin.com/modern/activity/715055284
  • T2: 2:05. https://connect.garmin.com/modern/activity/715055285
  • Mock Run: 47:07. https://connect.garmin.com/modern/activity/715055286 (This is the totally broken part.)

Glad to have the data & compare it to my first super-sprint from last year:

  • RCStake Swim leg: I’m twice as fast (it was 300m 6x50m, not 700m): https://connect.garmin.com/modern/activity/560790985
  • RCStake Bike leg: 2MPH faster: https://connect.garmin.com/modern/activity/560790991
  • RCStake Run leg: OK, I was a little slower today than on the run leg last year. But the mock tri is nearly twice the length. https://connect.garmin.com/modern/activity/560790995

Observations:

  • My 910xt is finally recognizing my swim strokes as freestyle instead of backstroke! This means my form work is starting to pay off. And the laps I did do backstroke are almost twice as slow as freestyle, which clearly tells me I need to avoid backstroking if at all possible; a slow freestyle is faster than my fastest backstroke!
  • I blew up my legs on the uphill bike leg and didn’t work nearly hard enough on the back half of the ride while mostly cruising downhill. My calves cramped up on the first part of the run, probably from under-use on the second half of the bike ride.
  • I need to learn to aero, or spend more time in the drops. I spent maybe 25% of my time (or less) in aero on my road bike. Sure, they are just little shorty aero bars, but nonetheless it was windy and I think it would have helped.
  • Hydration & electrolytes were OK, but I think I’d do better with some timed nutrition: a little EFS electrolyte drink before the swim, a little on the bike, and my energy levels should stay a little more consistent on the run. More mental than physical, I think.
  • Transitions were rough. Going to optimize them a bit for my first sprint in two weeks.
  • Too much hotfoot & walking on the run. I should use my metatarsal pads on the bike ride and probably Vibrams instead of my clunky running shoes on the run. My turnover will be quicker, and for such a short duration on the run it should help avoid the hotfoot I often get on longer runs well over an hour.

Excited. Clearly I *can* finish the sprint tri in a reasonable amount of time, and I’m pretty certain there will be at least a few non-DNF people behind me at the end. Which is really all I can ask 🙂 — Matthew P. Barnson http://barnson.org/

ZFS Tricks: Scheduling Scrubs

Content mirrored at https://blogs.oracle.com/storageops/entry/zfs_trick_scheduled_scrubs

A frequently asked question on ZFS Appliance-related mailing lists is "How often should I scrub my disk pools?" The answer is often quite challenging, because it really depends on you and your data.

When someone asks me a question, I like to first answer the questions they should have asked, so that I’m certain our shared conversational contexts match up. So here are some background questions we should answer before tackling the "How often" question.

What is a scrub?

To "scrub" a disk means to read data from all disks in all vdevs in a pool. This process compares blocks of data against their checksums; if any of the blocks don’t match the related checksum, ZFS assumes that data has been corrupted (bit rot happens to every form of storage!) and will look for valid copies of the data. If found, it’ll write a good copy of the data to that storage, marking the old copy as "bad".
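
Conceptually, the scrub-and-heal loop looks like this short Python sketch (a toy model of my own, not appliance code; real ZFS checksums blocks in a Merkle tree inside the kernel, using fletcher or SHA-256):

```python
import hashlib

def checksum(block: bytes) -> str:
    # SHA-256 stands in for ZFS's per-block checksum here.
    return hashlib.sha256(block).hexdigest()

def scrub(copies: list[bytes], expected: str) -> list[bytes]:
    """Read every copy of a block; rewrite any copy whose checksum mismatches."""
    good = next((c for c in copies if checksum(c) == expected), None)
    if good is None:
        raise IOError("unrecoverable: no valid copy of this block")
    # Overwrite bad copies with the known-good data (self-healing).
    return [c if checksum(c) == expected else good for c in copies]

data = b"important data"
mirror = [data, b"bit-rotted!!"]    # one mirror copy silently corrupted
healed = scrub(mirror, checksum(data))
assert healed == [data, data]       # the corrupted copy was repaired
```

If no valid copy exists anywhere in the pool, the sketch (like ZFS) can only report the block as unrecoverable.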

What is the benefit of a disk scrub?

Most people have a lot more "stale" data than they think they do: stuff that was written once, and never read from again. If data isn’t read, there’s no way to tell if it’s gone bad due to bit rot or not. ZFS will self-heal data if bad data is found, so a scrub forces a read of all data in the pool to verify that it isn’t currently bit-rotted, and heal the data if it is.

What performance impact is there to a scrub?

The ZFS appliance runs disk scrubs at a very low priority as a nearly-invisible background process. While there is a performance impact to scrubbing disk pools, this very low-priority background process should not have much if any impact to your environment. But the busier your appliance is with other things, and the more data is on-disk, the longer the scrub takes.

How long do scrubs run?

On a fresh system with little data and low utilization, scrubs complete very quickly.  For instance, on a brand-new, quiescent pool with 192 4TB disks, scrubs typically complete in just moments. There is no data to read, therefore the scrubs return almost as soon as we start them.

On very busy systems with very large pools and lots of I/O, it’s possible for scrubs to run for months before completion. For example, a 192-disk, full-rack 7410 with 2TB drives in the Oracle Cloud recently required eight months to complete a pool scrub. The system was used around-the-clock with extreme write loads; the low quantity of RAM (256GB/head), compression (LZJB better than 2:1), and nearly-full pool (80%+) conspired to force the scrub to run extremely slowly.

If the slow-running, low-impact scrub needs to complete in a shorter time than that, contact Support and ask for a workflow to prioritize your scrubs to run a little faster. Realize, of course, that if you do so, the performance impact goes up as scrubs run at higher priority!

Should I scrub my pools?

  1. Is the pool formatted with either RAIDZ or Mirror2 configuration? Although these two options offer higher performance than RAIDZ2 or Mirror3, redundancy is lower. (No, I’m not going to talk about Stripe. That should only ever be used on a simulator; I don’t even know why it exists on a ZFS appliance.)
  2. Are you unable to absolutely, 100%, guarantee that every byte of data in the pool is read frequently?  Note that even databases that the DBAs think of as "very busy" often have blocks of data that go un-read for years and are at risk of bit rot. Ask me how I know…
  3. Do you run restore tests of your data less frequently than once per year?
  4. Do you back up every byte of data in your pool less frequently than once per quarter?

If you answer "Yes" to any of the above questions, then you probably want to scrub your pools from time to time to guarantee data consistency.

How often should I scrub my pools?

This question is challenging for Support to answer, because as always the true answer is "It Depends".  So before I offer a general guideline, here are a few tips to help you create an answer more tailored to your use pattern.

  1. What is the expiration of your oldest backup? You should probably scrub your data at least as often as your oldest tapes expire so that you have a known-good restore point.
  2. How often are you experiencing disk failures? While the recruitment of a hot-spare disk invokes a "resilver" — a targeted scrub of just the VDEV which lost a disk — you should probably scrub at least as often as you experience disk failures on average in your specific environment.
  3. How often is the oldest piece of data on your disk read? You should scrub occasionally to prevent very old, very stale data from experiencing bit-rot and dying without you knowing it.

If any of your answers to the above are "I don’t know", I’ll provide a general guideline: you should probably be scrubbing your zpool at least once per quarter. That schedule works well for most use cases, provides enough time for scrubs to complete before starting again on all but the busiest and most heavily-loaded systems, and even on very large zpools (192+ disks) should typically complete between one disk failure and the next.

How do I schedule a pool scrub automatically?

There exists no easy mechanism to schedule pool scrubs from the BUI or CLI as of February 2015. I opened an RFE a few months back for one to be provided, but I’m not certain how far down the development pipeline such a feature is, if it will exist at all. So in Oracle IT, we just rolled our own.

The below code is an example of how this can be accomplished. It is provided as-is, with no warranty expressed or implied. Use it at your own risk.

It’s been working well for many months for us. Simply copy/paste the below code to some convenient filename, such as "safe_scrub.akwf".  Then upload the workflow to your appliance using the "maintenance workflows" BUI screen.  The default schedule runs once every 12 weeks on a Sunday. You can tweak the schedule to match your needs either by editing the source code before uploading, or by visiting the "maintenance workflows" command-line interface and adjusting the schedule manually after you upload it.

/*globals run, continue, list, printf, print, get, set, choices, akshDump, nas, audit, shell, appliance*/
/*jslint maxerr: 50, indent: 4, plusplus: true, forin: true */

/* safe_scrub.akwf
 * A workflow to initiate a scrub on a schedule.
 * Author: Matthew P. Barnson
 * Update history:
 * 2014-10-09 Initial concept
 * 2014-11-20 EIS deployment
 * 2015-02-19 Sanitized for more widespread use
 * 2015-02-19 Multiple pool functionality added by: Adam Rappner
 */

/* This program is provided 'as is' without warranty of any kind, expressed or
 * implied, including, but not limited to, the implied warranties of
 * merchantability and fitness for a particular purpose. */

var MySchedules = [
    // Offset 3 days (Sunday), 9 hours, 00 minutes, week interval.
    // The UNIX Epoch -- January 1, 1970 -- occurred on a Thursday.
    // Therefore the ZFS appliance's week in a schedule starts on Thursday.
    // Sample offset: Every week
    //{offset: (3 * 24 * 60 * 60) + (9 * 60 * 60), period: 604800, units: "seconds"}
    // Sample offset: Every 4 weeks
    //{offset: (3 * 24 * 60 * 60) + (9 * 60 * 60), period: 2419200, units: "seconds"}
    // Sample offset: Once every 12 weeks on a Sunday
    {offset: (3 * 24 * 60 * 60) + (9 * 60 * 60), period: 7257600, units: "seconds"}
];

var workflow = {
    name: 'Scheduled Scrub',
    origin: 'Oracle PDIT mbarnson',
    description: 'Scrub on a schedule',
    version: '1.2',
    hidden: false,
    alert: false,
    setid: true,
    scheduled: true,
    schedules: MySchedules,
    execute: function (params) {
        "use strict";
        var myDate = run('date'),
            myReturn = "",
            pools = nas.listPoolNames(),
            p = 0;
        // Iterate over pools & start scrubs
        for (p = 0; p < pools.length; p = p + 1) {
            myDate = run('date');
            try {
                run('cd /');
                run('configuration storage set pool=' + pools[p]);
                run('configuration storage scrub start');
                myReturn += "New scrub started on pool: " + pools[p] + " ";
                audit('Scrub started on pool: ' + pools[p] + ' at ' + myDate);
            } catch (err) {
                myReturn += "Scrub already running on pool: " + pools[p] + " ";
                audit('Scrub already running on pool: ' + pools[p] + ' at ' + myDate);
            }
        }
        return ('Scrub in progress. ' + myReturn + '\n');
    }
};
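
If you’re curious how the offset arithmetic works out, here’s a quick Python sanity check (my own verification aid, not part of the workflow): the epoch began on a Thursday, so an offset of 3 days plus 9 hours lands the schedule on Sunday at 09:00, and 7257600 seconds is exactly 12 weeks.

```python
from datetime import datetime, timezone

# The same offset expression used in MySchedules above:
offset = (3 * 24 * 60 * 60) + (9 * 60 * 60)   # 3 days + 9 hours past week start
t = datetime.fromtimestamp(offset, tz=timezone.utc)

# The epoch (1970-01-01) was a Thursday, so Thursday + 3 days = Sunday.
assert t.strftime("%A %H:%M") == "Sunday 09:00"

# The 12-week period used in the workflow:
assert 7257600 == 12 * 7 * 24 * 60 * 60
```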

Happy scrubbing!

ZFS: Doing It Right

Imagine you’re a system administrator, and an email arrives from your boss. It goes something like this:

"Hey, bud, we need some new storage for Project Qux.  We heard that this [insert major company here] uses a product called the Oracle Sun ZFS Storage Appliance as the back-end for their [insert really popular app here]. We want to do something like that at similar scale; can you evaluate how well that compares to XYZ storage we already own?"

So you get in touch with your friendly local ZFS sales dudette, who arranges a meeting that includes a Sales Engineer to talk about technical stuff related to your application. The appliance, however, has an absolutely dizzying array of options.  Where do you start?

Without a thorough evaluation of performance characteristics, most people evaluating these appliances end up making one of two kinds of choices:

  1. ZFS choices that will almost certainly fail, and
  2. ZFS choices with a reasonable chance of success despite their lack of knowledge.

To start with, I’ll talk about Scenario 1: setting yourself and your ZFS evaluation up to fail: Doing It Wrong.

How Do People Do It Wrong?

I bumped into several individuals at OpenWorld who had obviously already made choices that guaranteed the ZFS appliance they purchased was not going to work for them.  They just didn’t know it yet. And of course, despite my best intentions to help them cope with the mess they had made, they remained unsatisfied with their purchase.

Both the choices and outcome were eminently predictable, and apparently motivated by several common factors.

Misplaced Cost-Consciousness

From my point of view, if someone isn’t ready to invest six figures in storage, then they aren’t yet ready for the kind of performance and reliability an enterprise-grade NAS like the ZFS appliance can offer.  The hardware they can afford won’t provide an accurate picture of how storage performs at scale.

Any enterprise storage one can buy at a four or five-figure price point is still a toy; a useful one, but still a toy compared with its bigger siblings.

It’ll be nifty and entertaining if the goal is to familiarize oneself with the operating system and interfaces. It will allow users to get a glimpse of the kinds of awesome advantages ZFS offers. It’ll offer a reasonable test platform for bigger & better things later as you explore REST, Analytics, Enterprise Manager, and the Oracle-specific optimizations available to you.  And perhaps it might serve reasonably well as a departmental file server or small-scale storage for a few dozen terabytes of data.  But it won’t offer performance or reliability on the scale serious enterprises deserve.

Misunderstanding Needs

Most customers that invest in dedicated storage for the first time don’t yet understand their data usage patterns. IOPS? A stab in the dark. Throughput? Maybe a few primitive tests from a prototype workstation. Hot data volume? Read response latency requirements? Burst traffic vs. steady-state traffic? Churn rate? Growth over time? Deduplication or cloning strategies? Block sizes? Tree depth? Filesystem entries per directory? Data structure? Best supported protocol? Protocol bandwidth compared to on-disk usage? Compressibility? Encryption requirements? Replication requirements?

I’m not saying one has to have all these answers prior to purchasing storage.  In fact, the point of this series is to encourage you to purchase a good general-purpose hardware platform that is really good at most workloads, and configure it in a way that you’re less likely to shoot yourself in the foot.  But over and over the people with the biggest problems were the ones who didn’t understand their data, yet hoped that purchasing some low-end ZFS storage would somehow magically solve their poorly-understood problems.

Lack Of Backups

Most data worth storing is worth backing up. While I’m a big fan of the Oracle StorageTek SL8500 tape silo, not everybody is ready for a tape backup solution that can span the size of a football field or Quidditch pitch.

Nevertheless, trusting that the inherent reliability and self-healing of a filesystem will see a company through a disaster is not a good idea.  Earthquakes, tornadoes, errant forklift drivers, newbie admins with root access, and overly-enthusiastic Logistics personnel with a box knife and a typo-ridden list of systems to move are all common.  Backups should be considered and implemented long before valuable data is committed to storage.

Solving Yesterday’s Problems

Capacity planning is crucial in the modern enterprise. While I’m certain our sales guys are really happy to sell systems on an urgent basis with little or no discount in response to poor planning on the part of customers, that kind of decision making is often really hard on the capital expense budget.

A big part of successful capacity planning is forecasting future needs. Products like Oracle Enterprise Manager and ZFS Analytics can help. Home-brewed capacity forecasting is viable and common. A system administrator is at her best when she has already anticipated the needs of the business and has a ready solution for the future problems she understands will arrive eventually. With an enterprise NAS, a modest investment in hardware can continue to yield dividends as an admin better understands her data utilization patterns and learns to use the available tools to manage it intelligently.

How To Fail At ZFS And Performance Reviews

Here are the options I would pick if I wanted to set up my ZFS appliance to fail:

  • Go with any non-clustered option; reliability suffers. Failure imminent.
  • Choose the lowest RAM option; space pressure will make my bosses really unhappy with the storage as things slow down. Great way to fail.
  • Buy CPUs at the lowest possible specification; taking advantage of CPU speed for compression would make the storage run better, and using CPU for encryption gives us options for handling sensitive data. Don’t want that if our goal is failure!
  • Pick an absurdly low number of high-capacity, low-IOPS spindles, like maybe twenty to forty 7200RPM drives; I/O pressure will drive me nuts troubleshooting, but heck, it’s job security.
  • Don’t invest in Logzillas (SLOG devices). The resultant write IOPS bottleneck will guarantee everybody hates this storage.
  • If I do invest in Logzillas (SLOG devices), use as few as possible and stripe them instead of mirroring them; that kills two birds with one stone: impaired reliability AND impaired performance!
  • Buy Readzillas (L2ARC), but ignore the total RAM available to the system and go for the big, fat, expensive Readzilla SSDs because I think we’re going to have a "lot of reads" without understanding what Readzillas actually do. This will impair RAM performance further, wasting my money AND squandering performance!

If you do the above, you’ll pretty much guarantee a bad time for yourself with ZFS storage.  Unfortunately, this seems to be the way far too many people try to configure the storage, and they set themselves up for failure right from the start.

So we’ve talked about Doing It Wrong. How do you Do It Right?

Do It Right: Rock ZFS, Rock Your Performance Review

In case you don’t know what I do, I co-manage several hundred storage appliances for a living (soon to be over a thousand, with hundreds of thousands of disks among them. Wow. The sheer scope of working for Oracle continues to amaze me!). Without knowing anything else about the workload except that the customer wants high-performance general-purpose file storage, below is the reference configuration I would pick if I want to maximize the workload’s chances of success.  If I think I need to differ from this reference configuration, it’s important to ask "How does this improve on the reference configuration?"  This reference configuration has proven its merit time and time again under a dizzying array of workloads, and I’d only depart from it under very compelling arguments to do so.

Such arguments exist, but if they are motivated by price, I am always trading away performance for a lower price!

Understanding The Basics

Guiding this reference configuration are the following priorities:

  1. Redundancy. If it’s worth doing, it’s worth protecting; the ZFS appliance is reliable because it’s very fault-tolerant and self-healing, not because the commodity Sun hardware it’s built with is inherently more reliable than competing options.
  2. Mirrored Logzillas (SLOG devices). Balance this with RAM and spindles, though, as too much of any of the three and one or more will be underused.  And for a few obscure technical reasons related to reliability, I strongly prefer Mirrored Logzillas over Striped.
  3. RAM. ZFS typically leverages RAM really well. You’ll want to balance this with Logzilla & spindles, of course, using ratios similar to the reference configuration.
  4. Spindle read IOPS. Ideally, I should have some idea of the total expected read IOPS of my application, and configure sufficient spindles to handle the maximum anticipated random read load.  If this kind of data is unavailable, I’ll default to the reference configuration.
  5. Network. 10Gbit Ethernet is cheap enough these days that any reasonable storage should use it. It’s still a really tough pipe to fill for most organizations since it’s so large, but it is possible.
  6. CPU. It’s almost an afterthought, really; even the lowest CPU configuration of a given appliance that is capable of handling 1TB of RAM per head (2TB per cluster) comes with abundant CPU. But if I want to use ZFS Encryption heavily, or use the more CPU-intensive compression algorithms, CPU becomes a pretty legitimate thing to spend some money on.
  7. Readzilla/L2ARC/Read Cache. The ARC — main memory — is really your best, highest-performing cache on a ZFS appliance, but if there are specific reasons for investing heavily in Readzilla (L2ARC) cache, we’ll know a few months after we start using it. Basically, if my ARC hit rate drops down into the 80% range or lower, I want to add a Readzilla or two to the system. The cool thing is, you can add these any time; you don’t have to put this into the capital expense budget up-front, but it’s something you can do responsively if the storage appliance use pattern starts to suggest you ought to.

Your Best Baseline Hardware Configuration

So here’s the hardware configuration we typically use in Oracle IT. It’s not the biggest, it’s certainly not the most expensive, but it has the advantage of simplicity, flexibility, and stellar performance for the vast majority of our use cases, and it all fits neatly into one little standard 48U rack.  I’ll hold off on part numbers, though, as those change over time.

  • ZS4-4 cluster (two heads).
  • 15 core (or more) processor.
  • 1TB or 1.5TB RAM per head (2TB or 3TB total RAM across the cluster).
  • Dual port 10Gbit NIC per head.  We typically buy two of these for a total of four ports for full redundancy.
  • Two SAS cards per head (required).
  • Clustron (pre-installed) to connect your cluster heads together.
  • 8 shelves. If you anticipate fairly low IOPS and mostly capacity-related pressure, I suggest you opt for the DE2-24C configuration (capacity); but if you think IOPS will be pretty heavy, the DE2-24P (performance) is a good alternative, with pretty dramatically reduced capacity.
  • 8x200GB Logzilla SSDs. This is probably overkill, but some few environments can leverage having this much intent log.
  • Fill those shelves with 7200RPM drives as required.  Formatted capacity in TiB as I recommend below will be around 44.5% of raw capacity in TB once spares and the conversion from TB to TiB are taken into account.  Typically in this configuration I’ll have 184 spinning disks, so whatever capacity of disk I buy, I can do the math.  The cool part is that I’ll roughly double this with LZJB compression on typical mixed-use workloads, giving around 67% up to 106% of raw capacity when formatted and used.  Which is, in essence, freakin’ awesome.
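
To make that arithmetic concrete, here’s a small Python sketch of the capacity estimate (the 44.5% formatted ratio and the LZJB multipliers are the rules of thumb from this post, not official sizing figures):

```python
def usable_tib(disk_tb: float, n_disks: int = 184,
               formatted_ratio: float = 0.445) -> float:
    """Rough formatted capacity in TiB from raw capacity in TB.

    The 0.445 ratio folds in spares, redundancy overhead, and the
    TB -> TiB conversion, per the rule of thumb in this post.
    """
    return disk_tb * n_disks * formatted_ratio

raw = 4.0 * 184                 # e.g. 184 x 4TB disks = 736 TB raw
formatted = usable_tib(4.0)     # roughly 327 TiB formatted
# With LZJB landing roughly between 1.5x and 2.4x on mixed workloads,
# effective capacity comes out around 67% up to ~107% of raw:
low, high = formatted * 1.5, formatted * 2.4
assert 0.66 < low / raw < 0.68
assert 1.05 < high / raw < 1.08
```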

Fundamental Tuning & Configuration

Now let’s step into software configuration.  If you’ve configured your system as above, random writes are a breeze. Your appliance will rock the writes. The Achilles’ heel of the ZFS appliance in a typical general-purpose "capacity" configuration as above is random reads. They can be both slow themselves, and they can slow down other I/O. You want to do whatever you can to minimize their impact.

  • I’ll create two pools, splitting the shelves down the middle, and when setting up the cluster assign half of each shelf’s resources to a pool.
  • Those pools will be assigned one per head in the cluster configuration. This really lets us exploit maximum performance as long as we’re not failed over.
  • Use LZJB by default for each project. Numerous technical reasons for this; for now, if you don’t know what they are, take it on faith that LZJB typically provides a ZFS appliance a SERIOUS performance boost, but only if it’s applied before data is written… if applied after, it doesn’t do much.  This speeds up random reads considerably.
  • If using an Oracle database, just use OISP. It makes your life so so much easier from configuration to layout: two shares, and done.  If not using OISP, then pay close attention to the best practices for database layout to avoid shooting oneself in the foot!
  • If using an Oracle database, leverage HCC on every table where it’s practical. HCC-compressing the data — despite the CPU cost on your front-end database CPU initially — usually provides a pretty huge I/O boost to the back-end once again for reads. Worth it.
  • Scrub your pools. In a later blog entry I’ll discuss using a scheduled workflow to invoke a scrub, but for now just use Cron on an admin host, or assign some entry-level dude to mash the "scrub" button once a week for data safety. Around year 3 of use, hard drive failure rates peak, and drives continue failing at a more-or-less predictable rate indefinitely. There are certain extremely rare conditions under which it’s possible to lose data that is written once and very infrequently read in a mirror2 configuration; if you scrub your pools on a regularly-scheduled basis (at the default priority, this means more or less continuously), your exposure drops to the point of negligible risk.
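
The decision logic behind that cron job is tiny. Here is a minimal Python sketch of it (the 12-week interval matches the quarterly guideline; on a general-purpose ZFS host the surrounding script would parse `zpool status` for the last completed scrub date and run `zpool scrub` when due; those are the standard ZFS CLI commands, not appliance-specific ones):

```python
from datetime import datetime, timedelta

SCRUB_INTERVAL = timedelta(weeks=12)   # quarterly, per the guideline above

def scrub_due(last_scrub: datetime, now: datetime,
              interval: timedelta = SCRUB_INTERVAL) -> bool:
    """True when a pool's last completed scrub is older than the interval."""
    return (now - last_scrub) >= interval

# Example: checking two pools on 2015-02-19.
now = datetime(2015, 2, 19)
assert scrub_due(datetime(2014, 11, 1), now) is True    # > 12 weeks ago
assert scrub_due(datetime(2015, 1, 15), now) is False   # scrubbed recently
```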

Wrapping It Up

There you have it: an ideal general-purpose file server with good capacity, great performance for average loads, and something that in typical Oracle Database or mixed-use environments will really make you glad you invested in an Oracle Sun ZFS Storage Appliance.

Development at Oracle

I really, really like the place that I work. The commute is amazing. The facility is breathtaking. My co-workers are stunningly enthusiastic, skilled, and intelligent. My pay is… well, good enough to get by 🙂 Recently a thread popped up at Slashdot in which a lot of commenters trashed Oracle, and I felt a need to set some of the record straight.

Disclaimer: The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

TL;DR: I am an Oracle employee. It’s an awesome place to work with above market pay, superb benefits, and a demanding but rewarding engineering culture. Virtualbox is one project in a large and growing virtualization team, creating and improving some truly amazing cutting-edge technologies that make your virtualization life better.

I’m going to share some facts as I see them, and let you draw your own conclusions instead of drawing them for you.

  1. The Oracle VM and Oracle VM Virtualbox teams are one and the same within Oracle. There’s a lot of cross-pollination of ideas and effort, and the virtualization team is frakking huge: HUNDREDS of developers. Not “4”, as some have asserted here!
  2. There’s a ton of stuff happening in virtualization at Oracle: https://blogs.oracle.com/virtualization/
  3. There’s a substantial line-up of products that are demo’d to customers as part of “Virtualbox Appliances”. Virtualbox demos are a key strategy for introducing many of our products to customers. http://www.oracle.com/technetwork/community/developer-vm/index.html . Corollary: I manage a lot of ZFS appliances. I like them; they make my job easier, particularly at the kind of scale at which one begins measuring one’s storage in exabytes. You should download the Virtualbox-based Oracle ZFS Storage Simulator and check it out. Hint: Dig into the REST interfaces and ECMAscript workflows concepts. This kind of thing is Stored Procedures for enterprise-grade storage appliances with absolutely blistering scale, reliability, and performance, and if you don’t yet understand how powerful that idea is, you might be insufficiently experienced in high-end storage and databases.
  4. Wim Coekaerts is a smart, friendly, and communicative dude. He also happens to be SVP over our Linux & Virtualization efforts. If you’re really interested in the details of virtualization development at Oracle, you should check out his blog: https://blogs.oracle.com/wim/

Next, my opinions. No longer facts!

VirtualBox is a mature, stable product that’s doing its job and — as a GPL project — seems to me like more a vehicle for showcasing Oracle technology than a revenue generator in its own right. That doesn’t mean development has ceased! It just means that, in general, Oracle engineering teams are laser-focused on how we can make money so we can stay employed so we can keep creating really unique and useful products for our customers. Responsibilities on teams shift as need demands, and with such an enormous knowledge base in virtualization on our Engineering staff, there’s no question that if a product needs a feature to benefit customers, and a good case can be made that it’ll pay off, it gets the engineering resources it needs to give it a try.

The Sun transition was tough for some employees. In advance of the merger, a lot of old-timers split. A lot of younger engineers went looking for somewhere hipper and younger to work than what would become a Fortune 500 company. Many Sun managers, sensing the change in the wind as Oracle’s intensely results-oriented management team integrated with them, split for positions elsewhere.

I know and work with the survivors of the merger every day. And overwhelmingly, those who’ve integrated into Oracle culture, shown they belong here through their productivity and attitude, and produce results consistently have built success upon success, and are valued and rewarded.

They’re also a bunch of brainiacs who routinely blow my mind with deep insights into operating systems, hardware, and performance optimization.

Those who don’t deal well with rapid change, high expectations, and a dogged focus on constantly improving our products at an increasing pace while doing more with less don’t tend to thrive here.

From my point of view, Oracle’s a great place to work. The focus is always on delivering new benefits — not just features! — for our customers. The pace is hectic, every product we work with internally is “eating our own dogfood” to try to figure out why our customers will love or hate it (third content management system in five years, blech!!!), we typically pay above market rate, and we expect a high degree of professionalism, intelligence, cooperation, and problem-solving ability. I’m a busy dad of four with a triathlon habit, and the work-life balance usually comes out just right.

If you think you have what it takes to keep changing the world for the better as fast as you possibly can, check us out at http://www.oracle.com/careers/ .

A Rant About Fitness Trackers

I’ve recently been chiming in on the Garmin forums about my 910xt watch and Garmin VivoSmart. The frustration of users not knowing how to use their fitness trackers — and blaming the devices for not being the magical weight loss talismans they thought they were! — is palpable.

…I should have saved my $200 and just continued using the free myfitnesspal account and entering my estimated calories and got on with my life. I feel like such a sucker!…

First advice: if you felt as if you got more consistent results and inspiration sans a fitness tracker, return the fitness tracker. No need to have buyer’s remorse.

General fitness tracker rant below 🙂

So many people buy a fitness tracker on “faith” that it will be accurate, without doing sufficient research before the purchase. Even gold-standard metabolic ward studies are typically accurate to no better than +/- 5%, which for a typical 2000-calorie diet can be the difference between gaining or losing about a pound a month, simply due to the margin of error and individual differences between human beings.
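The arithmetic behind that “pound a month” claim is worth making concrete. A rough sketch, assuming the common 3,500-kcal-per-pound rule of thumb (an approximation, not gospel):

```javascript
// Back-of-the-envelope: what a +/- 5% measurement error means over a month.
const dailyCalories = 2000;                    // typical diet
const errorRate = 0.05;                        // +/- 5% margin of error
const dailyError = dailyCalories * errorRate;  // 100 kcal/day unaccounted for
const monthlyError = dailyError * 30;          // 3,000 kcal/month
const kcalPerPound = 3500;                     // rule-of-thumb for a pound of body fat
console.log((monthlyError / kcalPerPound).toFixed(2)); // prints "0.86" -- about a pound
```

In other words, even a measurement everyone agrees is excellent can silently hide nearly a pound of monthly gain or loss.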

Caloric intake on MyFitnessPal is precise, but slightly inaccurate. 100g of pineapple is ostensibly 43 calories. What about the core? The cap? Nearer the skin vs. further? Avocado blended with or without the pit? Which part of that Costco rotisserie chicken are you eating? The nutritional impact of seemingly small differences is enormous! It’s the averages over time for a user’s specific diet that a studious user will care about and use to guide their choices.

Fitness trackers are also fairly precise, but slightly inaccurate. Even the famous “BodyBugg” — widely regarded as one of the most accurate fitness trackers thanks to its indirect calorimetry method based on perspiration and body heat — misses the mark by more than 10% for many users, particularly as their biology changes over time (as they lose or gain weight, the Bugg is often slow to respond to the change and proffers inordinately-high calorie estimates).

One can still combine two precise-but-inaccurate instruments to produce useful results. Trackers are a useful feedback mechanism, and the truth is that many people using Fitbit, Jawbone Up, Nike FuelBand, and other trackers gain weight rather than lose it: http://www.today.com/health/my-fitbit-making-me-fat-users-complain-weight-gain-fitness-1D79911176

I would guess it’s usually because they don’t understand how to use the information a fitness tracker provides, combined with under-estimating their consumption and over-estimating their exertion. If users watch their weight and body composition to gauge the accuracy of the device, they’ll figure out these discrepancies within a month and adjust nutrition to achieve goals even using inaccurate measures like pretty much every fitness tracker currently in existence.
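A sketch of that feedback loop in code. The numbers and the 3,500-kcal-per-pound rule of thumb are illustrative assumptions, not anyone’s real data:

```javascript
// Gauge a tracker's bias from the scale: compare the deficit the tracker
// claims against the deficit your actual weight change implies.
function trackerBiasPerDay(claimedDeficitPerDay, weightChangeLbs, days) {
  // 3,500 kcal ~= 1 lb of body fat (rule-of-thumb assumption).
  const actualDeficitPerDay = (-weightChangeLbs * 3500) / days;
  // Positive result: the tracker flatters you by this many kcal/day.
  return claimedDeficitPerDay - actualDeficitPerDay;
}

// Tracker claims a 500 kcal/day deficit, but the scale only dropped 2 lbs in 30 days:
console.log(Math.round(trackerBiasPerDay(500, -2, 30))); // prints 267
```

Fold that correction back into your daily calorie target and the “inaccurate” tracker becomes a perfectly serviceable instrument.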

For examples of the supportive community surrounding the VivoSmart/VivoFit with MyFitnessPal (mostly VivoFit, because it is both older and vastly out-sells the VivoSmart), look to the MFP boards, rather than these Garmin boards: http://community.myfitnesspal.com/en/group/21248-vivofit

It’s not all bad, but for your personal happiness if this tracker isn’t the right one for you, take it back! Garmin will sell millions more devices to numbers-geeks like me for whom a precise but slightly inaccurate measure is preferable to a straight-up estimate. However, both methods work just fine for weight loss, gain, or body recomposition!

ZFS Tricks: Blinking The Lights

I’ve created a new blog with my work at https://blogs.oracle.com/storageops . The first entry is copied below.

Source:
https://blogs.oracle.com/storageops/entry/zfs_tricks_blinking_the_lights

I’ve had an account on blogs.oracle.com for several years now, but never really used it. I decided it was about time. Without further ado, let’s dive in!

One of the most powerful facilities of the Oracle ZFS Storage Appliance is "workflows". You can find these in the BUI or the CLI under "maintenance":

ZFS Maintenance Workflows BUI image

For most people, Workflows are kind of a mystery. If you’ve been on a call with Support, you’ve probably been asked to use them, but they seemed like this arcane thing that was only for Support.

In reality, Workflows are for YOU! So to get you started playing in the land of workflows, here’s a sample script. What this does is turn off all the hard drive locator lights on your ZFS appliance. This is a useful little utility I wrote; with thousands of ZFS appliances in our domains, and dozens of storage admins working with several on-site remote-hands staff, the blinking locator lights often get left on after repairs are complete. Wouldn’t it be nice to have a way to clear them? Now you have one!

For now, don’t worry about some of the comments. I’ll go into jslint/jshint in a later blog entry. Just save this BlinkenLightsOff.akwf to your computer somewhere handy, then upload it by clicking the plus sign in the "Maintenance Workflows" section of your BUI:

Save the code below as "BlinkenLightsOff.akwf" on your hard drive somewhere memorable.

/*globals run, continue, list, printf, print, get, set, choices, akshDump, nas, audit, shell*/
/*jslint maxerr: 50, indent: 4, plusplus: true, forin: true */

//BlinkenLightsOff.akwf
//A workflow to cause your drive locator lights to turn off.
//Author: Matthew P. Barnson <matthew.spam.barnson@oracle.com>
//Last updated: 2014-10-06

var workflow = {
    name: 'BlinkenLightsOff',
    origin: 'Oracle PDIT mbarnson',
    description: 'Disk Lights Off',
    version: '1.2',
    hidden: false,
    alert: false,
    setid: true,
    execute: function (params) {
        "use strict";
        var myBlinkyBlinkBlink = function (params) {
            run('maintenance hardware');
            var myHardware = list(),
                hardware = 0,
                myComponents = [],
                component = 0,
                myDisks,
                disk = 0,
                myLightStatus = false,
                lightStatus = "Blinkenlights Output:\n";
            for (hardware = 0; hardware < myHardware.length; hardware++) {
                run('select ' + myHardware[hardware]);
                myComponents = list();
                //lightStatus = lightStatus + 'Checking ' + myComponents + '\n';
                for (component = 0; component < myComponents.length; component++) {
                    lightStatus = lightStatus + 'Checking component ' + myComponents[component] + '\n';
                    if (myComponents[component].match(/disk/)) {
                        run('select ' + myComponents[component]);
                        myDisks = list();
                        for (disk = 0; disk < myDisks.length; disk++) {
                            run('select ' + myDisks[disk]);
                            myLightStatus = get('locate');
                            if (!myLightStatus) {
                                lightStatus = lightStatus + myHardware[hardware] + ':' +
                                    myComponents[component] + ':' +
                                    myDisks[disk] + ':' +
                                    " is already dark.\n";
                                run('done');
                            } else {
                                lightStatus = lightStatus + 'Darkening ' +
                                    myHardware[hardware] + ":" +
                                    myComponents[component] + ":" +
                                    myDisks[disk] + "\n";
                                set('locate', false);
                                run('commit');
                                run('done');
                            }
                        }
                        run('done');
                    }
                }
                //printf('%s\n', myComponents);
                run('done');
            }
            run('done');
            audit('Turning off all blinking disk lights.\n');
            return (lightStatus + '\n');
        };
        return (myBlinkyBlinkBlink());
    }
};

You can run this workflow either by clicking on it and executing via the BUI, or executing it from the CLI with this command:

maintenance workflows select name="BlinkenLightsOff" execute

Enjoy! Next week, just in time for the holidays, I’ll post a way to make all your hard drive lights turn on; you can enjoy blinking Yule lights in your data center. Those of you with a modest amount of appliance experience will be able to easily figure it out from the above code; it’s very similar code.
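For the impatient, here’s a minimal sketch of the change. The `run`/`get`/`set` calls are appliance built-ins from the workflow environment above; the tiny stubs at the top exist only so the loop can be demonstrated outside an appliance, and the disk names are made up:

```javascript
// Stand-in state: two fake disks and their locator-light status.
// On a real appliance you'd walk the hardware/component/disk tree exactly
// as in BlinkenLightsOff.akwf; these stubs just mimic run/get/set locally.
var disks = { 'disk-000': false, 'disk-001': true };
var selected = null;
function run(cmd) {
    var m = cmd.match(/^select (.+)$/);
    if (m) { selected = m[1]; }  // 'commit' and 'done' are no-ops in the stub
}
function get(prop) { return disks[selected]; }
function set(prop, value) { disks[selected] = value; }

// The only real change from BlinkenLightsOff: light every dark disk.
Object.keys(disks).forEach(function (disk) {
    run('select ' + disk);
    if (!get('locate')) {
        set('locate', true);   // was set('locate', false) in the original
        run('commit');
    }
    run('done');
});
```

On the appliance itself you’d keep the full hardware walk from the original workflow and flip just that one `set` call.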

Barnson.org Triathlon Team–East Coast Report

I really liked Matt’s play-by-play from his last triathlon, so I thought I’d add my own. I just completed my 2nd triathlon ever, an Olympic (aka International) distance event in Maine.

Garmin Connect logs:
Swim: http://connect.garmin.com/activity/584331856
T1: http://connect.garmin.com/activity/584331858
Bike: http://connect.garmin.com/activity/584331859
T2: http://connect.garmin.com/activity/584331860
Run: http://connect.garmin.com/activity/584331861

4:40: Alarm goes off. Out of bed and dressing.

4:55: Kirsti and 5 kids (ages 12 to 4) in the van. Driving to Maine.

6:55: We arrive and are directed to parking. We end up next to the transition area, which is about the best parking spot one could ask for.

7:10: My 3 oldest kids are at their volunteer assignments. They wanted to do this to kill time, and to get free shirts. I pick up my race pack and head to transition.

7:20: My numbers are now on my body, and I’m setting up in transition. I’m still pretty slow at this. I notice an enormous playground next to the finish chute, so I call Kirsti, who I know hasn’t seen it yet. That’s going to make the day for her.

7:50: I’m set up and have chatted with a few coworkers who are also doing the race. They are all more experienced and faster than I will be. No big deal. I make the decision to only carry one water bottle for the bike, because there’s a water bottle hand-out at mile 12. That’s going to prove a fateful decision.

7:55: Out for a 10-minute easy run.

8:05: Back in transition, I put on my wetsuit to head to the ocean. In Maine, the ocean is “warm” this year–66 degrees–but with a wetsuit and a few strokes behind me it feels great. I always forget to breathe when starting out in cold water, so I want a good warm-up so that I can acclimate.

8:54: Pre-race meeting complete and 2 waves gone, my wave starts. 1.5 kilometers (which, with drift, I will turn into 0.98 miles–not too much extra). The swim goes very well–I’m generally in the middle of the pack, which helps me feel confident that I’m going the right direction. (It’s hard to see the next buoy right after passing the most recent one.) There is a lot of contact during the entire swim, but I manage to avoid the dreaded “kick to the head.” I don’t sweat my speed too much, but just try to maintain form and breathe every 3rd stroke. If I let myself go to every 2nd stroke, I’ll quickly pick up my speed.

T1: Have to run up a steep incline to get to transition. (This is a precursor of things to come….) I change quickly, though I have a hard time getting the wetsuit off my ankles. I need to cut the legs an inch or two shorter.

Bike: I salute the family just after mounting, drop into my aero bars, and start off in a low gear. The course starts off with a nice half-mile climb at a grade around 6%. Ouch. Racing on a bike is great–the adrenaline makes it easy to exert up to my desired output, so I almost feel like I’m holding back as I stay just below my lactate threshold. I eat some Shot Bloks after 25 minutes or so and am barely drinking. At mile 8 I decide to be more deliberate about my drinking–don’t want to wait for thirst. I have to focus on my exertion level rather than keeping up with or passing people–I can tell that if I get competitive in the moment, I will pay later.

Mile 12: I hit the water bottle station at about 20 mph. Ever tried to grab a water bottle at 20 mph? It’s something that needs practice. Which I haven’t got. I leave empty handed, but with a bottle smacked to the ground behind me. I think about going back, but, hey, this is a race! Feeling good.

T2: The bike leg went great–19.1 mph, a new record and better than expected. I just might turn in a total time close to 2:40! I’m pretty thirsty though, and my spare bottle at the transition area is now piping hot. I don’t think about that until it’s spraying into my mouth–I feel like Chevy Chase from the Three Amigos. Spit it out–there’s a drink station right at the start of the run. I have to go to the bathroom–hope that feeling passes…

Run: I give my “fans” high fives as I head out for the final leg. It’s getting hotter and the sun is now out in force. My heart rate is sky high, even though I’m barely moving. Yes, I’ve experienced this in poorly executed training sessions before–I’m dehydrated. Have to run up the same stupid hill right at the beginning–it feels like this course is all uphill. I feel generally okay though–like I can finish a 10k. I try to drink a lot each mile.

Mile 4: Wow, it’s hot. And I feel a little like I can’t quite stay conscious. Maybe I should have turned around back at mile 12 on the bike after all. When I reach some longer hills, I have to run so slowly to keep my heart rate down that I finally just walk for about a 20th of a mile. Some folks have been walking from the beginning, so at least I held on longer than that. In spite of 2 more brief walks, my heart rate never drops below 156, which is pretty high exertion. Still have to go to the bathroom…ouch.

Finish: At least I get to run back down the killer hill before the finish chute. I manage to “kick” the last quarter mile, though I don’t think I was going very fast–at least 1 minute below the pace target I had for the ENTIRE run! (Kirsti said I looked much better than the last triathlon.) The volunteer asks me to stop to take my chip off but I tell her I have to keep walking. I feel like I will collapse if I stop. I make it to the water station and she starts urgently handing me cups and asking if I’m going to need assistance. (I walk past a guy on the ground being helped by EMTs. I could be worse, I guess.) I’m good after a minute or so–don’t think I’ll be collapsing.

A few minutes later I felt sort of normalish, though I probably sat on the grass for more than 75% of the next hour.

Total time 2:53. Swim and bike set me up for 2:43, which would have made my day, but I lost 10 minutes on the run. That’s basically what happened to me at my first tri back in June. Slow learner, I guess, but I feel good that I went so much farther this time.

I’ll be back.

First Triathlon: I am 409

Brief version race report for the Riverton Central Stake Sports triathlon today for those who asked. The race was free, and hosted by the local LDS churches in the area. I was number 409: fourth out of 5 waves, 9th registrant in that category.

Garmin Connect logs:
Swim: http://connect.garmin.com/modern/activity/560790985
Bike: http://connect.garmin.com/modern/activity/560790991
Run: http://connect.garmin.com/modern/activity/560790995

Slept somewhat poorly. I was confident I’d double-checked my bags, and I hit the sack around 10:45PM, so I knew that getting out the door would be pretty straightforward. I still woke up a couple of times in the night worried that I had overslept.

4:45AM: Alarm goes off.

5:00AM: Alarm goes off again. I tumble out of bed, make my typical morning smoothie (digging getting the spinach in there lately), then hit the shower. Because people who don’t shower before swimming in a pool gross me out.

6:00AM: Registration has opened, but I’m just finished loading the car and strapping the bike on the rack, dressed in my tri shorts, a light t-shirt, and some sandals. I don’t panic; the start is less than 10 minutes from my house, so I’m not in any kind of hurry.

6:30AM: After arriving at the South County Pool in Riverton, UT, I find the line for the “4s” at the registration desk under some sun tents. I am, apparently, a “4”, based upon the time I listed for my expected swim. The categories go 1 (fastest) through 5, where 5 is the “family” entry. I am, obviously, not fast. I mingle, and meet quite a few people with names like “Mark”, “Greg”, and “Gary”. I’m directed to locate my bike and my T1 and T2 gear at the T1 transition point, and told by a helpful volunteer that anything I don’t need for T1 will be taken to T2 for the end of the race. There are no bike racks; I carefully lay my bike down in the grass, realizing that I’ll be sitting in dirt later when I change. Oh well.

7:00AM: Race organizers make announcements to the several hundred participants, discuss the course, and — this being a local church-hosted event — have a prayer to request a blessing on the participants. I realize I’m supposed to have dropped my T2 bag at the church. Most people on this ride won’t bother; 3 out of 5 bikes aren’t road bikes. They are mountain bikes or hybrids, many clearly recovered from a garage where they sat since they were bought years ago. A helpful race volunteer grabs my T2 stuff and hauls it to my chair at T2 on my behalf. I also notice I neglected to attach my footpod to my Vibram Five Fingers Bikila shoes. Oops.

7:15AM: The other “Fours” have shoved me to the front of the line of fours, with the excuse “Any man who wears a swim cap is faster than the rest of the Fours”. Thank you so much, Di, for giving me that gift from your Ironman race! Not feeling very grateful at that moment, though. It’s kind of cold in my shorts & no shirt; the organizers promise the pool is fairly warm.

7:26AM: Timer counts down. I press “start” on my GPS watch. The first lap is SMOKING; I pass three people. I realize as I start my second 100 meters that I’ve basically just burned my arms out feeling so good, and start to backstroke through the lap. Not promising. The last guy I passed glares at me as he WALKS past me. I then realize that at any time I can touch the bottom in this shallow 50-meter pool. I take advantage of that for a second to push off again and get some momentum back.

7:34AM: First transition. Panting. Exhausted already. That was only 300 meters? Really? I climb out of the pool, rip off the goggles & cap, grab my sandals and shirt from where I left them by the pool gates, and trot toward T1 where my bike lay. The grass keeps getting caught in between my toes, and I don’t like running barefoot on grass anyway. Too many hidden dangers, too much time spent running barefoot to like running where I can’t see what’s underneath my foot. The volunteers keep trying to tell me to run on the grass because it’s “more comfortable” than running on pavement. Heck, no. I don’t really feel like running to T1 either… I kind of half-trot, half-walk to my bike.

7:35AM: I never realized jerseys and Injinji socks were so freaking hard to put on when you’re not dry. I’m sopping wet, sitting in the dirt and getting my new tri shorts and hands covered with mud. Next time, I’m either wearing a singlet or else using a zip-up jersey; this over-the-head thing is terrible and I feel like I can’t get the back down. I eventually get everything together and roll out. On the plus side, I didn’t have to futz with a heart rate monitor around my chest; my new Mio Link on my wrist next to my Garmin 910XT has been recording the whole time. Yay technology.

Most other racers notably lack any technology. I feel a little weird even having trained for this short race, particularly when so many others around me openly discuss not having done so. It’s also weird to me that there are a ton of people standing around, chatting with volunteers and amongst themselves. I thought this was a race, but for many the transitions are apparently social hour. Many of them will pass me later on the run anyway 🙂

7:37AM: Rolling out of first transition. I forget to hit the lap counter and reset my watch until I’m well into the trail. Oops. Weirdest-looking swim leg ever in my logs. I hit my stride, calling “on your left” several dozen times. This is where I feel at home. The doc told me I wasn’t allowed to just bicycle for my sport anymore, though; I have to participate in other activities, particularly ones that strengthen the core. Swimming and running suit, so Triathlon it is.

07:58AM: Twenty minutes later, the bike leg is done. I’m a little early, and wonder if I missed part of the route. Was supposed to be a 6-mile ride, but was actually a hair under 5 miles; I’m betting the mapping tool they used when designing the course plotted “middle of road” or something like that, rather than actual lines the racers will use. Other than one steep little section of road, the bike ride was cake. My legs were fresh from the swim — though my arms and shoulders ached! — so it was pretty easy. A helpful volunteer steered me away from a gravel section that people had been wiping out on, for which I was grateful.

T2! Easy switch. I’d un-velcroed my triathlon cycling shoes when coming into T2 (I’ve occasionally second-guessed my decision to buy those over more traditional cycling shoes; not today!), so I just step off my bike, leaving the shoes attached. Take off the helmet and clip it to my bike, laying the bike against the chair labeled “409” that was helpfully provided. Grab the bottle I’d put in my T2 bag and guzzle about ten ounces of Nuun-infused water; I don’t want to carry a water bottle on what should be a short 2-mile run.

08:00: Made it out of T2 in 2 minutes. Not bad, not lingering. Big giant row of snacks that I sail right past; I don’t want any food. I had a good breakfast, and I have a morning snack planned already. Besides, muffins make my stomach hurt, and I’ve already had a banana. Blech. No thanks.

08:10: My plan to run 2 minutes, walk 1 minute repeatedly until my 2 miles are up has a major flaw in it: I don’t know how to program my watch to allow that kind of interval reminder during a multi-sport workout. I try to mind the time on my watch, but running is a really weak point for me, so I inevitably run too long, then start walking and gasping for breath, or walk too long and realize my heart rate has slowed down a lot. I’m slow, I’m overweight, and my plantar tendon in my right foot gives me sharp, jabbing reminders if I push too hard. I settle for doing phone-pole intervals of walk-run. It seems to work well, but the run is by far my slowest segment. My bike jersey is really comfortable and non-chafing, but it’s a little warm for the run; next time I think I’ll jump for the sleeveless variety, because the sweat puddling in the small of my back behind my jersey pockets where I’ve stashed my phone & keys is a regular reminder that I’m not quite wearing the right equipment for a run.

08:26: Twenty-six minutes and twenty-seven seconds later, I run past the finish line to the announcer calling my number and time. While it’s not a fast two mile run by any stretch at around fourteen minutes per mile, I console myself that one year ago today, I was flat on my back, laid out with a back injury and unable to run at all, even if someone held a gun to my head and demanded it. My goal pace for a 5k is currently 12 minutes per mile, and 14 is much better than the 18 I was clocking — running! — just a couple of months ago. I’ll get there.

I grab a drink of water. I go check on my bike and try to figure out where the heck they dropped the T2 bags. A few minutes later, my lovely wife shows up at the course, and we eventually find one another and exchange kisses and hellos, and she tells me how proud she is of me for finishing my first triathlon.

My wife Christy is AWESOME.

She introduces me to a couple of fellows that I didn’t know but that she did, and she grabs all our bags and meets us back at the pool. We chat about the upcoming St. George triathlon on September 13, 2014 as the two other fellows and I cruise down the hill back to the pool. Given that it’s a month away, and is heading SOUTH instead of North as fall closes in, I think I may give that one a try.

Ahem.

A “tri” 🙂

The Value Of Truth

A philosopher named Harry G. Frankfurt wrote a brilliant essay a number of years ago entitled, “On BS”. The title notwithstanding, the essay brilliantly examines a phenomenon you’re going to see your whole life: people who say things for the effect of the saying, not the truth or falsehood of what it is they say. I’m not going to try to restate Frankfurt’s essay here. He does a fine job on his own. I want to focus on what this means to you.

Words are generally a means to an end. That end may be to convey information, rally support, provoke a fight, fulfill the requirements of an oral exam, or whatever. Those words you use will typically be one of three categories: words you believe to be true, words you believe to be false, and words you believe will have some sort of impact regardless of their truth or falsehood (“BS”).

To be an effective storyteller, you must be an effective BSer. It’s the nature of the business. People know that your words are not necessarily true; they realize those words are intended to create an effect and bizarrely they pay you to BS to them. Many other entertainers must be masters of BS in one form or another to do their jobs.

I believe that many of our biggest problems — both personally and as a society — arise when we persuade ourselves not that what is false is true, but that which is BS is true. The difference is subtle but important. Frankfurt has this to say about it:

Both in lying and in telling the truth people are guided by their beliefs concerning the way things are. These guide them as they endeavor either to describe the world correctly or to describe it deceitfully. For this reason, telling lies does not tend to unfit a person for telling the truth in the same way that [BS] tends to. …The [BS]er ignores these demands altogether. He does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, [BS] is a greater enemy of the truth than lies are.

I’m not going to advise my older children to never lie. The truth is, lying is an important skill under varying circumstances: to protect others, to protect yourself, to make yourself look better, and to get out of trouble, among other reasons. It’s extremely important to realize that everybody lies. Especially the people who tell you they never lie; that statement itself is guaranteed to be a lie. As long as you avoid being a pathological liar — someone who lies frequently and for no good reason other than compulsion — you’re really no different than everybody else in that regard. It’s important to know that your ability to discern a lie is really no better than random chance; in fact, you are more likely to believe a lie from a loved one than a stranger.

Those you believe to frequently tell the truth usually have some reasons they do so:

  1. It’s much easier to remember the truth than a lie. Telling the truth is pretty straightforward; you don’t have to keep multiple stories straight or come up with weak excuses. This is the primary reason I advocate for telling the truth as a pragmatic measure.
  2. They have some sort of personal religious, moral, or ethical bias toward truth-telling rather than falsehoods.
  3. Gaining a reputation for telling the truth means that when one absolutely, positively needs to lie, that lie is much more likely to be believed.

I encourage my children for all of the reasons above to tell the truth as much as possible even when it is uncomfortable to do so. Telling the truth about yourself — including your failings — typically makes you look BETTER than you would otherwise, as it causes you to appear humble and self-effacing, even when an impressive lie would be more convenient or useful. Telling the truth rather than a lie to avoid trouble convinces others you are ethical and willing to face the consequences of your actions; the consequences of doing something wrong and telling the truth about it are also usually much lighter than the consequences of doing something wrong and lying about it. Sticking to the truth when describing events helps you remember and keep the story straight.

If you must lie, it’s important to know the truth yourself, and the lie should be carefully planned and not spontaneous. Spontaneous lies are extremely easy to crack through. One just needs to look hard enough. A well-planned lie works on multiple levels to obscure the truth. Certain careers in Intelligence, for instance, require such lies to protect lives and prevent tragedy. These jobs are rare, but exist; you should consider carefully whether you’re willing to live that way.

Above all, save the BS for entertaining people. Avoid using BS to try to get your way; that’s the path frequently used by politicians, lawyers, and CEOs. It often works, but it exacts a terrible personal and professional cost. There is a reason those in these kinds of positions are typically so widely reviled. I — and H.G. Frankfurt — call that reason BS.

FMC: Body Hair

Human body hair serves several obvious purposes, amongst others I probably have not thought of.

  • It keeps us warm in the cold.
  • It is decoration. One’s hair — including that on the head — serves much the same purpose as the mane on a horse. The beauty of one’s hair and/or beard causes a reaction amongst the opposite sex, which may be one of revulsion, attraction, or simply an expression of interest.
  • The comparative lack of body hair on humans relative to other animals is thought to be an evolutionary advantage, allowing us to run longer distances without overheating, whether for running down game or avoiding predators. There may also be some aspect of sexual selection to it, as being less hairy has throughout recorded history been believed to be a mark of civilization or better breeding, while those who are perceived as hairier are often assumed to have other Neanderthal traits as well.
  • It traps and preserves bacteria and fungi, and therefore carries a scent that can be used to mark one’s territory against encroaching would-be competitors or predators.

It is this last bit I want to focus on. In modern society, we care about smells. The dense patches of body hair on modern humans stink. This is an evolutionary artifact, and I submit that scent-marking of our territory — as evidenced upon opening the door to the bedroom of any teenage boy — is a long-expired bit that should be put to rest as well.

If you are human, you stink.

Them’s the breaks.

To mitigate our intense odor, there are several things you can do. We call these “good hygiene”, but it’s really more than that: you don’t wish to offend your family and friends. Here are my tips for all youth aged ten and up; if followed, they will dramatically lower your odor.

  1. Bathe daily with soap, paying special attention to the crevices where bacteria fester and to one’s body and scalp hair: behind the ears, the butt, the genitals, the armpits, the feet (if you habitually wear shoes, as most of us do), and the scalp. All of these regions tend to foster smelly bacteria, which require daily removal to prevent strong odors.
  2. Brush your teeth at least twice daily, morning and night.
  3. If you still have tonsils, gargle with a non-alcohol mouthwash. This will help dislodge fetid “tonsil stones” that tend to build up in the folds of one’s tonsils; even warm water gargled several times a day this way can help.
  4. Avoid malodorous foods if you plan to have company that day. Garlic, asparagus, and onions top the list.
  5. Wash your clothing regularly; any individual piece of clothing should be worn no more than perhaps three times between launderings, and underclothing only once. If you sweated heavily in any piece of clothing, it should be washed immediately thereafter and not left to molder on the floor. The principal exception is outer garments that do not come into direct contact with skin bacteria, such as suit coats, raincoats, snow pants, etc.
  6. If your body hair, like mine, reaches extreme lengths, it’s helpful to trim it every few months if you wish to avoid excessive smelliness. Once or twice a year suffices for mine, keeping it down to less than an inch; if left untended, my leg hair grows longer than the typical length of hair on my head. Keeping body hair groomed is nice for you and nice for those around you. I will not comment on whether body hair is unsightly (such standards change over time with sensibilities about body image), but simply suggest that if your armpit hair is longer than your pinky finger, it almost certainly harbors a surprisingly large colony of bacteria that everyone around you can typically smell.

Thus endeth the lesson.