Laws of Computational Metaphysics

In the post “Welcome to goer.org 3.0”, I mentioned a number of reasons for the redesign. Better permalinks. Better comments. Upgraded feeds. Not hideously green. All sorts of good stuff.

I also upgraded from Movable Type 2 to Movable Type 3. MT3 offers a number of improvements, such as a better web interface and a more sophisticated plug-in system. However, the truth is that I could have done the whole redesign in MT2. And I was reluctant to upgrade, because:

  • I had already paid for MT2, and MT3 would have cost more money.
  • MT2 could operate using flat files, but MT3 requires a database, which would have required me to upgrade my web hosting plan.
  • MT2 was working Just Fine, Thanks.

But eventually all of these became not-true. First, MT3 became free for personal use. Second, my web host made MySQL available for all their plans, even the El Cheapo ones like mine. Third, Jacques alerted me that MT2 was not, in fact, working Just Fine, Thanks. The unpatched security hole was enough to convince me.

So it wasn’t enough to just start creating new posts using a new template — I also had to import all the old posts so I could shut down MT2 permanently. Unfortunately, technology has (surprise!) gotten in the way.

Importing the posts themselves wasn’t too bad. As long as you remember the simple rule of Movable Type upgrades:

  • uploading your posts by FTPing them to the import/ directory: GOOD
  • uploading your posts via the “web upload” feature: BAD

then everything works out alright, mostly. I was pleasantly surprised to discover that even though most of the old posts use raw HTML, and all of the new posts use Markdown, MT correctly formatted them all. Unfortunately this cleverness doesn’t apply to comments, but I’ll take what I can get.

What’s more annoying is permalinks. The old site just had monthly archive pages, not individual entry archives, so it had permalinks that looked like this:

https://www.goer.org/2003/Apr/index.html#29

By contrast, the new site has permalinks that look like this:

https://www.goer.org/2003/04/the_xhtml_100.html

This is pretty yucky for a couple of reasons. First, unless I am reading the documentation incorrectly, MT3 changed its archive formatting syntax so that you have to represent months using numbers. Strings such as “Apr” are right out.
Ok, fine, I can use numbers for months, and I can even fix everything up with mod_alias.

But then there’s the second problem: all my old posts used anchors (the #29 part) in the permalink. What I didn’t know back in 2001 is that the anchor never gets sent to the web server, which means I can’t use that information to redirect an old post to its new location. (Oh, you could always try to do the redirect on an individual basis using JavaScript, but the search engines wouldn’t be able to follow, so screw that.)

So I’m doing the next best thing, which is to redirect each old-style link to the appropriate monthly archive page. Straightforward enough, although I am wondering whether I should put in some special cases for the two or three posts that had wider than usual linkage. For example, if someone links to the aforementioned:

https://www.goer.org/2003/Apr/index.html#29

then my poor webserver only sees this:

https://www.goer.org/2003/Apr/index.html

but I know that the person actually meant to go here:

https://www.goer.org/2003/04/the_xhtml_100.html

and not to some other post made in April 2003. I’m not sure what the proper thing to do is here, but I’m tempted to go with the ugly hack that will help most people and annoy the remainder, rather than the cleaner solution that will annoy everybody.
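
For the curious, the .htaccess ends up looking something like this. (A minimal sketch, assuming my host allows mod_alias directives in .htaccess; the commented-out line is the ugly hack in question, and you would pick one or the other for any given month.)

# Default: send the old monthly archive page to its new numeric-month URL.
Redirect permanent /2003/Apr/index.html https://www.goer.org/2003/04/

# The ugly hack: for a month dominated by one heavily-linked post, point the
# old URL at that post instead of the monthly archive page.
# Redirect permanent /2003/Apr/index.html https://www.goer.org/2003/04/the_xhtml_100.html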

Anyway, this little tale of woe is all just a roundabout way of getting to my Laws of Computational Metaphysics. I used to have one, now I have two. I’m sure someone has stated these laws before, but here’s my formulation:

  1. Information that resides only on a single hard drive doesn’t exist.
    This one is the most important, since it bites computer geeks and non-computer geeks alike, all the time. (Computer geeks: raise your hand if you’re older than 22 and you’ve never lost data.) Among non-computer geeks, only very very very smart people like my kid sister and my mother can be made to understand this problem. So for everyone else my default advice is not, “Get yourself a good backup system,” but, “Don’t store anything important on the computer, ever.”

  2. Permalinks that contain an anchor don’t exist.
    Law #2 has a narrower scope, but I think that amongst the web nerd set, it’s underappreciated.

Feel free to add more Laws of Computational Metaphysics in the comments…

How to Convert AuthorIT to DocBook

Because the public demanded it! This is really just an overview of the process, but it should give you a basic idea about what to watch out for.

  1. Convert your AuthorIT book to DITA.

    DITA (Darwin Information Typing Architecture) is one of AuthorIT’s built-in publishing formats. Publishing to DITA results in a folder containing your book’s image files, a collection of *.dita files, and a toc.ditamap file.

    Sadly, you must take this opportunity to wave your index markers a fond farewell. They are apparently too old and frail to survive this stage of the journey.

  2. Download the DITA Open Toolkit.

    The DITA Open Toolkit (DITA-OT) is a collection of Apache Ant scripts, XSL stylesheets, and other goodies that enable you to transform DITA into other formats, including DocBook. For those of you who don’t live in the Java world, Ant is basically make for Java. Newer versions of DITA-OT conveniently include a copy of Ant, so you don’t need to install it separately.

    To install DITA-OT, unzip the toolkit’s files into any directory and run the startcmd.sh script (or startcmd.bat script on Windows) to configure your CLASSPATH and other environment variables. If you forget to set your CLASSPATH, the toolkit will helpfully indicate this to you by bailing out mid-transformation and complaining that the Ant script is broken.

    Before you run any DocBook transformations, edit xsl/docbook/topic2db.xsl and comment out the template that contains “Related links”. The only thing this template does is riddle your DocBook with invalid itemizedlist elements.

    Do not waste time reading the toolkit’s documentation. The manual that ships with DITA-OT 1.3 actually applies to DITA-OT 1.2, so most of the examples are broken. As for grammar and clarity, let’s just say that the manual’s translation from the original Old Frisian leaves much to be desired.

  3. Transform the DITA document into DocBook.

    All the toolkit’s transformations involve running an Ant script:

    ant [options] [targets]

    To transform DITA to DocBook, run:

    ant -Dargs.input=path/toc.ditamap dita2docbook

    If the transform fails (and all your environment variables are set correctly), there might be errors lurking in your generated DITA source. This is AuthorIT’s way of telling you, “Don’t let the door hit you on the way out, jerk!”

    • If DITA-OT complains about a missing topic reference, there’s a good chance toc.ditamap is referencing a topic that doesn’t exist. Go back to the original AuthorIT doc and try to identify the missing topic. If all else fails, delete the reference from toc.ditamap and move on. Your readers already knew about the safety hazards of handling lithium deuteride, anyway.
    • If a topic contains an xref with a crazy relative path, this can really confuse DITA-OT. The good news is that the toolkit indicates the path that is causing the problem. The bad news is that AuthorIT dumps its DITA output in UTF-16, which is really annoying to grep through (one workaround is sketched just after this list).
    • If you had any “Note” paragraph styles in your AuthorIT doc, these might disappear. Even more strangely, “Warning” paragraphs do make it through.
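
    As for the UTF-16 problem, one workaround (a minimal sketch, assuming iconv is available; the filenames are made up) is to convert each topic to UTF-8 on the fly before grepping:

    for f in *.dita; do
      echo "== $f"
      iconv -f UTF-16 -t UTF-8 "$f" | grep -n 'xref'
    done
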
  4. Clean up the DocBook output with a script.

    Congratulations, your document is now DocBook! Well, more accurately, it’s “DocBook”. Just be happy your tables made it through, sort of.

    Fortunately, you can fix many issues pretty easily by running the document through a cleanup script. This script is particularly important if you’re converting multiple documents. The canonical language for the script is XSLT, but if you’d rather stick it to the W3C Man, Python or Perl would work fine too. Here’s what you’ll want to fix (a minimal script sketch follows the list):

    • Remove all id attributes. These generated IDs are duplicated throughout the doc, and nothing points to them. Throw them away and start over.
    • Remove all remap attributes. In theory, these attributes contain useful information about the original DITA element, which in turn could help you design your post-processing script to provide better-quality DocBook markup. In practice… eh, not so much.
    • Remove all sectioninfo elements. They’re often invalid, and they never contain anything useful.
    • Remove empty type attributes. Not sure how those got there.
    • Remove empty para elements.
    • Change sidebar elements to section elements. Like the empty type attributes, these are another mystery guest.
    • Join programlisting elements. If you had any multi-line code samples, you might find that in the transformed DocBook, each line appears in its own programlisting. Join adjacent programlisting elements into a single programlisting (or screen, if appropriate).
    • (Optional) Change the article to a book, if appropriate. Add chapter elements as necessary.
    • (Optional) Try to improve the quality of the markup by changing emphasis role="bold" and literal elements to something more specific. For example, you could define a list of the commands that appear in your book and wrap each one in a command element. Creating explicit lists of commands, GUI buttons, and so on is tedious, but it’s still better to do these substitutions in the script.
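
    As promised above, here is a minimal sketch of that cleanup pass for the first several bullets, assuming Python with lxml and made-up filenames (an XSLT stylesheet would do the same job):

    from lxml import etree

    tree = etree.parse("converted.xml")            # the raw dita2docbook output
    root = tree.getroot()

    for el in root.iter(etree.Element):            # skip comments and PIs
        el.attrib.pop("id", None)                  # generated ids: duplicated and unreferenced
        el.attrib.pop("remap", None)               # not useful in practice
        if el.get("type") == "":
            del el.attrib["type"]                  # empty type attributes

    for bad in root.findall(".//sectioninfo"):     # often invalid, never useful
        bad.getparent().remove(bad)

    for para in root.findall(".//para"):           # drop empty paras
        if len(para) == 0 and not (para.text or "").strip():
            para.getparent().remove(para)

    for sidebar in root.findall(".//sidebar"):     # mystery sidebars become sections
        sidebar.tag = "section"

    tree.write("cleaned.xml", encoding="utf-8")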

    Finally, there’s the issue of broken IDs and links. At this point, every one of your AuthorIT hyperlinks is a ulink that falls into one of these categories:

    • The ulink’s url starts with “mailto:”. Convert these to email elements.
    • The ulink’s url starts with “http://”, or “ftp://”, or “gopher://”. Leave these alone.
    • The ulink’s url points to something like “D1228.xml”, a.k.a. nowhere. These are your former internal hyperlinks. They’re all broken.

    But don’t be discouraged: your script can actually “guess” where many of these links should point. If a given internal ulink contains something like, “Configuring the MIRV Launch Sequence”, there’s an excellent chance that somewhere else in your document there’s a section with a title, “Configuring the MIRV Launch Sequence”! So all you have to do is:

    1. Convert the content of each ulink to a nicely-formatted ID. Replace whitespace with underscores, remove extraneous punctuation, and lower-case everything.
    2. Convert the ulink to an xref, setting the linkend to the new ID.
    3. For each section element, apply the same ID-conversion algorithm to the section’s title. Set this value as the section’s id.

    A healthy fraction of your ids and linkends should now match up, fixing those broken links.
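
    Here is a minimal sketch of those three steps, again assuming Python with lxml (the slugify rules and filenames are illustrative, not gospel):

    import re
    from lxml import etree

    def slugify(text):
        # Lower-case, strip punctuation, and turn whitespace into underscores.
        text = re.sub(r"[^\w\s]", "", text.lower())
        return re.sub(r"\s+", "_", text.strip())

    tree = etree.parse("cleaned.xml")
    root = tree.getroot()

    # Step 3: give every section an id derived from its title.
    for section in root.iter("section"):
        title = section.find("title")
        if title is not None and title.text:
            section.set("id", slugify(title.text))

    # Steps 1 and 2: internal ulinks become xrefs whose linkend is the slug
    # of the old link text. External links and mailtos are handled elsewhere.
    for ulink in root.iter("ulink"):
        url = ulink.get("url", "")
        if url.startswith(("http://", "ftp://", "gopher://", "mailto:")):
            continue
        linkend = slugify("".join(ulink.itertext()))
        ulink.tag = "xref"
        ulink.attrib.pop("url", None)
        ulink.set("linkend", linkend)
        ulink.text = None                          # xref text is generated from the target title
        for child in list(ulink):
            ulink.remove(child)

    tree.write("linked.xml", encoding="utf-8")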

  5. Clean up the DocBook output manually.

    Oh, you’re not done yet! Here’s a non-exhaustive list of what’s left:

    • Fix the remaining invalid ids and broken links that your script didn’t catch.
    • Fix any other DocBook validity issues.
    • Add programlisting and screen elements where appropriate. Remove excess carriage returns as necessary.
    • Make your inline markup consistent. For example, all command-line tools should be consistently marked up as commands (assuming your organization chooses to use that element). You can partly script this, but mostly this is a manual job.
    • Remove any mysterious duplicate sections.
    • Rename your images from “898.png” to something more descriptive, such as “mirv_reentry_trajectory.png”. Embed the images in a figure with a proper title and id.
    • Add any missing front matter.
    • Rebuild your index by hand. By hand. Jesus H. Christ.

    Now put your feet up on the desk and pour yourself a well-deserved gin-and-tonic. If anyone asks you why you look so frazzled, do not under any circumstances tell the truth. Otherwise they’ll just respond with, “Well, why don’t you just move it all to the corporate wiki?” And there’s only one rational reaction to that. Don’t get me wrong, it’s not easy to inflict serious blunt force trauma using a 15″ Powerbook, but somehow, you’ll find a way.

Do Not Push the Red Button!

So this is the time of year when all Nice Jewish Boys (and Girls) should turn their minds to ethical questions. And no field is more fraught with ethical conundrums than… technical writing. For example: is it better to document every API method or option, no matter how obsolete or dangerous? (Otherwise known as the “Give them rope and explain how they can tie their own noose” approach.) Or should you try to hide all the bad stuff for the user’s own good? (Otherwise known as the “Allanon School of Technical Writing.”)

Usually I advocate the first school. Basically I figure that people are grownups, and if you do your best to explain things, hey, it’s their lookout. This is not to say some projects shouldn’t go the other way, but for the most part, I think more information is good. Plus, the first approach means more work to do, which theoretically means more employment for myself and my fellow tech writers! Solidarity, my brothers and sisters!

Anyway, while I do prefer the first school, this approach sometimes leads to amusing results. For example, my group maintains a certain internal tool that has a few dangerous command-line options. Most of these fall into the category of, “only use this if you really know what you’re doing,” which is fine. But there’s one that I had to document like this:

“[blah blah blah descriptive text.] CAUTION: Using this option can completely destroy your system. Do not use this option.”

I ran across this description again just a few days ago, and man, it never fails to crack me up. Trust me, we technical writers are really hilarious once you get to know us.

Why Oh Why Does Documentation Software Suck?

I find myself this Saturday in possession of a half-full pitcher of mojito. This is something of a problem, given that I need that very pitcher to make mojitos for tomorrow’s Sunday barbecue. So I have been doing my best this afternoon to rectify the problem. I only bring this up so that if this post seems less coherent than usual, it’s because of the Demon Rum. In vino veritas, and all that.

So. In the course of my job, I need to produce documentation that falls into these basic types:

  • API documentation: a terse reference for the classes and methods available for a particular C++/Java/PHP/whatever library.
  • Man pages: a terse reference for the commands and options available for a particular command-line tool.
  • User guides: conceptual information and examples, written around the relevant API documentation and man pages.

And I need to produce said documentation in the following formats:

  • HTML: the primary format for modern documentation. At my very first job, we produced our documentation as very nice perfect-bound 7″x9″ manuals using Framemaker. That era is long gone.
  • PDF: in case someone needs to print the documentation.
  • troff: man page format, suitable for installing into /usr/share/man/ or wherever man pages go. To be honest, I’m somewhat confused about the difference between troff, nroff, and other *roff variations. But I suppose I shouldn’t worry my pretty little tech writer head over such things.

For engineering documentation, I don’t think these types and formats are all that shocking. There are thousands of writers and engineers who are faced with the same problem every day. And yet there is no documentation technology that can handle all of these documentation types and output formats seamlessly. None.

AuthorIT, Framemaker + Webworks, and other mid-range tech writing tools can at least produce HTML and PDF output. All of these tools are Windows-only. All use a proprietary binary format. None handles man pages or source code-generated API documentation. (We won’t even mention Microsoft Word, which still hasn’t figured out how to do ordered lists consistently, or handle documents longer than 100 pages.)

The only toolchain I’m aware of that even comes close is DocBook. It’s text/xml, so it plays nicely with UNIX. It doesn’t require an expensive client to edit. It can produce output in myriad formats, including HTML, PDF, and man pages. It’s open source. It’s modular (with XInclude). It is the only documentation toolchain that even approaches the holy grail of user guides, API guides, and man pages.

Except… There’s no such thing as “out-of-the-box” DocBook: you need to pick your editor, XSLT processor, FO processor, and template customizations, and there is very little guidance on how to do this.
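
To give a flavor of what “picking your toolchain” means in practice, here is one possible pipeline. (A sketch only: the stylesheet paths vary by distribution, and the filenames are made up.)

# HTML, chunked into one file per chapter or section
xsltproc --xinclude docbook-xsl/html/chunk.xsl book.xml

# PDF, by way of XSL-FO and Apache FOP
xsltproc --xinclude docbook-xsl/fo/docbook.xsl book.xml > book.fo
fop book.fo book.pdf

# man pages
xsltproc docbook-xsl/manpages/docbook.xsl refentry.xml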

Except… the default HTML output looks like something out of 1993. Basically, the output is nicely-marked up semantic HTML with no CSS whatsoever. Which is fine, except that this means you’re going to have to sink some time into making the HTML look pretty.

Except… PDF output is really buggy, mostly because the major open source FO processor is still in beta status. Not that I blame them — XSL-FO is hard, and typesetting in general is really hard. But the alternative is to buy a commercial FO processor for $4000/CPU… grrrr…

Except… in general, source code documentation generators do not integrate with DocBook. For Java code, there’s a Javadoc doclet that produces DocBook (yay!). For PHP code, phpdocumentor can generate DocBook natively (yay again!). But for C++, Perl, Python, and other languages, you’re screwed.

Why oh why does documentation software suck?

Yahoos Are Surprisingly Polite!

As I mentioned earlier this month, our group just moved into a new building. One of my coworkers, who we’ll call “Dave”,[1] had a meeting right after the move-in. It struck him that all the whiteboards in the new conference room were completely pristine… and this situation could not stand!

So Dave drew a diagram on the whiteboard. It consisted of:

  • A cylinder, with an arrow pointing to…
  • a box, with an arrow pointing to…
  • a cloud labeled, “The Internet”
  • a stick figure person or two
  • several other boxes surrounding the diagram labeled, “Deliverables”
  • and finally, a big “DO NOT ERASE” message next to the diagram.

Lo and behold, the diagram is still there weeks later. Isn’t it amazing how respectful people are of the “DO NOT ERASE”? It really restores one’s faith in humanity, doesn’t it? Unless… well, the alternative theory is that nobody has had time to erase the diagram because they’re too busy polishing up their resumes, pronto. (“My God, honey… did I tell you what I saw on a whiteboard today? I’m surrounded by total idiots!”)

1. Because his name is “Dave”.

Can We Please Get Some ‘Quality of Service’ Around Here?

Let me go on record to say that I agree with the telecoms that network neutrality should be abolished. After all, it isn’t AT&T’s fault that the original architects of the Internet chose to design the Internet in a manner that prevents AT&T from maximizing its revenue and delivering increased shareholder value. Hell, AT&T fought the invention of the packet-switched network all the way. So, let’s cut them a break, eh?

First, the telecoms really do deserve to be able to extract more rent from Google and my employer and other ingrates who have figured out how to make large amounts of money using their precious infrastructure. ‘Cuz how fair is that? It’s like, I build a road for all kinds of people, and then you use that road to make a fortune in the lucrative asparagus-shipping market, and all you do is pay me a pittance for road maintenance. What a bastard you are! Of course the telecoms could try to extract this money directly, which would obviate the need to shell out all that extra cash to Washington lobbyists and PR firms and whatnot. But trust me, that money is well-spent. Just think how embarrassing it is to call up your top customers and say, “Look, I realize that you’re buying a lot of my stuff, and I realize that under ordinary circumstances this would mean you should get a bulk discount… but see, the thing is, I’d actually like to charge you a lot more than anyone else, because, well, you can afford it. Right? Guys?” Even a stone-cold telecom exec can’t stomach making that sales call. They pay telecom executives well, but not that well.

Second, the telecoms also face a deadly threat from their users. Current pricing models for DSL and cable assume that users only make occasional requests for bytes. The telecoms can “guarantee” a certain minimum download speed to all their customers because on average, no one customer is actually using anywhere near the bandwidth that the company agreed to deliver. That model was a swell idea a few years ago, but now things have gone crazy. Cray-ZEE! People are downloading giant video files! Listening to streaming audio! Watching streaming video! Playing MMORPGs! Joining peer-to-peer networks! Bandwidth usage is going up, up, up. And the telecoms can’t just raise rates, because ordinary people tend to get really angry when you start charging them more for the same service, particularly when the service has historically always decreased in price.

So the only sensible solution is to enable the telecoms to filter out and degrade quality for certain websites as necessary, so that the telecoms can A) extract higher rates from wealthy businesses on the high end and B) stamp out bandwidth-sucking startups and other wastes-of-time on the low end. This requires abolishing the basic standards on which the Internet was founded, but hey, you gotta break some eggs to make them omelets. Well, okay, that’s not the only sensible solution. Sam has an alternative plan — he says, “Maybe they can charge the NSA for our phone records if they’re hard up for cash.” That’s my Sammy, always thinking outside the box!

That’s Not Gypsum You’re Smelling, That’s Brimstone!

Must I thus leave thee, Paradise? — thus leave
Thee, native soil, these happy walks and shades?

Platform Engineering’s fall from grace has been ignominious indeed. At the height of our powers, we had a commanding view of the campus from the top floor of Building A. Then they moved us down to the second floor of Building A. Then the second floor of Building B, Building A’s poor cousin. And finally, tomorrow we move across the street to the newly-reclaimed Building E. Somebody up there hates us.

A couple of weeks ago, several of us went on an exploratory mission to Building E. The place was gutted — walls stripped to the studs, pipes exposed, workers welding, the smell of gypsum everywhere. We trooped up the stairs to check out our floor. Ryan opened the stairwell door, looked out at our floor, closed the door, and said, “It’s raining in there.” We thought Ryan was kidding, but sure enough, water was streaming down from a ceiling pipe and pooling on the new carpet. The puddle was large enough to comfortably support several full-grown koi. As we gawked, a construction worker with no hard hat snapped at us, “This is a hard hat area.” Nothing to see here, move along…

Anyway, it could be worse — at least they didn’t shuffle us off to the satellite campus at Mission College. I mean, we’re not total losers.

Money Down the Drain

Goddamnit. I wasted almost ten minutes today figuring out why s/</&lt;/g wasn’t doing what I wanted it to do. Duh.
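
For the record, the sed behavior at work here: in the replacement text, an unescaped & stands for the entire matched string, so that command turns each < into <lt; instead of &lt;. One backslash fixes it:

echo '<b>' | sed 's/</&lt;/g'     # & expands to the match: prints <lt;b>
echo '<b>' | sed 's/</\&lt;/g'    # escaped: prints &lt;b>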

How I was allowed to graduate college without having sed fundamentals burned into my brain, I will never know.

Dumb and Dumber

My college buddy Brad dropped by this weekend. He had asked to see World of Warcraft right before he took off. Unfortunately, my speakers mysteriously stopped working. Everything was connected properly, the speakers were powered on, the light was green… but there was no signal whatsoever on the line out. Whatever was wrong with my speakers, there wasn’t time to fix it, so that was the end of that.

The next day, I conducted a rigorous analysis of the malfunctioning equipment and determined that… the volume was turned all the way off. Good thing I went to Engineering School.

I’ve noticed that my brain is getting less and less trustworthy when it comes to mathematical issues. I thought the decay would stop at, oh, solving simple PDEs, but no. Just today, Mom asked me a straightforward math question for the next edition of her book: “What are the odds of getting seven heads in nine coin flips?” The answer leaped to mind: “(9 choose 7) / (2^9)“. But the scary thing was, I didn’t know why. My brain is cluttered with mathematical machinery that can occasionally lurch to life and spit out answers, but it’s become disconnected from the rest of my thought processes. I might as well have determined the odds through Divine Revelation.
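
For the record, a quick brute-force sanity check (a sketch in Python, and certainly not what will get printed in Mom’s book) agrees with the mysterious machinery:

from itertools import product
from math import factorial

def choose(n, k):
    # The binomial coefficient, n! / (k! * (n - k)!).
    return factorial(n) // (factorial(k) * factorial(n - k))

# Brute force: enumerate all 2^9 equally likely sequences of nine flips
# and count the ones with exactly seven heads.
sequences = list(product("HT", repeat=9))
favorable = sum(seq.count("H") == 7 for seq in sequences)

print(favorable, len(sequences))      # 36 512
print(favorable / len(sequences))     # 0.0703125
print(choose(9, 7) / 2 ** 9)          # same number, the (9 choose 7) / 2^9 way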

Since this was going into my Mom’s book, I went through an exercise to convince myself that (9 choose 7) really is the right way to count the possible combinations of heads. I then confirmed that by searching on the web. Whew. Which brings us to an even sadder tale: the first search result I got was not the legitimate Drexel University Math Forum site… but an impostor, Bonus.com.

The impostor’s home page is a cheesy blinky flashy games portal, so it’s not obvious at first glance why they would want to errr, mirror the Drexel Math Forums. You would expect to see a blinky banner ad over the borrowed content, but none appears. Actually, if you view source, there is a banner ad at the top… but the link to the image is broken! As Columbo would say, “Dis is a puzzler.”

A little more poking around uncovers the reason for our confusion — we were looking at the wrong page. Unfortunately, the site has been designed to defeat deep links, so I can’t provide a direct link. To get to the page we were supposed to see in all its blinky flashy glory, you need to search the site for “math”, scroll down to the bottom and click on “Ask Dr. Math”. Below that is a mirror of the entire Dr. Math site, framed and lookin’ fabulous.

Drexel’s Terms of Use are reasonably liberal, but Bonus.com still chooses to violate Drexel’s “Credit and copyright notice” and “Links and Framing” policies. Note that Drexel conveniently links to their Terms of Use on each Math Forum page, but Bonus.com has responded by cleverly removing the underlying URL (while leaving the link text itself intact). Just for chuckles, here’s the cache of the unframed mirrored page, courtesy of MSN Search[1]. The banner ad isn’t visible because of the aforementioned broken image link, but if you view source, you can see the detritus left by Bonus.com (and MSN Search) at the top of the page. I wonder what Drexel University thinks about this?

Let’s find out.

1. You can’t help but wonder how Bonus.com fundamentally differs from the MSN Search cache. I think the answer is that the MSN Search cache A) provides the URL to the cached site, and B) makes it clear that the content doesn’t belong to MSN, but came up in the context of a search. If Dr. Math had chosen a design that did not display “Drexel” on every page, we would have no way of knowing that the pages belonged to Drexel U., not Bonus.com.

No Trees Were Harmed In the Writing of This Novel

As an experiment, I’ve decided to write The Book entirely on my computer. This might not be a revolutionary move for some people, but it’s a revolutionary move for me. Ordinarily when I write, I have all sorts of paper flotsam — notes from interviews, printed-out specs with more notes from meetings, sketches, and so on. However, for The Book I’m trying to do everything in software.

On the face of it, this is a pretty stupid idea.

iBook vs. Pencil and Paper
Characteristic          12″ iBook                 Pencil and Paper
Cost                    $999                      $1.79
Weight                  4.9 pounds                5 ounces
Battery Life            5 hours                   Infinite
Resolution              1024 x 768 @ 106 dpi      600+ dpi at typical viewing distances
Uptime                  99.99%                    99.999% (with hot-swappable backup pencil)
Operating Temperature   50 to 95 F                -459 to 451 F

But despite the obvious advantages of paper, I am trying to do everything on the iBook anyway.[1], [2] The main reasons are:

  • Backups and archiving.

  • Organizing. (I have a much easier time keeping virtual things organized than paper things.)

  • Hyperlinking.

  • Searching.

In particular, I think the first and fourth items are what will make the effort worthwhile. And there is, I believe, a qualitative change between going 99% digital and 100% digital. It’s the difference between knowing that everything related to The Book is archived, versus “everything except for my story maps, and those scribbled notes from the coffee shop last October, and …”

Of course even if you buy all that, there are still major tradeoffs to consider. One drawback is that you lose some spontaneity, because you have to have your laptop to write. This would be a huge pain for writers whose modus operandi is to scribble huge piles of notes whenever inspiration strikes. Unless you’re a whiz with your PDA or smartphone, you’d have to resign yourself to a lot of transcription.

Another drawback is that certain writing techniques don’t translate all that well over to the digital world. For example, I find that while storymapping can be very useful when using pen and paper, it basically sucks on the computer. In my next post, I’ll describe a couple of things I tried to make storymapping on the computer suck a little less. At least on Macintoshes. As for making things suck less on other platforms, I wouldn’t really know where to start.

1. Despite the title of this entry, I am not actually doing this to “save trees.” At this point in my career, I’ve killed far too many trees for me to start worrying about them now. See, the first time you print out a huge manual only to discover that the global template was screwed up and you have to junk the whole thing — yes, yes, you feel horribly guilty. The second time this happens — you still feel pretty guilty. But by the seventeenth time, you feel nothing but cool professional detachment. Presumably, contract killers progress psychologically in much the same way.

2. And lest you dismiss the whole enterprise as the gimmick project of an effete, technology-obsessed Silicon Valley man-child, I should state for the record that I cannot possibly be any of these things, as I do not own a cell phone or even a working[3] television set. So there.

3. The key word being “working”. I actually do own a TV, but I have no cable subscription, and the antenna reception is awful, so the television sits unplugged in the corner of my office, serving as an ugly and not very practical end table. I keep it around mostly because I think that at some point in the future, I might want a TV. That’s the theory, anyway. Unfortunately, the truth is that I am, in fact, an effete, technology-obsessed Silicon Valley man-child… and so when the time comes, I know quite well that I will go out and get a fancy super-thin high-definition whizbang TV, and my current TV will remain an ugly and impractical end table.