Sunday, September 14, 2008

Technology as Dribble Glass

[I wrote this in 1997, after I had read "The Unix Haters Handbook," which at one point compares the "curses" terminal handling library to a dribble glass for the programmer. This is a little bit of a rant, but I still think it's wrong, not just unnecessary, to apologize to a bad design. --Steve]

Once in a while I find myself trying to explain to someone
that the fact that they didn't adhere to some stupid rule
of some stupid piece of technology
(often stupid in a subtle way, like the fact that
the VCR must be OFF or it won't record X-Files tomorrow night,
or the way that Dilbert, intending to delete an embarrassing
phone message, instead sends it to all his coworkers),
means the piece of technology is stupid,
not them.

Listen, it's a dribble glass.
It's designed in such a way that it dribbles on you.

"Yes, but I knew that."
It's still a dribble glass.
Dribble glasses work by fooling you into thinking they're something sensible,
fooling you into doing something perfectly sensible with them,
then doing something to you that looks like it was your fault.
Hah, hah, you trusted me.

"You'd think I would learn."
The point is that this is a retrograde, damaging kind of "learning."
It is unlearning the expectation of civilization--the expectation that people won't hand you a dribble glass. We are in denial. Instead of admitting the horrible truth that we are being handed dribble glasses left and right, we take the blame ourselves, eroding the idea that technology should not dribble on us, that it should be adapted to us.

First maybe learn to be a human being,
then, when you're done with that, in your spare time,
you might take up the hobby of learning
the obeisance before fetish-objects,
the nervous glance,
the stepping over cracks,
the counting to ten,
the throwing salt over your shoulder,
the obsessive triple-checking and hand-washing,
the knees-bent, arms-outstretched walk,
the saying-the-opposite-of-what-you-mean, but winking obscurely,
in short the behaviors
that are adapted to the fun house--or funny farm--around you.

There is a moral point here.
Acquiescence to barbarism implies support of barbarism.
By saying "I should learn," one implies that
we all should learn to adapt to bad designs,
that it's okay for people to hand us dribble glasses.
We shouldn't. It ain't.

It's okay, as a provisional, practical matter, to learn the ways of the idiotic mutant sheep-monkey some piece of equipment was designed for, but don't make it sound like a measure of a person.

Measuring ourselves by dribble-glass standards is not a game I want to be included in, implicitly or not.

Monday, July 28, 2008

Just Tools

One of the tools people sometimes bring to bear on the future is the idea that technology is, or ought to be, ought to be treated as, or ought to be thought of as, just tools. I want to give a plug for a little sophistication and clarity around this idea-form. Tools are never just tools, but it's good to try to make them as close to that as possible, though that's really hard in more than one way.

If you run across the just tools idea, it's worth pausing to note what's not being, threatening to not be, not being treated as, or not being thought of as, just a tool, and how.

Maybe someone is talking about Technology with a capital T as an autonomous force. Maybe someone is treating a particular technology as a way of life, or getting too involved with it. Maybe a tool encroaches too much. Maybe there's just too much ado made about it.

And then, whether it's you or someone else talking, try to decide whether "Technology is just tools" is meant as a declarative statement, a reason not to worry, or more of a moral statement, for instance implying that it's (only?) when we treat tools as more than tools that we create problems. Then here are my two pence:

I should probably come out in favor of tools as tools first. I don't like tools as lifestyles. I don't like lifestyle products. I don't like products that force me to bend and shovel my whole life to fit them. I don't like that people treat technological change--in particular the development and adoption of standards--as a kind of ongoing fait accompli. I don't like how willing people are to treat technology as something you have to learn and become facile with, on pain of appearing an idiot, like multiplication tables or professional jargon. I don't like passivity in the face of how other people design things.

In particular I don't like tools that won't sit down and act like tools. To cite one trend, a lot of technological change is about putting computers into things, then putting more and more software into the computers, offering a growing grab-bag of features decided on by some sordid, faraway and out-of-control process that has not nearly enough to do with making a tool that acts like a tool.

To be just a tool, a tool mustn't be mysterious or unpredictable. Immediately most computers and computer-containing devices fail this test. We might add that just a tool needs to be immediate, apparent, sensible, clear, under control... fail, fail, fail, fail, fail...

Here is the good news about how tools should be tools: there are people who can design and build tools. It is possible to improve designs or produce better competing designs. It is possible to get funding to do so. It's possible to market, sell and be successful at it. To get other technologists and technology companies to change their ways. To show the public they don't have to settle for less, or for more-stupid; to change the world.

And the bad news is that every one of those steps is hard, hard, hard. In particular, designing something simple, making something be just a tool, is hard. The less stupidity and non-toolishness, the tougher it is. Simplicity isn't simple, it's the hardest thing. I don't think anyone gets by with just a knack for it; anyone who tries to make something simple and useful eventually grinds against the question of just what kind of simplicity matters. Then there are all the steps of getting something good out into the world.

One way to deal is to use tools that the world shovels out, but choose the best ones, of course, customize perhaps, and use them only in tool-like ways. I haven't found a clear-cut method to do this. Bad tools, I mean typical tools, will fight you, and if you engage a tool too strenuously in that fight, for instance by extensively customizing it or using it too much against its grain, then you are fighting it like an enemy rather than using it like a tool.

Technogenesis is a disgusting process involving slimy spasming organs like politics, business, marketing, geekery, an uncritical public and a pile of relevant historical accidents and mistakes that gets ever taller, never shorter. Saying so is only fatalism if you think there's no hope in this.

So now I will argue against tools as tools, by use of an example.

Most of the genes of any organism code for proteins, and most of those proteins are enzymes. Enzyme just means a catalyst made out of protein, and a catalyst is a molecule that somehow wedges or nudges other molecules into or out of place, but isn't used up in the process. Often the process happens on its own without the enzyme, but having the enzyme around makes it go faster.
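That "makes it go faster without being used up" can be sketched in a few lines. This is my illustration, not the post's; it's a minimal first-order reaction model, and the rate constants are made up. The catalyst shows up only as a larger rate constant, and the enzyme count is the same before and after.

```python
import math

# Minimal first-order model of a reaction S -> P.
# A catalyst lowers the activation energy, which appears here as a
# larger rate constant k; the catalyst itself is not consumed.
# (Rate constants below are invented for illustration.)

def substrate_remaining(s0, k, t):
    """Substrate left after time t, under first-order decay with rate k."""
    return s0 * math.exp(-k * t)

K_UNCATALYZED = 0.001   # per second: the reaction crawls on its own
K_CATALYZED = 1.0       # per second: same reaction, enzyme present
ENZYME_COUNT = 100      # unchanged before and after the reaction

slow = substrate_remaining(1000.0, K_UNCATALYZED, 10.0)  # most substrate left
fast = substrate_remaining(1000.0, K_CATALYZED, 10.0)    # almost none left
```

The enzyme never appears in the bookkeeping of what's consumed; it only changes how fast the books balance.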

Most of the things your genes make are molecules that sit around helping chemical processes to happen. They are tools. Your genome is a catalog of tools. If you ask how any of the structures or chemicals in our bodies got there, most of them were produced with the use of enzymes. How did the enzymes get there? The main molecules of the genetic machinery--DNA, RNA, ribosomes--are all catalysts. How did the raw materials, the energy-carrying chemicals get there? Through processes carried out by the body's enzymes.

We are huge bags of tiny tools each owing its existence to large numbers of other tools, each helping to produce large numbers of other tools. Mutations change those tools, and when the tools are changed species change.

This pattern of mutually-supporting piles of tools is repeated at all scales of life, up through organelles, cells, tissues and organs. Also for ideas, habits, attitudes, all the blebs of learning. Up through social conventions, ethics, languages and professions. And sideways to the things we usually call tools and technologies.

Look from a tool to the pile of things it's used for: it's part of some semi-random collection of things it's suited for in one way or another, and an obstruction in other places.

We pick up a tool and it's helpful for some things and not others. We didn't pick those two sets. Parts of our lives, not completely of our shaping, are made easier by having the tool. And by adopting the tool, we make some things get harder or go haywire.

Specific usefulness has never been refined in pure form. In fact I doubt a pure specific use, a pure end, can be identified. Everything's always on the way to something else, and facilitating one thing facilitates the set of things it's on the way to. Nor is there a hard boundary between the user and the used: our tools become parts of the collections of tools that are us; a tool changes us in ways defined by the interaction of that tool with the rest.

I mention this view of tools as soup-ingredients with unpredictable evolutionary effects to put a bracket around my view that we ought to wrestle tools into their rightful subservient place. Here is the heroic character, the Tool User, in an opera of meaning, complexity and history. The critical tool-using point of view is one element in the metabolism of our lives. The I/Tool distinction is always being pushed back at chaotically, and "I" has to pick its battles and admit that life isn't simply driving a station wagon and checking items off a shopping list. Tool Specifier, Tool Demander, Tool Builder, Tool Wielder, has an important dignity- and clarity-promoting job, an identity-defining job, but what is it in this larger picture?


So, without meaning to, I've set up one of my radical/reasonable tensions. In this case the tension says to hold both that maelstrom and that hero in mind side by side and ask, what is it we find so right about this particular hero? Why have we been underappreciative and why is she so unsung? Where do we find her?

And it's the point of this sputtering blog: to try to follow the plot and figure out where we're supposed to make our entrances. The Future is not a TV show on a screen, we are actors with parts. We are (from the point of view of this particular post) singing, gesticulating, hyperkinetic bags of tools. At some point or points we become protagonists; where are those points?

Monday, May 19, 2008

Time's Conveyor

Today, driving through Concord, I passed a funny-looking house with a sign: Octagon Farm. It reminded me of the huge octagonal barn at Linvilla Orchards, where we used to get corn and other rustic produce in the summers when we were teenagers. I thought I might like to drive back through that area with my siblings and see the place again.

That brought back a vague memory of being prisoners in the back seat as my father and one of his siblings dragged us kids through some old familiar stomping ground of theirs, droning on about this and that detail, absent landmark, or memory. How ridiculous they seemed, concerned with things so far in the past.

I guess I didn't understand time then. The life of an old person is the same length as that of a young person; it's just made of shorter years. Time is a conveyor belt that comes out of your chest and proceeds down a track that is only maybe a hundred feet long. The far end is a little harder to see but still well in sight. As things happen they emerge onto the belt and roll away. But as they roll they get more and more flattened till they're something like a stack of cardboard cutouts and stage backdrops.

As an example, I'm sure that, when I was a teenager, my father's and uncle's memories were as close to them as Linvilla Orchards is to me now. See? Not at all receded into irrelevance and ridiculousness the way I had imagined from the back seat. My umpteen years of memory were stretched out to an extent that would place their memories out of reach.

My father's reminiscences in turn reminded me of another backseat torture as my father drove his father through long country roads to enjoy the farmland scenery and autumn foliage. How indescribably boring. I didn't understand how the new stuff, like the shopping mall my childhood self would have preferred, can become so tiresome, and that sometimes only old places have freshness.

Linvilla's barn (which when I knew it was already a retail store, a Country Living Experience) has burnt down, but they intend to build a replica on the spot. I'm sorry you can't see it, it was something to behold, right in that area there.

Thursday, May 15, 2008

Kevin Kelly's Technium

From Kevin Kelly's introduction to the book in progress he's blogging on his site The Technium:
For the past year and a half I have been studying the history of technology, the arguments of technology's critics, projections of its future, and the tiny bit of technic philosophy that has been written, all with the aim to answer a simple question: How should I think about new technology when it comes along?

Kelly was editor of The Whole Earth Review and then Wired, and wrote the Bible-chapter-level Out of Control. He reviews Cool Tools at his site.

Sunday, April 20, 2008

Generation Z(ero)

Damn! I forgot to worry about the weather!
--a friend looking out the window on the morning of a trip.

Worries about the long-term future of the human genome are ironic on a couple levels. In short, DNA will become easy, and then obsolete, in the space of one generation (Z) or two.

One version of legacy-worry is C.S. Lewis's notion, expressed in his The Abolition of Man (and correct me if I've misinterpreted) that any action that affects the future is immoral because, basically, the future has no say in it. In my reading of the book, he seems to gloss over some things: That every action or inaction has an effect on the future. That even prosaic choices (of mates, food, childrearing practices, etc.) affect the gene pool. That no one chooses to be born or into what family, or even species, they will be born. That if people in the future have a right to make their decisions, then there must be some decisions that are ours to make as well. These seem important points in any discussion of our effects on future people. Lewis is either putting forth an important subtlety that I don't understand, or a brazen absurdity I don't know how to deal with. I mention The Abolition of Man because I've heard it upheld as a paragon of thinking about the future.

Then there's the more prosaic and reasonable concern that popular adoption of methods that change the gene pool more directly could have catastrophic, perhaps irreversible, unintended consequences.

The concern about irreversibility cancels out in a way: just the idea that we would make widespread genome changes, and not see the results until too late, implies a technology that can deploy genetic changes among large numbers of people quickly. It's hard to imagine such a technology without the ability to make backups of individuals' genomes, and without the ability to reverse the changes it makes. (Your entire genetic makeup is about as much data as is on a CD or keychain flash drive. The portion of that that isn't duplicated in every other human is much less, about one percent.)
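The CD comparison holds up as back-of-envelope arithmetic. The figures below are rough assumptions of mine (genome size, CD capacity), not numbers from the post, and I take the post's "about one percent" at face value for the unshared portion.

```python
# Back-of-envelope: the genome as data.
# Assumed figures: ~3.1 billion base pairs; 4 possible bases
# (A, C, G, T) means 2 bits per base; a CD holds ~700 MB.
BASE_PAIRS = 3.1e9
BITS_PER_BASE = 2

genome_megabytes = BASE_PAIRS * BITS_PER_BASE / 8 / 1e6   # ~775 MB
cd_megabytes = 700.0                                      # same ballpark

# The part not duplicated in every other human, using the post's
# "about one percent" figure:
unique_megabytes = genome_megabytes * 0.01                # under 10 MB
```

So the whole genome is roughly one CD's worth, and the individually distinctive part would rattle around on the smallest keychain drive.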

The larger irony is a larger version of trying to imagine genetic technology advanced enough to cause widespread problems but not advanced enough to make backups and restore from backups.

Try to imagine a technology that gets to the point of making dangerous changes to everyone's genome, and stops there. In particular, try to imagine people with that level of technical flexibility...stopping at DNA and protein as materials to make people with, when we already have materials with better properties, and are close to building things at finer levels of detail than DNA and protein can.

When I think about this, I throw in another factor: if being based on our inherited DNA is essential to what makes life meaningful, then it seems to me we should think in terms of the time scale that self-run DNA evolution takes: thousands and millions of years.

Here are some scenarios I do and don't see coming out of this mix of constraints:

Unlikely: that we cause catastrophic widespread effects, without having progressed far, such that civilization collapses and we don't get farther. In other words, most of us set out on a narrow bridge which snaps before we get across. My optimistic doubts about this are that, on the one hand, we won't be able to deploy big changes widely before making very big advances in technology, and on the other, that we wouldn't be that reckless if we had the chance. This is the most reasonable version of genome worry: that we might make big changes far in advance of the knowledge that would make them safe. But it's a short-term worry.

Possible: that technological progress stops really soon, like within a year, and stays stopped for millennia. This could happen by a physical catastrophe, war, or by some ideological catastrophe much more serious than what we call the Dark Ages.

Not possible: that technology progresses, that we make only very conservative changes to the human genome, and that AIs do not take over and obsolete us, for more than a hundred years. The problem is that very quickly, abandoning DNA becomes a perfectly conservative thing to do.

Not possible: that technology progresses, we completely refrain from directly changing who we are, and AIs don't take over. Besides the problem of controlling AI without mandating a dark age and without cyborging, the problem here is imagining everyone getting to the point of understanding what steps are perfectly safe, but steadfastly refusing to take them. I see this as more likely than the above scenario of reasonable changes stopping with DNA, but not much more.

Not likely: that technology progresses and we leave our genome alone, but become chewy centers within cyborgs, or emotion centers within AIs, such that we think of our human core (on those infrequent occasions when we do) as a small part of ourselves--but untouchable. I find it hard to imagine the combination of yuck factors that would permit this without allowing transcendence of that particular awkward core technology. On the other hand the CPU in all modern Windows and Mac OS computers, and almost all Linux computers, is essentially a large well-designed computer pretending to be a very small, badly-designed 8086.

But is anyone who is concerned with the long-term future of the human genome, concerned that it be preserved carefully in this chewy-center, appendix or spandrel scenario?

So the irony is that while the long-term future of the human genome might be important, that's only in the case where civilization dies or goes brain-dead from some non-genetic cause, and we're hoping that having current-design humans around will increase the chance of civilization emerging from the catastrophe hundreds of years later, rather than waiting for a new worthy species to evolve. But maybe there are better ways to provide for that possibility, rather than restricting the options of the majority of the population in the short term.

My arguments are I-fail-to-imagine arguments. But what I am failing to imagine is that the strange, tragic and absurd corner-cases are the ones people are thinking of when they worry about our genome. More likely, it seems to me, they haven't thought through the middle-of-the-road cases.

The wider-still irony is that being concerned about the long-term future of the human genome is failing to understand the magnitude of changes that are going on. It's a failure to think ahead. Maybe a better worry is to ask, how do we ensure that our essential values, information, properties, abilities, and so forth, are being preserved at each step as we change the details of ourselves and our technologies?

Sunday, April 13, 2008

It's the Spiritual, Stupid!

"Half a bee, philosophically,
Must ipso facto half not be.
But can a bee be said to be,
Or not to be... Do you see?"
--Monty Python, "Eric the Half-a-Bee"

"A thousand plastic flowers
won't make a desert bloom."
--Fritz Perls
The topic of my sermon today doesn't exactly fit the "It's the X, Stupid!" prototype but close enough. Also, the term "spiritual" may be a stretch for some; in fact I'm thinking of something closer to "Existential" myself.

A lot of the qualms people have about future technologies or their effects don't have much to do with the technologies themselves, but rather with underlying issues in people's lives that thinking about new possibilities stirs up. Often someone writing about the future will express a qualm in general terms that seem embarrassingly close to an admission of the author's own rug-swept angst.

I mean issues in a person's life like: what's the meaning of it? Or, what to do with it?

The example I'm really thinking of is longevity. "But what in the world," conservatives ask, "will people do with all that time in their lives?" I can't reproduce the horror, rebuke and utter puzzlement with which this question is asked, but that's the mix.

Deciding what life means and what to do with it is a tough one. The Existentialists came up with descriptions of the human condition that sound like horror movie scripts, captions for The Scream, or lists of imbalances of the bodily humors. The Void is scary enough that it's possible to understand why, when the prospect of more healthy lifespan is opened up, some people's first focus is on the associated voidspan.

But people horrified in this way probably haven't solved (any more than I have) the problem for life as they previously knew it, either. Technology, change and the future aren't central to the question, they merely disturb the rug. That cloud of dust, it's old dust-- it's the spiritual.

The X in "it's the X" was originally Content. Amidst new things like interactive software, hypertext, web pages, animated GIFs-- no wait-- Flash! and so forth, it's easy for a designer, an author, a company, a culture, to get lost in the technology aisle, shopping for and pasting together bits of glitter. Sometimes communicators need to be reminded that they're supposed to be getting Content across. Lately a mutant ideal says that Web 2.0 is about Community, stupid. This requires a shift from Content gear, but you see the pattern.

So by analogy, technology is only stuff to support, or stuff that distracts from... life, but what is life supposed to be?

This post isn't about answering questions like that, it's to say that existential or spiritual questions belong to us as individuals and belong in our lives. Facing The Void isn't a mistake, an imposition, a fault, or a wrong turn. It's having a seat at the table, being a player in a game whose score isn't settled. Neither technology, nor a lack of technology, nor external restriction on technology answers spiritual questions.

What I'm getting at is that "we," in any collective way, can't make spiritual choices for us, the individuals. If the issue is spiritual, then "we" have no business making decisions for us. Nor are we as people who steer technologies or policies of any sort responsible for including spiritual solution-packs with each new turn. Spiritual questions are real but personal.

The horror at new choices implies that life is a cruise that technologists and policymakers are in charge of, and that it's unfair to change the options available once the cruise is already in progress, since the new situation might not match the passengers' expectations. But life was never for anyone to plan for-- or promise to-- anyone else. The teeth-gnashing reality of spiritual uncertainty should remind us that finding meaning, or creating meaning in life is our right as individuals, one that no one else can give, take away or buffer.

The fact that nonaggressive things you do in your life may open up unexpected possibilities in my life, is part of God's own cruise package for a world with more than one passenger. I don't think the trip would be worthwhile if it weren't so.

Saturday, February 9, 2008

Star Trek vs. The Jetsons

In a previous post I said the 1960s vision of a question-answering typewriter has come true with Google. Then I thought, look at the freakin' format of this blog, it's a display from Star Trek: The Next Generation, fer Roddenberry's sake.

A couple of the big items from Star Trek have come true. Mainly the cell phone. Wall-sized flat TVs that double as computer displays. Transporters on a one-particle level. Sliding doors are pretty common in supermarkets anyway. Hand phasers, no; but truck-sized, repeatable laser weapons are in the bidding stage at the U.S. Defense Department. Not the tricorder exactly, but many phones contain a movie camera, day planner, calculator and GPS.

Everyone makes fun of 1950s visions of the future, the domed cities and flying cars. Sometimes the conclusion is drawn that we're uniformly very bad at predicting the future, or that the future is impossible to predict. But Star Trek proves that Gene Roddenberry's kind of prediction-- that if people need something enough and it's obvious enough, they'll work till they get it-- makes some sense.

The Star Trek communicator was simple and obvious. The domed city and the flying car, in contrast, seem to flaunt their strangeness. They extrapolated the most modernistic things around at the time, while the communicator extrapolated down the middle of the road.

The lesson obviously isn't that only plain things happen. Maybe plain things are safer bets, though.