Sunday, April 20, 2008

Generation Z(ero)

Damn! I forgot to worry about the weather!
--a friend looking out the window on the morning of a trip.

Worries about the long-term future of the human genome are ironic on a couple levels. In short, DNA will become easy, and then obsolete, in the space of one generation (Z) or two.

One version of legacy-worry is C.S. Lewis's notion, expressed in his The Abolition of Man (and correct me if I've misinterpreted), that any action that affects the future is immoral because, basically, the future has no say in it. In my reading of the book, he seems to gloss over some things: That every action or inaction has an effect on the future. That even prosaic choices (of mates, food, childrearing practices, etc.) affect the gene pool. That no one chooses whether to be born, or into what family, or even species, they will be born. That if people in the future have a right to make their decisions, then there must be some decisions that are ours to make as well. These seem important points in any discussion of our effects on future people. Lewis is either putting forth an important subtlety that I don't understand, or a brazen absurdity I don't know how to deal with. I mention The Abolition of Man because I've heard it upheld as a paragon of thinking about the future.

Then there's the more prosaic and reasonable concern that popular adoption of methods that change the gene pool more directly could have catastrophic, perhaps irreversible, unintended consequences.

The concern about irreversibility cancels out in a way: the very idea that we would make widespread genome changes, and not see the results until too late, implies a technology that can deploy genetic changes among large numbers of people quickly. It's hard to imagine such a technology without the ability to make backups of individuals' genomes, and without the ability to reverse the changes it makes. (Your entire genetic makeup is about as much data as fits on a CD or keychain flash drive. The portion of that that isn't duplicated in every other human is much less--well under one percent.)
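Here is a back-of-envelope sketch of that size claim in Python. The figures are rough assumptions on my part--roughly 3.1 billion base pairs in the haploid reference genome, two bits per base, a ~700 MB CD, and person-to-person variation somewhere between 0.1% and 1%--not precise values.

```python
# Rough arithmetic: how much data is a human genome?
# Assumptions (approximate, not exact): ~3.1 billion base pairs in the
# haploid reference, 2 bits per base (A/C/G/T), a 700 MB CD, and
# person-to-person variation somewhere between 0.1% and 1% of sites.

BASE_PAIRS = 3.1e9        # approximate haploid genome length
BITS_PER_BASE = 2         # four possible bases -> 2 bits each
CD_MB = 700               # nominal CD-ROM capacity in megabytes

genome_mb = BASE_PAIRS * BITS_PER_BASE / 8 / 1e6
print(f"Whole genome, uncompressed: ~{genome_mb:.0f} MB (a CD holds ~{CD_MB} MB)")

# The person-specific portion, at the assumed variation rates:
for variable_fraction in (0.001, 0.01):
    delta_mb = genome_mb * variable_fraction
    print(f"Differs between people at {variable_fraction:.1%} variation: ~{delta_mb:.1f} MB")
```

On these assumptions the whole uncompressed genome comes out on the order of 800 MB--roughly one CD--and the person-specific portion is under ten megabytes even at the high estimate, easily small enough to back up and compare.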

The larger irony is a scaled-up version of the same difficulty: trying to imagine genetic technology advanced enough to cause widespread problems, but not advanced enough to make backups and restore from them.

Try to imagine a technology that gets to the point of making dangerous changes to everyone's genome, and stops there. In particular, try to imagine people with that level of technical flexibility...stopping at DNA and protein as materials to make people with, when we already have materials with better properties, and are close to building things at finer levels of detail than DNA and protein allow.

When I think about this, I throw in another factor: if being based on our inherited DNA is essential to what makes life meaningful, then it seems to me we should think in terms of the time scale that self-run DNA evolution takes: thousands to millions of years.

Here are some scenarios I do and don't see coming out of this mix of constraints:

Unlikely: that we cause catastrophic widespread effects, without having progressed far, such that civilization collapses and we don't get farther. In other words, most of us set out on a narrow bridge which snaps before we get across. My optimistic doubts about this are that, on the one hand, we won't be able to deploy big changes widely before making very big advances in technology, and on the other, that we wouldn't be that reckless if we had the chance. This is the most reasonable version of genome worry: that we might make big changes far in advance of the knowledge that would make them safe. But it's a short-term worry.

Possible: that technological progress stops really soon, like within a year, and stays stopped for millennia. This could happen through physical catastrophe, war, or an ideological catastrophe much more serious than what we call the Dark Ages.

Not possible: that technology progresses, that we make only very conservative changes to the human genome, and that AIs do not take over and obsolete us, for more than a hundred years. The problem is that very quickly, abandoning DNA becomes a perfectly conservative thing to do.

Not possible: that technology progresses, we completely refrain from directly changing who we are, and AIs don't take over. Besides the problem of controlling AI without mandating a dark age and without cyborging, the problem here is imagining everyone getting to the point of understanding what steps are perfectly safe, but steadfastly refusing to take them. I see this as more likely than the above scenario of reasonable changes stopping with DNA, but not much more.

Not likely: that technology progresses and we leave our genome alone, but become chewy centers within cyborgs, or emotion centers within AIs, such that we think of our human core (on those infrequent occasions when we do) as a small part of ourselves--but untouchable. I find it hard to imagine the combination of yuck factors that would permit this without allowing transcendence of that particular awkward core technology. On the other hand, the CPU in all modern Windows and Mac OS computers, and almost all Linux computers, is essentially a large, well-designed computer pretending to be a very small, badly-designed 8086.

But is anyone who is concerned with the long-term future of the human genome concerned that it be preserved carefully in this chewy-center, appendix, or spandrel scenario?

So the irony is that while the long-term future of the human genome might be important, that's only in the case where civilization dies or goes brain-dead from some non-genetic cause, and we're hoping that having current-design humans around will increase the chance of civilization emerging from the catastrophe hundreds of years later, rather than waiting for a new worthy species to evolve. But maybe there are better ways to provide for that possibility than restricting the options of the majority of the population in the short term.

My arguments are I-fail-to-imagine arguments. But what I am failing to imagine is that the strange, tragic and absurd corner-cases are the ones people are thinking of when they worry about our genome. More likely, it seems to me, they haven't thought through the middle-of-the-road cases.

The wider-still irony is that being concerned about the long-term future of the human genome is failing to understand the magnitude of changes that are going on. It's a failure to think ahead. Maybe a better worry is to ask: how do we ensure that our essential values, information, properties, abilities, and so forth, are being preserved at each step as we change the details of ourselves and our technologies?

Sunday, April 13, 2008

It's the Spiritual, Stupid!

"Half a bee, philosophically,
Must ipso facto half not be.
But can a bee be said to be,
Or not to be... Do you see?"
--Monty Python, "Eric the Half-a-Bee"

"A thousand plastic flowers
won't make a desert bloom."
--Fritz Perls

The topic of my sermon today doesn't exactly fit the "It's the X, Stupid!" prototype, but close enough. Also, the term "spiritual" may be a stretch for some; in fact I'm thinking of something closer to "Existential" myself.

A lot of the qualms people have about future technologies or their effects don't have much to do with the technologies themselves, but rather with underlying issues in people's lives that thinking about new possibilities stirs up. Often someone writing about the future will express a qualm in general terms that seems embarrassingly close to an admission of the author's own rug-swept angst.

I mean issues in a person's life like: what's the meaning of it? Or, what to do with it?

The example I'm really thinking of is longevity. "But what in the world," conservatives ask, "will people do with all that time in their lives?" I can't reproduce the horror, rebuke and utter puzzlement with which this question is asked, but that's the mix.

Deciding what life means and what to do with it is a tough one. The Existentialists came up with descriptions of the human condition that sound like horror movie scripts, captions for The Scream, or lists of imbalances of the bodily humors. The Void is scary enough that it's possible to understand why, when the prospect of more healthy lifespan is opened up, some people's first focus is on the associated voidspan.

But people horrified in this way probably haven't solved (any more than I have) the problem for life as they previously knew it, either. Technology, change, and the future aren't central to the question; they merely disturb the rug. That cloud of dust, it's old dust-- it's the spiritual.

The X in "it's the X" was originally Content. Amidst new things like interactive software, hypertext, web pages, animated GIFs-- no wait-- Flash! and so forth, it's easy for a designer, an author, a company, a culture, to get lost in the technology aisle, shopping for and pasting together bits of glitter. Sometimes communicators need to be reminded that they're supposed to be getting Content across. Lately a mutant ideal says that Web 2.0 is about Community, stupid. This requires a shift from Content gear, but you see the pattern.

So by analogy, technology is only stuff to support, or stuff that distracts from... life. But what is life supposed to be?

This post isn't about answering questions like that; it's to say that existential or spiritual questions belong to us as individuals and belong in our lives. Facing The Void isn't a mistake, an imposition, a fault, or a wrong turn. It's having a seat at the table, being a player in a game whose score isn't settled. Neither technology, nor a lack of technology, nor external restriction on technology answers spiritual questions.

What I'm getting at is that "we," in any collective way, can't make spiritual choices for us, the individuals. If the issue is spiritual, then "we" have no business making decisions for us. Nor are we, as people who steer technologies or policies of any sort, responsible for including spiritual solution-packs with each new turn. Spiritual questions are real but personal.

The horror at new choices implies that life is a cruise that technologists and policymakers are in charge of, and that it's unfair to change the options available once the cruise is already in progress, since the new situation might not match the passengers' expectations. But life was never for anyone to plan for-- or promise to-- anyone else. The teeth-gnashing reality of spiritual uncertainty should remind us that finding meaning in life, or creating it, is our right as individuals, one that no one else can give, take away, or buffer.

The fact that nonaggressive things you do in your life may open up unexpected possibilities in my life is part of God's own cruise package for a world with more than one passenger. I don't think the trip would be worthwhile if it weren't so.