Damn! I forgot to worry about the weather!
--a friend looking out the window on the morning of a trip.
Worries about the long-term future of the human genome are ironic on a couple of levels. In short, DNA will become easy, and then obsolete, in the space of one generation (Z) or two.
One version of legacy-worry is C.S. Lewis's notion, expressed in The Abolition of Man (and correct me if I've misinterpreted), that any action that affects the future is immoral because, basically, the future has no say in it. In my reading of the book, he seems to gloss over some things: That every action or inaction has an effect on the future. That even prosaic choices (of mates, food, childrearing practices, etc.) affect the gene pool. That no one chooses to be born, or into what family, or even species, they will be born. That if people in the future have a right to make their decisions, then there must be some decisions that are ours to make as well. These seem important points in any discussion of our effects on future people. Lewis is either putting forth an important subtlety that I don't understand, or a brazen absurdity I don't know how to deal with. I mention The Abolition of Man because I've heard it upheld as a paragon of thinking about the future.
Then there's the more prosaic and reasonable concern that popular adoption of methods that change the gene pool more directly could have catastrophic, perhaps irreversible, unintended consequences.
The concern about irreversibility cancels out in a way: the very idea that we would make widespread genome changes, and not see the results until too late, implies a technology that can deploy genetic changes among large numbers of people quickly. It's hard to imagine such a technology without the ability to make backups of individuals' genomes, and without the ability to reverse the changes it makes. (Your entire genetic makeup is about as much data as is on a CD or keychain flash drive. The portion that isn't duplicated in every other human is far smaller, about one percent.)
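The CD comparison checks out as back-of-envelope arithmetic. A minimal sketch, using commonly cited round figures (about 3.1 billion base pairs, 2 bits per base) and the essay's own ~1% figure for individual variation:

```python
# Back-of-envelope check: how much data is a human genome?
# Figures here are rough, commonly cited approximations.
BASE_PAIRS = 3_100_000_000   # approximate length of the human genome
BITS_PER_BASE = 2            # four bases (A, C, G, T) -> 2 bits each

total_bytes = BASE_PAIRS * BITS_PER_BASE // 8
total_mb = total_bytes / 1_000_000
print(f"whole genome: ~{total_mb:.0f} MB (a CD holds ~700 MB)")

# The essay's ~1% figure for the portion not shared with other humans:
unique_fraction = 0.01
print(f"individual portion: ~{total_mb * unique_fraction:.0f} MB")
```

So the raw sequence is on the order of 775 MB, and the individually distinctive part on the order of 8 MB, before any compression.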
The larger irony is a scaled-up version of the same problem: trying to imagine genetic technology advanced enough to cause widespread harm, but not advanced enough to make backups and restore from them.
Try to imagine a technology that gets to the point of making dangerous changes to everyone's genome, and stops there. In particular, try to imagine people with that level of technical flexibility...stopping at DNA and protein as materials to make people with, when we already have materials with better properties, and are close to building things at finer levels of detail than DNA and protein can.
When I think about this, I throw in another factor: if being based on our inherited DNA is essential to what makes life meaningful, then it seems to me we should think in terms of the time scale that self-run DNA evolution takes: thousands and millions of years.
Here are some scenarios I do and don't see coming out of this mix of constraints:
Unlikely: that we cause catastrophic widespread effects, without having progressed far, such that civilization collapses and we don't get farther. In other words, most of us set out on a narrow bridge which snaps before we get across. My optimistic doubts about this are that, on the one hand, we won't be able to deploy big changes widely before making very big advances in technology, and on the other, that we wouldn't be that reckless if we had the chance. This is the most reasonable version of genome worry: that we might make big changes far in advance of the knowledge that would make them safe. But it's a short-term worry.
Possible: that technological progress stops really soon, like within a year, and stays stopped for millennia. This could happen by a physical catastrophe, war, or by some ideological catastrophe much more serious than what we call the Dark Ages.
Not possible: that technology progresses, that we make only very conservative changes to the human genome, and that AIs do not take over and obsolete us, for more than a hundred years. The problem is that very quickly, abandoning DNA becomes a perfectly conservative thing to do.
Not possible: that technology progresses, we completely refrain from directly changing who we are, and AIs don't take over. Besides the problem of controlling AI without mandating a dark age and without cyborging, the problem here is imagining everyone getting to the point of understanding what steps are perfectly safe, but steadfastly refusing to take them. I see this as more likely than the above scenario of reasonable changes stopping with DNA, but not much more.
Not likely: that technology progresses and we leave our genome alone, but become chewy centers within cyborgs, or emotion centers within AIs, such that we think of our human core (on those infrequent occasions when we do) as a small part of ourselves--but untouchable. I find it hard to imagine the combination of yuck factors that would permit this without allowing transcendence of that particular awkward core technology. On the other hand the CPU in all modern Windows and Mac OS computers, and almost all Linux computers, is essentially a large well-designed computer pretending to be a very small, badly-designed 8086.
But is anyone who is concerned with the long-term future of the human genome concerned that it be preserved carefully in this chewy-center, appendix or spandrel scenario?
So the irony is that while the long-term future of the human genome might be important, that's only in the case where civilization dies or goes brain-dead from some non-genetic cause, and we're hoping that having current-design humans around will increase the chance of civilization emerging from the catastrophe hundreds of years later, rather than waiting for a new worthy species to evolve. But maybe there are better ways to provide for that possibility, rather than restricting the options of the majority of the population in the short term.
My arguments are I-fail-to-imagine arguments. But what I am failing to imagine is that the strange, tragic and absurd corner-cases are the ones people are thinking of when they worry about our genome. More likely, it seems to me, they haven't thought through the middle-of-the-road cases.
The wider-still irony is that being concerned about the long-term future of the human genome is failing to understand the magnitude of changes that are going on. It's a failure to think ahead. Maybe a better worry is to ask, how do we ensure that our essential values, information, properties, abilities, and so forth, are being preserved at each step as we change the details of ourselves and our technologies?