Re: (OT) The desktop supercomputer has arrived!




"tlmfru" <lacey@xxxxxxx> wrote in message
news:qkcji.228355$dC2.204040@xxxxxxxxxxxxxxx
Pete Dashwood wrote:
Aah, a call for the 'Borg. I doubt if useful bio-mechanoids will be
available this century (or at all).

No, it isn't SF. Nanotechnology is already being developed that will use
cellular compounds for energy, exactly as cellular mitochondria do. The main
limiting factor on nanobots at the moment is not the ability to make such
tiny machines, rather the capability to power them. There are several
possible avenues (including use of a radioactive film to power the devices,
which I shouldn't think anyone would want injected into them, except maybe
in ultimate extremis :-)). I believe the power problem will be solved within
7 years and we should see working nanobots within 15 years after that.

If you couple nanotechnology with the advances in computer programming and
an ever-increasing understanding of processes in the brain (and
elsewhere...), it is VERY likely that "bio-mechanoids" will be developed
within the next 30 years, never mind "this century".

Tiny machines that will cleanse our arteries, correct faulty DNA, destroy
specifically targeted diseases, and even replace certain cells that are
currently biological are already on the drawing board. (I read a paper the
other day discussing the idea of replacing blood with nanobots that could
perform all the functions blood currently does, but never get leukaemia or
any other blood-related cancer.)

Outside of Fortress COBOL, even the world of computing is doing some very
exciting things...

http://www.newscientisttech.com/article.ns?id=dn12192&print=true

The research described by the link above would have been impossible even 15
years ago. The difference is that knowledge is expanding at an exponential
rate which some pundits believe is itself increasing. There is a different
attitude. Experiments which could not have been carried out in, for
example, Einstein's lifetime are now done routinely as a matter of course.
(And they are confirming things like Quantum theory... which the Old Boy
was never happy about...) Breakthroughs in Medicine and Physics are leading
inevitably to a fusion of sciences which makes your "bio-mechanoids" an
inevitability. The question is not "if", it is "when".



I dunno, Pete. Everything you mention is feasible and there are going to be
people who will push the research forward. But do we have a science of side
effects to go with the science of development?

This is a really good question :-) You are absolutely right that nearly
everything developed to date has had unforeseen side effects (sometimes
beneficial, but more often than not, undesirable...)

Is there somewhere a
"Cassandra" Institute that will lay out all the bad possibilities so as to
offset the enthusiasm of the proponents?

Experience has taught me that there is no shortage of people who will rain
on any parade they feel threatened by, and people usually feel threatened by
stuff they don't understand. :-) I used to get despondent about this but I
don't any more. I realise the nay-sayers have their part to play in the
great plan and are a necessary balance. One of the things that makes it a bit
more reassuring is that we now live in an age when information is on tap.
Anyone who wants to know can access information on virtually anything they
feel uneasy about. Not only that, they can put their counter-arguments to
an audience of millions, and that in itself stimulates further debate. The
gung-ho mad scientists haven't got it all their own way.

But we need people of vision, imagination, and innovation too. And they
should be supported and encouraged, even if they are off the beam sometimes.
What they should NOT be given is a free hand to do whatever they want
without answerability to anyone. Unfortunately, Governments often manage to
achieve this...

From my browsings round the web I have found that many people are raising
questions and they are good questions. There are many sides in the debate
about the Future.


Given that every (? not sure that is entirely fair... :-)) development
throughout human history has had undesirable side effects, is there the
ability and the will to do a stone-cold evaluation of the pros and cons
and determine the likelihood of each? Is there also the political will to
shut down guaranteed "evil" technology?

This is the age-old dilemma Humans have always faced. It caused the rift
between Science and Religion. What is "evil" to one person can be
"desirable" to another. (One man's meat is another man's poisson...sorry,
bad multilingual pun... :-))

Stem cell research is a perfect example of this. There are powerful
arguments on both sides, but if you have a kid who could be saved by it, you
are less likely to be persuaded by the arguments against it.

Of course, there are things that most reasonable people would agree are
"undesirable"... nuclear, chemical, and germ weapons, f'r'instance, but then
others will argue we need them to protect ourselves from the "bad guys"...
And there are ALWAYS bad guys... real and imagined.

In short, having had trouble enough
to date with techniques and products that are (in principle) controllable,
how can we safely accept things that are operating, changing, evolving at
the nano-level - wholly beyond any sort of control and undetectable to
boot?

Yes, it is a staggering prospect. Many of my friends expect me to be the
fount of all things computer and to fill them in on where things are going;
the ensuing discussions are often very interesting. I was explaining to a
friend who has a PhD and is a surgeon, about the principles of some of the
new techniques like Genetic Algorithms which I outlined here. When I made
the comment that we now have software that can achieve a result and we don't
know how it did it, his eyes lit up... He was really excited by the idea.
Other friends have been terrified by it and asked immediately how it would
be controlled.
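For readers unfamiliar with the Genetic Algorithms mentioned above, a toy
sketch may help show why the result can surprise even the author: the
program is never told *how* to solve the problem, only how to score
candidate solutions. Everything below (the "OneMax" problem, the parameter
values, all function names) is my own illustration, not anything from the
research discussed here:

```python
import random

# Toy genetic algorithm: evolve 20-bit strings toward all ones ("OneMax").
# The problem, parameters, and names are all illustrative.

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 60
MUTATION_RATE = 0.02

def fitness(genome):
    # Count of 1-bits: the score we give, without ever saying how to earn it.
    return sum(genome)

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def crossover(a, b):
    # Single-point crossover: splice two parents at a random position.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - bit if random.random() < MUTATION_RATE else bit
            for bit in genome]

def evolve():
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Truncation selection: the fitter half of the population breeds.
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]
        # Elitism: carry the current best genome forward unchanged.
        children = [population[0]]
        while len(children) < POP_SIZE:
            children.append(mutate(crossover(random.choice(parents),
                                             random.choice(parents))))
        population = children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # usually at or near the maximum of 20
```

The point the surgeon found exciting is visible even here: the "how" (which
crossovers and mutations led to the answer) is not recorded anywhere; only
the selection pressure is specified.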

So the reaction seems to be divided along lines established by the
personality of the person hearing about it. How much do you need to have
control? How much randomness can you tolerate? How secure/insecure are you?
(A totally secure person would say: "Let 'er rip. If they turn out to be
bad, we'll deal with that at the time..." A more cautious person would be as
terrified by that approach as by the original principle under discussion.)

(I should 'fess up at this point that the idea of software achieving results
without my control of it, is very exciting to me. I don't feel threatened by
this. I learned many years ago that none of us has total control of our
lives anyway, and the best we can do is work within the constraints imposed
by society, government, law enforcement, family responsibility, and duty.
Why should uncontrollable software be a threat? If it were, we'd have to
write something to counter it... We have uncontrollable bacteria and viruses
that we deal with every day, to the best of our ability. I haven't seen
anyone commit suicide because they might get AIDS or be hit by a car when
crossing the road.)

The bottom line on this is that the diversity of Humans, and our reactions,
is a good check and balance on what research we allow.

(An example: to cope with global warming it's been suggested that carbon
dioxide should be "sequestered" in the sea or in salt domes. I shudder to
think of the possibilities. Water is pumped into oil fields now to prolong
their useful life: micro-earthquakes start to occur almost immediately). It
may well be true that heuristic software may be able to design itself at
levels of quality and effectiveness beyond the capacity of heuristic human
beings; will it have reliable and consistent controls assured?

Another excellent question. The answer: I don't know.

It is a bit like raising a child. You do your best to prepare your children
for life; instill values and morals in them and encourage them to make
decisions based on those rules and their world view. You help them grow
without stifling them; you support them to build their confidence; and you
watch in wonder as they evolve into fine human beings. But the result was
always in doubt, no matter how much work and wisdom you put in. You could
never know they would turn out all right; all you could do was give them
your best and hope.

I believe the same principles apply to genetic software.


I guess an overriding question is: if something is as inevitable in prospect
as you describe, is it something that must be allowed to happen as
inevitably? "Inevitable" is not the same as "desirable".

Absolutely. But "desirable" is also a subjective conclusion.


Everything has
to have an on/off switch!

Certainly, living organisms do... it's called "Death". I see no reason why
smart software shouldn't be programmed at its roots with a specific
lifetime, or something like a "post-hypnotic suggestion" that would freeze it
on a certain stimulus. But until we see what evolves, it is premature to
decide how to deal with it.
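One crude way to picture the "specific lifetime" idea above is a program
that checks its own age before doing anything and halts by design once it
expires. The sketch below is purely illustrative; the names and the
5-second lifetime are invented for the example:

```python
import time

# Illustrative "built-in lifetime": the program refuses to work past a
# fixed expiry, a crude version of the off switch discussed above.

LIFETIME_SECONDS = 5.0          # invented parameter for the sketch
_BORN = time.monotonic()

def still_alive():
    # True only while the program is within its allotted lifetime.
    return time.monotonic() - _BORN < LIFETIME_SECONDS

def do_work():
    # Every unit of work is gated on the lifetime check.
    if not still_alive():
        raise RuntimeError("lifetime expired; halting by design")
    return "working"

print(do_work())  # prints "working" while within its lifetime
```

A real safeguard would of course have to be tamper-proof against the very
software it constrains, which is exactly the open question raised above.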

Thanks for a thought provoking post, Peter. You've raised some excellent
points and it made me think when responding to them.

Personally, I believe the events described will come to pass, and I believe
it will not be 100 years from now. (Although I am saddened by the
realization that I probably won't live to see some of the REALLY cool stuff
that is likely to evolve...)

We should be thankful that this time, research is continuing by free will
and subject to scrutiny. In the past it was provoked by War and answered to
no-one.

Pete.

