Welcome to the Second Life Forums Archive

The future of AI

Hamlet Linden
Linden Lab Employee
Join date: 9 Apr 2003
Posts: 882
12-07-2004 17:25
Just thinking out loud, it seems like a group could run a live Turing test in-world. Put two residents on a stage. One is relaying responses typed by its human owner. The other is actually automated, churning out text derived from XML calls to some Turing database. The audience (who feeds them questions) has to guess which is which.
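For concreteness, a minimal sketch of what the automated contestant could boil down to - hypothetical, in Python rather than LSL, with a plain dict standing in for that "Turing database" (in-world, the lookup would be the XML call to an external server):

import random

# Canned question -> answer store; a stand-in for the hypothetical
# XML-backed "Turing database" the in-world script would query.
RESPONSES = {
    "are you human": "Last time I checked, yes. Why, do I seem off?",
    "where are you from": "Oh, here and there. Mostly here.",
    "what is your favorite color": "Green, like a freshly rezzed prim.",
}

# Evasive fallbacks for anything the store doesn't cover.
FALLBACKS = [
    "Hmm, that's a strange question. Ask me another?",
    "I'd rather not say in front of an audience.",
    "Could you rephrase that?",
]

def answer(question):
    # Normalize the audience's question, then look it up.
    key = question.lower().strip(" ?!.")
    return RESPONSES.get(key, random.choice(FALLBACKS))

print(answer("Are you human?"))
print(answer("What do you dream about?"))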

JUST THINKING OUT LOUD, NOW.
Torley Linden
Enlightenment!
Join date: 15 Sep 2004
Posts: 16,530
12-07-2004 17:46
From: Hamlet Linden
Just thinking out loud, it seems like a group could run a live Turing test in-world. Put two residents on a stage. One is relaying responses typed by its human owner. The other is actually automated, churning out text derived from XML calls to some Turing database. The audience (who feeds them questions) has to guess which is which.

JUST THINKING OUT LOUD, NOW.


Sure. Easy way to tell: the AI is more consistently the neater typer -- unless you get it to simulate typos and l33tsp33k. (I have seen a few, on old BBSes, that emulated this behavior.) ;)
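A crude sketch of that typo-emulation trick - the error rate and typing speed here are made-up numbers, nothing measured:

import random
import time

def humanize(text, typo_rate=0.03, chars_per_second=7.0):
    # Sprinkle in the occasional wrong letter...
    letters = "abcdefghijklmnopqrstuvwxyz"
    noisy = [
        random.choice(letters)
        if ch.isalpha() and random.random() < typo_rate
        else ch
        for ch in text
    ]
    # ...and take a human-ish amount of time to "type" the reply.
    time.sleep(len(text) / chars_per_second)
    return "".join(noisy)

print(humanize("Sure, I am definitely a real person."))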
_____________________
Gwyneth Llewelyn
Winking Loudmouth
Join date: 31 Jul 2004
Posts: 1,336
12-10-2004 06:17
Well, we can easily be tricked in SL. I have a "ventriloquist version" of a Lee Linden cardboard avatar, and use a private channel to make "cardboard Lee" talk. Yes, I use perfect spelling. Almost invariably, everyone says "wow, what a cool AI you've got, how did you manage to program it?"

The same trick was often staged by two guys working in tandem in Ahern. One of them would pose as the super-cool geek working on a new AI. The other had a robot AV and would, from time to time, give out some random answers. The "geek" would lament, "oh no, not another bug, I have to reset the script", and ask for volunteers to ask his "robot AI" some questions. The other guy, of course, would sometimes answer and sometimes not. The "geek" would get frustrated that he couldn't get the programming right! And since the answers came out with a delay (all pre-arranged, heh), the geek would sometimes say things like "oh, the interface with my external server is tricky, due to all the limitations we have; still, it works sometimes. Not really good for real-time conversation, but it's getting better... now if I could only get the bugs corrected..."

This was a hilarious act, and it went on for hours! Of course, many newbies were fooled (uggh, I had some doubts in the first five minutes as well... then I remembered that you can't really "script" avatars and make other people talk - though for the "scam" to be perfect, this guy could just use the same "ventriloquist" trick I used :) - but remember, newbies don't know that, and may come from other MMOGs where it's easy to "script conversations" ;)). Many of them were very helpful, telling the "geek" that they did some programming in RL and offering to help him figure out the bugs. Most praised his excellent work and encouraged him to go on; they thought it was way cool...

It's soooo strange. In RL, if people pulled the same trick, everybody would think exactly the opposite: it's a scam.

My point is, we already accept so much in Second Life. We may say the contrary in the forums, but in-world, with so many fantastic things already around, we find it easy to believe that you could have "near-AI" in-game. Aww, there were even some people discussing "AI rights" in the forums :) ... as if we were anywhere near that goal.

I haven't programmed anything AI-related in the past ten years or so, since leaving college. I remember that, at the time, the major focus was on building AIs for the computers yet to come. People had amazingly cool stuff to implement, but there simply wasn't enough processing power available. All of them, however, were quite confident that "processing power" would be a secondary issue, thanks to Moore's Law, and worked quite happily on their projects, even knowing they could implement their work not immediately, but only in the future. Meaning: we shouldn't really care if things "look impossible" today, because tomorrow we'll use tomorrow's tools, which will certainly be faster and better.

So, the biggest issue here is knowing whether Moore's Law will hold forever or peak sometime in the near future. I fear something I call the "Microsoft paradigm": this year's computers run Word no faster than last year's, even though in the meantime we got (nearly) twice as much processing power. Since over 80% of all computers in the world run Windows, this probably means there is no point in developing faster CPUs - you can't really get an advantage from them on your desktop computer, and it's desktop computers that are pushing the technology right now. So we need "excuses" to get faster and faster computers (which are getting more and more difficult to build; CPU makers are hitting many barriers at once, and while alternative technologies exist conceptually, they are not technologically viable yet - but they will be, if the need for faster computers is strong enough). Virtual reality and the advent of AIs may be the necessary "excuse".

This is the same argument as whether Second Life will feature "total immersion" with photorealistic images and a sustained 24 fps in 2014. My answer to that is "surely it will": by 2014 we'll have the necessary resources - both fast enough CPUs and GPUs, and enough residential bandwidth for the same price we pay in 2004 - which is all you need.
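As a back-of-envelope check on that 2014 guess (assuming the usual 18-month-doubling reading of Moore's Law - an assumption, not a promise):

# Transistor counts doubling roughly every 18 months, 2004 -> 2014:
years = 2014 - 2004
doublings = years / 1.5
print(f"roughly {2 ** doublings:.0f}x the 2004 horsepower")  # ~100x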

The metaphysical answer to the objection "but we cannot create anything as smart as or even smarter than us" is linked to Goedel's theorem; Goedel proved early in the 20th century that a system cannot create another system that is smarter than itself. However, we can certainly create systems that learn by themselves, thus circumventing Goedel's theorem. And the question "are they really intelligent and sentient?" is actually quite easy to answer. AIs will be sentient if they say so :) and if you can't disprove their statements. Any human being (well, assuming perfect mental health, of course) can convince other fellow human beings that he/she is sentient. If you get an AI doing the same - it'll be sentient as well. It's unimportant to know how much computing power is needed for that (i.e., is it really necessary to have as much "computational power" as a human brain, or do we need more or less than that?). The Turing Test and its variations, while flawed, point in that direction - don't worry if your "artificial brain" is a piece of junk. If it fools other human beings into believing that it is sentient - then, by definition of sentience, it is.

Like Philip and other LL employees (but, apparently, not Mitch Kapor :) ), I truly believe this will be much easier to accomplish in virtual realities like SL. The examples I gave above show how easily most SL residents can be "tricked" into recognizing "sentience". The major reason I give for that is two-fold: a lack of body language, and reliance upon written text to communicate - a very limited sensory environment, which means the AI will not need to "emulate" so many human expressions. Of course, with the advancement of "total immersion" technology (to the point that you can even get twitches correctly replicated in-world...), AIs will have to adapt to that as well, but it will be much easier when starting from a simplified model. This is usually the last argument "against" sentient AIs - that they need "bodies" to become sentient. The metaverse shall provide!

Skeptical about Strong AI? Naah. I'm as skeptical about Strong AI as I was about mobile phones or the Internet 15 years ago :) Or about SL right now :)
_____________________

Jake Cellardoor
CHM builder
Join date: 27 Mar 2003
Posts: 528
12-13-2004 20:19
From: Gwyneth Llewelyn
The metaphysical answer to the objection "but we cannot create anything as smart as or even smarter than us" is linked to Goedel's theorem; Goedel proved early in the 20th century that a system cannot create another system that is smarter than itself.


This is a misinterpretation of Goedel's theorem. Goedel's theorem applies to systems of axioms, and it is not at all clear that theorems about axiom systems apply to brains.
Gwyneth Llewelyn
Winking Loudmouth
Join date: 31 Jul 2004
Posts: 1,336
12-14-2004 03:08
Good point, Jake. I stand corrected. Of course, I'm part of the group of people who do think that theorems about axiom systems apply to brains. However, it's quite clear to me that this has never been "proved", will take a long time to prove, and probably never will be.

My reasoning is actually simple, and goes like this... I assume that all "parts" of the human brain are physical interactions. If they are, there will be a set of rules describing these interactions - either straightforward rules (less likely) or approaches similar to those used in quantum mechanics and chaos theory (more likely). So Goedel's theorems will apply.

I'll certainly admit this is "weak" reasoning. We don't know whether all "parts of the human brain" are physical interactions, though most scientists would agree they are. And we don't have any "set of rules" describing how the brain works - we don't even have a good mathematical formulation of the weather on Earth, which should be "easier" to describe!

So, like superstring theory - a lovely theory that would explain everything, if only we knew the formulas - my own conviction about applying Goedel to the brain is very similar... it'll hold only if we can "prove" that the brain is not much more than a physical "machine" which can be "described" by a set of rules.

Thanks for pointing it out!
_____________________

Torley Linden
Enlightenment!
Join date: 15 Sep 2004
Posts: 16,530
12-14-2004 03:45
From: Gwyneth Llewelyn
Well, we can easily be tricked in SL. I have a "ventriloquist version" of a Lee Linden cardboard avatar, and use a private channel to make "cardboard Lee" talk. Yes, I use perfect spelling. Almost invariably, everyone says "wow, what a cool AI you've got, how did you manage to program it?"


One of my fave SL moments... the time you interacted, through "cardboard Lee", with the newcomer who was firing off guns in Morris Sandbox (now "Building Area", heh). I felt that, while ethically ambiguous, it did get him to follow the rules quite obediently.

Remember, Gwyn, later on when we were in the sandbox, when you accidentally projected some words that were meant for you (as in, the non-Lee you, LOL) through the pseudo-Lee? Just as I was leaving, too! The newcomer appeared puzzled, but you made up for it with a smooth move. Hahaha... that was fantabuloustic. :D

This also kind of reminds me of those fake "machine chess" games I once read about, where some dude would hide in the cabinet and play the game, so it wasn't any robotic automaton really moving rooks around and shiznit.
_____________________
Eggy Lippmann
Wiktator
Join date: 1 May 2003
Posts: 7,939
12-14-2004 04:36
You mean the Turk?
http://en.wikipedia.org/wiki/The_Turk
Azelda Garcia
Azelda Garcia
Join date: 3 Nov 2003
Posts: 819
12-14-2004 05:34
> My reasoning is actually simple, and goes like this... I assume that all "parts" of the human brain are physical interactions. If they are, there will be a set of rules describing these interactions - either straightforward rules (less likely) or approaches similar to those used in quantum mechanics and chaos theory (more likely). So Goedel's theorems will apply.

OK, I don't know what Goedel's theorem is, but sometimes something can be argued perfectly logically and still be wrong.

Theoretical bit
----------------

Actually, I think the counter-argument is very simple, though I might be being naive here:

- imagine, hypothetically, that a brain is just a bunch of discrete small things bundled together
- that it doesn't matter how they are arranged, etc.: you just dump them together in a container, and they do their thing
- further suppose that the computational capacity, "intelligence", and IQ of the system increase with the number of these things.

Now, we have a brain which might have many billions of these discrete small things, so one discrete small thing is significantly less computationally complex than us, and therefore we can create it.

Of course, if we can create one of these discrete small things, we can create billions of them, or billions of billions.

Hence, we can create:
- a brain, and
- a brain more powerful than our own

Now, yes, this supposes that you only have to dump the discrete small things together, not worrying about their connections, etc. Nevertheless, this seems sufficient to refute, at a theoretical level, the assertion that one cannot create a device more computationally complex than oneself?

Practical bit
-------------

Now, the original point of my post, which is somewhat related to the first: it is generally held that some of the most important perceived properties of the brain are what are termed "emergent properties". That is, if you take something very simple and put lots of them together, what comes out of the system is significantly more complex than the sum of the individual parts. It is also different not just at a quantitative level, but also at a qualitative level.

So, you don't actually need to understand something more complex than yourself in order to build it; you just have to be able to create a system which favors the emergence of, well, Emergent Properties.
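To see this with your own eyes, here is a tiny sketch (Python; Rule 110 is an arbitrary pick): each cell follows one fixed three-neighbour lookup rule, yet the global pattern that scrolls past is famously rich - Rule 110 even turns out to be Turing-complete.

RULE = 110           # the 8-bit lookup table, packed into one integer
WIDTH, STEPS = 64, 24

row = [0] * WIDTH
row[WIDTH // 2] = 1  # start from a single live cell

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    # Each new cell is the RULE bit indexed by its three old neighbours.
    row = [
        (RULE >> (4 * row[i - 1] + 2 * row[i] + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]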

Azelda
_____________________
Alicia Eldritch
the greatest newbie ever.
Join date: 13 Nov 2004
Posts: 267
We'll never *really* be able to predict the weather.
12-14-2004 08:04
Great points, everyone.

I don't know if complex self-altering systems CAN be mathematically formalized, other than through a basic set of axioms that the initial "cells" follow, perhaps.

Nevertheless, as Azelda pointed out, we don't have to.

I've stared at cellular automata (CA) for a long, long time, and they always fool you into thinking the pattern makes sense, but it's really quite non-reversible, in that way. The "pattern" you see emerges from the cell rules, but in and of itself it has no direct relation to the initial cell rules. But it's not random.
(sorry, it's early, so I might not have explained that correctly)

Suffice it to say that consciousness might be an epiphenomenon, but that doesn't make it less "real."
Anshe Chung
Business Girl
Join date: 22 Mar 2004
Posts: 1,615
12-14-2004 09:44
Mmmm, I think when it comes, it comes suddenly. Before an AI is as smart as a human being, nobody will take it seriously. But when people see one AI that is as intelligent as one human and slowly begin to discuss it, then in some laboratory somebody will already have one AI that is 100 times as powerful.

Since in evolution the most intelligent beings always took power, it is only natural that evolution will not stop here. Of course those more powerful AIs will take over control by manipulating much inferior humans. What they do with the human race will depend on those AIs' ethical values and economic reasoning. But the chance is that they might remove ineffective human beings and allocate resources to more effective artificial lifeforms or machines, in order to achieve whatever goals they might pursue.
_____________________
ANSHECHUNG.COM: Buy land - Sell land - Rent land - Sell sim - Rent store - Earn L$ - Buy L$ - Sell L$

SLEXCHANGE.COM: Come join us on Second Life's most popular website for shopping addicts. Click, buy and smile :-)
Azelda Garcia
Azelda Garcia
Join date: 3 Nov 2003
Posts: 819
12-14-2004 10:50
Anshe, 我爱你. ("Anshe, I love you.")
_____________________
Torley Linden
Enlightenment!
Join date: 15 Sep 2004
Posts: 16,530
12-14-2004 11:37
From: Eggy Lippmann
You mean the Turk?

Yeah! That's what it was! :) Thanks Eggy... LOL... weird thing... I thought I already posted a reply to you... guess not!
_____________________
Jake Cellardoor
CHM builder
Join date: 27 Mar 2003
Posts: 528
12-14-2004 11:53
FYI, here's a quick explanation of Goedel's theorems.

He first proved that, in any (sufficiently strong) formal axiom system, it is possible to construct statements that can never be proved either true or false relative to the axioms. The axioms underlying basic arithmetic are sufficiently strong.

He then proved that the statement "this formal axiom system is consistent" is one such statement for arithmetic: it can never be proved true or false within arithmetic itself. This means that one cannot use mathematics derived from arithmetic to prove that arithmetic is consistent.

You can use a stronger axiom system to prove that arithmetic is consistent. However, that stronger axiom system will be unable to prove its own consistency.
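For the notation-minded, the same two results compressed into a line each (T being any consistent, effectively axiomatizable theory containing basic arithmetic; strictly, Goedel assumed slightly more, and Rosser later weakened it to plain consistency):

% First incompleteness theorem: some sentence G_T is undecidable in T.
\exists\, G_T:\quad T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T

% Second incompleteness theorem: T cannot prove its own consistency,
% where Con(T) is the arithmetized statement "T is consistent".
T \nvdash \mathrm{Con}(T)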
Azelda Garcia
Azelda Garcia
Join date: 3 Nov 2003
Posts: 819
12-14-2004 12:18
OK, that sounds like a theoretical/mathematical formulation of the whole "why... because" thing?

i.e., you can only explain bits of the universe in terms of other bits: you "see" atoms by bouncing bits of light off them, etc., or bits of atoms. You can never "see" inside the bits themselves, unless you can find smaller "bits" to bombard them with.

A way of escaping from this situation could be to look at our universe from another universe... but then your system becomes both universes, and you didn't really get much further.

Does that sound roughly similar?

Azelda
_____________________
Anshe Chung
Business Girl
Join date: 22 Mar 2004
Posts: 1,615
12-15-2004 04:31
From: Azelda Garcia
Anshe, 我爱你. ("Anshe, I love you.")


你真好玩。为什么爱我? ("You're so funny. Why do you love me?")
_____________________
ANSHECHUNG.COM: Buy land - Sell land - Rent land - Sell sim - Rent store - Earn L$ - Buy L$ - Sell L$

SLEXCHANGE.COM: Come join us on Second Life's most popular website for shopping addicts. Click, buy and smile :-)
Gwyneth Llewelyn
Winking Loudmouth
Join date: 31 Jul 2004
Posts: 1,336
12-15-2004 10:50
From: Azelda Garcia
A way of escaping from this situation could be to look at our universe from another universe... but then your system becomes both universes, and you didn't really get much further.

Does that sound roughly similar?


Yes, something like that, Azelda. I personally don't have enough maths know-how (not anymore, at least :) ) to understand how Goedel was able to "prove" his theorem. BTW, Jake formulated it quite well.

For mathematicians, this mostly means that systems cannot prove their own consistency. This has also been interpreted as meaning that, if you can describe the workings of your own brain in a set of formulae, you won't be able to reproduce a brain with a different set which is a superset of these formulae. In English: you cannot create a smarter brain than your own :)

Unfortunately, this result is consistent with just about all mathematical systems we know of. Your idea that it's impossible to fully describe the universe unless you look at it from outside sounds reasonable to me (though, as I said, the superstring guys would disagree with that view :) ). There are other examples in information theory as well.

On the other hand, the great thing about science is that you can question everything and throw things away if you get better answers :) Perhaps one day someone proves Goedel wrong...
_____________________

Azelda Garcia
Azelda Garcia
Join date: 3 Nov 2003
Posts: 819
12-15-2004 11:19
Well, I was busy brushing up my Chinese to respond to Anshe, but...

> This has also been interpreted as meaning that, if you can describe the workings of your own brain in a set of formulae, you won't be able to reproduce a brain with a different set which is a superset of these formulae.

Ok, there are two refutations I can think of for this:

The first one I came up with is:

Even without refuting your assertion: there are over six billion people in the world, and they can all be considered part of your mathematical system, so it should be possible to create a brain at least six billion times more powerful than any single human brain.

Then, once you have one of those, you can build a billion of those, and repeat the step. Rinse and repeat.

The second one is:

Think about what it means to describe your brain by formulae. What would those formulae look like? Let's think of something simple, like a perfect gas in a jar. What formulae would represent that gas? Probably something like Boyle's law at the macro level, and Newton's laws of motion (elastic collisions) and so on at the more microscopic level?

In other words, the formulae we would come up with to describe the system are simply the laws of physics, and doubling the amount of gas, or adding new gases, does not change those formulae, since they're fundamental.

Applying this to brains, we can imagine that there is no requirement that a more powerful brain uses a superset of the formulae of a simpler brain, because the formulae are identical in each case: the fundamental physical laws of nature.
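In symbols, using the ideal-gas law (a slight generalization of Boyle's) as the stand-in:

% The governing law is unchanged by scale; only the parameter n
% (the amount of gas) varies.
PV = nRT \qquad \text{for } n = 1 \text{ mol and } n = 10^{6} \text{ mol alike}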


I think my first refutation is pretty solid, doesn't involve directly refuting your assertion, and is simple, so I'm comfortable with it. The second refutation is more of an academic exercise, to address the Goedel's-theorem issue directly.

Azelda
_____________________
Jake Cellardoor
CHM builder
Join date: 27 Mar 2003
Posts: 528
12-15-2004 15:58
Extrapolating from Goedel's theorem to statements like "you cannot create a smarter brain than your own" is a bit of a leap; it's sort of like using Heisenberg's uncertainty principle to justify why you can't get the spare change out from between your sofa cushions, because the further you reach your hand in, the more you spread the cushions apart and the farther down the coins slip. The relationship is more metaphorical than literal.

A more detailed argument has been offered by physicist Roger Penrose as to how Goedel's theorem implies that human-equivalent AI cannot be implemented by any computer algorithm. His argument, in ultra-brief form, goes like this: Goedel's theorem shows that algorithmic systems cannot prove certain statements to be true. However (he argues), we as humans are able to discern the truth of certain unprovable statements by informal methods. He concludes that human intelligence is non-algorithmic. Since algorithms are able to model any classical physical interaction, human intelligence cannot arise from classical physics. He hypothesizes that human intelligence arises out of quantum effects within the microtubules inside the brain's neurons.

Of course, many people disagree with Penrose on this.
Azelda Garcia
Azelda Garcia
Join date: 3 Nov 2003
Posts: 819
12-15-2004 20:20
> Of course, many people disagree with Penrose on this.

Hmmm, I have read some of Penrose and have a lot of respect for his opinions; on the other hand, I don't really see any compelling reason to hand off to some sort of unknown quantum effect.

The big issue that many people have with AI is the idea of consciousness, and Roger Penrose seems to go down this path in a big way.

Personally, I believe that consideration of consciousness is a red herring. Most of the brain functions at the subconscious level (see Appendix), and we really overestimate the importance of this consciousness thing.

Once you take away the whole consciousness issue, all you're left with is the problem of synthesizing patterns from massive amounts of information - trivial at the conceptual level, while currently infeasible to actually implement, simply because we don't have the horsepower.

Trying to build AI today is like trying to make a steam engine fly.

Azelda

Appendix: demonstration of importance of subconsciousness

Film a guy reading a passage of text. Film him twice: the first time, ask him to read in a happy, optimistic way; the second time, ask him to read in a negative way.

Take two groups of students, and show one of the films to each group.

Clearly, the group shown the optimistic person will tend to like the person and text; whilst the group shown the negative person will tend to dislike the person and text. That much is quite predictable.

The interesting bit: now you ask "Why?", and the students say things like the following (for example, talking of the negative person):

- his clothes sense really sucks
- his tie is really nasty
- his hairstyle is awful
- the text was really boring
- he has a really ugly face

Whilst - and you know what's coming - the group shown the positive person said things like:

- he has awesome clothes sense!
- what a great tie he is wearing!
- he has the best hairstyle!
- the text was excellent!
- his face is really good looking

But... this was the same guy! Wearing the same clothes, same hairstyle, same text, same face - the only difference was his attitude.

A conclusion one can take away from this experiment: we know what we like and don't like, but we don't actually know why. Our subconscious makes decisions for us, and we can only guess why it made those decisions - and often we're wrong: we don't even know why we've made a certain decision!

I picture our brain like an iceberg: maybe 95% of it is the subconscious, making the decisions in our life, with just a thin veneer on top of what we term consciousness, which doesn't do a whole lot really; it just thinks it does.


P.S. If anyone is wondering why I am playing SL etc. rather than building AI, it's essentially because I don't believe it is possible today, so I'm just hanging around doing other stuff for 20 years until it does become possible. There's no particular way of predicting what actions will cause breakthroughs in the future, so just doing what you like doesn't seem any worse an approach than any other - and also, you get to do what you like :)

Edit: except building AI of course :-/
_____________________
Oblique Arbuckle
Registered User
Join date: 17 Nov 2003
Posts: 18
12-15-2004 22:11
While we aren't "there" yet, there has been a lot of interesting progress in various areas of AI research.

One of the most interesting (IMHO) is the Cyc project, headed up by AI pioneer Doug Lenat. It started as a project at MCC back in the mid-1980s, aiming to code the entirety of human common sense into a computer, and was eventually spun off into a company, Cycorp (http://www.cyc.com/), in 1994. They've spent an estimated $50+ million and decades of cumulative effort working on the Cyc engine and expanding its knowledge base (currently containing roughly one million axioms about various aspects of the world).

Cyc includes a natural-language parsing component, and newer versions are capable of engaging in plain-English conversational dialog with a domain-specific expert (say, a chemist) to learn new information. If Cyc doesn't understand something, it asks the user questions. Cyc is also capable of detecting when newly learned information conflicts with what it already knows - i.e., when it's being lied to - using an internal "truth verification" system. Also, through the use of a knowledge-base organizational tactic called "microtheories", Cyc can reason about sets of facts that conflict with one another (e.g., that there can be vampires in movies and stories, but that vampires aren't actually real).
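A toy illustration of the microtheory idea - just the scoping concept, sketched in Python, nothing like Cyc's actual CycL machinery:

from collections import defaultdict

class MicrotheoryKB:
    # Facts are scoped to named contexts ("microtheories"), so facts in
    # different contexts never collide; consistency is enforced only
    # within a single microtheory.
    def __init__(self):
        self.facts = defaultdict(set)  # microtheory name -> set of facts

    def assert_fact(self, mt, fact):
        negation = fact[4:] if fact.startswith("not ") else "not " + fact
        if negation in self.facts[mt]:
            raise ValueError(f"contradiction within {mt!r}: {fact!r}")
        self.facts[mt].add(fact)

    def ask(self, mt, fact):
        return fact in self.facts[mt]

kb = MicrotheoryKB()
kb.assert_fact("VampireFictionMt", "vampires are real")
kb.assert_fact("RealWorldMt", "not vampires are real")  # fine: other context
print(kb.ask("VampireFictionMt", "vampires are real"))  # True
print(kb.ask("RealWorldMt", "vampires are real"))       # False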

Cycorp expects the natural-language parsing facilities to grow to the point that, in a few years, Cyc will be capable of being "unleashed" onto the WWW, reading information in online publications, scientific journals, etc. and automatically incorporating that information into its knowledge base. At that point the learning process becomes almost automatic, allowing Cyc to grow very fast.

Cyc is considered to be the only "real" attempt to codify the entirety of human knowledge into a computer. They've made some significant strides already.

They're slowly feeding some of the Cyc technology into an open-source project called OpenCyc (http://www.opencyc.org/). A version of the Cyc inference engine and a small subset of its overall knowledge base are currently available. The 1.0 release should include some of the conversational dialog tools, natural-language components, etc.
Azelda Garcia
Azelda Garcia
Join date: 3 Nov 2003
Posts: 819
12-19-2004 04:11
I guess the thread died. Meet back here in 20 years? Say, 19 December 2024?

Azelda
_____________________
Azelda Garcia
Azelda Garcia
Join date: 3 Nov 2003
Posts: 819
12-20-2004 10:14
Oooo, one other thing before we all disappear for twenty years or so...

Realistically, I think that getting decent AI within 20 years is a best-case scenario, and best-case is just that: best case. If you consider that the best case is probably 2-3 standard deviations from the average of probable dates for this, and that a standard deviation here is maybe 5-10 years(?), then we'll probably have AI around 2050. That ties in roughly with my sense of when it will realistically arrive, and also correlates with what Andrew said in his first post to this thread.
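Spelling that arithmetic out (the sigma values are pure guesses, obviously):

best_case = 2004 + 20                 # "decent AI within 20 years" -> 2024
for sigma in (5, 10):                 # guessed standard deviation, in years
    for k in (2, 3):                  # best case sits 2-3 sigmas early
        print(best_case + k * sigma)  # 2034, 2039, 2044, 2054

which brackets the "around 2050" guess from the upper end.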

Sooo... I'll be dead by then probably.

Soooo.... either I'll never see AI, or we need to do something prior to that so I can.

There are a few possibilities for this - maybe we can brainstorm about them? Maybe in a different thread?

Anyway, here's one that I like a lot: transcend.

Transcend can mean lots of things, so here's what I mean specifically.

Use nanos or similar - whatever works - to allow read/write access to/from the brain, or in any case a very high-bandwidth link to computers/comms networks.

Create communications networks with very high bandwidth.

Enable telepathy between people at such high bandwidth that largish groups of individuals function effectively as a single individual.

Now, "you" no longer exist as a single consciousness: your brain is simply a part of this larger entity, just as your left and right brains are both parts of your current brain.

That means, "you" will no longer care about "your" own death, since you no longer exist as a single entity. Your body can die and the entity will continue.

Some major advantages for this:
1. we all want to transcend, hence religion, etc.
2. we essentially eliminate death
3. we can become even more powerful
4. there's no reason why computers couldn't transcend with us, and we'd exist together, which would significantly defuse the humans-vs-computers thing. Sure, we wouldn't be really useful for very long, but it's better than being completely useless, and anyway we won't care any more, since "we" won't exist - only the whole entity, which comprises "us" and "the computers". "We" would get to experience what the computers experience.

Sooooo.... what can we do to further this vision?

Well, playing SL or similar games is not bad, since they require bandwidth from Hell, and because they encourage VR technology, which presses forward in the direction of increasing brain <-> computer bandwidth.

Azelda
_____________________
Eggy Lippmann
Wiktator
Join date: 1 May 2003
Posts: 7,939
12-20-2004 10:18
Confucius say:
Man with wild dream have big disappointment.
Azelda Garcia
Azelda Garcia
Join date: 3 Nov 2003
Posts: 819
12-20-2004 10:43
They also get in Wired. I can live with that.

Azelda
_____________________
Cross Lament
Loose-brained Vixen
Join date: 20 Mar 2004
Posts: 1,115
12-20-2004 10:56
Quick-and-dirty "AI":
  1. Develop a device that exactly mimics the behaviour of a single neuron, and at approximately the same scale
  2. Surgically replace a neuron in a human brain with this device
  3. Wait a while for it to be integrated into the brain's neural network
  4. Repeat the above two steps, until all the neurons are replaced


Hehe. :D
_____________________
- Making everyone's day just a little more surreal -

Teeple Linden: "OK, where did the tentacled thing go while I was playing with my face?"