The future of AI
|
Andrew Linden
Linden staff
Join date: 18 Nov 2002
Posts: 692
|
12-04-2003 09:02
I think Ellen is extrapolating the failures of AI research too far. If we instead extrapolate its successes, then perhaps we could get an idea of when we'll have interesting results. Today we could probably model the intelligence of an ant, or maybe even a small fish. But with Moore's law going stronger now than when it was first observed, and with recent advances in understanding how the parallelism of our brains works, some people think smarter-than-human machines will arrive a decade before the halfway mark of this century. And I'm somewhat convinced.
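For what it's worth, that extrapolation is easy to sanity-check on the back of an envelope. A hedged sketch (the 18-month doubling period and the 2040 target date are assumptions, not established figures):

```python
# Rough Moore's-law extrapolation; every number here is an assumption.
DOUBLING_YEARS = 1.5     # classic ~18-month transistor-count doubling
START_YEAR = 2003
TARGET_YEAR = 2040       # "a decade before the halfway mark of this century"

doublings = (TARGET_YEAR - START_YEAR) / DOUBLING_YEARS
growth = 2 ** doublings  # raw compute growth factor, if the trend holds

print(f"~{doublings:.0f} doublings by {TARGET_YEAR}, ~{growth:.1e}x more compute")
```

Whether a ten-million-fold increase in raw compute actually buys intelligence is, of course, the whole argument.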
|
Camille Serpentine
Eater of the Dead
Join date: 6 Oct 2003
Posts: 1,236
|
12-04-2003 09:16
I definitely think they will be here before mid-century. It's too soon to tell whether it will turn the world into something like the Battlestar Galactica, Terminator, Matrix, or Berserker (Saberhagen) scenarios. I can remember my dad playing with the old BrainMaker AI software that came out in the early '90s, but I haven't played with anything since. With the new, faster computers I think a real working AI is something we'll see soon. Of course, if one takes over my computer I'd rather it had a 'sane' personality - not something like Saturn 5.
I can see great possibilities for people who are handicapped and cannot type - it would mean voice recognition programs and interfaces could become a truly viable option for most folks.
Fortunately for me, I grew up with a dad who built his own computers. I've played with the old tape-driven ones, used an old monster Burroughs computer (scrapped it, too), played on Apple IIs and many PCs, and seen games from Pong to the new stuff. These past 25+ years have seen amazing things, so an AI will be around soon.
|
Maxx Monde
Registered User
Join date: 14 Nov 2003
Posts: 1,848
|
12-04-2003 09:20
I've worked on systems that try to do financial forecasting and modeling. The buzzword a while ago was neural networks, but that was a bust. For some problems it worked really well (i.e., the problem space didn't have 'local minima,' places where the search for a solution gets 'stuck'), but for others the network would overtrain and end up useless on real-world data - it only performed well on the data it had been trained on.
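The overtraining failure is easy to reproduce even without a neural net. A hypothetical sketch using polynomial fitting as a stand-in (numpy, synthetic data; the degrees and noise levels are made-up toy numbers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic series: a simple trend plus noise stands in for market data.
x_train = np.linspace(0.0, 1.0, 15)
y_train = 2.0 * x_train + rng.normal(0.0, 0.2, x_train.size)
x_test = x_train + 0.03                  # slightly shifted "real-world" data
y_test = 2.0 * x_test + rng.normal(0.0, 0.2, x_test.size)

results = {}
for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = float(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    test_mse = float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    results[degree] = (train_mse, test_mse)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```

The degree-9 fit hugs the training points far more tightly than the straight line does, then does worse the moment the data shifts - the same memorize-the-noise behavior an overtrained network shows.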
I'm a fan of the bottom-up view of emergent systems. Make something that obeys simple rules, layer these rules together, and watch what happens. Really interesting behavior (like the flocking of birds) can emerge from really simplistic rule sets.
I feel it is coming, but I think it will be some complete nobody in a garage who gets there before the university people do.
Or better yet (a scenario for a sci-fi book): one of these distributed networks used for solving problems like protein folding suddenly starts to talk back by modifying the results. Damn, wouldn't that be something....
|
Camille Serpentine
Eater of the Dead
Join date: 6 Oct 2003
Posts: 1,236
|
12-04-2003 09:30
From: someone
Originally posted by Maxx Monde
Or better yet (scenario for a sci-fi book) one of these distributed networks used for solving things like proteins suddenly starts to talk back by modifying the results. Damn, wouldn't that be something....

Read: Blind Lake by Robert Charles Wilson, Tor, 2003. The main story is about organic computers that evolve into advanced life. Slow to build, but an interesting read.
|
Sean Rutherford
^_^
Join date: 25 Oct 2003
Posts: 88
|
12-04-2003 09:52
I personally do not feel smarter-than-human AI will be around within the next 100 years. Here's why, in my quick rambling way.
We don't yet fully understand how the human mind works, nor are we really close to grasping it 100%. To boot, living intelligence is electro-chemically based and is thus not exact like digital - it's more analog.
What makes our level of intelligence unique and interesting is our curiosity, along with pattern- and chemically-initiated thought. The randomness in parts of our minds that steers our thoughts in different directions and helps us find solutions to complex problems has yet to be duplicated, to my knowledge. Nor do we fully understand how patterns relate to our brain's ability to store information or associate memories - which we'll need to understand before we can build something that works in a similar fashion.
Now, how do our experiences as living brains affect their ability to process? Can we duplicate that in an artificial environment or intelligence? Raw processing ability is wasted if the AI has no understanding of our existence/world/reality, or the ability to attain that understanding.
I have listed many gaps that I feel need to be bridged before we can reach that goal, and I simply do not feel we can do it in the next 100 years. Why? Not enough people researching it. Most researchers and developers I have met are working toward commercial goals that are much simpler in nature. I have only met a handful of people working on projects as ambitious and interesting as this one, and I was quite envious of their work - but not their paychecks... hehe
Now remember, in 1955 we thought we would be making daily trips to the moon by 1986. =]
I so wish that I am wrong, and that I am around when it is done - along with the discovery of a proven unified theory, but that's another topic.
^_^
-sean
|
Aaron Perkins
Registered User
Join date: 14 Nov 2003
Posts: 50
|
12-04-2003 11:10
We aren't going to be able to implement AI anytime soon. The human brain is a massive trillion-cell neural network - impossible to emulate in software, even factoring in Moore's law. It's hard to imagine a hardware solution where physical connections between elements can be made dynamically and, at the same time, scale to a trillion components.
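A back-of-envelope version of that argument (every figure below is a rough, order-of-magnitude assumption):

```python
import math

# All figures are rough, order-of-magnitude assumptions.
SYNAPSES = 1e14        # ~100 trillion synaptic connections
FIRING_HZ = 100        # updates per synapse per second
brain_ops = SYNAPSES * FIRING_HZ      # ~1e16 "operations" per second

pc_2003_ops = 1e9      # a ~GHz desktop, one op per cycle (generous)
doublings = math.log2(brain_ops / pc_2003_ops)
years = doublings * 1.5               # classic 18-month doubling period

print(f"~{doublings:.0f} doublings, ~{years:.0f} years if Moore's law holds")
```

Even on generous assumptions, matching the brain's raw connection count is a couple of dozen doublings away - and that says nothing about wiring those components up the right way.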
I say that's good...
I honestly hope we never see AI - or at least not in my lifetime. I just don't think we are ready for it mentally, socially, or spiritually.
Mentally, I don't think we've thought through the full ramifications of bringing a sentient, self-motivated, yet non-human being into the world. Non-human is the key word here, because it means the AI will have no real vested interest in the human species.
Socially, we think of computers as mindless slaves here to do our bidding. Our new AI friends will be as smart as or smarter than us, and I don't think it would be right to treat them as slaves. Also, think about the massive job losses: there are not many jobs out there that an AI could not handle. Most humans would instantly be turned into blue-collar workers doing the physical jobs that the AI can't do... well, that's until we start giving the AI bodies...
Spiritually, I think it would be a big shock. Personally I don't subscribe to any one faith, but I do consider myself spiritual - meaning that I think there is some sort of soul, or whatever you want to call it. I think we all hope that we are more than the mere wiring of our brains. And I also feel there is some sort of afterlife. If we can recreate human intelligence from scratch, that disproves all of that in my mind. You have this self-conscious yet artificial being that can be turned on and off. Where does it go when it is turned off? Simple: it doesn't exist when it's turned off. Disheartening...
|
Jake Cellardoor
CHM builder
Join date: 27 Mar 2003
Posts: 528
|
12-04-2003 11:15
In one of his stories, the writer Greg Egan has said about AI, "If you need consciousness, humans are cheaper."
A weather simulation is only faster than actual weather if it runs at a much lower resolution than reality. We see the same thing in SL: toppling a pile of blocks in SL takes a long time, while in the real world it takes a fraction of a second. I think the same will be true of AI.
(One could argue that eventually computers will get fast enough that you could, for example, simulate weather at exactly the same resolution as reality, only faster. That's a separate argument.)
|
Camille Serpentine
Eater of the Dead
Join date: 6 Oct 2003
Posts: 1,236
|
12-04-2003 11:32
Maybe with the advent of cloning, human computers would be faster and cheaper than regular computers. It could go something like:
Antibodies (Isaac Asimov Presents) by David J. Skal
Would be kind of creepy, maybe.
|
Dusty Rhodes
sick up and fed
Join date: 3 Aug 2003
Posts: 147
|
12-04-2003 11:34
still waiting for proof of natural intelligence 
|
Hamlet Linden
Linden Lab Employee
Join date: 9 Apr 2003
Posts: 882
|
12-04-2003 11:48
Heh!
Remember folks, Ellen said she'd be willing to answer some follow-up questions by e-mail, so if you put some of these points in the form of a question, I'll send them her way!
|
Carnildo Greenacre
Flight Engineer
Join date: 15 Nov 2003
Posts: 1,044
|
12-04-2003 14:57
From: someone
Originally posted by Jake Cellardoor
(One could argue that eventually computers will get fast enough that you could, for example, simulate weather at exactly the same resolution as reality, only faster. That's a separate argument.)

True, but only if your computers are as large as reality. One of the basic axioms of simulation is that you cannot completely simulate a system with a simulator that is smaller than the original system.
|
Jake Cellardoor
CHM builder
Join date: 27 Mar 2003
Posts: 528
|
12-04-2003 17:25
From: someone
Originally posted by Carnildo Greenacre
True, but only if your computers are as large as reality.

There are arguments claiming that this may not actually be required. As I said, it's a separate discussion from the question of AI's feasibility.
|
Devlin Gallant
Thought Police
Join date: 18 Jun 2003
Posts: 5,948
|
12-04-2003 22:54
An AI might be closer than you think. Israeli scientists have been experimenting with using DNA as a computer storage device. Once that gets off the ground, how far behind might a DNA computer processor be? Or even computers constructed from human brain cells, or networks of cloned human brains?
Also, a couple of years ago I read an article - on the internet, or maybe in a science magazine - arguing that the human brain in and of itself, while complex, is NOT powerful enough to account for intelligence. It theorized that intelligence operates at the quantum level. If true, might the quantum computers being experimented with now be capable of some kind of intelligence?
_____________________
I LIKE children, I've just never been able to finish a whole one.
|
Eggy Lippmann
Wiktator
Join date: 1 May 2003
Posts: 7,939
|
12-04-2003 23:49
Our professor here in Lisbon, a grizzled old veteran by the name of Helder Coelho, who has published over 25 books on AI and related topics, begins his AI classes each year by stating that the fancy things we see in the movies are nowhere near, and that in fact one shouldn't expect to see remotely realistic androids in the next 50 years. I happen to believe him. Actually, I think anything exponential is unsustainable in the long run, so scientific and technological progress is bound to plateau sooner or later.
|
Ama Omega
Lost Wanderer
Join date: 11 Dec 2002
Posts: 1,770
|
12-05-2003 00:25
The Brain: the original gooey quantum computer.
The real trick is the invisible little gap between the knowledge we have and the perceived goal. We think, 'Oh, well, we only need to fill this little gap of information and we will know how to make AI.' The problem is that every time you fill in some of that gap, the remainder gets bigger - there is more you realize you don't yet know how to do, or why it works.
Have you ever been in one of those huge caverns they do tours in? You can see the rocks on the other side, and yeah, they look like decent-sized rocks, maybe a little bigger than you or maybe a little smaller. You could walk over there pretty easily and touch one, right? Nope - the guide will tell you that rock is about half a football field away and almost as big as a small house. The reason for this illusion is the lack of perspective.
We can see the goal, we can see where we are, we can even see most of the stuff in between. For all that, we still have no idea just how frickin' huge and far away the object actually is. The closer we get, the more ground we travel, the more perspective gets added. We start to realize that it's further than we thought - AI won't take over the world by the 21st century, etc. However, that remaining gap is still full of unknowns; it's unknown in size and length.
_____________________
-- 010000010110110101100001001000000100111101101101011001010110011101100001 --
|
Eggy Lippmann
Wiktator
Join date: 1 May 2003
Posts: 7,939
|
12-05-2003 02:23
Dude, I FORBID you from leaving SL. Your post was the most *cough*only*cough* insightful thing ever to grace these forums. I've always had thoughts like these but never could quite put them into words. I have noticed that solving a problem, particularly a programming one, can seem pretty darn easy in a high-level analysis, but when we actually get to coding it's a whole different can of worms: all sorts of annoying little details crop up, and what we thought would be a five-minute hack turns out to need a month of research, brainstorming, testing, and debugging... like the multidim lib we're working on. If only I could get a job designing and planning systems instead of actually coding them.
|
Maxx Monde
Registered User
Join date: 14 Nov 2003
Posts: 1,848
|
12-05-2003 04:55
Zeno's paradox? (i.e., any space can be subdivided infinitely.) So we're (Here)--------------> (AI) there... and we keep finding more things in the 'gaps.' Makes sense, actually, in a nonlinear, fractal kind of way.
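The paradox is easy to make concrete. A toy sketch of crossing a gap by repeated halving:

```python
# Zeno's subdivision: infinitely many gaps in principle, finite total distance.
remaining = 1.0   # the whole gap, normalized
total = 0.0
steps = 0
while remaining > 1e-12:
    half = remaining / 2   # cross half of what's left
    total += half
    remaining -= half
    steps += 1
print(f"{steps} halvings, total distance {total:.12f}")
```

Infinitely many subdivisions in theory, yet the total stays finite - the catch with AI research is that nobody has shown the remaining gap behaves this politely.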
|
Tiger Crossing
The Prim Maker
Join date: 18 Aug 2003
Posts: 1,560
|
12-05-2003 08:23
We don't want a humanlike AI. Not for silly take-over-the-world (or even try-to-steal-your-girlfriend) sci-fi reasons, but because that's just not what we NEED.
Computers, and tools in general, are never one-does-it-all. Everything is specialized for one task or a group of related tasks. The design effort to pack more and more functionality into a single unit isn't worth the result, not to mention the inherent reliability problem of putting all your eggs in one basket. (If the power supply in your DVD-VCR combo fails, you just lost TWO devices.)
Where practical AI (as opposed to R&D-pipedream AI) is going is toward advances in expert systems that really CAN be smarter than a human within a particular range of knowledge. With more advanced computing speeds and techniques, like massively parallel processing and quantum computing, expert systems will become autonomous: not only knowing how to do a particular job, but empowered to actually do it and, in time, able to predict when and where it is needed. Your Talky Toaster(tm) will be VERY good at making your English muffin, but it's not going to be discussing the metaphysical aspects of morality in a virtual environment... (Unless there's a market for it, of course.)
_____________________
~ Tiger Crossing ~ (Nonsanity)
|
Camille Serpentine
Eater of the Dead
Join date: 6 Oct 2003
Posts: 1,236
|
12-05-2003 08:35
From: someone
Originally posted by Tiger Crossing
Your Talky Toaster(tm) will be VERY good at making your english muffin, but it's not going to be discussing the metaphysical aspects of morality in a virtual environment... (Unless there's a market for it, of course.)

But if for some reason it was self-aware and 'thinking,' it just might discuss morality with you. We tend to think of what a computer does as a process based on input and programming. This is similar to what people do: we get information from our various senses, process it with our brains based on past events and occurrences, and extrapolate what will happen. Our constant thinking (okay, lack of thinking in some people) could be described as constantly rechecking data as new input comes in and as probabilities are explored. With the right programming, it just may happen with computers. I think the science out there is close, but not quite there yet.
|
Malachi Petunia
Gentle Miscreant
Join date: 21 Sep 2003
Posts: 3,414
|
maybe closer than you think
12-05-2003 08:50
Current neurobiology is beginning to show that our glorious, self-aware, oh-so-intelligent brains consist of a collection of nifty parlor tricks cobbled together in an extremely short time (evolutionarily speaking). The more we find out about selection, the clearer it becomes that selection is a dreadfully inefficient engineer, and what looks at first glance to be outrageously complex may well be simply wasteful. There are documented cases of severe hydrocephalus where dim but otherwise functional humans were found to have only a centimeter or so of cortex lining their skulls, with the rest filled with fluid.
In fact, in my workshop.... well, enough procrastination for now.
|
Maxx Monde
Registered User
Join date: 14 Nov 2003
Posts: 1,848
|
12-05-2003 09:02
The way we are heading, I really think the complexity of interaction in the networking arena, coupled with some unlikely outside force like a computer virus, could result in some kind of rudimentary AI.
It is like the net is a petri dish, with all the things running across it: infected hosts, specialized applications, etc. Wouldn't it be something if something just 'emerged' from existing technology? It sounds sci-fi, I know, but think of all the nodes out there, and the fact that network topology on an internet scale looks like that of a growing organism...
Anyway, I wouldn't be scared of it, I'd love to see it.
|
Mars Zircon
Junior Member
Join date: 8 Dec 2003
Posts: 10
|
12-10-2003 12:59
Personally, I have been (slowly) working on my own AI-ish system. It is an attempt to create a system that demonstrates the same 'generative grammar' capabilities and properties that many linguists theorize us humans have when we are first rez'd.
The concept is to create a database-driven system which, given a dictionary of words with their possible grammatical meanings (noun, verb, pronoun, etc.), will learn the rules of a given grammar based on user input. It is not a complete neural net, but portions of the project will have to implement one.
Through this, I hope to prove (or at least allude to ;D ) that the inherent human thought patterns we fabricate through our use of words and grammar are, to a certain degree, directly related to how one would diagram the sentences generated by those thoughts.
This system, if successful, could then be implemented in a larger AI project as the syntactic component through which the system could communicate and learn.
Because the theory is based on linguistic theories which state that all possible human grammars have a solid and definite underlying structure (a least common denominator, if you will), the language this system is taught is irrelevant. (Although I cannot begin to ponder the garbled gibberish that would come out if both French and English words were stored in the same database of words. We are able to handle such conditions because we have conscious control over our syntactic component, preventing us from unknowingly using French words in English speech. This could theoretically be handled in my proposed system, but that'll come later =D )
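A hypothetical, drastically simplified sketch of that idea (the lexicon and acceptance rule here are illustrative inventions, not the actual design): store words with their parts of speech, record the part-of-speech patterns of sentences the system is shown, and accept new sentences whose pattern has been seen before.

```python
# Toy sketch: a lexicon maps words to parts of speech, and the "learner"
# memorizes the POS patterns it has seen, then accepts any new sentence
# whose pattern it already knows. (All names here are hypothetical.)
LEXICON = {
    "the": "DET", "a": "DET",
    "cat": "NOUN", "dog": "NOUN", "fish": "NOUN",
    "chases": "VERB", "eats": "VERB",
}

def pattern(sentence):
    """Map a sentence to its sequence of parts of speech."""
    return tuple(LEXICON[w] for w in sentence.lower().split())

known_patterns = set()

def learn(sentence):
    known_patterns.add(pattern(sentence))

def accepts(sentence):
    return pattern(sentence) in known_patterns

learn("the cat chases a dog")
print(accepts("a dog eats the fish"))   # same DET-NOUN-VERB-DET-NOUN shape
print(accepts("cat the chases"))        # a pattern never observed
```

A real generative-grammar learner would induce recursive rules rather than memorize flat patterns, but the division of labor - a lexicon plus learned structure - is the same.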
|
Azelda Garcia
Azelda Garcia
Join date: 3 Nov 2003
Posts: 819
|
12-11-2003 10:25
"Asking whether computers will be intelligent is like asking whether submarines can swim."
|
Corwin Weber
Registered User
Join date: 2 Oct 2003
Posts: 390
|
12-12-2003 02:27
Will we develop fully sapient AI? I'm not sure. Do we need to? Nope.
Think about it. How intelligent does an AI actually need to BE? What we're looking at is something that can replace us, for things we don't want to do. (Like work. We're rational on that score.)
How much of your total intelligence do you use at your job?
Odds are, a tiny fraction.
For most jobs.... all we need is pseudo-intelligence. It has to be smart enough to understand what's expected of it and be able to follow basic instructions. In general, that's about it.
|
Maxx Monde
Registered User
Join date: 14 Nov 2003
Posts: 1,848
|
12-12-2003 04:46
The biggest lie of the techno-futurists is that enhanced technology would result in a *shorter* workweek. So, either:
A) It could, but corporations are greedy, omnivorous bastards, or
B) Technology only increases the need for humans to mediate the interactions between tech and the real world.
We really need more automation - but the social upheaval in RL would be pretty bad for a while.
|