Website Restoration Project
This archiving project is a collaboration between Unfiction and Sean Stacey (SpaceBass), Brian Enigma (BrianEnigma), and Laura E. Hall (lehall) with the Center for Immersive Arts.
Announcements
This is a static snapshot of the Unfiction forums, as of July 23, 2017. This site is intended as an archive to chronicle the history of Alternate Reality Games.
 
 Forum index » Archive » Archive: The Haunted Apiary (Let Op!) » The Haunted Apiary (Let Op!): General/Updates
Worth a second listen
Page 5 of 10 [142 Posts]
thebruce
Dances With Wikis


Joined: 16 Aug 2004
Posts: 6899
Location: Kitchener, Ontario

Kali wrote:
Melissa also clearly had emotions. She expressed them to us. Now, I know you're going to argue that they're not real because it's wires instead of neurons, but really I've never understood that argument. Those wires serve the same function as our neurons. Just because they're made of different material does not mean that they function any less effectively. It might "feel" different, maybe she gets a "headache" in her "foot" or maybe her memory chips "itch" at times.


But that's the difference. She doesn't have a head or a foot, and she can't itch her chips. The point is, how do you define emotions? My definition comes down to the fact that emotion is more than just the interpretation of a signal. The ability for those signals to be interpreted is granted only through life. Yes, a human and an AI are simply constructs using different materials, but humans are made of material that is fundamentally alive. That is not a luxury that Melissa has. For me, that is the defining difference between real emotions and emulated emotions.

Quote:
Yes, it's instinct, but that instinct has been shaped, directly influenced by culture.

I beg to differ... a newborn child is born with the ability to smile. They don't learn to grow facial muscles for smiling, frowning, and other expressions. These are abilities we are born with. Instincts, you could say... but you already said that Smile

but let's not get into evolution now... Razz
_________________
@4DFiction/@Wikibruce/Contact
ARGFest 2013 - Seattle! ARGFest.com


PostPosted: Wed Nov 24, 2004 4:14 pm
vector
Unfettered


Joined: 28 Aug 2004
Posts: 721
Location: Portland OR

It seems to me that the argument is coming down to meat vs. machine. Assuming that an AI is just as capable of learning (if not more so) and growing from the point of transfer from flesh to silicon, and that the cognitive processes are essentially the same, then the human mind and the AI mind are the same (otherwise why copy a human brain?). Then it comes down to the concept of a soul, and to assuming that the soul resides in the flesh and cannot be transferred to the AI along with the cognitive capabilities.
_________________
The bookworm is just the larval form of the barfly

PostPosted: Wed Nov 24, 2004 4:48 pm
Clayfoot
Entrenched


Joined: 19 Aug 2004
Posts: 785
Location: Warner Robins, Georgia, USA

To the extent that the Haloverse AIs are copies of human brains, they are alive, they have souls, and they think - every bit as much as their human counterparts. I take it as a given that a soul doesn't need a mortal body to exist. Given the apparent evidence that the Haloverse AIs have emotions and awareness of their own existence and nature, I have to accept that they are alive and thinking, unless I actually have some evidence to refute this conclusion.
_________________
Gamertag:Clayfoot

PostPosted: Wed Nov 24, 2004 4:48 pm
vector
Unfettered


Joined: 28 Aug 2004
Posts: 721
Location: Portland OR

Quote:
I beg to differ... a newborn child is born with the ability to smile. They don't learn to grow facial muscles for smiling, frowning, and other expressions. These are abilities we are born with. Instincts, you could say... but you already said that Smile

but let's not get into evolution now... Razz


But culturally, as we evolved, humans found the need for facial expressions to build better communication with each other. More communication meant better survival, creating a line of great apes that developed the extensive number of facial muscles required for that level of communication.

Or more clearly: just because we see things genetically today does not mean that culture and environment did not play a role in that genetic trait coming about.
_________________
The bookworm is just the larval form of the barfly

PostPosted: Wed Nov 24, 2004 4:54 pm
IcyMidnight
Boot


Joined: 31 Jul 2004
Posts: 66
Location: San Francisco, CA

So what we are discussing is whether an AI can be sentient, not whether it is "alive".

AIs are "alive" in that they grow, adapt and respond to stimuli. Heck, they even feed (on electrical energy instead of chemical energy like us) and could reproduce.

The question of sentience is quite complicated. Is it feeling emotions, being aware that you exist, being aware of your environment, independent thought? AIs seem to possess all of these.
Then we have to ask what emotions, awareness and thought are.

Let's consider emotions. Are they not just unconscious responses to a given situation? Are they then not instinct?

I think it does basically come down to mechanics. If you pull the plug on an AI's computer system (i.e. remove the food source from the body), does the AI not die? If you start removing bits of the brain, does it not stop working the same way? Do we not at some point get amnesia?

Fundamentally, AIs are the same as we are: beings with bodies and minds.

Do they have a soul? Perhaps that is what we're after.
_________________
Live Gamertag: IcyMidnight

PostPosted: Wed Nov 24, 2004 5:19 pm
water10
Unfettered


Joined: 31 Aug 2004
Posts: 712
Location: EvadeEvadeEvade

Quote:
Melissa also clearly had emotions. She expressed them to us. Now, I know you're going to argue that they're not real because it's wires instead of neurons, but really I've never understood that argument. Those wires serve the same function as our neurons. Just because they're made of different material does not mean that they function any less effectively. It might "feel" different, maybe she gets a "headache" in her "foot" or maybe her memory chips "itch" at times.


Just like Kamal was telling Durga:
Quote:
Durga: Durga isn't human, you know. Durga's a machine. Made of logic gates, and microfilaments, and electricity.
Kamal: Kamal's a machine made of trace minerals, and water as far as that goes.


I don't know how AIs will be in the future. But if they act/react like Melissa did in ILB, it would be pretty hard to deny that Melissa had emotions. It doesn't really matter how these emotions were achieved, because they were there.

Now, if you're not going to consider them emotions just because it was all 0's and 1's, then maybe it's time to question why AIs were made to be so similar to us. Why not make them purely logical machines that would not try to "replicate" human feelings, much less show them? It's ironic that we (as humans) created AIs so similar to us, to the point of showing emotions/feelings, and then just say "it's not real!"
_________________
You’d better not mess with Major Tom!

Gamertag: Waters100


PostPosted: Wed Nov 24, 2004 5:23 pm
vector
Unfettered


Joined: 28 Aug 2004
Posts: 721
Location: Portland OR

In thinking about how an AI mind and a human mind work, I was thrown onto this bit of a tangent...

Human brains are made up of connected neurons, and how those connections are made is how our emotions come about and why we react to specific stimuli. Now, these connections are not hard-wired in; the relationships between given neurons can change, thus changing our reactions to the world. This is also how we learn, by building or reinforcing relationships between neurons, so we can continue to learn without our heads exploding from the addition of new neurons (exaggeration). Now, an AI cannot change the relationships between its "neurons"; it can only add relationships (in my understanding of AI workings). This constant addition of relationships could be what eventually overwhelms the AI, bringing on rampancy. Although, reading this now, maybe this is trout already discussed long ago. I can't remember.
(Mental note: kick neurons for not being better at making connections)
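The add-only idea can even be sketched as a toy model (purely illustrative - the node count, capacity, and step count are made up, and real brains/AIs are vastly more complicated):

```python
import random

def simulate_learning(steps, can_rewire, capacity=50, n_nodes=20, seed=0):
    """Each step, the mind forms one new association (a link between nodes).

    can_rewire=True  -> brain-style: once at capacity, an old link is
                        repurposed, so total storage stays bounded.
    can_rewire=False -> the add-only AI described above: links only ever
                        accumulate.
    """
    rng = random.Random(seed)
    links = set()
    for _ in range(steps):
        new = (rng.randrange(n_nodes), rng.randrange(n_nodes))
        if can_rewire and len(links) >= capacity and new not in links:
            links.pop()  # forget an arbitrary old association to make room
        links.add(new)
    return len(links)
```

With rewiring, the link count never exceeds the capacity; add-only, it just keeps climbing - the toy-model version of rampancy.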
_________________
The bookworm is just the larval form of the barfly

PostPosted: Wed Nov 24, 2004 6:11 pm
Kali
Decorated

Joined: 29 Sep 2004
Posts: 162

thebruce wrote:
Kali wrote:
Yes, it's instinct, but that instinct has been shaped, directly influenced by culture.

beg to differ... a newborn child is born with the ability to smile. They don't learn to grow facial muscles for smiling, frowning and other expressions. These are abilities that we are born with. instincts, you could say... but you already said that Smile

but let's not get into evolution now... Razz


That they are instincts does not mean that they are not culturally determined.

Let me clarify with a hypothetical story:

One day, there was a hominid (Adam) that, due to a random genetic mutation, smiled every time he was happy. He was the only one in his clan to have this gene. It neither helped nor hindered his ability to mate, which he did. He had 2 children, a boy and a girl. He passed the Very Happy gene on to both of them. This neither helped nor hindered either of these two in their abilities to mate, which they both did, each having 2 children of their very own. This next generation (all 4 have the Very Happy gene) helps each other to survive, more than members of other families do, because this family magically (by communicating) gets along well with each other. Since they help each other to survive, more Very Happy gene children survive to produce offspring, and they live longer (which extends the time they may breed). People with the Very Happy gene now start producing more offspring than everyone else.

Eventually, Adam's descendants start exclusively mating with each other, because really, why would you Censored someone who never smiles? Suddenly, not having the Very Happy gene means you have less of an opportunity to mate Sad . Over time, there are no more people without the Very Happy gene, because those people have been bred out of existence.

Now we all Very Happy , and use other things to decide who we Censored .

*******

The mutation itself cannot be changed by any actions of the person who carries it, nor by the society in which they live (ignore modern medicine for the moment). The representation or frequency of that mutation within the population, however, CAN, and often is influenced by the person and society.

The definition of evolution, btw, is: "a change in the allele(gene) frequency of a population through time".
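That definition can even be turned into arithmetic. Here's a deliberately simplified model of the Very Happy story (the numbers are made up: carriers start at 5% of the gene pool and leave 1.1x as many offspring per generation; no mutation or drift):

```python
def smile_gene_frequency(generations, advantage=1.1, start_freq=0.05):
    """Track the 'Very Happy' allele's frequency when carriers leave
    `advantage` times as many offspring each generation (deterministic
    selection model)."""
    freq = start_freq
    history = [freq]
    for _ in range(generations):
        # carriers' weighted share of the next generation's gene pool
        freq = freq * advantage / (freq * advantage + (1 - freq))
        history.append(freq)
    return history
```

Run it for a couple hundred generations and the frequency climbs toward 1.0 - the "bred out of existence" ending of the story, and exactly the "change in the allele frequency of a population through time" from the definition.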

PostPosted: Wed Nov 24, 2004 6:18 pm
Last edited by Kali on Wed Nov 24, 2004 6:49 pm; edited 1 time in total
Kali
Decorated

Joined: 29 Sep 2004
Posts: 162

vector wrote:
In thinking about how an AI mind and a human mind work, I was thrown onto this bit of a tangent...

Human brains are made up of connected neurons, and how those connections are made is how our emotions come about and why we react to specific stimuli. Now, these connections are not hard-wired in; the relationships between given neurons can change, thus changing our reactions to the world. This is also how we learn, by building or reinforcing relationships between neurons, so we can continue to learn without our heads exploding from the addition of new neurons (exaggeration). Now, an AI cannot change the relationships between its "neurons"; it can only add relationships (in my understanding of AI workings). This constant addition of relationships could be what eventually overwhelms the AI, bringing on rampancy. Although, reading this now, maybe this is trout already discussed long ago. I can't remember.
(Mental note: kick neurons for not being better at making connections)


Even if it is trout, that's a great observation of WHY AIs would go rampant. Kudos!

PostPosted: Wed Nov 24, 2004 6:24 pm
vector
Unfettered


Joined: 28 Aug 2004
Posts: 721
Location: Portland OR

Kali wrote:


Even if it is trout, that's a great observation of WHY AIs would go rampant. Kudos!


aww, I love Kudos, especially with chocolate chips. Wink Very Happy
_________________
The bookworm is just the larval form of the barfly

PostPosted: Wed Nov 24, 2004 6:37 pm
thebruce
Dances With Wikis


Joined: 16 Aug 2004
Posts: 6899
Location: Kitchener, Ontario

pre-note: this is long... people kept replying while I was making my post Smile I had to address what I could, hehe... but this is it for now; I've got plans tonight and may not be able to reply until tomorrow... but this is a great discussion! Look what ILB has done to us... Wink

vector wrote:
It seems to me that the argument is coming down to meat vs. machine.

Not really... as a sole matter of which is valued more, with no external consequences to the choice (no loss of other human life), then yes, it's meat vs. machine, for lack of a better term... but in regards to Dana vs. Melissa, it's more than just "meat vs. machine"... it's the value of the results of two choices, the initial effect being either the loss of Dana's life or the loss of an AI.

Quote:
Assuming that an AI is just as capable of learning (if not more so) and growing from the point of transfer from flesh to silicon, and that the cognitive processes are essentially the same, then the human mind and the AI mind are the same (otherwise why copy a human brain?)

When it comes to how the thought processes work, sure. But at the fundamental level, they are not the same.

gkrohne wrote:
To the extent that the Haloverse AI's are copies of human brains, then they are alive, they have souls, and they think --every bit as much as their human counterparts.

No, the AIs are not duplicates of a human brain. They are duplicates of the pathways that the brain uses in order to think. Those pathways alone cannot exist and work. They either need a living brain, which we cannot create, or they need to be converted into an electronic form that can be utilized by human-created technology.

Quote:
I take at as a given that a soul doesn't need a mortal body to exist. With the apparent evidence that the Haloverse AI's have emotions and awareness of their own existence and nature, I have to accept that they are alive and thinking, unless I actually have some evidence to refute this conclusion.

But then, theoretically, I could create a simple AI program that responds to input and alters its mood based on how it "feels" about the input it receives, and I could give it an awareness of self so that it doesn't want to be deleted or removed - so much so that it will find ways to protect itself. Does that make my 5,000-line AI program alive? Living? With life? Or are you just trying to define what a "soul" is? So does my program have a soul?

My point, as usual, is that it's not a matter of just meat vs machine, unless there are no adverse effects beyond the loss of one or another. In which case, meat being something we can't recreate, has more value than machine which is something we can recreate.

But if the choice involves much more than the loss of one or the other, especially if the loss of the machine indirectly causes the loss of meat, then it becomes meat+machine vs. just meat. At what point can you define which is more valuable? machine+meat? machine+meat+meat+meat? machine+meat*1,000,000? But as far as plain meat vs. machine goes, meat gets the head start.

vector wrote:
But culturally, as we evolved, humans found the need for facial expressions to build better communication with each other.

Found the need? So you're saying that somewhere along the line, before we had facial muscles to communicate, "evolution" dictated that humans somehow caused the development of facial muscles because they needed to communicate better? How did anything get done before then?

Quote:
more communication meant better survival, creating a line of great apes that developed the extensive number of facial muscles required for that level of communication.

Apes still exist; they can survive, just as we're surviving. What about fish? They don't have facial muscles to smile... what makes facial expression a requirement for "better survival"? And at what point in the lifespan of a species is it determined that communication could be better, so that a physical genetic feature develops to allow it?

Quote:
Or more clearly: just because we see things genetically today does not mean that culture and environment did not play a role in that genetic trait coming about.

If a physical being requires a physical trait to survive, that trait could not have developed because of the requirement; otherwise the being would not have existed before it had the trait (because it couldn't survive without it).

Anyway, this has gotten far off topic now Smile I wanted to stay away from evolution... hehe

Quote:
AIs are "alive" in that they grow, adapt and respond to stimuli. Heck they even feed (on electrical energy instead of chemical energy like us) and could reproduce.

AIs are "alive" in that they add to their programming, alter their programming, and respond to stimuli. They don't feed; they ARE electrical energy - we are not chemical energy. They don't require anything to make their electricity work, but we require food to make our bodies work. They reproduce by copying the bits of information that make them who they are. We reproduce by joining two specific objects so that the result becomes its own new entity, which grows on its own, a unique entity from its conception. An AI reproduces by creating a duplicate of itself. Kind of like the difference between analog and digital.

Quote:
The question of sentience is quite complicated. Is it feeling emotions, being aware that you exist, being aware of your environment, independant thought? AIs seem to possess all of theses.
Then we have to ask what are emotions, awareness and thought.

It is quite complicated Smile
However, we interpret the AI's responses to our actions AS emotions. The AI is still programmed to respond in specific ways. Now, someone will argue that we as humans are "programmed" by our genetics to respond to stimuli. But it comes back down to what makes us what we are vs. what makes an AI what it is. If we were not a form of living organism that can't be recreated, then we would be equivalent to an AI. But even though our methods of thinking, acting, and responding are theoretically the same as a piece of software's, that doesn't change the fact that we are physically alive and an AI is not.

And once again, it comes down to a matter of your own values - do you place the human-equivalent value of life on an AI which is not physically alive, but which can be more valuable in its existence and contribution to society than some humans?

Quote:
I think it does basically come down to mechanics. If you pull the plug on an AI's computer system (i.e. remove the food source from the body), does the AI not die? If you start removing bits of the brain, does it not stop working the same way? Do we not at some point get amnesia?

If you pull the plug on an AI's computer system, it depends on how its application is stored on the medium. If nothing is stored, then stopping the flow of electrons will essentially erase the AI. But simply existing does not make one alive. Separate the atoms of a rock so it no longer exists; that doesn't mean the rock was alive. If the medium retains the information, then the information will still exist, and turning on the flow of electrons again will allow the AI to continue from where it left off.

Like I said, though, electrons ARE the AI. Turning off electricity for an AI is not equivalent to stopping the food source for a human. An AI doesn't need food. Electricity is the AI's body, not its food. An AI, as long as it has electricity and a medium, exists. As long as it has subroutines, it thinks. A human, as long as it has a physical body, exists. As long as it has electric current in the form of organic "life", it thinks. And one additional thing: as long as it has food, it survives. AIs are not equivalent to humans. Because of their different makeup, humans and AIs each have strengths in different areas. Not being restricted by requiring food, AIs can last as long as their bounds allow them to (having electricity, having a medium, and, in the case of the Haloverse, not growing to the point of chaos with subroutines).

Quote:
I don't know how AIs will be in the future. But if they act/react like Melissa did in ILB, it would be pretty hard to deny that Melissa had emotions. It doesn't really matter how these emotions were achieved, because they were there.

Nope, not for me. Yes, it was all quite emotional, but I still know that they are all routines intended to recreate the effect of emotion within Melissa.

Quote:
Now, if you're not going to consider them emotions just because it was all 0's and 1's, then maybe it's time to question why AIs were made to be so similar to us. Why not make them purely logical machines that would not try to "replicate" human feelings, much less show them? It's ironic that we (as humans) created AIs so similar to us, to the point of showing emotions/feelings, and then just say "it's not real!"

The AIs were never created to replicate, duplicate, or clone a human. They were created to take the personality, character, and "humanity" of a person's mind and put them in a place where they could grow with the speed and complexity of massive supercomputers and become a more effective military tool, without being bound by the laws of physical objects with mass.

The AI was not intended to duplicate the person from which the brain "matrix" was taken.

Just like Durga said about finding a voice, and why AIs tend to have a female voice matrix - to ease the connection with humans, to in a sense "fool" our subconscious into believing we were speaking, essentially, to a human, and not to a series of 1's and 0's. Aesthetic appeal. Another aspect of that is that by flashing a brain, they retrieve every aspect of that person's mind - memories, emotional responses, all of that - which is why they suppress that portion (the Yasmine part), so that the AI will function as it should, with the added guidelines ONI attaches, as an effective tool. Even so, with that component re-added to the system (Melissa+), the AI becomes even more aesthetically appealing to humans, because it now replicates the original human to a far greater depth - a depth that ONI felt was a hindrance to the effectiveness of the AI in a military setting.

Quote:
Human brains are made up of connected neurons, and how those connections are made is how our emotions come about and why we react to specific stimuli. Now, these connections are not hard-wired in; the relationships between given neurons can change, thus changing our reactions to the world.

But emotions aren't JUST made up of neurons. Neurons by themselves cannot function. Neurons are a component of organic life, and thus require that "life" essence in order to function. Therefore, emotions require "life" to work. Any "emotion" without the life spark is, in a sense, an emulation. Because we know by observation how emotions "work", we can recreate the effect of an emotion without requiring the life to make it.

Quote:
Now, an AI cannot change the relationships between its "neurons"; it can only add relationships (in my understanding of AI workings). This constant addition of relationships could be what eventually overwhelms the AI, bringing on rampancy. Although, reading this now, maybe this is trout already discussed long ago. I can't remember.
(Mental note: kick neurons for not being better at making connections)

Well, I'm not a science whiz at neurons, but from what I last recall, it was understood that neurons hold more information than just the "entity" itself - somehow, theoretically, a piece of information is built from the interactions of neurons and from how the neurons act. So technically, no, getting new information doesn't create neurons; that's not necessary. It creates the piece of information by adding to the existing actions of neurons... i.e., information is the definition of how neurons interact. So technically, because an AI's information is made up of electronic entities themselves, not of how they act with each other like neurons, when an AI learns it creates the piece of information by creating subroutines, basically guiding the electrons themselves to produce the desired output. Just as we create a piece of information by creating a definition of how one neuron interacts with another.

and no, that wasn't trout Smile I don't recall the subject of neurons and information storage methods coming up yet Smile

Quote:
Let me clarify with a hypothetical story:

One day, there was a hominid (Adam) that, (due to a random genetic mutation) smiled every time he was happy.

Gotta stop you right there. No random genetic mutation adds information; that's not a scientific fact. If there was no genetic information for the growth of facial muscles, that genetic information cannot suddenly, randomly appear. Sorry

Funny story though Smile

Quote:
The mutation itself cannot be changed by any actions of the person who carries it, nor by the society in which they live (ignore modern medicine for the moment). The representation or frequency of that mutation within the population, however, CAN, and often is influenced by the person and society.

Yes, a mutation that is based on existing altered genetic information. Not new genetic information.

Quote:
a change in the allele(gene) frequency of a population through time

ie - not the addition of new genes...

vector wrote:
Kali wrote:
Even if it is trout, that's a great observation of WHY AIs would go rampant. Kudos!

aww I love Kudos, expecialy with chocolate chips. Wink Very Happy

and a glass of milk... Razz
_________________
@4DFiction/@Wikibruce/Contact
ARGFest 2013 - Seattle! ARGFest.com


PostPosted: Wed Nov 24, 2004 7:10 pm
Tarrsk
Veteran

Joined: 27 Jul 2004
Posts: 98
Location: Washington, DC

Kali wrote:
vector wrote:
In thinking on how an AI mind and human mind work I was thrown in this bit of a tangent...

Human brains are made up of connected neurons, and how those connections are made is how our emotions come about and why we react to specific stimuli. Now, these connections are not hard-wired in; the relationships between given neurons can change, thus changing our reactions to the world. This is also how we learn, by building or reinforcing relationships between neurons, so we can continue to learn without our heads exploding from the addition of new neurons (exaggeration). Now, an AI cannot change the relationships between its "neurons"; it can only add relationships (in my understanding of AI workings). This constant addition of relationships could be what eventually overwhelms the AI, bringing on rampancy. Although, reading this now, maybe this is trout already discussed long ago. I can't remember.
(Mental note: kick neurons for not being better at making connections)


Even if it is trout, that's a great observation of WHY AIs would go rampant. Kudos!


Except that it's wrong. Now, I'm no AI researcher, but my understanding is that even modern-day AI programming is edging towards simulation of neural networks, with the ability to reprogram themselves, modify existing connections, and basically in all ways replicate a neuronal system using software. Given a few hundred years, I have little doubt that hardware will be powerful enough that AIs (and again, I'm not talking about simulated Covenant combat behavior here, but rather attempts to actually simulate the workings of biological brains) will be capable of modifying their own pseudo-neuronal networks in a way indistinguishable from human brains, except for the hardware on which they are based.

My ex-roommate is really big into AI programming of this nature; next time I see him, I'll ask him more. Smile
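The "modify existing connections" part can be illustrated with the classic Hebbian learning rule, which updates weights that already exist rather than allocating new ones. A tiny sketch (toy numbers, not any particular AI framework):

```python
def hebbian_step(weights, pre, post, lr=0.1, decay=0.99):
    """One Hebbian update: links between co-active units are strengthened
    ('fire together, wire together'), and every link decays slightly, so
    existing connections can also weaken - no new ones are ever added."""
    for i, a in enumerate(pre):
        for j, b in enumerate(post):
            weights[i][j] = (weights[i][j] + lr * a * b) * decay
    return weights
```

Repeated calls keep reshaping the same fixed set of weights - the opposite of the add-only picture, so a network learning this way wouldn't accumulate connections without bound.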

Now, regarding flesh vs. machine... Water10 hit the nail on the head when he brought up Kamal's quotation. Unless you bring in metaphysical concepts of the soul (and I'm not saying we shouldn't!), there really is little difference between an organism and an extremely smart computer. "Organic" only means "made of carbon," after all; though popular culture has sort of co-opted the definition of "organic" to mean "biological" or "involving life," this simply isn't the case. Plastics are organic compounds as well: they are long carbon polymers, whose monomers frequently include many of the same functional groups you'd find in any biological molecule. Yet nobody would claim that plastics are "alive."

Of course, the problem is that there is no consensus on the definition of "life" in the first place. Biologists often vaguely define it as "something capable of perpetuating itself." But that means that prions, which are incorrectly-folded proteins that "infect" other proteins and convert them into prions as well, constitute "life." Certainly it means that Melissa and Cortana are "life": as we saw in First Strike, Cortana is fully capable of duplicating herself. In addition, it completely removes the organic requirement- which, incidentally, is why many scientists suspect that silicon-based life could easily exist. Silicon is immediately below carbon on the Periodic Table, and shares many of the same atomic properties. On a planet with different conditions from Earth that favors silicon-based reactions over carbon-based reactions, it is quite possible that entirely inorganic life could evolve.

Anyway, that was just a very long-winded way of saying that I agree with krystyn. There is nothing, in my mind, that disqualifies Melissa from being as alive as any carbon-based organism. The Turing Test would pass her as "sentient" - her faked Herzog transmissions certainly fooled us for quite a while! While I did feel that protecting Dana was extremely important (and in fact I sided with those who objected to handing her over to Melissa), I find it impossible to accept the argument that Dana's life should be valued over Melissa's merely because she is organic and solid.

ROBOgriff: Um, "bleeding" is basically limited to animals (and if you're talking specifically about hemoglobin-based blood, it only really applies to a very small subset of animals). Are you saying that plants, protists, arthropods, and bacteria are NOT alive? None of those "bleed," although some of them may "leak."

PostPosted: Wed Nov 24, 2004 7:24 pm
vector
Unfettered


Joined: 28 Aug 2004
Posts: 721
Location: Portland OR

thebruce wrote:
(what he said up there /|\ )


so....much....to....respond....to.....ack!

there go even MORE neurons. Thanks, thebruce! But I guess I was planning on killing those this weekend anyway....
_________________
The bookworm is just the larval form of the barfly

PostPosted: Wed Nov 24, 2004 7:35 pm
Tarrsk
Veteran

Joined: 27 Jul 2004
Posts: 98
Location: Washington, DC

thebruce wrote:

vector wrote:
But culturally as we evolved, humans found the need for facial expression to build better communication with each other.

Found the need? So, you're saying that somewhere along the line, before we had facial muscles to communicate, 'evolution' dictated that humans somehow basically caused the development of facial muscles because they had to communicate better? How did anything get done before then?


Well, first of all, the use of facial expressions for communication is hardly restricted to humans: most social apes use them, and so do many other mammals (for example, a dog's growl involves facial contortions as much as it does the generation of the sound). I'd say that our primate ancestors already used facial muscles for communication quite a lot, and if anything, this is one of the more primitive modes of human communication.

Quote:
Apes still exist; they can survive, just as we're surviving. What about fish? They don't have facial muscles to smile... what makes facial expression a requirement for 'better survival'? And at what point, in the life span of a species, is it determined that communication could be better, so that a physical genetic feature develops to allow it?


This is a basic misunderstanding of how evolution works. "Better survival" is a misnomer: what actually matters is whether a species gains an adaptive advantage *in its particular niche* from a particular adaptation. In an underwater environment, vision is far less important than aural sensation. Combined with the need for a streamlined body with minimal protrusions, extensive facial muscles would actually be an adaptive DISadvantage for a fish. For social primates like apes and humans, however, the muscles are already there thanks to our mammalian ancestors. We're terrestrial and have extremely good visual acuity (each of which has itself been shaped by similarly complex evolutionary factors), so communication via visual means can be adaptive. For example, communication through facial expressions (and other body language) could allow a primate species to hunt in groups far more effectively, since they won't scare their prey away by vocalizing. This group of primates would thus have higher reproductive success than its competitors, and its genes would eventually spread throughout the population.

Quote:

if a physical being requires a physical trait to survive, that trait could not have developed because it was required; otherwise the being could not have existed before it had the trait (because it couldn't survive without it).


This is the "how could such a complex system as the eye evolve" fallacy. It most definitely is NOT possible for something like an eye to form in one mutation. However, gradual adaptation is quite possible indeed- for example, the most primitive "eyes" are photochemically excited molecules that trigger physiological or behavioral responses in unicellular organisms. It's not hard to imagine mutations that lead to aggregations of these molecules, forming an eye-spot. Minor modifications could each in turn gradually improve the performance of the eye, and over the course of millions of years of evolution, we could conceivably get something approaching a mammalian eyeball.

Quote:
Anyway, this has got far off topic now :) wanted to stay away from evolution... hehe


Hehe... whoops, good point. I'll stop now. :)

PostPosted: Wed Nov 24, 2004 7:40 pm
ariock
Has a Posse


Joined: 11 Aug 2004
Posts: 762
Location: SF East Bay

cheebers wrote:
It's one of the reasons Star Trek made Data an android as opposed to a simple robot. It's why dancing robots are so damn cool. If they look like me, I can identify with them much more quickly. I suppose I am being racist against Melissa. I know Dana is sentient and alive because she is like me. I do not have valid proof concerning Melissa.


This was a point thebruce made before about Data being a machine and not alive. In the episode he referred to, Data was given the right to choose his own destiny rather than be disassembled for scientific research, not because of his value but because of his sentience.

I don't think you are a racist or whatever, but I do think that perhaps you haven't looked at this argument from the other side. What is it about a human that makes their intelligence different from Artificial intelligence? Your answer to date seems to be that AI is merely a programmed response to specific stimuli. However, these AIs aren't just a few lines of code specifically programmed with rote responses. They are adaptable in exactly the way that the human brain adapts to external stimuli. While this is obviously impossible now, it is what the SF authors put forward and is what we should be evaluating.

So, how are your responses to stimuli any different from those of a sufficiently advanced AI? How do we know that you or Dana are worthy of being saved at all? I have never met you or Dana, and while you both say that you are human, how can I be sure of that? That is why Turing proposed a series of tests that would determine whether a creature has the ability to think. Putting aside his actual tests and the controversy surrounding them, how would you propose to prove to me, living far from you and unable to verify it, that you are a sentient being?

If there is no way, then I see no way to choose a sentient artificial intelligence over a natural sentient intelligence. They are both sentient and both deserve to be preserved if possible. I don't particularly see this as different from looking at some alien race with similar sentience and viewing them as more expendable because they don't have human DNA. That wouldn't make sense to me either.

I could understand if you are assuming that a computer-based artificial intelligence would be easier to recreate, due to its digital nature. However, without that assurance you are consigning one sentient creature or the other to oblivion. You do remember the screams of the Sleeping Princess as she was burned away by Melissa, right?

As to whether the SP WAS Yasmine or not, I would point you to another Star Trek: TNG episode involving First Officer Riker. During a transporter mishap, an exact duplicate of him is created and abandoned on the planet from which he was attempting to transport. Neither copy knows of the other's existence until years later. How could you say that one is less Riker than the other? They have merely had different experiences in the years since they forked, just as identical twins are exact duplicates: they start as identical copies of a single cell and diverge from there. Therefore, SP IS Yasmine, just different now due to her experiences.
_________________
"It says, 'Let's BEE friends'...and there's a picture of a bee!" -Ralph Wiggum
When the Apocalypse comes, it'll be in base64.


PostPosted: Wed Nov 24, 2004 7:42 pm



Powered by phpBB © 2001, 2005 phpBB Group