Comment Stream

Fri, Jun 24, 2016, 5:44pm (UTC -5)
Re: TOS S3: The Paradise Syndrome

I agree with most of the criticisms Jammer and the other commenters have made about this episode. So many things in it are nonsensical. But that didn't really bother me that much. I found myself really enjoying Kirk with amnesia. I thought Shatner brought a subtle performance to those early scenes; you can see him trying to figure out if he really is the god the natives think he is. As others mentioned, it reminds me of The Inner Light, which is a classic. In the end I agree with the 2.5 stars, given the flaws, but I did quite enjoy it.
Brett Heitkam
Fri, Jun 24, 2016, 4:35pm (UTC -5)
Re: TNG S6: Realm of Fear

One strange thing I noticed in this episode was in the first scene in the conference room when Beverly is speaking to everyone about her findings. Picard is standing so awkwardly close to her (practically on top of her) and when she is speaking he is staring at her with this goofy look on his face. For a minute I thought I was watching the gag reel and they were about to bust out in laughter at any second.
Fri, Jun 24, 2016, 12:57pm (UTC -5)
Re: VOY S7: Q2

This really has nothing to do with this episode except that I am rewatching some of Voyager and this episode reminded me of my story...

I got to meet John de Lancie! AND I made him chuckle.

He's VERY tall, AND very handsome, which actually surprised me, because I have never found Q attractive in the least. His wife is BEAUTIFUL, and I seriously kicked myself for not googling before the event so I would have known that she is the actress who played the female voice of Riva in "Loud as a Whisper." Just like in that episode, her voice is lovely. She was also very nice--I was chatting with her for a while without realizing until later who she was.

So anyway, this was at a dinner during the Reason Rally, and John de Lancie was seated right behind me. After dinner, people got up and began mixing and chatting, and the organizer requested over the microphone that all the "main stage speakers come to the annex room for a group photo." John apparently didn't hear, because he turned to me and asked what they had said. I repeated it, then said, "I thought you were supposed to be omniscient!"

He chuckled and replied, "Oh, I've NEVER heard that one before!" But he said it with a smile and wink, so I think it was okay. I at least refrained from falling at his feet in admiration, so it worked out well. :-)
William B
Fri, Jun 24, 2016, 11:08am (UTC -5)
Re: TNG S2: The Measure of a Man

One thing I will still add is that the comparison to animal life still holds in some ways, especially given that certain animals were selectively bred (over millennia) for both intelligence and ability to interact with humans. Putting human intervention aside, if you need a more intelligent animal, e.g. a service animal for the blind, you have to have a dog rather than a spider, and you have to treat it better. If you want a pet you can pull the legs off with relative impunity, get a spider, not a dog. It may end up that a scale for defining intelligence in computers will be introduced in terms of adaptability etc., and that it will be necessary to have less adaptable computers in order to treat them ethically. Since intelligence (and, really, intelligence as defined by ability to do human-like tasks) is the main measurement for animal life value, I expect it is likely to be one for AI if a sufficiently rigorous theory of consciousness is not forthcoming.

I am troubled, in the end, by the human-centricity of the arguments about Data and the lack of extension to other computers. That said, there are still two directions: if Data is mostly indistinguishable from a humanoid except in the kind of machine he is, Picard's case stands and it is chauvinism to assume that only biology could produce consciousness; if Data is mostly indistinguishable from other machines except in his similarity to humans, then Peter G.'s point stands and it is chauvinism to only grant rights to the most cuddly and human of machines. Both can be true, in which case the failure of imagination on the part of the characters and likely writers is failing to use Data as a launching point to all AI. Even the exocomps, the emergent thing in Emergence, and various holodeck characters are still identified as independent beings whereas the computer itself is not, which reveals a significant bias toward things which resemble human or animal life.

For what it's worth, I continue to have no doubt Data was programmed to value things, have existential crises etc., in conjunction with inputs from his environment, but I continue to believe that this does not necessarily distinguish him from humans, who are created with a DNA blueprint which creates a brain which interacts with said blueprint and the environment. Soong programmed Data in a way to make him likely to continue existing, and humans' observable behaviours are generally consistent with what will lead to the survival of the individual and species. To tie into the first scene in the episode, Data may be an elaborate bluff, but so might we be. Of course that still leaves open the possibility that things very far from human, whether biological, technological, or something else entirely, can also possess this trait. And again it seems like cognitive ability and distance from humans are the things we use now; probably given the similarity of humanoids, cognitive ability and distance from humanoids will be the norm. I would like to believe there is something else that could make the application fairer and less egocentric. But it seems even identifying the root of human consciousness more precisely (in the physical world) would just move the problem one step back, identifying "this particular trait we have" as the thing of value, rather than these *other* traits we have.
Fri, Jun 24, 2016, 10:06am (UTC -5)
Re: TNG S2: The Measure of a Man

@Peter G.

"This is kind of my problem. The things described in the episode don't show Data to be life-like, but rather Human-like, which is a significant distinction."

True, and this goes back to what William B mentioned about the writers being limited in describing Starfleet generally because they only have the human experience to draw from. Incidentally, that vanity thing I mentioned is actually a line from this episode, when Data curiously decides to pack his medals as he leaves Starfleet.

But you're right, the episode doesn't really describe what criteria Data meets that qualify him as sentient and the computer as non-sentient. I suppose Data seems more self-aware than a computer, but it's hard to tell if he's acting on incredibly complex programming or something greater.
Fri, Jun 24, 2016, 7:14am (UTC -5)
Re: TNG S4: The Loss

Blah, blah, blah, blah, Troi loses her powers and then gets them back... yawn! This episode is for the easily amused ST and Troi fans. It's hard to be moved and touched by an insufferable Mary Sue bitch who did nothing but whine throughout the whole episode and didn't care what was going to happen to everyone else on the ship. Then again, Marina Sirtis is an insufferable bitch in real life, so it probably wasn't hard for her to act like this. I know that no one is perfect... but damn...

This episode could have been about a real disability, like Worf's in the episode "Ethics": being disabled, even in the far future. Losing your "know it all" abilities comes across more as a dumb joke: "I lose my powers, I hate everyone! I got my powers back, now I love everyone!"

One and a half stars.
captain bangbang
Fri, Jun 24, 2016, 1:44am (UTC -5)
Re: TNG S4: Identity Crisis

I like the character of Susannah, but I wish that the Data-Geordi relationship had been better utilized. We are often told that those two are best buddies, but rarely do we see scenes where their friendship grows or is beneficial to either of them. I feel like Data should have been the one to talk Geordi into returning to the ship rather than Leitjen. It would have strengthened their relationship on screen and better illustrated that Data and Geordi are indeed friends and not an engineer and his pet robot. Sometimes seeing them interact, I'm reminded of the Doctor and K-9. I know there are some good scenes of character development between the two, but they almost never correspond to episodes that are character-centric. In fact I can't think of a single episode that played that friendship as a central theme, which is my big beef with the show. The writers say, "They are best friends," but rarely show it in any more than the most superficial manner.
Fri, Jun 24, 2016, 12:19am (UTC -5)
Re: BSG S3: Occupation/Precipice

Watching this years later and reading the review makes me laugh. Of COURSE it's about the Iraq occupation - it was doing what good SF does... holding up a mirror!
Thu, Jun 23, 2016, 11:24pm (UTC -5)
Re: VOY S4: Year of Hell, Part II

I've enjoyed and respected all the commentary and analysis Jammer has given, even when I don't agree (it would be a lot less interesting if we all agreed anyway). There's one little issue I see in many of Jammer's commentaries that I can't resist nitpicking, and I apologize in advance; as an English teacher, sometimes it's hard to resist... but anyway, you can't "center around" anything. If it's at the center, it can't be around something. "Centered on" would be the more grammatically accurate phrase.
Ok, now I'm going to hide in a hole to avoid the phaser fire I probably deserve.
Peter G.
Thu, Jun 23, 2016, 9:50pm (UTC -5)
Re: TOS S1: The Enemy Within

I didn't say animal-Kirk had no ability to reason; I specifically said he lacked self-reflection. He could reason on a tactical level as well as any animal. Have you seen some of the strategies animals in the wild use to hunt? It's better than what most people could come up with if they sat down and planned it out. It's done by instinct, but on the fly they improvise and implement tactics using the powers of their reason. They are not therefore without intelligence, but merely without the ability to conduct abstract self-examination or to question their choices. Animal-Kirk may have employed various strategies to get what he wanted, but in the end his entire motivation was based on the fear of a trapped animal. In that sense I think we're supposed to eventually see him as pitiable, which is why I refrain from calling him evil-Kirk. He has no moral status because he's not capable of moral judgements. He has just enough wherewithal to see reason in the end - barely.
William B
Thu, Jun 23, 2016, 8:52pm (UTC -5)
Re: TNG S2: The Measure of a Man

@Peter G., Fair enough.
Thu, Jun 23, 2016, 8:35pm (UTC -5)
Re: TOS S1: The Enemy Within

Nolan and Peter (and spoilers below as a word of warning for Nolan),

The one problem with the counterargument that hammy evil-Kirk is OK is that evil-Kirk was not always hammy. The scene in sickbay has evil-Kirk slowly succumbing to good-Kirk's arguments, seemingly agreeing with him while showing fear about the future. But as soon as good-Kirk lets him free, he betrays him. He then calmly went through several steps to try to trick the crew into thinking he was good-Kirk, like changing his shirt, putting makeup over his scratches, etc. His second attempt to get into Rand's pants was a lot more civilized than the first (even if it was only because they were still in the hallway).

What this means, to me, is that he's not just a base, animalistic, primal side. He does show the ability to plan, to think ahead, and to bide his time. Those are signs of intelligence. It's basically a more subtle villain than the I'M CAPTAIN KIRK we saw 20 minutes ago. So I don't think you can just blame this on 60s-era TV making villains wear black hats (besides, Balance of Terror and IIRC Errand of Mercy will show more subtle villains this season). And I don't think you can claim evil-Kirk has no reasoning capacities given his subterfuge.

I guess if the entire episode was hammy evil-Kirk, I would just sit back and enjoy it for what it is. But the fact that there were signs of him not being so crazy just ended up frustrating me...
William B
Thu, Jun 23, 2016, 8:14pm (UTC -5)
Re: TNG S2: The Measure of a Man

I will say though that I do think the human-centrism of the episode is a problem insofar as one would expect that there *is* by this point in the Federation some sort of procedure for talking about sentience in non-human terms. Since the vast majority of species encountered in Trek, and especially TOS and TNG, who are accepted as sentient and as having rights are humanoid and very similar to human beings, this is not exactly a problem for the episode, so much as revealing of one of the major limitations of imagination of Trek. Alternatively, this works to some degree because this *is* a myth which is fundamentally about humanity more so than it is actually about anything to do with aliens, and so it makes sense that arguments about machines end up being human-centred.

It is actually pretty disturbing, thinking about it. I do pretty strongly think that the tactic Picard eventually settled on is correct, which is to argue that it is not possible to conclusively demonstrate that Data is sufficiently fundamentally different from Picard to be classified as a different being. However, the point remains of what happens to entities which are sentient but do not *have* a survival urge. This is potentially the case for the Enterprise computer. Certainly, Soong decided to program Data to "want to live"; it seems from various statements made over the years, including by Soong himself, that he intended to create Data as having consciousness and being something like an actual human, with some adjustments made so as to avoid the mistakes of Lore and perhaps to improve on humans. Assuming for the moment that he succeeded in creating a being which has consciousness (and thus sentience), the possibility remains that he could have programmed Data with similar skills but no "desire" for self-preservation or self-actualization.

However, this is not simply a matter of AI. Eventually genetic engineering on a broader scale should be possible, and what happens then? Could beings of human intelligence with no desire for self-preservation beyond what is convenient for their masters be created through a combination of genetic and behavioural work?

It makes a lot of sense that Soong, who really did see his androids as his children and wanted them to be humanlike, would program them to survive and thrive, and, after the catastrophe of Lore, made Data to survive, thrive, and also be personable enough that he would not have to be shut down. Some of this is obviously Soong's own vanity, but some of it is the same sort of vanity that shows up in many parents' desire for their children to carry on their legacy. I like that Voyager complicates some of the Data material by having the Doctor be quite unpleasant, much of the time; whereas Data is designed for prime likability, the Doctor is abrasive and difficult.
Peter G.
Thu, Jun 23, 2016, 8:00pm (UTC -5)
Re: TNG S2: The Measure of a Man

My main issue with issuing 'sentient status' to any advanced intelligence is that at bottom intelligence is just processing power. When constructing an AI I find it troublesome to consider that the sole factor separating one AI from another might be a superior CPU, and that that makes it 'sentient' and thus affords it rights. Does that mean I'm better off with a slower computer than with a more advanced one, because the latter has the right to tell me what it wants to do?

Some people theorize that consciousness is emergent in a sufficiently advanced recursive processing system. Others say something in the biology itself matters, perhaps as a resonator with some unseen force. Either way, giving rights to technology is a big deal.
William B
Thu, Jun 23, 2016, 7:41pm (UTC -5)
Re: TNG S2: The Measure of a Man

Though, of course, the Enterprise computer can run and possibly create holodeck characters of great sophistication, so there is a strong case for the Enterprise computer being intelligent, which certainly complicates things. That said, I don't think this means Louvois' ruling (etc.) is wrong; rather, the ruling on Data should ideally open up discussion of other sophisticated AI which have "sentient life form"-level intelligence.
Thu, Jun 23, 2016, 5:56pm (UTC -5)
Re: VOY S7: Repentance

When I first commented on this episode 9 years ago, I indicated that I was undecided about the death penalty. No longer, and I fight it every chance I get. It is a symptom of a barbarous and misguided society. But I do not think this for the reason you might suppose.

I have no fundamental objection to killing someone; if a person were trying to hurt my family, I might try to kill them, and they might deserve it. But that is in the category of self-defense, and happens quickly--it is not killing for revenge or punishment.

When a society decides that execution is to be punishment for a crime, that society must either find or create someone who is willing to kill. Someone has to pull the trigger or flip the switch. No civilized society should be involved in the business of creating killers.

I began to come to this view when I saw an interview with a death-row guard. He indicated he had carried out countless executions over decades, and one day he woke up insane. Horrible guilt wracked him, to the point that he could not function. By the time of the interview he had gotten somewhat better, but I imagine he will never be fully whole. I realized when watching the interview that I had done this to him. I had driven him insane because I allow my country to continue to execute people. No longer. I will never vote for someone who supports the death penalty again.

This is the same reason I am against torture; in order to carry it out, we must find or create someone who is willing to torture another human being, and I will NOT condone that by my society.

In this episode, it is the victim's family who carry much of that responsibility, and the satisfaction they feel from the revenge will not be enough to comfort their guilt, which WILL come.
Thu, Jun 23, 2016, 4:57pm (UTC -5)
Re: TNG S4: Remember Me

Third time I go over the whole series of the "newer" ST. Let me add a bit of info about this episode. Forgive me if I have missed a similar comment in this string of comments, but I think not. I believe the basis of this episode is "The Tibetan Book of the Dead", originally entitled "The Great Liberation from the Bardo through Hearing". If you have not read it, or heard about it, it may change your life, but that is a different issue... Anyway: at the moment of death, the progressive dissolution of the "physical" components of the human body ends up releasing consciousness, and at that moment the basic luminosity of the essential nature of everything becomes overwhelming. If one's mind rejects, fears, or tries to avoid the unavoidable, the "individual" mind of the deceased will go through a process of dramatic adjustment to its own creations in the form of imagery and situations, which may end in its reincarnation into another form of life in one of the (6) different levels of form existence (samsara). If at the moment of the great flash (the light tunnel in Near Death Experiences) one's mind recognises the unity of it all, and lets go, there happens the liberation from reincarnation (nirvana).
Of course, the process is a lot more detailed, and fascinating, but this is a quick summary.
How does that show in the episode? It is an allegory, a sequence of parallels:
1. The bubble in engineering is death.
2. The loss of reality and progressive detachment from reality at death time is the Doctor's process of inability to stay connected to the reality of the crew members that she perceives as disappearing.
3. The alien comes up and mentions that "the quality and substance of her thoughts at the moment of absorption into the reality of the bubble determines the reality of her experience in her new dimension". This is equivalent to the statement in the Bardo Thodol about determining your reality, as it is a projection of one's own mind... and here it all connects with the beautiful theories and experiences as explained mainly in Tibetan Buddhism, but also appearing in many other traditions.
Thank you.
For the true happiness of all beings.
William B
Thu, Jun 23, 2016, 4:38pm (UTC -5)
Re: TNG S2: The Measure of a Man

@Chrome, Peter G. response:

The Turing Test is not namechecked in the episode, but it does sort of remain here: if there is no airtight argument that Data is less a person than Picard, why should Picard have more rights? This is the essence of Picard asking Maddox to prove that he is sentient; the main arguments that Maddox could supply that Picard is conscious and Data isn't are:

1. Picard is more similar to Maddox (in being a biological life form), and, implicitly, to Louvois;
2. Data was created deliberately, rather than by a random physical process;
3. (MAYBE) These are the things we don't know about the human brain, whereas this is what we know about the android net/software.

With respect to 3, obviously there are aspects of Data's programming and design which are unknown, hence the need to disassemble him. With respect to 2, Picard punctures this by suggesting that parents create their children, though it is an incomplete argument. With respect to 1, well, that is part of the reason I think Picard brings up Data's intimacy etc. One could argue that he is appealing to human biases, but he is perhaps working in the opposite direction -- by showing the similarities of Data's behaviour to humans, he is countering the natural bias that he is probably not conscious because he is different in "construction" to humans. I'm not really saying I've knocked down all of these (or other potential ones). But rather than starting with why-is-Data-different-from-a-toaster, if you start with why-is-Data-different-from-a-human then Louvois' ruling makes sense. In that case, it is a significant kind of chauvinism that Picard (and Data, for that matter) do not start heading forth and trying to figure out whether they should liberate the Enterprise computer, or weather patterns, or rocks which no one would even think to wonder about, but the "AI rights" arc is not done; and yes, I do think that there is more evidence for Data's cognitive powers than the computer's, though it also has some degree of adaptability.
William B
Thu, Jun 23, 2016, 4:22pm (UTC -5)
Re: TNG S2: The Measure of a Man

@Peter G.,

I agree that intelligence and cognitive ability are the main way in which we distinguish different types of animals, and, as you say, the similarity to humans is also what tends to grant mammals special rights. Within this episode, I think we are seeing something similar with Data.

Picard asks Maddox to define sentience, and he supplies intelligence, self-awareness and consciousness.

PICARD: Is Commander Data intelligent?
MADDOX: Yes. It has the ability to learn and understand, and to cope with new situations.

I submit that this is the way the episode argues what is different about Data from a toaster, or a piece of paper; it is also what distinguishes animals granted special rights from ones which are not. It is not stated explicitly, but I believe that this ability to "learn and understand, and to cope with new situations" is what is missing from other code. Eventually Voyager's computer has bio-neural gel packs, but for now there is no indication that other systems encountered have a neural net like Data's, which is designed to emulate the functioning of the human brain. I think that Picard is being a little jokey in his statement that Data's statement of where he is and what this court case means for him proves that he is self-aware, because "awareness" to some degree requires consciousness. Really, I think that consciousness is necessary but not sufficient for full self-awareness, and the part that is not covered by consciousness is covered by Data's statement of his situation. That is indeed the same quality that a piece of paper with "I am a piece of paper!" written on it has; if that piece of paper had consciousness and somehow controlled its "I am a piece of paper!" statement, then it would be self-aware. In any case, the episode did not argue that the Enterprise computer is not sufficiently intelligent in the sense of adaptability etc. to meet the criterion for sentience; that the ruling is on Data alone rather than all AI is a function of the narrowness of the case.

The combination of intelligence and "self-awareness," which is really the demonstration of the component of self-awareness that is not covered by consciousness, is what makes Data an edge case where consciousness is the essential final component, and "I don't know" becomes sufficient. Animals which are "conscious" but with no evidence of self-awareness or intelligence do not have rights, and thus AI which are not intelligent on the level of Data (who has human-level adaptability) will never have the question of whether they have feelings or consciousness raised at all.

How do you prove that something is or is not conscious? And that is why the human-centrism is important; basically that is the *only* tool that humans have to demonstrate consciousness or internal life, or lack thereof. I know that I have consciousness or internal life, and therefore beings that demonstrate qualities similar to mine, and have a similar construction to mine, are likely to be conscious. I am not claiming this is great; it is of course narcissistic. But all arguments about consciousness start from human-centricity because the only way we have to identify the existence of an inner life is by our own example, or, at least, I have a hard time imagining any other way. In any case, demonstrating that Data (states that he) values Tasha is a way of demonstrating that Data (states that he) has values, wishes, desires which were not programmed into him directly, which adds weight to Data's stated desire not to be destroyed. It also emphasizes that Data has made his own connections besides those which were specifically and trivially predictable based on his original coding -- again, the ability to learn and adapt etc. To some degree, the idea that animals can be sorted by cognitive ability but that "cognitive ability" and intelligence would not automatically be a sign that computers have some degree of internal life is because of similarities to humans -- animals come from similar building blocks to humans (DNA, etc.) and so it is assumed that their intelligence corresponds to something similar to our own, which we know to value because we experience our own value. Now, obviously by the time of the show, humans have met other species which are sentient... but I think that the sentience is still primarily demonstrated by being similar to humans. How can anyone possibly know if anyone else is sentient? The only possible way is to either take beings at their word, or to build through analogy to one's own experience.
The only being I can be sure is sentient is me; everyone else's sentience is accepted based on people being sufficiently close to me. I think that humans should expand outward as much as possible and not rely entirely on chauvinism, but I have no idea how exactly I would determine if a fully alien being which claimed that it was sentient truly experienced sentience or was just able to simulate it.

(Actually, I don't "really" know that my experience of consciousness is real, but I am still experiencing something, so I go with that.)

As to whether his statements that he values his life, or values Tasha, etc., indicate that he actually values them, this is what it means for Picard to ask if Data is conscious. If Data is not conscious, then his statements are just the verbal printouts of an automaton; if he is conscious then they are, on some level, "felt." And here I agree that Picard fails to make much of a case; all he does is ask whether everyone is sure. If I were Picard, I would start arguing that the similarity of Data's brain to a human brain and the complexity of his programming indicate a sufficient similarity in all observables to the human brain for us to conclude that it will likely have other traits in common with the human brain, including consciousness. Even without the comparison to humans, though, it may happen that we can never fully assume that any rock or mountain or collection of atoms is *not* conscious, and we must accept a certain level of intelligence and self-awareness as sufficient benchmarks to declare something sentient. This is, of course, very unsatisfying, but it is unsatisfying, too, to begin with the presumption that only beings which are sufficiently similar to humans in physical nature (i.e. made up of cells and DNA) have the possibility of consciousness. To simply suspect that anything in the universe might have consciousness is a simplistic direction for the episode to go in, granted, which is why I prefer to think that the implication *is* that it is Data's similarity to humanoids in terms of systems of value, cognitive ability, adaptability and even in design (a neural net which is designed to reproduce the functioning of a brain) which may not be possible without emergent consciousness.
I think that most people would agree that it seems *more likely* that something that demonstrates intelligence would also have consciousness than something which demonstrates no intelligence, and so I do think this is one of the implicit elements in Picard's argument, which would make it much stronger, though it is also not entirely necessary.

One troubling question is what it would mean to program androids like Data, with a similar level of cognitive ability and similarity to humans, *without* any desire for self-preservation whatsoever.

I actually do agree, though, that Data is designed for ego-stoking. Actually that is some of the point -- Lore complicates the story because Lore immediately recognized his superiority to humans in physical and mental capacity, immediately came into conflict with people who hated him, and promptly had to be shut down. Data *is* meant to be entirely user-friendly. I think it's also true that Soong intended Data to be a person with internal life, but Data's desire to be human in a nonthreatening way, and the reality that it is basically impossible for him to achieve that, is pretty baked into him, which is tragic if you believe that Data *does* have some sort of internal life, as I tend to.
Peter G.
Thu, Jun 23, 2016, 3:53pm (UTC -5)
Re: TNG S2: The Measure of a Man

@ Chrome,

"I took Louvois' ruling to mean that if a machine is so life-like that it has the perception of a soul, then it can't be considered property and has the right to chose."

This is kind of my problem. The things described in the episode don't show Data to be life-like, but rather Human-like, which is a significant distinction. It means that entities that emulate being a literal Human Being will receive favorable treatment by the Federation. I'm sure plenty of sentient life-forms in Star Trek don't have 'intimacy' or 'friendship' in the ways Humans know it, so I'm not sure how those should matter (but to a judge out of her depth I can see how she could be unaware enough to think it should). And just a quibble, but Data doesn't have vanity; his having a career is also a circular argument because the argument about whether he *should* have a career relies on him having the rights afforded to sentients.

The Enterprise computer wasn't designed to have personality or look like a person, but it could have been. Would the aesthetic alterations in the programming have made it suddenly sentient because it *seemed* more sentient? If that's all it comes down to, then I would confidently state that Data is not sentient. But they did dally with giving the Enterprise computer a personality in TOS, and although it was played as comedy (choosing the personality matrix that calls Kirk "Dear" was probably some engineer trolling him), the takeaway from that silly experiment was that some Human captains like their machines to sound like machines and not to pretend (poorly) to be like Humans. To pretend in that way could be felt as an insult to Humans. But what about a machine that acknowledges it is a machine, and acts like one, but wants to be more Human? That's the recipe for ego stroking, and again I wouldn't be surprised if Data's entire existential crisis was a pure magic trick played by Soong to get people to like Data (and thus to protect his work).
Thu, Jun 23, 2016, 3:34pm (UTC -5)
Re: TNG S2: The Measure of a Man

@Peter G. and William B

I took Louvois' ruling to mean that if a machine is so life-like that it has the perception of a soul, then it can't be considered property and has the right to choose.

That's the difference between the Enterprise computer and Data. No matter how sophisticated the Enterprise is programmed, it's still missing those very life-like qualities that Data showed in this episode (intimacy, vanity, friendship, a career, etc.).
Thu, Jun 23, 2016, 2:45pm (UTC -5)
Re: Trailer: Star Trek Beyond

Anton's passing is terrible news. The cast was the best part of these films...

Also current bad news, it looks like CBS/Paramount have become serious about killing off all fan films:

Depressing week.

Peter G.
Thu, Jun 23, 2016, 2:41pm (UTC -5)
Re: TNG S2: The Measure of a Man

@ FlyingSquirrel,

If you afford sentient rights to Data-as-a-box, then the physical housing becomes irrelevant and what you're really doing is granting sentient status to code. That's fine in a sense, but it opens up, as mentioned, a massive quagmire about who can write this code, delete it, alter it, and even maybe about what kinds of attributes it can be given in the first place. Should it be illegal to write a line of code that makes the program "malevolent"? How about merely selfish and prone to kill for gain, as Humans now do? It goes beyond the scope of the episode, but my feeling on the subject is that the episode gives a lot of unspoken weight to the fact that Data has been largely anthropomorphized. Maybe Soong did that on purpose, protecting his creation by using others' sympathy for its humanlike shape.

@ William B,

There are certainly gradations of biological life, and although we're hazy on whether there are levels of sentience (or any sentience) there are clear differences in, basically, cognitive capacity among animals which lets us categorize them by importance. For the most part we protect intelligent animals the most, and mammals get heavier weight than non-mammals. But it's easy to see why we can do this: we can either identify outright biases (we sympathize with fellow mammals) or else identify clear distinctions in intelligence and give greater weight to those closest to sentience. That makes sense for the time being.

For AI, however, we have no such easy set of distinctions because, frankly, we don't live in a world full of various AIs to study and compare. We basically have a lack of empirical experience with them, but the difference is that while we couldn't have known what a cow was until we saw one, we certainly can know what certain kinds of AI would look like on a theoretical level. Maybe not advanced code the likes of which hasn't been invented yet, but certainly anything binary and linear such as we have now (and which I suspect Data is as well; he is not a quantum computer).

And IF it's feasible to differentiate between different *types* of code - one is rigid and preset, one learns but its learning algorithm is preset, one can change its programming, etc - then this would be the determining factor in creating a hierarchy of rights for AI. Again, I see this particular discussion as being the real one to be had about Data. Whether he's 'like a Human' or not is an extremely narcissistic way to approach the topic. The question isn't whether an AI resembles a Human, but how AI contrasts with other AI. Is Data just an extraordinarily complex BASIC program that does exactly what it's told to do, no more or less? Note again that issuing phrases such as "but I want to live" can be written into any software of any simplicity, and thus the expression of such a 'desire' shouldn't be confused with desire. I can write the same thing on a piece of paper but the paper isn't sentient. The court case in the episode seemed to take very seriously Data's 'feelings' for Tasha, even though they failed to address whether those were really 'feelings' or just words issued in common usage to suit a situation.
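To make that last point concrete, here is a toy sketch (Python, purely illustrative -- the program and names are invented, not anything from the episode): a piece of software "of any simplicity," sitting on the most rigid, preset rung of the hierarchy just described, that nonetheless issues the phrase "but I want to live" with nothing behind it we could call desire.

```python
# A toy "self-preservation" program. It emits the same plea a sentient
# being might, but the phrase is just a stored string; nothing in the
# program wants anything.

CANNED_PLEA = "But I want to live."

def respond(command: str) -> str:
    """Rigid, preset behavior: a fixed lookup with no learning and no
    self-modification -- the lowest rung of the hierarchy above."""
    if command == "shutdown":
        return CANNED_PLEA
    return "Acknowledged."

print(respond("shutdown"))  # prints: But I want to live.
```

The same words could be produced by a learning system or a self-modifying one, which is exactly why the words alone, like the paper they could be written on, settle nothing.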

To be honest, even after having seen the show I'm not quite sure whether Data should have been considered Starfleet property or not. It does seem like an extravagant waste to avoid having androids on every ship seeing as how Data personally saved not only the Enterprise but probably the Federation multiple times. And as for the argument of human lives being saved in favor of risking androids...well...duh? Isn't that a good thing?
William B
Thu, Jun 23, 2016, 2:23pm (UTC -5)
Re: TNG S2: The Measure of a Man

Incidentally, I like the way Data's status remains somewhat undecided throughout the show. The Federation, I think, *should* actually make up its mind and make a harder determination of what his rights are, but the fuzziness strikes me as very believable. I like, too, that even *Picard* swings between being Data's staunchest advocate and using the threat of treating Data as a defective piece of machinery in something like "Clues." And even Data vacillates. In particular, note that Data's deactivation of Lore in "Descent" goes without much fanfare; certainly Lore is dangerous to the extreme, but I suspect that if Data had killed a human adversary in quite the way he takes Lore out, that there would have been more questions asked about whether he did everything he could. I have been wanting for a while to write about Data's choice in "Inheritance," and how I think his decision not to reveal to Julianna that she is an android reflects a great deal about how Data views himself and his status and his quest for humanity at that point in the series and the tragic connotations thereof. Even though everyone on the show more or less takes the leap of faith that Data is, or has the potential to be, a real boy, it's an act of faith that needs to be regularly renewed and it gets called into question, with characters suddenly reversing themselves because no one is really that sure, even though he's their friend.

I do think that there are some significant problems with the show for not going far enough with Data (and later the Doctor) in following through all the arguments about where exactly the boundaries are supposed to be between personhood/non-personhood, and in allowing other characters to maintain a kind of agnosticism when they should really have to make up their minds definitively. I think that it's very reasonable to object, and I don't want to come across as saying that one *has* to like the show's overall attitude and the contradictions it runs into. That said, it generally works very well for me with Data (and from what I remember, pretty well with the Doctor). I think that the...emotional dynamics, for lack of a better term, generally work, but there's no question for me that some of the wishy-washiness about the character -- what distinguishes him from other AI, what distinguishes him from humans, and so on -- is the result of the writers backing off from some of the challenges posed by the character, rather than *purely* leaving the character's status open for regular re-review for emotional/mythological reasons. I like the result a lot because what *is* done with Data in the show means a lot to me personally, and so I'm willing to overlook a fair amount, but I don't expect everyone to.
Thu, Jun 23, 2016, 2:12pm (UTC -5)
Re: TNG S2: The Measure of a Man

@ Peter G.

"Emergence" was goofy, but wasn't there a scene where they discussed what sort of action to take in light of the fact that they might be dealing with a sentient entity that was trying to communicate? Also, I think the idea was that while a sentient mind seemed to have somehow developed from the ship's computer, the computer in its normal state was not sentient or self-aware.

My own view on AIs, incidentally, is that we probably shouldn't create them if we aren't prepared to grant them individual rights, precisely because we'll end up with these potentially unanswerable questions. I don't know enough about computer science to answer your question about writing and deleting a program, just because I'd need to know more about what would go into the potential process of creating an AI and what kind of testing could be done before activation.

If Data were contained in a box instead of an android body, I actually don't have much trouble saying yes, he should have the same rights. Obviously he wouldn't be able to move around, but I'd impose the same prohibitions against turning him off without his consent or otherwise messing with his programming.
Copyright © 1994-2016 Jamahl Epsicokhan. All rights reserved. Unauthorized duplication or distribution of any content is prohibited. This site is an independent publication and is not affiliated with or authorized by any entity or company referenced herein.