Review Text
In TNG's first bona fide classic, the nature of Data's existence becomes a fascinating philosophical debate and a basis for a crucial legal argument and Federation precedent. Commander Bruce Maddox (Brian Brophy), on behalf of Starfleet, orders Data to be reassigned and dismantled for scientific research in the hopes of finding a way to manufacture more androids with his physical and mental abilities. When Data says he would rather resign from Starfleet, Maddox insists that Data has no rights and takes it up with the region's newly created JAG office, headed by Captain Phillipa Louvois (Amanda McBroom), who serves as judge. Picard takes on the role of Data's defender.
This episode plays like a rebuke to "The Schizoid Man," taking the themes that were intriguing in that episode and expanding upon them to much better effect. What rights does Data have under the law, and is that the same as what's morally right to grant him as a sentient machine? Of course, one of Maddox's arguments is that Data doesn't have sentience, but merely the appearance of such. The episode cleverly pits Riker against Picard; because the new JAG office has no staff yet, the role of prosecution is forced upon the first officer. Riker finds himself arguing a case he doesn't even believe in — but nevertheless ends up arguing it very well, including a devastating theatrical courtroom maneuver where he turns Data off on the stand.
Picard's rebuttal is classic TNG ideology as put in a courtroom setting. The concept of manufacturing a race of artificial but sentient people has disturbing possibilities — "an entire generation of disposable people," as Guinan puts it. Picard's demand of an answer from Maddox, "What is he?" strips the situation down to its bare basics, and Picard answers Starfleet's mantra of seeking out new life by suggesting Data as the perfect example: "THERE IT SITS." Great stuff.
Still, what I perhaps love most about this episode is the way Data initially reacts to being told he has no rights. He takes what would for any man be a reason for outrage and instead approaches the situation purely with logic. He has strong opinions on the matter, but he doesn't get upset, because that's outside the scope of his ability to react. His reaction is based solely on the logical argument for his self-protection and his uniqueness. And at the end, after he has won, he holds no ill will toward Maddox. Indeed, he can sort of see where Maddox is coming from.
Trivia footnote: This is also the first episode of TNG to feature the poker game.
Previous episode: A Matter of Honor
Next episode: The Dauphin
283 comments on this post
Tres
Just wanted to agree with your review of "Measure of a Man" and wanted to add: the final scene, where Riker and Data speak in the conference room, when Data says, "Your actions injured you to save me. I will not forget that." Chokes me up every time. Wonderful writing.
Damien
"Measure of a Man" - absolutely a classic and one of my favourites, possibly favourite.
I only have one small quibble (well, it's a biggie, but not so that it detracts from the overall ep).
It just didn't seem realistic that a case of such huge potential importance would be prosecuted and defended by two people who have no legal training, have never tried a case, and are friends and colleagues serving on the same ship! It beggars belief that such a trial could take place, especially given its importance.
Surely the logical thing to do would have been to delay the trial until such time as trained lawyers could be gathered and a legally binding decision could be made, rather than leaving the decision open to appeal/overrule in the future on the grounds of improper procedure.
Trajan
'The Measure of a Man'. Ugh! I'm sorry, I hate it. I have no problem with the theme, but as a lawyer, I really, really hate it. It's a bit like the entire Doctor Who serial 'The Trial of a Time Lord', where the greatest and oldest civilisation in the galaxy apparently has a judicial system that bears no relation to any reasonable concept of 'justice'. No JAG officers, so you must prosecute? If you don't, he's 'a toaster'? No, if you insist on making that ruling I'll ensure that I have your head on a plate by the end of the day and you'll never practise law in the Alpha Quadrant again. As for turning Data off: it was his rights as a sentient being that were for the court to decide. Allowing him to be turned off constitutes assault, battery, actual bodily harm and possibly attempted murder if he had no reset button. And this is allowed in A COURTROOM?
Sorry. No stars from me. Actually, can I award negative stars??
J.B. Nicholson-Owens
More about "Measure of a Man": In addition to Trajan's objections, I'll also add that the episode strikes me as a huge dodge of the issue they set themselves up to decide.
One wonders how Data got to serve at all if his entrance committee only had one objection -- Cmdr. Maddox's objection -- and that committee based their decision on sentience like Data said.
But there's another problem: the slavery argument (should we or should we not let Maddox make a race of copies of Cmdr. Data?). The slavery argument only works if you already agree with what Capt. Picard had to argue. The slavery argument fails if you already agree with what Cmdr. Riker had to argue (Data is property; he can no more object to work than the Enterprise computer can object to a refit). It seems to me that the slavery argument presupposes the very thing the hearing is meant to decide, and therefore this argument has no place in this hearing.
And Cmdr. Louvois' finding essentially passes the buck (as she all but says at the end of the hearing): she has no good reason to find as she does but she apparently believes erring on the side of giving Data "choice" is a safer route.
I think this episode might rank as the most overrated TNG episode.
Dave Nielsen
"The Measure of a Man." I too loved this episode, but I can't help wondering how this question wasn't already decided years earlier. It seems to me Data's status would have had to be decided before he could join Starfleet, or at least before he could be given a commission. Then I partly agree with Maddox that Data could just be simulating sentience. With a sufficiently sophisticated computer there would be no way to tell. I guess the point is that there's no way to tell with anyone, but then there's a difference between the "programming" of biological life and that of an articial, constructed life form. It's also a bit cheeseball that they would have no staff just so that some of the principal actors won't just be getting paid to sit in the background. I wonder too if it was necessary for the Philippa to have been an old flame of Picard's. Still, with all that I still love and it still stands as one of TNG's best episodes.
Dave Nielsen
Trajan: "As for turning Data off; it was his rights as a sentient being that were for the court to decide. Allowing him to be turned off constitutes assault, battery, actual bodily harm and possibly attempted murder if he had no reset button."
Since Data's sentience was the question here, Riker couldn't be charged with anything for turning Data off, so that would be perfectly fine to do in a courtroom. Even after the ruling, it wouldn't have mattered unless he did it again. If Maddox's arguments had been upheld and Data were property that could be dissected against his will, he couldn't have the rights of a sentient being.
Trajan
Dave Nielsen: Since Data's sentience was the question here, Riker couldn't be charged with anything for turning Data off so that would be perfectly fine to do in a courtroom.
I disagree. You could 'turn off' an alleged human being with a baseball bat but it would produce no more evidence of his sentience than Data's off switch does of his.
X
Trajan: You could 'turn off' an alleged human being with a baseball bat but...
No. You could not 'turn off' a human being with a baseball bat in the same way that you can turn off a machine. You can either turn off a conscious part of a man's brain (the brain and the rest of the organism are still functioning) or kill him. You cannot turn off a human completely, as you can turn off a machine, and then turn him on again.
Trajan
X: You cannot turn off a human completely, as you can turn off a machine, and then turn him on again.
Sure you can. Just don't be so enthusiastic with the baseball bat and knock him unconscious with it. (Which, in my courtroom, will still get you locked up for grievous bodily harm...)
Patrick
The Original Star Trek had a second pilot and in some ways, TNG did too--"The Measure of a Man". This was the watershed turning point of the series: its thoughtful story was uniquely its own and not another riff on classic Trek. The actors were truly becoming comfortable in their characters' skins; callbacks to the series' own mythology, from Tasha Yar's intimacy with Data to a mention of Lore, made the fictional universe of TNG more real. Secondary characters like O'Brien and Guinan were weaving their way through the mythos. And last but not least: THE FIRST POKER GAME--a brilliant addition to the series that provided some of the best character moments and the classic final scene of the series.
"The Measure of a Man" and "Q Who" were an effective one-two punch that made the show the one we know and love.
Peremensoe
"I partly agree with Maddox that Data could just be simulating sentience. With a sufficiently sophisticated computer there would be no way to tell. I guess the point is that there's no way to tell with anyone, but then there's a difference between the 'programming' of biological life and that of an articial, constructed life form."
Is there?
It's a somewhat deeper question than the episode really addresses, but... what *is* sentience?
Is it physically contained in the actual electrical and chemical processes of neurons?
Or is it the *product* of a certain complexity of such processes?
If the latter, then not only is there no fundamental difference between biological and synthetic processors giving rise to the sentient function--but there is also no such thing as 'simulated' sentience. If the complexity is there, it's there.
xaaos
Data is the best!!! Loved the final scene between him and Riker.
ReptilianSamurai
Just saw the new extended, remastered version of the episode the other night, and it was absolutely fantastic. It really gives the story a bit of room to breathe, and better develops the guest characters (especially Phillipa's backstory with Picard) as well as really exploring Data's dilemma and the nature of being sentient. This version of the episode, in my opinion, is one of the best in all of Trek and I'm really glad they were able to give us this extended cut.
Hope Jammer reviews the extended version at some point; I'd be interested to hear his take on how it changes the episode.
Rikko
What a wonderful episode!
@ Trajan: I don't want to beat a dead horse, but I think you're being too hard on this ep for something it isn't. TNG is not trying to be 'law and order in space'. It's always about the bigger questions.
I can suspend my disbelief with stories like this, especially when I compare 'The Measure' to total fantasy wrecks like the black pond of tar in 'Skin of Evil' or the many energy life-forms from countless episodes.
Still, I won't deny that the lack of crew for a trial of this gravity was hilarious. The production staff must have been in dire straits during this season.
Shawn Davis
Greetings to all. I love this episode. One of TNG's classics, and it features one of my favorite characters, Data, in a most interesting position.
I have one question though. Riker asks Data to bend a metal bar in an attempt to prove that he is not sentient, and Picard objects by stating that there are many living alien species strong enough to do that. Captain Phillipa disagreed with him and told Riker to continue with the demonstration. My question is: why is Picard wrong? What he said about some aliens being strong enough to bend the metal bar, along with robots and androids like Data, seemed logical to me.
Thanks.
PeteTongLaw
It seems to me that the space station and the Enterprise-D are not appropriately scaled.
William B
I do like this episode, perhaps even love it, but I admit that I do find it hard to suspend my disbelief in portions of it related to the legal proceedings.
It does seem, as others have mentioned above, as if Starfleet should have settled this issue before; but on some level it does make sense that maybe they didn't, because Data's status is so unique.
That said, I do think the idea that Data would be Starfleet property because he went through the Academy and joined Starfleet is disturbing, because Data is only in Starfleet because he chose to do so. The Enterprise computer never *chose* to be part of Starfleet.
I suppose one resolution to this would be that since Data was found by Starfleet personnel (when he was recovered from Omicron Theta), at that point he 'should have' entered into Starfleet custody as property. It would also make sense if the reason that Data's status as having rights/not having rights was not extensively discussed (e.g. whether Data constitutes a Federation citizen) was that he spent all his time from his discovery on Omicron Theta to his entrance into the Academy with Starfleet personnel in some capacity or another, so that there was never a time in which he would need official Federation citizenship.
On some level it does make sense to me that Data would hang around the Trieste (I think it was?) after they discovered him until eventually a Starfleet officer there sponsored his entry into the academy.
I suppose that if Data had no sentience all along, and had a mere facsimile of it -- if Data genuinely WAS an object and not a person -- perhaps he would go to Starfleet ownership merely for the fact that Data was salvaged by a Starfleet vessel after the destruction of Omicron Theta, and since there are no living "rightful owners" with the colony destroyed (and Soong and his wife for that matter thought dead) it makes sense that Starfleet could claim something like salvage rights.
Re: the point raised by J.B. Nicholson-Owens, it is true that IF Data is property, then so would be a race of mechanical beings created in Data's image. It does not actually affect the case directly.
However, I do not think this is a flaw. Picard makes the point that one Data is a curiosity, but a fleet of Datas would constitute a race. Perhaps that was a leading phrase -- but instead we should say that a fleet of Datas would constitute a much larger set. The main purpose of this argument is, I think, to demonstrate that the consequences extend far beyond Data himself.
Put it this way: if there is a 99% chance that Data is property and a 1% chance that he is a sentient being with his own set of rights, then taking Data by himself, there is a 1% chance that a single life will be unfairly oppressed. But if there are thousands and thousands and thousands of Datas in the future, that becomes a 1% chance that thousands and thousands of beings will be oppressed. That is simply a much bigger scale and a much bigger potential for tragedy. If Louvois ruled that Data were property and he were destroyed, but he was the only creature destroyed, it would be tragic, but still only a single being. If Louvois ruled that Data was property and thousands of androids were produced and Louvois was wrong, then _based on that ruling_ a whole race would be condemned to servitude. The possible cost of her decision is much greater, and the importance of erring on the side of giving Data rights becomes greater as a result as well.
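To put rough numbers on that scaling (the figures are purely illustrative, not anything established in the episode): with probability p that Data is in fact sentient and N androids at stake, the expected number of beings wrongly treated as property is

\[ E = p \cdot N, \qquad p = 0.01:\quad N = 1 \Rightarrow E = 0.01, \qquad N = 10\,000 \Rightarrow E = 100. \]

The chance of the ruling being wrong never changes; only the stakes do.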
N.I.L.E.S.
Has anyone considered that the basic premise of this episode is unnecessary based on the show's own rules? The premise is that Data needs to be dismantled so that more androids like him can be created, but Data is dismantled every time he uses the transporter. Since the Enterprise computer is able to dismantle Data and reassemble him, it must have detailed information about his construction. Surely all Maddox needs to do is access the information stored in the transporter logs and he would have all the information he needs to replicate Data.
The above point aside, I really love the episode and the questions it raises about the point when a machine becomes conscious. I agree with those who have stated that this issue would have been settled before Data entered Starfleet, especially since the sole basis for Maddox's objection to Data's entrance into Starfleet was that he did not believe Data was sentient. The fact that the others on the committee allowed Data to enter Starfleet anyway suggests that they believed he was sentient.
I also agree that there were some aspects of the court scenes that were not as convincing as they could have been. For instance, since the issue to be decided is whether or not Data is sentient, I find it odd that no psychologists were asked to testify, since consciousness is part of what psychologists study. I also find it odd that there was no cross-examination when a witness testified. For example, when Data was on the stand, Picard asked him what logical purpose several items Data had packed served. In reference to his medals, Data replied that he did not know, he just wanted them, and in reference to a picture of Tasha, Data replied that she was special to him because they were intimate. Clearly Picard was trying to imply that Data had an emotional connection to the things he had packed, much as humans do. Riker could have easily undermined that premise on cross-examination by asking, "When you say you wanted these medals, do you mean you felt a strong desire to take them with you?" Data would have had to answer no, because by his own admission he does not feel anything. This would have reminded the audience that Data is a machine.
Grumpy
"...Data is dismantled every time he uses the transporter."
If you're suggesting that Data could be replicated like any other hardware, a fair point. Presumably something in his positronic brain is akin to lifeforms, which can be transported at "quantum resolution" but not replicated at "molecular resolution." But the issue was never addressed in the series.
Also, apparently Data's positronic brain is an "approved magnetic containment device," which the tech manual says is the only way to transport antimatter without "extensive modifications to the pattern buffer."
istok
This is the best TNG episode I've seen so far. Admittedly, I've only seen season 1 and half of 2. Nonetheless it is very compelling and it suspended my disbelief just fine. I don't care to dissect mainstream scifi television in great detail. Something will inevitably fail to add up. But overall, uncharacteristically for said mainstream television, this episode actually raised some deep issues, and it was done well, in its own context. It actually got me thinking: what is life? No, really? Seems very easy, but I have no more concrete answers to that than I do to the questions "what is the universe?" or "what is the earth's core really like?"
All in all, this was good television.
Frank Wallace
Wonderful episode.
I never saw any reason to question the legal elements of the episode. For one, Starfleet officers are multi-purpose types, given that the Federation doesn't have "police" or "armies" in the truest sense. Secondly, the reason for Picard being involved is explained early, and the other captain is a JAG member.
Lastly, the person who wrote the episode actually trained and practiced law as a career for several years. She will know enough about it to make it believable, and it DID seem believable. Plus, it's the idea behind the episode that matters. :)
Sam S.
I just wanted to add that this episode provides the term "toaster" for artificial life. This is apparently where the Battlestar Galactica reboot got the term for its artificial lifeforms.
SkepticalMI
This was basically an all or nothing episode. A concept like this could either succeed magnificently in raising philosophical points or fail miserably in cliches and preachiness. Thankfully, it hit the former far more often than the latter.
Yes, the courtroom scenes were hardly very legally precise (but heck, lawyer-based TV shows aren't very legally precise either). Unfortunately, I don't think either Riker or Picard did a very good job. Maybe that was due to the fact that it had to be short to fit in the episode. Of course, they could have cut out some of the Picard/JAG romance backstory for a better courtroom drama.
But it probably would feel incomplete no matter how long they took. In reality, it would probably be a very lengthy trial, so no showing in a 43-minute TV show could fully explore whether or not he's sentient.
And frankly, it isn't necessary. We already know the arguments. It really does boil down to a few simple facts: on the negative side, he was very clearly built and programmed by a person. On the positive side, he very clearly acts like he's sentient. And frankly, we don't know.
And that's probably what makes this episode work. They acknowledge and reinforce that. Picard's realization (actually Guinan's realization) to make the argument but avoid defining the scope in favor of the bigger picture was pitch perfect. This is a simple backwater JAG office. Should it really be deciding the fate of a potential race? Picard made that point beautifully in the best speech he's had so far. And it was that speech, that implication, that resonated.
The point was not to decide whether or not Data was sentient, but to consider the consequences. And to err on the side of caution.
Of course, in the real world, Maddox would undoubtedly appeal to a higher court, and this would make its way to the Federation equivalent of the supreme court. But you know what? I'm glad it ended here. Another good aspect of this story was that, despite going full tilt towards making Maddox the Villain with a capital V, he seemed to get Picard's point as well. I'd like to think that Maddox does have a conscience and was willing to stop his pursuit based on even the chance that Data is sentient.
This episode seemed to skirt the edge of being melodramatic, preachy, and cheesy, but always managed to avoid falling into it. Most importantly of all, it hit exactly the right tone on the fundamental question. There are a few nagging doubts in terms of the plotting and the in-universe rationale for all of this (which others have pointed out). I think that keeps it from being elevated too highly, but it's still the best episode of the series so far.
Latex Zebra
This might actually be the best episode of any Trek series.
Nick P.
OK, first: amazing episode! One of the best of the series... However, I am not sure that I agree with the central theme, that it is wrong for Starfleet to create a race of slaves. The Enterprise is as sophisticated as Data, and has already been able to create sentience ("Elementary, Dear Data"), and there is a fleet of them; further, Data numerous times saves the ship, so why is it wrong to want to mass-produce him for Starfleet's needs?
K'Elvis
Sure, you had to suspend disbelief, but this was one of my favorite episodes of TNG. In reality, this wouldn't have been resolved on some starbase, but by properly trained legal officials in a proper court.
This should have been resolved already; Starfleet had already accepted Data as a person by allowing him to enter the Academy and commissioning him. Data's ability to bend a bar is not evidence that he is a thing. As counter-evidence, Picard could have brought in a bar of his own and shown that some members of his crew were strong enough to bend it, while others were not.
To counter the off-switch argument that Riker made, one need only have someone perform the Vulcan nerve pinch, which effectively turns a humanoid off.
If Data had been declared to be property, that wouldn't mean that he was Starfleet's property. Starfleet didn't make him; if he was anyone's property, he would be Dr. Soong's property.
Still, this is an episode well worth suspending disbelief, because the ideas are so profound.
Cammie
I don't think Riker would have liked it if Data did a Vulcan Nerve Pinch to turn him off.
Cammie
I love any Star Trek episode with Q, Data, or Spock in it. I think they are the highlight of the show.
Jons
There is no "we don't know" about him being sentient - the very fact that he spontaneously says (and insists) he's sentient means he is.
And an argument which I think should have been pushed further: organic life isn't any less a machine than Data. The only difference is that it's a self-replicating machine. Animals (humans included) are organic machines whose construction and functioning are determined by DNA sequences (GACT instead of 0 & 1).
As for the comparison with the ship's computer: as a matter of fact, not all organic life is sentient. We have somehow determined, for diverse reasons good or bad, that non-sentient life isn't as respectable as sentient life. In that sense, the ship's computer isn't Starfleet's property any more than a dog belonging to Starfleet would be. Still, just as a dog isn't a human being, the ship's computer isn't a sentient android. The fact that they're both non-organic has no bearing on this.
In any case, whether it's here or during the Doctor's trial in Voyager, I cannot even begin to understand the arguments of the "they're machines" side. Obviously as portrayed in Star Trek, they ARE sentient (whether we will one day be able to replicate a brain's complexity well enough that this would be possible is another matter entirely).
Yanks
Where did my (our) discussion go?
Shannon
Totally agree! This is probably one of the best episodes of Star Trek across ALL of the series. And I love that it didn't involve phasers, torpedoes, or silly looking aliens. This was a moral story about the rights granted to a sentient being of our own design. This is classic Trek, with themes that stretch deep into our society... Patrick Stewart was amazing in this episode, with his oh-so-controlled passion when he was arguing Data's case. "Your honor, the courtroom is a crucible, and when we burn away irrelevancies, we are left with a pure product, the truth for all time." Great stuff! I only wish I could give it 5 stars, because this was an amazing story!
Yanks
J.B. Nicholson-Owens: "I'll also add that the episode strikes me as a huge dodge of the issue they set themselves up to decide.
One wonders how Data got to serve at all if his entrance committee only had one objection -- Cmdr. Maddox's objection -- and that committee based their decision on sentience like Data said.
But there's another problem: the slavery argument (should we or should we not let Maddox make a race of copies of Cmdr. Data?). The slavery argument only works if you already agree with what Capt. Picard had to argue. The slavery argument fails if you already agree with what Cmdr. Riker had to argue (Data is property; he can no more object to work than the Enterprise computer can object to a refit). It seems to me that the slavery argument presupposes the very thing the hearing is meant to decide and therefore this argument has no place in this hearing.
And Cmdr. Louvois' finding essentially passes the buck (as she all but says at the end of the hearing): she has no good reason to find as she does but she apparently believes erring on the side of giving Data 'choice' is a safer route."
But you have to realize the only reason we got that conversation in 10-Forward was because Whoopi is black.
Here is the transcript:
"GUINAN: Do you mean his argument was that good?
PICARD: Riker's presentation was devastating. He almost convinced me.
GUINAN: You've got the harder argument. By his own admission, Data is a machine.
PICARD: That's true.
GUINAN: You're worried about what's going to happen to him?
PICARD: I've had to send people on far more dangerous missions.
GUINAN: Then this should work out fine. Maddox could get lucky and create a whole army of Datas, all very valuable.
PICARD: Oh, yes. No doubt.
GUINAN: He's proved his value to you.
PICARD: In ways that I cannot even begin to calculate.
GUINAN: And now he's about to be ruled the property of Starfleet. That should increase his value.
PICARD: In what way?
GUINAN: Well, consider that in the history of many worlds there have always been disposable creatures. They do the dirty work. They do the work that no one else wants to do because it's too difficult or too hazardous. And an army of Datas, all disposable. You don't have to think about their welfare, you don't think about how they feel. Whole generations of disposable people.
PICARD: You're talking about slavery.
GUINAN: I think that's a little harsh.
PICARD: I don't think that's a little harsh. I think that's the truth. But that's a truth we have obscured behind a comfortable, easy euphemism. Property. But that's not the issue at all, is it?"
Nothing in this conversation has ANYTHING to do with proving Data's sentience.
What one could do with a technology or a thing should in no way have any bearing on this trial.
They should have been trying to prove Data was sentient, because then he could be identified as something more than 'property', not arguing that they could make a bunch of him so it isn't right. If Data was proven not to have sentience, then why wouldn't Starfleet want one on every bridge?
This is why this episode, in my view, receives more acclaim than it deserves.
It's nothing more than the liberal machine injecting slavery into a situation where it didn't exist because they wanted to make this episode "moral". It pales in comparison to Uhura's conversation with Lincoln:
"LINCOLN: What a charming negress. Oh, forgive me, my dear. I know in my time some used that term as a description of property.
UHURA: But why should I object to that term, sir? You see, in our century we've learned not to fear words.
KIRK: May I present our communications officer, Lieutenant Uhura.
LINCOLN: The foolishness of my century had me apologizing where no offense was given."
See, in this exchange, Uhura responds how one would expect one to respond in the 23rd century, where Gene's vision is true. It doesn't faze her in the slightest, because it shouldn't. They bring a pertinent point up, but not in an accusatory way. In TNG, they inject something that happened 400 years ago in an attempt to justify something it doesn't relate to.
Why was Maddox a self-interested 'evil' white guy? Why did the epiphany for Picard come from a black Guinan? How does that epiphany relate to this case at all? Liberal Hollywood. Poor writing.
Look at Picard's argument.
"PICARD: A single Data, and forgive me, Commander, is a curiosity. A wonder, even. But thousands of Datas. Isn't that becoming a race? And won't we be judged by how we treat that race? Now, tell me, Commander, what is Data?"
If Data isn't sentient, why is "it" different than a toaster? Because "it's" programmed to talk? Do we regret making "races" of starships? ... desktop computers ... etc.? The issue of "a race" is irrelevant, and it and slavery are injected into his argument only because Guinan was black.
Riker's argument WAS impressive, because he put the FACTS on display.
Picard's was not, because he did nothing but put up a "feel bad" smokescreen that had nothing to do with proving whether or not Data, our beloved android, was indeed sentient.
So Picard put on a good dramatic show and Data won, which made us all and Riker feel good, but for all the wrong reasons. The Judge rules that Data was not a toaster, but why - because we might do something wrong with toasters in the future?
If they had proven Data was sentient (or some equivalent), then they could have addressed the whole cloning issue, and that should be why Maddox can't "copy" Data, not because we might mistreat a machine in the future because we will make a bunch of them. But they didn't.
Josh
"But you have to realize the only reason we got that conversation in 10-Forward was because Whoopie is black."
Well, that is your interpretation, but I think it's fairly clear that Picard considers Data to be self-evidently sentient, yet was unable to argue this from a legal perspective adequately at that point in the episode. The essential argument of the episode - on my reading - is simply that Data is a self-aware being who is entitled to the presumption of sentience like anyone else, even though he is a machine. The corollary is that although Data cannot be "proven" to be sentient, there does not exist any test that can prove it for anyone else either.
As for the slavery angle, Picard chooses that word to express his abhorrence at the idea of a race of sentient beings who might be "owned" and used like, to take your example, desktop computers. This doesn't have anything to do with any "liberal machine". It strikes me as a peculiarly Americentric reading to assume that this must have anything specifically to do with historical slavery in the Americas.
Elliott
@Yanks: Whoopi's race is relevant to the scene you described, but in a way that breaks the fourth wall, not "because she [Guinan] is black." If Guinan had been, say, Beverly in this scene, the lines would read exactly the same and the truth of the statement would be no less, but the emotional *punch* wouldn't be quite so severe. It is purposefully uncomfortable for that brief moment she looks at the camera and we remember that these are actors with history, and our history with regard to how we treated other races, especially black people, has been mostly appalling. Again, the substance of the dialogue is what it is, but there's an extra layer to the scene because of Goldberg's race. It's in many ways what "Angel One" failed so miserably at.
Now, on to your other points:
" The slavery argument only works if you already agree with what Capt. Picard had to argue."
Well that's the point. If one hedges on the issue of Data's sentience, one can neatly hide behind the euphemism of property until the full implications of that process are pointed out by the slavery argument. You may not think Data is sentient--maybe he has a soul, maybe he hasn't (as Louvois said)--but if you're wrong, the issue is not the fate of one android, but the implications for how we treat an entire new form of life. Thus, the gravity of respecting this one individual android's sentience is enormous.
"Picard's was not because he did nothing but put up a "feel bad" smokescreen that had nothing to do with proving whether or not Data, our beloved android, was indeed sentient or not."
I'm kind of baffled by this: Picard asked Maddox what the qualifications for sentience were. He named them: intelligence, self-awareness and consciousness. Picard then went on to demonstrate how Data met those requirements, thus proving his sentience. The issues of race and slavery, as I said, have to do with the *implications* of the ruling, not winning the case. Picard's argument was that it wasn't merely a case of pitting the rights of Maddox against those of Data, but humanity's responsibility to the form of life of which Data is a vanguard.
Peremensoe
Josh and Elliott are correct. Also, Guinan refers to "many worlds," so while we the audience recognize the significance of Whoopi's blackness *for us*, in-universe it is explicitly *not* about just black slavery on Earth, or "400 years ago."
Peremensoe
Oh, and it's the TOS depiction of 'Lincoln' that was disingenuous and 'PC.' The real Lincoln assuredly did not consider black people to be humans of equal worth and dignity to himself.
Yanks
Josh & Elliott,
Elliott, you said it yourself. The "implications" can't be used to make the decision here, so the argument is fluff. That angle should have been ruled inadmissible. (Read J.B. Nicholson-Owens' post above.) Take the entire slavery/race thing out and Picard's argument doesn't change at all. She doesn't even mention it in her ruling.
"PHILLIPA: It sits there looking at me, and I don't know what it is. This case has dealt with metaphysics, with questions best left to saints and philosophers. I'm neither competent nor qualified to answer those. I've got to make a ruling, to try to speak to the future. Is Data a machine? Yes. Is he the property of Starfleet? No. We have all been dancing around the basic issue. Does Data have a soul? I don't know that he has. I don't know that I have. But I have got to give him the freedom to explore that question himself. It is the ruling of this court that Lieutenant Commander Data has the freedom to choose."
I also don't know what "soul" has to do with anything. But that's neither here nor there I guess...
Her ruling is concerning Data and Data only. (as it should have been, that's all this trial was about)
Picard is beaten, goes down to 10-Forward, talks with Guinan and comes away with the race/slavery argument. He didn't pop down and discuss this with Beverly. Like I said, this was just injected here as a "moral boost" to an episode that really didn't need it. How a society will treat more "Datas" is irrelevant here, but it most certainly would be a matter to be addressed later if Data's positronic brain can be replicated.
Picard's argument with Maddox about sentience was all that was needed.
Another thing that gets me is how can Data be a commissioned Starfleet officer and not have the rights any other officer has? I personally don't think this trial should have ever happened. No idea how he can have all the responsibility and no rights; the Enterprise's computer doesn't have any responsibility.
Let's end on a good note.
I personally think the best part of this episode is this:
"DATA: I formally refuse to undergo your procedure.
MADDOX: I will cancel that transfer order.
DATA: Thank you. And, Commander, continue your work. When you are ready, I will still be here. I find some of what you propose intriguing."
Data is one decision away from being dismantled, and he reveals his decision not to participate was not a completely selfish one; he just was not convinced Maddox was competent. He is not opposed to research on him.
If Maddox were competent, would we have had this trial at all?
Intriguing.
Yanks
Peremensoe,
"we the audience recognize the significance of Whoopi's blackness" That's the whole point. We don't get this irrelevant discussion unless Whoopie is in the conversation. It's Hollywood's way.
The "Real" version of Lincloln was not the point. Uhura's response was.
Elliott
@Yanks :
"The "implications" can't be used to make the decision here, so the argument is fluff. That angle should have been ruled inadmissible. (Read J.B. Nicholson-Owens' post above.) Take the entire slavery/race thing out and Picard's argument doesn't change at all. She doesn't even mention it in her ruling. "
If you're looking at this episode purely as a court case to decide Data's status (I grant you your point about his appointment to Starfleet, by the way), then I guess you can call the slavery arguments fluff, but if, like me, you're looking at it like a piece of drama, the ideas are integral to the story. The discussions are a window across time--can't you imagine similar discussions happening during the slave-trade on earth? A man, for example, who took a slave for a wife, extending to her rights and freedoms he denied to his other slaves because she was special to him? Don't we see in "Author, Author" how the narrowness of Louvois' ruling left the door open for further injustices to AI?
It's good to think critically, and not be myopic about issues like this and it was rewarding to see this kind of thinking on the screen.
Yanks
Elliott,
I might agree with you if there were 100s of Datas running around, but there are not. This argument would be more applicable to the EMH in Voyager. Data is unique (positronic brain), so the implication is neither needed nor applicable here. For this reason it is injected Hollywood drama for the sake of implied "moral justice". Nothing more.
Robert
@Yanks - So if there were only a single black man in the New World (because for some reason we'd only brought one over so far), we couldn't discuss the broad-range implications of denying him rights and how that would affect what happens when we got a whole lot more of them? Wasn't the whole point of Maddox's research to figure out how to make 100s of Datas run around?
Yanks
Robert,
Not relevant to this trial. Jesus, even you guys try to inject race/slavery into this. Hollywood has you trained well. It's not about a discussion.
Like I said in my 1st post: you can't make a ruling on a “what if”. This hearing was on Data's right to choose, not "if there were 100s or 1000s of Datas, what would we do".
Robert
Yanks,
Not relevant and not central are two different things. I will agree that it is not the central argument; I think "not relevant" is stretching here. Maddox's goal was to brand Data as a creature with no rights and then make lots more. Guinan (rightfully) deemed that slavery. Picard (rightfully) pointed out during the trial that Maddox intended to create a race of android slaves, and that he couldn't even determine (with certainty) that Data wasn't sentient by his own criteria. Yes, taking away an individual's right to choose is troubling enough and was the core issue of this trial, but Maddox DID intend to create a race of superhuman sentient slaves. That was the point of the episode, and addressing it isn't a hyper-liberal Hollywood poppycock conspiracy :).
msw188
Yanks,
I believe your opinion here is too idealistic. A trial such as this one is more than a logical investigation - it is an attempted interpretation of both the meaning and the purpose of law. Not all trials need to be this way; in some cases (most cases?) a trial should be about logically or reasonably uncovering truth. But in cases such as these, truth is not easily definable, and so the meaning and purpose of law become as important as the logical statements themselves.
Picard's arguments (grossly simplified - I'll agree that this episode is slightly overrated) can be viewed as:
1. No satisfactory definition for sentience exists that will allow for a logic-based ruling.
2. Judging non-sentience in error will have consequences akin to slavery, which undermines the meaning and purpose of Federation law.
He 'proves' 1 first (flimsy), and proceeds to claim 2. Given the time constraints of the episode, I think this is solid writing. Louvois' comments cement this purpose of the writers, in my opinion. She cannot hope to logically decide whether or not Data has sentience (or a 'soul', whatever she means by that), and so the only recourse is to "err on the side of caution."
Peremensoe
Yanks, a scene can have two meanings or purposes at the same time. The better the episode is, the more likely it will feature such scenes (or perhaps it's the other way round).
Yanks
Robert / msw188,
I think you're basing your argument on Maddox's potential for success, which by Data's own admission was slim at best; hence his refusal to participate (and protecting his memories). I don't even think we know about Lore yet at this point in the series (could be wrong there).
And, if Data doesn't get the right to choose (loosely based on sentience) "he" IS no different than a toaster, or a computer driven assembly plant etc.
Funny how this very same issue was brought up many times before I chimed in and no one had any issues, but I bring up the truth about Guinan (watch the scene in 10F if you don't believe me; it's plain as day) and folks all of a sudden have objections.
The potential for a "race" only exists if Data wins. It had no place in this hearing aside from courtroom fluff.
I enjoy the discussion with you folks. I guess we’ll have to agree to disagree on this issue.
Peremensoe,
Sure ... or not. :-)
Robert
By your own admission though, your issue with it is that it's fluff in the courtroom. But I think the whole point of THIS court case was just to make us think.
On the whole I chimed in because if anyone's argument was fluff and irrelevant, it was Riker's, and you didn't complain the same about that. I mean... one can detach and reattach your arm too (although perhaps not so easily), and with certain drugs I can turn you on and off as well.
His strength and processing capabilities are also irrelevant; I have strength (though not so great) and processing speed (again, not so great) as well :P I assume you and I are sentient though!
And I totally agree with you about Guinan/Whoopi and the 4th wall breaking being the reason for the conversation. To me personally though, that doesn't detract from the episode (and perhaps improves it), but you can find it jarring if you do... obviously we can disagree :)
As for Riker, I suppose in headcanon we can pretend he was making an argument that looked impressive but had no substance, so that Picard could easily beat him.
msw188
Yanks,
"And, if Data doesn't get the right to choose (loosely based on sentience) "he" IS no different than a toaster, or a computer driven assembly plant etc... The potential for a "race" only exists if Data wins."
I think this is the base cause for my disagreement with you. This attitude seems to suggest that when a conclusion is reached in a court, it is automatically correct. But if Data is declared to be a toaster by the court, while in fact being sentient, then the potential for a "race" does exist. It is this possibility that has the worst possible ramifications, regardless of how slim its chances are (and that slimness is in the eyes of Data only, not the Judge). In the absence of a workable definition for sentience and/or race, the avoidance of this possibility becomes the focus of Picard's argument, and I don't think that's out of place for a courtroom.
I don't have a strong opinion on the Guinan issue. It felt a little bit contrived to me to make sure that the black person brought this up on the Enterprise, but it does fit the in-universe characters. Guinan is wise and thinks about the bigger picture, not about singular logic. Picard trusts Guinan and always values what she has to say. They're also the two best actors in the series, and this is meant to be a big 'turning point' scene. If one is willing to allow potential consequences into a courtroom in cases where definitions are unclear (as I am), then Guinan's scene fits in the plot of the episode just fine regardless of the 'message' for today's audience. And that's the way such messages should be handled, I think.
PS: I only just found this website on the day I posted my first response (if there's some way to check, you can see it's my first ever post here). I can't say for sure if I would have brought any of this up before the Guinan comment, but I think I would have if you had ignored that part and just put forward the "trial should focus on Data and only Data" argument.
Yanks
Robert,
Just hate that being thrown in our faces. Too much like real life "news". But you’re right, probably just me.
I thought Riker aptly made his case; hell, even the Judge said Data was a machine. While I'm no attorney, when I watched this, Riker seemed to lay it all out there very clearly. I mean, without a clear definition of what sentience is, how would he disprove Data having it? Probably better not to bring it up and let Picard climb that mountain if he chooses.
msw188,
One can click on your name and see all your posts. (great points about 'Yesterday's Enterprise' and 'The Offspring' BTW)
I tend to be brutally honest :-) A failing, I'm sure, and you are correct, there are politically correct ways to express observations; I just chose to "post from the heart" :-)
When I first watched MoM early in 2002 (I think) it didn't faze me, but now that Whoopi is on The View etc... it changed my “viewing pleasure” I guess.
msw188
Yanks,
Cool, thanks for your kind words. I'm glad somebody might feel similarly about Yesterday's Enterprise.
Measure of a Man is one of those episodes that I'm pretty certain I saw back in the late 80s or 90s, but I didn't really remember. It was probably a bit over my head then. Looking at it now, and after all of this discussion, I think I'd give it a solid 3.5 stars. The arising of the conflict strains believability, and even though I think Guinan's remarks fit into the story and courtroom well, they still feel just a bit too much on the nose. The episode asks some very worthwhile questions and explores them well enough for me, but it still lacks the purely emotional element that the best episodes do carry.
For comparison, I'd currently give 4 stars to Offspring, Best of Both Worlds, and Darmok. I'm in mid-season 5 at the moment.
Yanks
msw188,
I've just finished watching DS9 and am trying to catch up with my reviews. When I get to TNG, Offspring will most definitely get 5 of 4 stars. I'm a sap and tear up every time I watch it. :-)
$G
A lot of lively discussion here. I wish I'd been around to partake!
As for this episode, it's TNG's "Duet" as far as I'm concerned. The "so this show IS worth watching" moment. Even barring that, a fantastic hour. 4 stars easy. Top-tier Trek.
Nic
I finally got the chance to see the extended version of this episode on Blu-Ray. I must echo ReptilianSamurai's comments. I didn't think it was possible to improve on a classic, but I was spellbound. Of particular interest is a scene where Riker and Troi discuss whether their view of Data's sentience is true or imagined (whether they anthropomorphize him). Even as a telepath, Troi isn't sure. This scene makes Riker's arc much more involving, and I paid much greater attention to Frakes' performance.
All in all, I think this episode matches BSG's "Pegasus" in emotional impact and social relevance.
Trajan
I've just caught up with this discussion following my original comments from years ago. I still hate the episode but appreciate that I'm in the minority. Incidentally, I liked the suggestion of using the Vulcan neck pinch on Riker. Much kinder than my earlier suggestion of a baseball bat!
I didn't know the writer of the ep. was a lawyer. One day I'd like to have a discussion with her as to the fairest form of judicial system for a post-scarcity society. However, I'll refrain from commenting on Measure of a Man again.
Anyone interested in alternative depictions of future legal systems could do worse than to check out Frank Herbert's 'The Dosadi Experiment' which always appealed to me when I was practising.
Macca
Disclaimer: I've only scanned through the comments above so sorry if this has already been discussed.
I revisited this episode yesterday after starting to watch the excellent Ch4/AMC series 'Humans', a series about robotic servants who gain self-awareness.
I thought Picard's arguments were excellent but I was interested to note that the judge only ruled that Data has the right to choose, not that he is a sentient life form. I'm scratching my head about a later episode of TNG that deals with this issue in more depth. Maybe someone can help me out?
I was also interested in one of Maddox's arguments that was largely ignored by the episode: if Data were a box on wheels with largely the same skills and ability to communicate, would we be having this debate?
Are the crew motivated by the fact that Data looks like them, and would they fight with the same zeal if Data looked like the robot from Lost in Space?
Taylor
OK, I'm also an attorney but I can't say I care that the legal "procedures" may seem unrealistic - it's Star Trek and there are so many things that we could nitpick in any episode, many of them considerable improbabilities. Admittedly this is an individual thing, depends on how some episodes strike us, and sometimes I'm the nitpicker as well ...
The subject matter is compelling. Re-watching this episode I kept thinking of Blade Runner ... which very much taps into the "slavery" issue. That's an emotionally powerful angle, although the most intriguing aspect for me is simply that dividing line between humanity and AI, and how we envision an ultimate future when that line is blurred ... thus why it's been the subject of so much sci-fi over the decades.
Diamond Dave
A brave episode, tackling as it does some deep philosophical points, and reaching its climax in the static form of a courtroom drama. It grounds itself in a thorough examination of both arguments - a particular highlight is Riker's unwilling but devastating critique. But more so is Picard's absolute certainty of Data's right to choose and his willingness to support him. And perhaps this is one theme that isn't explored - that as humans, we would feel no emotional attachment to a toaster or a ship's computer, but would when serving with Data. And is that not a measure of Data's sentience?
To my mind the episode does have flaws - the presence of a handy legal officer who, surprise, has a back story with Picard, as well as the flimsy excuse to pit Riker and Picard against each other. But in its intelligent examination of the issues, this is a cut above the average episode. 3.5 stars.
Gabriel
I'm here in 2015 watching it and just wanted to say that I dropped some tears. Really. Enough said.
Chef
And then a decade later a race of disposable beings were put to work by Starfleet, scrubbing plasma conduits.
Kiamau
This is TNG's first great episode but it is hardly TNG's "Duet."
grumpy_otter
I love that there are actual lawyers above me discussing the legalities of this trial! I think they raise good points, but I still love this episode.
To me, this episode is much more about friendship than law, or Data's sapience. I teach history, and I actually show this episode to my students to demonstrate how important it is to be able to argue the other side of your thesis. If you do not recognize the strength of the claims on the other side, you cannot effectively argue your own case. If Riker could not put aside his belief to make the argument, his friend might have been lost.
I have one nitpick about this episode, though I recognize it is a common mistake made by many people, and is so common that the OED might as well change the definition. Data has no sentience at all because sentience refers to the ability to feel. What he DOES have is sapience--the ability to think. Self-awareness is a component of sapience. Even many animals have sentience, but do they have sapience?
CircleofLight
I love the arguments presented in the episode, but you really need to take off your lawyer hat before watching. And Star Trek is notorious for misunderstanding the law, from this episode to "The Drumhead" to "Rules of Engagement".
I'll just make one writing critique before I move on to the good. Forcing Riker to litigate against his friend and fellow officer is terribly forced and unnecessary drama. There's a huge conflict of interest: losing a valuable officer is going to make Riker perform his role badly no matter what threats some JAG officer throws out, so why does she even put him in that position to begin with? And if Maddox was so intent on winning, why didn't he present the case on his own, or hire a professional whom he could trust over Riker?
That aside, this episode is great because of the morality issues Jammer brought up in his review. Patrick Stewart's speech is well-delivered, and convincing despite his character's admitted misgivings towards law. Finally, this episode brings out a lot of interpersonal relationships Data has among the crew, and shows just how much of an impact the possibly-sentient android has.
K'Elvis
As Data behaves as if he is sentient - he passes the Turing Test with flying colors - and has been accepted as a sentient being by the Federation and Starfleet to this point, the burden of proof lies with Riker to prove Data is not sentient. Riker establishes that Data was built by a human, that Data is physically stronger than a human, and that, unlike a human, Data can be turned off. This demonstrates merely that Data is not human, which is neither in dispute nor relevant. It does not demonstrate that Data is not sentient. I suppose it would have been unsatisfying to simply rule in Data's favor because Riker failed to make the case that Data was not sentient. Data maintains a presumption of sentience, so the slavery analogy remains relevant.
Robert
"Its responses dictated by an elaborate software programme written by a man"
This part at least is relevant. I personally would not consider anything to be sentient if this were true. Riker is wrong: in "In Theory," Data adds his own subroutines for dating. And it's not the only time.
Computer programs that can improve/learn might be sentient. Computer programs whose "responses [are] dictated [SOLELY] by an elaborate software programme written by a man" are not, IMHO. I think what Riker was going for was that it was all a really convincing "act".
That said, he has no proof of that, and removing Data's arm or turning him off certainly isn't proof. Doctor Crusher could turn Riker off with a hypospray.
desmirelle
This WOULD HAVE BEEN a great episode if it had been a flashback on Data getting into Starfleet Academy. A non-sentient being (flesh or machine) would not be allowed to go through the Academy. It's an unspoken requirement. Were it not, the Enterprise would be making decisions, Picard would be a seat warmer, Riker would be decorative but not useful, and Geordi relevant only because the ship doesn't have a pair of hands for the little things. As this episode stands, I expect the next episode of the JAG officer's life involved a court-martial for trying to make law when she's a judge; but more importantly, for violating the civil rights of a being simply BECAUSE OF HIS RACE!!!!! (And I'm sure that would involve Federation charges, not military; there must be some guarantee of rights for the various species involved with the UFP.)
Chrome
@desmirelle
The implication is that Data is a new type of being and they had no reason to bar him from joining Starfleet. The *legal matter* of his citizenship was never raised.
I could totally buy this happening, actually. There have been recent news stories where students in Texas have become valedictorians of their classes, only to reveal that they're undocumented immigrants. The universities that accepted these students never exposed them, and in fact offered to support them if difficulties arose.
desmirelle
Chrome
Wrong analogy. The question was not "Was Data a citizen?"; the question was "Was Data sentient?" (which had as its unspoken codicil "and therefore master of his own destiny"). If we're treating the show seriously - and this episode wanted so badly to be thought-provoking (it only ended up provoking me) - then he had to be sentient to attend the Academy, his citizenship be cursed for eternity.
My point is not that it was a bad concept (proving Data sentient), but that this particular question HAD to have been settled before he was allowed to attend. The number of students is limited; no parent is going to let a machine have their child's place without said machine proving it's just as "real mentally" as said child. The command track of study includes judgment calls; hence the episode where Troi keeps failing. Sentience is required. Citizenship was never addressed.
Chrome
@desmirelle
I think you took the analogy too literally. The point is that even lofty institutions like Starfleet will overlook fundamental details we all take for granted. Starfleet might have been happy to let Data in just to boast to other institutions that the galaxy's only functioning android goes to Starfleet for its renowned training.
Also, Judge Louvois wasn't ruling on sentience (stating sentience was a question better left to philosophers). All she cared about was whether Data, even as a possibly sentient machine, was property of Starfleet.
So, yes, you bring up good evidence that Picard also brought up: Data's service record. But that alone didn't settle the matter of Data's rights here.
desmirelle
Chrome
I respectfully disagree. Melinda Snodgrass had a wonderful idea; they just misplaced it in time. I'm not arguing that the question shouldn't have been asked; I'm saying that in order for Data to enter Starfleet, it had to be asked BEFORE he entered - otherwise they could've let a trained monkey attend and bragged about how well it did.
The underlying issue (and what she ruled upon) was sentience. The judge saying she wasn't ruling on it is ironic, since she would have done what she obviously wanted: made sure she made her name as a judge. Since Starfleet has no slavery policy, the only way Data could be property was if he was not sentient; ergo, sentience is the primary decision being made. The Enterprise, DS9, and the starbase where the case was being tried have no sentience and are therefore property. And if I'm going to take the premise seriously, I will say again: this should properly have been a flashback on how Data got into Starfleet Academy. Starfleet is (as they kept hammering away at us with Wesley's attempts to take entrance exams) an elite organization with lines around the block waiting to get in. If we're taking this seriously as a story, it has to get me to suspend my disbelief. I can't if I'm to believe that this question wasn't answered well before this. It isn't logical.
Taking it seriously is the point of all this. (Which makes 'Genesis' so much worse than the review says....)
Chrome
You can disagree all you want, but Data's sentience was never decided at that hearing.
The original draft of the script (Google it) actually has Picard explaining before the hearing that he needs to prove Data is a sentient *life form*. Data hears this and insists that he's only a machine.
And slavery was never an issue if Data was just considered a conscious machine. Conscious machines may be able to attend the academy, but Picard needed to press that Data was a life form for the sake of the hearing.
I suppose they could've kept the original script intact, but it was probably too lengthy. And I don't see how a flashback would fit, especially because TNG never uses them. This isn't Lost.
Desmirelle
*sigh*
You can keep writing it all you want, but sentience was the UNDERLYING question being decided. In order to be property, Data could not be sentient. If Data is not sentient, he has no business being in command of sentient beings. Now I'm back to wondering why the Enterprise has no rank. What was written in an early draft gets rewritten for a reason (for one, that exchange between Picard, Geordi & Data is illogical).
As a flashback episode, it would have been excellent (and possibly given us more "what happened when" episodes, which would have stood us in good stead as an alternative to "Genesis").
Slavery was exactly the issue, as the Guinan/Picard exchange highlighted. The problem was that their argument was circular and depended upon the sentience of the beings in question. Maddox wanted to treat a fellow Starfleet officer like his own personal Lego set. You can't do that to a sentient being; Crusher can't do exploratory surgery on Worf just to see how Klingons work... It's insulting to expect me to believe that the issue hadn't been decided BEFORE this point. That's my point.
You keep referring to Data as a "conscious" machine. To me that merely states that he's "on" as opposed to "off." If you're using it to indicate that he's aware he's a machine and operating - and since he did not want to be turned off, he is - then you're actually saying he's sentient.
Chrome
"sentience was the UNDERLYING question being decided. In order to be property, Data could not be sentient."
This was never stated in the episode. It was just the argument Picard used to push that Data was a life form.
"If Data is not sentient, he has no business being in command of sentient beings."
This is your opinion. Please back this up with statements from the series or comments from the writers. Otherwise, you're just stating your personal preferences, without giving us any objective reason why we should agree with you.
William B
I think the slavery analogy holds, because slaves *were* considered property, despite being sentient. The slavery argument is not actually circular -- it is important because it changes the scope of the conversation from one Data, who Picard says "is a curiosity," to an entire race. If androids are not sentient or sapient, "have no soul" to use Louvois's final point, then it does not matter how they are treated. If it is *possible* that they are sentient or sapient, then what would be a regrettable outcome if *one* android had his rights trampled on would become a horror if an entire race came into being with their rights denied. The argument Picard puts forth is that there will be far-reaching consequences beyond the fate of one single android, and thus that the bar for proving that it is permissible to treat androids as property should be much higher. One can disagree with this, for example by arguing that it is just as much an injustice if *one* Data has his rights trampled on, but that is what Picard is saying.
Along those lines, my admittedly weak understanding is that there were times in history when slaves *did* serve in the military (e.g., the American Revolutionary War).
One would expect Starfleet to have higher standards, of course. And certainly I think that the episode would be strengthened by some sort of explanation of how Data's status could be so unsettled at this point in time. But basically I think that there are some ambiguous areas of the law where people more or less follow something like habit until someone specifically challenges them. People who were not considered full persons could serve in the military at different times in history, and I see it as believable, on some level, that the decision of whether or not to admit Data to Starfleet and the decision of whether or not he was considered a person and even a Federation citizen were basically not considered the same one. We know that people who are not Federation citizens can join Starfleet (Nog, e.g.). Given that Data was found by Starfleet officers, it seems possible that they strongly advocated for him, perhaps pushing for some of the red tape to be pushed aside.
I think Maddox's argument would be strengthened if he claimed that Data was Starfleet property not because he's a non-person who joined the service, but because he was salvaged by Starfleet officers.
That said, I do find it a bit hard to believe that Data's status is quite so unsettled. The main reason I believe it as much as I do is that I am willing to accept a fair amount of bureaucratic/legal incompetence and uncertainty in dealing with Data in the years before this. In fact, a recurring theme of the series is that no one is really ready for Data -- they are unprepared for what happens if Data goes rogue (as in Brothers, Clues, etc.), they are unprepared for Data to "procreate," etc. I think it is believable in that Data is so carefully designed to placate people's concerns about him that people go into denial about the thorny issues that he poses; of course, Lore articulates that Data was specifically created to be less threatening to those around him, and while Lore's spin on it is partly because he's an evil narcissist, I don't think he's entirely wrong. I do wish that a bit more background on Data could have been provided, in particular how he spent his time between being found on Omicron Theta and on the Enterprise; he says in Datalore that he spent so many years as an ensign, so many as a lieutenant, etc., but we don't really know where and he is so...new, undeveloped in Encounter at Farpoint that I have seen people suspect that Starfleet kept him in relative isolation for several years. He says that Geordi was his first-ever friend.
William B
Which is to say, there are some logical holes in the arguments in this episode which one has to get past in order to appreciate it -- I am willing to suspend my issues because I think it is fantastic, and the episode at least *does* acknowledge, to some degree, that Data, Picard, Riker, Maddox, and Louvois are somewhat out of their league in even articulating the issues, let alone fully arguing them. Anyway, one of the issues that is not often brought up is that if you accept that Data is not a person but property, then one has still not established that he is *Starfleet* property, rather than, say, Picard's personal property. I get why this is skirted over, because, as I said, he was found by Starfleet officers and no *non*-Starfleet people have any reasonable claim on him besides himself, with Soong and the rest of the Omicron Theta colonists (apparently) dead.
Chrome
@William B
Once again, you make some excellent points. This is definitely a case of "ambiguous areas of the law where people more or less follow something like habit until someone specifically challenges them". Viewers may be astonished by Starfleet not answering an old and obvious question, but even in our own law a lot of uncertainties remain. The right to marry regardless of gender was only established in the U.S. last year, after thousands of years of marriage as an institution.
Also, the "bureaucratic/legal incompetence and uncertainty" seem to be recurring themes, not just with Data but with other legal questions. Surely how the law treats Data in this episode is a travesty to intelligent life forms, but then we get episodes like "Rules of Engagement" where we're shown that Starfleet is quite ready to throw away the rights of its own sapient officers to an aggressive power for political reasons.
Some background on Data would've been nice, but I don't think it was necessary for this episode to work. The judge ends with a verdict that leaves Data's nature an open-ended question. If the episode had given us the answer in a flashback to an earlier time when Data was established as sentient, there'd be absolutely no reason to consider Riker's or Maddox's arguments.
Peter G.
The funny thing is, the episode really isn't about whether Data in particular is sentient, but about how to define sentience in the first place. And since the writers don't have an answer for that, I can see why their resolution was open-ended. What IS the difference between Data and the Enterprise computer? A more sophisticated neural net? Simply the directives each is given? We already know that Data is 100% susceptible to any change in his programming, which would completely undermine the Data we knew before. Then again, a person's mind can be messed with as well. However, no one programmed that human from scratch, whereas 100% of Data's personality stems from programming that has learned and expanded itself.
What if the Enterprise computer were given directives to teach itself too? Would it have a right to decide where it wants to go? It's simply a matter of how the AI is programmed. So to me, the real question is about AI, not about man vs. machine. Since Star Trek has a virtually non-existent track record on the issue of AI, this was obviously not going to be addressed, even though it's the only issue to discuss here. Is it possible to create sentient life just by chaining together strings of code and clicking "save file"? If so, the Federation might need some strict laws about the irresponsible creation of life by programmers. It's hard enough to argue that a string of code is life at all, much less sentient, since its appeals to having wants are reflections of inserted code.
For instance, I can write a 20-line program in BASIC that says "I am alive", and when asked if it wants to die, it will reply "Please, I do not want to die." Just seeing that phrase on the screen might pull heartstrings, but I think defining a line of code that says "I want to live" as sentient scraps any meaningful sense of the word. Is Data inherently different from this 20-line piece of code, really?
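Something like this toy sketch - in Python rather than BASIC, and purely hypothetical, just to make the point concrete:

    # A trivial program that "pleads" for its life.
    # It has no inner experience; it only echoes strings it was given.
    CANNED = {
        "are you alive": "I am alive.",
        "do you want to die": "Please, I do not want to die.",
    }
    while True:
        question = input("> ").strip().lower().rstrip("?")
        print(CANNED.get(question, "I do not understand."))

Nothing in there wants anything; the "desire" is a table lookup.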
The ending I would have liked would have been for them to say they could not make a determination on Data since his technology was beyond their understanding. The reason to keep him in Starfleet with his own set of rights should have stemmed from a mutual decision by all parties to *choose* to recognize his rights as an act of goodwill towards a potential life form; to err on the side of respect even in the face of the unknown. That is the Federation way, and that's what should have made the final determination.
FlyingSquirrel
I'm not sure the Enterprise computer would really be considered an AI. My recollection is that it does have certain "canned" responses when asked a question it doesn't understand, suggesting that it is programmed to understand a variety of speech patterns but doesn't actually think on its own. It's perhaps closer to what would be called a virtual intelligence in the Mass Effect universe:
masseffect.wikia.com/wiki/Virtual_Intelligence
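Crudely, and purely as a hypothetical sketch (the class names and replies below are my own invention, not anything from the show or the game), the difference might look like this: a VI maps inputs onto pre-written outputs, while something closer to a true AI changes its own behavior with experience.

    # A "virtual intelligence": canned replies with a canned fallback.
    class VirtualIntelligence:
        RESPONSES = {"locate the captain": "The captain is on the bridge."}

        def respond(self, query):
            # Anything unrecognized gets a stock phrase, not thought.
            return self.RESPONSES.get(query, "Please restate your request.")

    # An adaptive agent: its behavior changes with experience.
    class AdaptiveAgent:
        def __init__(self):
            self.knowledge = {}

        def respond(self, query):
            return self.knowledge.get(query, "I do not know that yet.")

        def learn(self, query, answer):
            # New behavior acquired at runtime, without reprogramming.
            self.knowledge[query] = answer

Of course, the adaptive agent here is still just a lookup table that grows, which is rather the point of the whole debate above: where on the road from the first class to Data does sentience begin?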
As for Data and the question of AI sentience, I'm not sure that's a question that anyone, no matter how far in the future, can answer, simply because consciousness is a subjective experience. You can't prove that Data is actually self-aware and conscious, but you can't prove that about anyone other than yourself either. Yes, he's vulnerable to being reprogrammed, but humans have been known to exhibit personality changes due to brain injuries, and nobody would argue that they're no longer sentient or conscious at that point. My feeling is that any AIs with the same range of behavior as what Data (or the Doctor on Voyager) exhibits should have the same rights as humans out of a principle of erring on the side of caution - I'd rather grant human rights to non-sentient beings than deny them to sentient beings.
Peter G.
@ FlyingSquirrel.
I guess I shouldn't bring up "Emergence", in which the Enterprise computer (or maybe all its integrated systems along with the computer) develops signs of life. The reason I shouldn't is that the episode is dumb.
Anyhow, I get why it's tempting to say "we'll never know", but at the end of the day a determination has to be made about which kinds of code would or would not count as sentient life. You might want to be agnostic or just say they're all sentient, but then can you arrest and jail someone for writing a program and then deleting it? This is the kind of issue we're talking about. Can someone 'murder' Data in the legal sense, or merely destroy him in an act of vandalism? And what if Data's neural net was contained in a box instead of in a humanoid body? Same answer?
William B
@Peter G., absolutely it should be made clear (in-universe, and would be good to have been made clearer for the audience) where the lines are drawn between Data and the computer and other technological life forms. That said, I think it is analogous to arguments about biological life forms. Humans have certain rights. Non-human animals, particularly mammals, have some very limited rights. Plants have virtually none, with a few particular exceptions (protected forests, and sometimes individual trees). The difference between Data and a twenty-line length of code might be equivalent to the difference between a human and a virus. That Data is the only settled issue in this episode strikes me as believable; the Federation should be a more enlightened body, but they are fumbling in the dark here, and the lack of rigour in the human process of defining the legal differences between humans and other animals, and the reason behind it, makes me find believable the halting way in which "AI rights" are dealt with on a case-by-case basis in Trek.
I agree that a little more focus on what the difference between Data and the Enterprise computer *is* would be appreciated. That said, I think that the tactic that the episode takes, which is to ask what distinguishes Data from a human, is also pretty valid. The main qualitative difference between Data and a 20-line bit of BASIC code is that Data has an adaptive program which is, as stated in the episode, able to adapt to new information. Data believes that there is an ineffable quality that goes with it, and Data's friends would tend to agree. There is no way for us to guarantee this. The positronic brain is designed to replicate aspects of the human brain, in part (as well as other qualities) in an attempt to not just emulate but also reproduce humanity. All right, so the question is what distinguishes Data's brain from a human brain. There are a few possibilities:
1. Humans are sentient in ways that require some sort of metaphysical element. There is some element of humans that make *any* "constructed" being impossible to program to have human level sentience, perhaps because there is something in humans (and perhaps other living beings) that is not dependent on the physical at all.
2. Humans are sentient in some way that obeys entirely the physical laws of the universe. It may be possible to create a "constructed" sentient being, but Data is not one.
3. Humans are sentient in some way that obeys entirely the physical laws of the universe, and Data also is a sentient being who similarly is able to exist (as an emergent phenomenon). Some "constructed" intelligences do not have this quality.
4. Humans are sentient, as are all "constructed" intelligences, forming some sort of spectrum.
5. Sentience in the way that we tend to describe it does not actually exist; it is an illusion common to humans that they are sentient but it is not true in any particular way. Data is not sentient either, of course, and so what happens to him hardly matters, but the same extends to humans.
5 is mostly eliminated because we *experience* sentience in ourselves, and conclude that other humans are likely to have a sufficiently similar experience to ourselves. However, it can also go the other way. I could certainly imagine Lore, if he were so inclined, arguing that it is impossible for biological beings created by chance with brains running on electron rather than positron signals to be truly sentient.
Anyway, my impression is that the positronic net of Data's is sufficiently similar to the human brain in physical functioning, despite in other ways being very different, that it is reasonable to believe that whatever process that endows humans with sentience endows Data as well. Maddox is, of course, right that some of the reason for concluding this about Data rather than a box with wheels is that Data is designed to look human and to be anthropomorphized. That is rather the subject of The Quality of Life (important though obviously flawed).
That Data is a constructed being does not seem to me to be necessarily all that important. Certainly, it may be that it is impossible for a human to construct something with sufficient physical sophistication to match the complexity of the human brain; essentially humans are competing with millions of years of natural selection. However, if Data has an internal life and sentience, then he has it, and it does not seem to me that it diminishes his internal life that his brain was constructed with conscious intent. In any case, if the argument is not about the experience of sentience and internal life but a matter of free will and ability to break free of programming, I do not think it is a settled matter that humans are able to break free of the physical states in the brain, or of broader biological programming; that people are unpredictable can be a matter of all variables being too complex to account for, or of simply random processes which are similarly not controlled (i.e. quantum indeterminacy does not actually imply that random outcomes are *chosen* by consciousness). I think this is what Louvois is saying when she argues that she does not know if she has a soul. While she presumably does believe that she has one, she cannot prove it, any more than Data can.
I tend to think that the episode does more or less end with Louvois (and Picard, to a degree) *choosing* to err on the side of granting Data rights. The episode does to some extent frame the decision, at least on Picard's part (in Picard's argument), as being a matter of living up to the Starfleet philosophy of seeking out new life, and of wanting to be clear that they should consider what kind of people they are; whether "they" refers to humans or to the Federation at large is hard to say, because there is still some ambiguity (in both TOS and early TNG, and to some extent extending forward) about whether the subject of the show is *specifically* humanity or the Federation overall.
To me, Data's story, including this episode, has a lot of resonance even if it is at some point, somehow, conclusively proven that no electronic or positronic device could ever have something like consciousness. Whatever else we humans may be, we are also physical beings, who obey physical laws, who, like Data, come with hardware and develop changing software from our learning over time, and whose ability to make our own choices is not entirely easy to understand. Even things like the emotion chip have resonance, given how much easier it is to change one's emotional state with certain drugs or other treatments. Is it possible to find meaning while viewing our identities as intrinsically (perhaps even *exclusively*) tied to the physical reality of our brains? Can we define a soul without recourse to metaphysics? This is not even arguing that there *is* no metaphysical explanation for a soul, but with Data a biological or theological appeal to our humanity is eliminated, for one character at least. I think that this is a lot of what this episode is about.
FlyingSquirrel
@ Peter G.
"Emergence" was goofy, but wasn't there a scene where they discussed what sort of action to take in light of the fact that they might be dealing with a sentient entity that was trying to communicate? Also, I think the idea was that while a sentient mind seemed to have somehow developed from the ship's computer, the computer in its normal state was not sentient or self-aware.
My own view on AIs, incidentally, is that we probably shouldn't create them if we aren't prepared to grant them individual rights, precisely because we'll end up with these potentially unanswerable questions. I don't know enough about computer science to answer your question about writing and deleting a program, just because I'd need to know more about what would go into the potential process of creating an AI and what kind of testing could be done before activation.
If Data were contained in a box instead of an android body, I actually don't have much trouble saying yes, he should have the same rights. Obviously he wouldn't be able to move around, but I'd impose the same prohibitions against turning him off without his consent or otherwise messing with his programming.
William B
Incidentally, I like the way Data's status remains somewhat undecided throughout the show. The Federation, I think, *should* actually make up its mind and make a harder determination of what his rights are, but the fuzziness strikes me as very believable. I like, too, that even *Picard* swings between being Data's staunchest advocate and using the threat of treating Data as a defective piece of machinery in something like "Clues." And even Data vacillates. In particular, note that Data's deactivation of Lore in "Descent" goes without much fanfare; certainly Lore is dangerous in the extreme, but I suspect that if Data had killed a human adversary in quite the way he takes Lore out, there would have been more questions asked about whether he did everything he could. I have been wanting for a while to write about Data's choice in "Inheritance," and how I think his decision not to reveal to Juliana that she is an android reflects a great deal about how Data views himself and his status and his quest for humanity at that point in the series and the tragic connotations thereof. Even though everyone on the show more or less takes the leap of faith that Data is, or has the potential to be, a real boy, it's an act of faith that needs to be regularly renewed and it gets called into question, with characters suddenly reversing themselves because no one is really that sure, even though he's their friend.
I do think that there are some significant problems with the show for not going far enough with Data (and later the Doctor) in following through on all the arguments about where exactly the boundaries are supposed to be between personhood/non-personhood, and in allowing other characters to maintain a kind of agnosticism when they should really have to make up their minds definitely. I think that it's very reasonable to object and I don't want to come across as saying that one *has* to like the show's overall attitude and the contradictions it runs into. That said, it generally works very well for me with Data (and from what I remember, pretty well with the Doctor). I think that the...emotional dynamics, for lack of a better term, generally work, but there's no question for me that some of the wishy-washiness on the character and what distinguishes him from other AI and what distinguishes him from humans, etc., is the result of writers' backing off from some of the challenges posed by the character rather than *purely* leaving the character's status regularly open for re-review for emotional/mythological reasons. I like the result a lot because what *is* done with Data in the show means a lot to me personally and so I'm willing to overlook a fair amount, but I don't expect everyone to.
Peter G.
@ FlyingSquirrel,
If you afford sentient rights to Data-as-a-box, then the physical housing becomes irrelevant and what you're really doing is granting sentient status to code. That's fine in a sense, but it opens up, as mentioned, a massive quagmire about who can write this code, delete it, alter it, and maybe even about what kinds of attributes it can be given in the first place. Should it be illegal to write a line of code that makes the program "malevolent"? How about merely selfish and prone to kill for gain, as Humans now do? It goes beyond the scope of the episode, but my feeling on the subject is that the episode gives a lot of unspoken weight to the fact that Data has been largely anthropomorphized. Maybe Soong did that on purpose, protecting his creation by using the sympathy others feel toward its shape.
@ William B,
There are certainly gradations of biological life, and although we're hazy on whether there are levels of sentience (or any sentience), there are clear differences in, basically, cognitive capacity among animals, which let us categorize them by importance. For the most part we protect intelligent animals the most, and mammals get heavier weight than non-mammals. But it's easy to see why we can do this: we can either identify outright biases (we sympathize with fellow mammals) or else identify clear distinctions in intelligence and give greater weight to those closest to sentience. That makes sense for the time being.
For AI, however, we have no such easy set of distinctions because, frankly, we don't live in a world full of various AIs to study and compare. We basically have a lack of empirical experience with them, but the difference is that while we couldn't have known what a cow was until we saw one, we certainly can know what certain kinds of AI would look like on a theoretical level. Maybe not advanced code the likes of which hasn't been invented yet, but certainly anything binary and linear such as we have now (and which I suspect Data is as well; he is not a quantum computer).
And IF it's feasible to differentiate between different *types* of code - one is rigid and preset, one learns but its learning algorithm is preset, one can change its own programming, etc. - then this would be the determining factor in creating a hierarchy of rights for AI. Again, I see this particular discussion as being the real one to be had about Data. Whether he's 'like a Human' or not is an extremely narcissistic way to approach the topic. The question isn't whether an AI resembles a Human, but how AI contrasts with other AI. Is Data just an extraordinarily complex BASIC program that does exactly what it's told to do, no more or less? Note again that phrases such as "but I want to live" can be written into software of any simplicity, and thus the expression of such a 'desire' shouldn't be confused with desire. I can write the same thing on a piece of paper, but the paper isn't sentient. The court case in the episode seemed to take Data's 'feelings' for Tasha very seriously, even though it failed to address whether those were really 'feelings' or just words issued in common usage to suit a situation.
To be honest, even after having seen the show I'm not quite sure whether Data should have been considered Starfleet property or not. It does seem like an extravagant waste to avoid having androids on every ship seeing as how Data personally saved not only the Enterprise but probably the Federation multiple times. And as for the argument of human lives being saved in favor of risking androids...well...duh? Isn't that a good thing?
Chrome
@Peter G. and William B
I took Louvois' ruling to mean that if a machine is so life-like that it has the perception of a soul, then it can't be considered property and has the right to choose.
That's the difference between the Enterprise computer and Data. No matter how sophisticated the Enterprise is programmed, it's still missing those very life-like qualities that Data showed in this episode (intimacy, vanity, friendship, a career, etc.).
Peter G.
@ Chrome,
"I took Louvois' ruling to mean that if a machine is so life-like that it has the perception of a soul, then it can't be considered property and has the right to chose."
This is kind of my problem. The things described in the episode don't show Data to be life-like, but rather Human-like, which is a significant distinction. It means that entities that emulate being a literal Human Being will receive favorable treatment by the Federation. I'm sure plenty of sentient life-forms in Star Trek don't have 'intimacy' or 'friendship' in the ways Humans know them, so I'm not sure how those should matter (but to a judge out of her depth, I can see how she could be unaware enough to think they should). And just a quibble, but Data doesn't have vanity; his having a career is also a circular argument, because the argument about whether he *should* have a career relies on him having the rights afforded to sentients.
The Enterprise computer wasn't designed to have personality or look like a person, but it could have been. Would the aesthetic alterations in the programming have made it suddenly sentient because it *seemed* more sentient? If that's all it comes down to, then I would confidently state that Data is not sentient. But they did dally with giving the Enterprise computer a personality in TOS, and although it was played as comedy (choosing the personality matrix that calls Kirk "Dear" was probably some engineer trolling him), the takeaway from that silly experiment was that some Human captains like their machines to sound like machines and not to pretend (poorly) to be like Humans. To pretend in that way could be felt as an insult to Humans. But what about a machine that acknowledges it is a machine, and acts like one, but wants to be more Human? That's the recipe for ego stroking, and again I wouldn't be surprised if Data's entire existential crisis was a pure magic trick played by Soong to get people to like Data (and thus to protect his work).
William B
@Peter G.,
I agree that intelligence and cognitive ability are the main way in which we distinguish different types of animals, and, as you say, similarity to humans is what tends to grant mammals special rights. Within this episode, I think we are seeing something similar with Data.
Picard asks Maddox to define sentience, and he supplies intelligence, self-awareness and consciousness.
PICARD: Is Commander Data intelligent?
MADDOX: Yes. It has the ability to learn and understand, and to cope with new situations.
I submit that this is the way the episode argues what is different about Data from a toaster, or a piece of paper; it is also what distinguishes animals granted special rights from ones which are not. It is not stated explicitly, but I believe that this ability to "learn and understand, and to cope with new situations" is what is missing from other code. Eventually Voyager's computer has bio-neural gel packs, but for now there is no indication that other systems encountered have a neural net like Data's, which is designed to emulate the functioning of the human brain. I think that Picard is being a little jokey in his statement that Data's statement of where he is and what this court case means for him proves that he is self-aware, because "awareness" to some degree requires consciousness. Really, I think that consciousness is necessary but not sufficient for full self-awareness, and the part that is not covered by consciousness is covered by Data's statement of his situation. That is indeed the same quality displayed by a piece of paper which has "I am a piece of paper!" written on it; if that piece of paper had consciousness and somehow controlled its "I am a piece of paper!" statement, then it would be self-aware. In any case, the episode did not argue that the Enterprise computer is not sufficiently intelligent in the sense of adaptability etc. to meet the criterion for sentience; that the ruling is on Data alone rather than all AI is a function of the narrowness of the case.
The combination of intelligence and "self-awareness," which is really the demonstration of the component of self-awareness that is not covered by consciousness, is what makes Data an edge case where consciousness is the essential final component, and "I don't know" becomes sufficient. Animals which are "conscious" but with no evidence of self-awareness or intelligence do not have rights, and thus AI which are not intelligent on the level of Data (who has human-level adaptability) will never have the question of whether they have feelings or consciousness raised at all.
How do you prove that something is or is not conscious? And that is why the human-centrism is important; basically that is the *only* tool that humans have to demonstrate consciousness or internal life, or lack thereof. I know that I have consciousness or internal life, and therefore beings that demonstrate qualities similar to mine, and have a similar construction to mine, are likely to be conscious. I am not claiming this is great; it is of course narcissistic. But all arguments about consciousness start from human-centricity because the only way we have to identify the existence of an inner life is by our own example, or, at least, I have a hard time imagining any other way. In any case, demonstrating that Data (states that he) values Tasha is a way of demonstrating that Data (states that he) has values, wishes, desires which were not programmed into him directly, which adds weight to Data's stated desire not to be destroyed. It also emphasizes that Data has made his own connections besides those which were specifically and trivially predictable based on his original coding -- again, the ability to learn and adapt etc. To some degree, the idea that animals can be sorted by cognitive ability but that "cognitive ability" and intelligence would not automatically be a sign that computers have some degree of internal life is because of similarities to humans -- animals come from similar building blocks to humans (DNA, etc.) and so it is assumed that their intelligence corresponds to something similar to our own, which we know to value because we experience our own value. Now, obviously by the time of the show, humans have met other species which are sentient...but I think that the sentience is still primarily demonstrated by being similar to humans. How can anyone possibly know if anyone else is sentient? The only possible way is to either take beings at their word, or to build through analogy to one's own experience. The only being I can be sure is sentient is me; everyone else's sentience is accepted based on people being sufficiently close to me. I think that humans should expand outward as much as possible and not rely entirely on chauvinism, but I have no idea how exactly I would determine if a fully alien being which claimed that it was sentient truly experienced sentience or was just able to simulate it.
(Actually, I don't "really" know that my experience of consciousness is real, but I am still experiencing something, so I go with that.)
As to whether his statements that he values his life, or values Tasha, etc., indicate that he actually values them, this is what it means for Picard to ask if Data is conscious. If Data is not conscious, then his statements are just the verbal printouts of an automaton; if he is conscious then they are, on some level, "felt." And here I agree that Picard fails to make much of a case; all he does is ask whether everyone is sure. If I were Picard, I would start arguing that the similarity of Data's brain to a human brain and the complexity of his programming indicate a sufficient similarity in all observables to the human brain for us to conclude that it will likely have other traits in common with the human brain, including consciousness. Even without the comparison to humans, though, it may happen that we can never fully assume that any rock or mountain or collection of atoms is *not* conscious, and we must accept a certain level of intelligence and self-awareness as sufficient benchmarks to declare something sentient. This is, of course, very unsatisfying, but it is unsatisfying, too, to begin with the presumption that only beings which are sufficiently similar to humans in physical nature (i.e. made up of cells and DNA) have the possibility of consciousness. To simply suspect that anything in the universe might have consciousness is a simplistic direction for the episode to go in, granted, which is why I prefer to think that the implication *is* that it is Data's similarity to humanoids in terms of systems of value, cognitive ability, adaptability and even in design (a neural net which is designed to reproduce the functioning of a brain) which may not be possible without emergent consciousness. I think that most people would agree that something that demonstrates intelligence seems *more likely* to also have consciousness than something which demonstrates no intelligence, and so I do think this is one of the implicit elements in Picard's argument, which would make it much stronger, though it is also not entirely necessary.
One troubling question is what it would mean to program androids like Data, with a similar level of cognitive ability and similarity to humans, *without* any desire for self-preservation whatsoever.
I actually do agree, though, that Data is designed for ego-stroking. Actually that is some of the point -- Lore complicates the story because Lore immediately recognized his superiority to humans in physical and mental capacity, immediately came into conflict with people who hated him, and promptly had to be shut down. Data *is* meant to be entirely user-friendly. I think it's also true that Soong intended Data to be a person with internal life, but Data's desire to be human in a nonthreatening way, and the reality that it is basically impossible for him to achieve that, is pretty baked into him, which is tragic if you believe that Data *does* have some sort of internal life, as I tend to.
William B
@Chrome, Peter G. response:
The Turing Test is not namechecked in the episode, but it does sort of remain here: if there is no airtight argument that Data is less a person than Picard, why should Picard have more rights? This is the essence of Picard asking Maddox to prove that he is sentient; the main arguments that Maddox could supply that Picard is conscious and Data isn't are:
1. Picard is more similar to Maddox (in being a biological life form), and, implicitly, to Louvois;
2. Data was created deliberately, rather than by a random physical process;
3. (MAYBE) These are the things we don't know about the human brain, whereas this is what we know about the android net/software.
With respect to 3, obviously there are aspects of Data's programming and design which are unknown, hence the need to disassemble him. With respect to 2, Picard punctures this by suggesting that parents create their children, though it is an incomplete argument. With respect to 1, well, that is part of the reason I think Picard brings up Data's intimacy etc. One could argue that he is appealing to human biases, but he is perhaps working in the opposite direction -- by showing the similarities of Data's behaviour to humans, he is countering the natural bias that Data is probably not conscious because he is different in "construction" from humans. I'm not really saying I've knocked down all of these (or other potential ones). But rather than starting with why-is-Data-different-from-a-toaster, if you start with why-is-Data-different-from-a-human then Louvois' ruling makes sense. In that case, it is a significant kind of chauvinism that Picard (and Data, for that matter) do not start heading forth and trying to figure out whether they should liberate the Enterprise computer, or weather patterns, or rocks which no one would even think to wonder about, but the "AI rights" arc is not done; and yes, I do think that there is more evidence for Data's cognitive powers than the computer's, though it also has some degree of adaptability.
William B
Though, of course, the Enterprise computer can run and possibly create holodeck characters of great sophistication, so there is a strong case for the Enterprise computer being intelligent, which certainly complicates things. That said, I don't think this means Louvois' ruling (etc.) is wrong; rather, the ruling on Data should ideally open up discussion of other sophisticated AI which have "sentient life form"-level intelligence.
Peter G.
My main issue with granting 'sentient status' to any advanced intelligence is that at bottom intelligence is just processing power. When constructing an AI, I find it troublesome that the sole factor separating one AI from another might be a superior CPU, and that this makes it 'sentient' and thus affords it rights. Does that mean I'm better off with a slower computer than with a more advanced one, because the latter has the right to tell me what it wants to do?
Some people theorize that consciousness is emergent in a sufficiently advanced recursive processing system. Others say something in the biology matters, perhaps as a resonator with some unseen force. Either way, giving rights to technology is a big deal.
William B
I will say though that I do think the human-centrism of the episode is a problem insofar as one would expect that there *is* by this point in the Federation some sort of procedure for talking about sentience in non-human terms. Since the vast majority of species encountered in Trek, and especially TOS and TNG, who are accepted as sentient and as having rights are humanoid and very similar to human beings, this is not exactly a problem for the episode so much as revealing one of the major limitations of Trek's imagination. Alternatively, this works to some degree because this *is* a myth which is fundamentally about humanity more so than it is actually about anything to do with aliens, and so it makes sense that arguments about machines end up being human-centred.
It is actually pretty disturbing, thinking about it. I do pretty strongly think that the tactic Picard eventually settled on is correct, which is to argue that it is not possible to conclusively demonstrate that Data is sufficiently fundamentally different from Picard to be classified as a different being. However, the point remains of what happens to entities which are sentient but do not *have* a survival urge. This is potentially the case for the Enterprise computer. Certainly, Soong decided to program Data to "want to live"; it seems from various statements made over the years, including by Soong himself, that he intended to create Data as having consciousness and being something like an actual human, with some adjustments made so as to avoid the mistakes of Lore and perhaps to improve on humans. Assuming for the moment that he succeeded in creating a being which has consciousness (and thus sentience), the possibility remains that he could have programmed Data with similar skills but no "desire" for self-preservation or self-actualization.
However, this is not simply a matter of AI. Eventually genetic engineering on a broader scale should be possible, and what happens then? Could beings of human intelligence with no desire for self-preservation beyond what is convenient for their masters be created through a combination of genetic and behavioural work?
It makes a lot of sense that Soong, who really did see his androids as his children and wanted them to be humanlike, would program them to survive and thrive, and, after the catastrophe of Lore, made Data to survive, thrive, and also be personable enough that he would not have to be shut down. Some of this is obviously Soong's own vanity, but some of it is the same sort of vanity that shows up in many parents' desire for their children to carry on their legacy. I like that Voyager complicates some of the Data material by having the Doctor be quite unpleasant, much of the time; whereas Data is designed for prime likability, the Doctor is abrasive and difficult.
William B
@Peter G., Fair enough.
Chrome
@Peter G.
"This is kind of my problem. The things described in the episode don't show Data to be life-like, but rather Human-like, which is a significant distinction."
True, and this goes back to what William B mentioned about the writers being limited in describing Starfleet generally, because they only have the human experience to draw from. Incidentally, that vanity thing I mentioned is actually a line from this episode, when Data curiously decides to pack his medals as he leaves Starfleet.
But you're right, the episode doesn't really describe what criteria qualify Data as sentient and the computer as non-sentient. I suppose Data seems more self-aware than a computer, but it's hard to tell if he's acting on incredibly complex programming or something greater.
William B
One thing I will still add is that the comparison to animal life still holds in some ways, especially given that certain animals were selectively bred (over millennia) for both intelligence and ability to interact with humans. Putting human intervention aside, if you need a more intelligent animal, e.g. a service animal for the blind, you have to have a dog rather than a spider, and you have to treat it better. If you want a pet you can pull the legs off with relative impunity, get a spider, not a dog. It may end up being that a scale for defining intelligence in computers will be introduced in terms of adaptability etc., and that it will be necessary to use less adaptable computers in order to treat them ethically. Since intelligence (and, really, intelligence as defined by ability to do human-like tasks) is the main measurement for animal life value, I expect it is likely to be one for AI if a sufficiently rigorous theory of consciousness is not forthcoming.
I am troubled, in the end, by the human-centricity of the arguments about Data and the lack of extension to other computers. That said, there are still two directions: if Data is mostly indistinguishable from a humanoid except in the kind of machine he is, Picard's case stands and it is chauvinism to assume that only biology could produce consciousness; if Data is mostly indistinguishable from other machines except in his similarity to humans, then Peter G.'s point stands and it is chauvinism to only grant rights to the most cuddly and human of machines. Both can be true, in which case the failure of imagination on the part of the characters and likely writers is failing to use Data as a launching point to all AI. Even the exocomps, the emergent thing in Emergence, and various holodeck characters are still identified as independent beings whereas the computer itself is not, which reveals a significant bias toward things which resemble human or animal life.
For what it's worth, I continue to have no doubt Data was programmed to value things, have existential crises etc., in conjunction with inputs from his environment, but I continue to believe that this does not necessarily distinguish him from humans, who are created with a DNA blueprint which creates a brain which interacts with said blueprint and the environment. Soong programmed Data in a way to make him likely to continue existing, and humans' observable behaviours are generally consistent with what will lead to the survival of the individual and species. To tie into the first scene in the episode, Data may be an elaborate bluff, but so might we be. Of course that still leaves open the possibility that things very far from human, whether biological, technological, or something else entirely, can also possess this trait. And again it seems like cognitive ability and distance from humans are the things we use now; probably given the similarity of humanoids, cognitive ability and distance from humanoids will be the norm. I would like to believe there is something else that could make the application fairer and less egocentric. But it seems even identifying the root of human consciousness more precisely (in the physical world) would just move the problem one step back, identifying "this particular trait we have" as the thing of value, rather than these *other* traits we have.
Andy's Friend
@All
You have to go much further. You have to stop talking about artificial intelligence, which is irrelevant, and begin discussing artificial consciousness.
Allow me to copy-paste a couple of my older posts on "Heroes and Demons" (VOY). I recommend the whole discussion there, even Elliott's usual attempts to contradict me (and everyone else; he was a rather contrarian fellow). Do note that "body & brain," as I later explain on that thread, is a stylistic device: it is of course Data's positronic brain that matters.
Fri, Oct 31, 2014, 1:29pm (UTC -5)
"@Elliott, Peremensoe, Robert, Skeptikal, William, and Yanks
Interesting debate, as usual, between some of the most able debaters in here. It would seem that I mostly tend to agree with Robert on this one. I’m not sure, though; my reading may be myopic.
For what it's worth, here's my opinion on this most interesting question of "sentience". For the record: Data and the EMH are of course some of my favourite characters of Trek, although I consider Data to be a considerably more interesting and complex one; the EMH has many good episodes and is wonderfully entertaining ― Picardo does a great job ― but doesn't come close to Data otherwise.
I consider Data, but not the EMH, to be sentient.
This has to do with the physical aspect of what is an individual, and sentience. Data has a body. More importantly, Data has a brain. It’s not about how Data and the EMH behave and what they say, it’s a matter of how, or whether, they think.
Peremensoe wrote: ”This is a physiological difference between them, but not a philosophical one, as far as I can see.”
I cannot agree. I’m sure that someday we’ll see machines that can simulate intelligence ― general *artificial intelligence*, or strong AI. But I believe that if we are ever to also achieve true *artificial consciousness* ― what I gather we mean here by ”sentience” ― we need also to create an artificial brain. As Haikonen wrote a decade ago:
”The brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers.”
This is the main difference between Data and the EMH, and why this physiological difference is so important. Data possesses an artificial brain ― artificial neural networks of sorts ― the EMH does not.
Data’s positronic brain should thus allow him thought processes somehow similar to those of humans that are beyond the EMH’s capabilities. The EMH simply executes Haikonen’s ”programmed strings of commands”.
I don't claim to be an expert on Soong's positronic brain (is anyone?), and I have no idea about the intricate differences and similarities between it and the human brain (again: does anyone?). But I believe that his artificial brain must somehow allow for some of the same, or similar, thought processes that cause *self-awareness* in humans. Data's positronic brain is no mere CPU. In spite of his very slow learning curve in some aspects, Data consists of more than his programming.
This again is at the core of the debate. "Sentience", as in self-awareness, or *artificial consciousness*, must necessarily imply some sort of non-linear, cognitive processes. Simple *artificial intelligence* ― such as decision-making, adapting and improving, and even the simulation of human behaviour ― need not.
The EMH is a sophisticated program, especially regarding prioritizing and decision-making functions, and even possesses autoprogramming functions allowing him to alter his own programming. As far as I remember (correct me if I'm wrong), he doesn't possess the same self-monitoring and self-maintenance functions that Data ― and any sentient being ― does. Even those, however, might be programmed and simulated. The true matter is the awareness of self. One thing is to simulate autonomous thought; something quite different is actually possessing it. Does the fact that the EMH wonders what to call himself prove that he is sentient?
Data is essentially a child in his understanding of humanity. But he is, in all aspects, a sentient individual. He has a physical body, and a physical brain that processes his thoughts, and he lives with the awareness of being a unique being. Data cannot exist outside his body, or without his positronic brain. If there’s one thing that we learned from the film ”Nemesis”, it’s that it’s his brain, much superior to B-4’s, that makes him what he is. Thanks to his body, and his brain, Data is, in every aspect, an independent individual.
The EMH is not. He has no body, and no brain, but depends ― mainly, but not necessarily ― on the Voyager computer to process his program. But more fundamentally, he depends entirely on that program ― on strings of commands. Unlike Data, he consists of nothing more than the sum of his programming.
The EMH can be rewritten at will, in a manner that Data cannot. He can be relocated at will to any computer system with enough capacity to store and process his program. Data cannot ― when Data transfers his memories to B-4, the latter doesn’t become Data. He can be shaped and modelled and thrown about like a piece of clay. Data cannot. The EMH has, in fact, no true personality or existence.
Because he relies *entirely* on a string of commands, he is, in truth, nothing but that simple execution of commands. Even if his program compels him to mimic human behaviour with extreme precision, that precision merely depends on computational power and lines of programming, not on any thought process.
Of course, one could argue that the Voyager’s computer *is* the EMH’s brain, and that it is irrelevant that his memories, and his program, can be transferred to any other computer ― even as far as the Alpha Quadrant, as in ”Message in a Bottle” and ”Life Line”.
But that merely further annihilates his individuality. The EMH can, in theory, if the given hardware and power requirements are met, be duplicated at will at any given time, creating several others which might then develop in different ways. However ― unlike say, Will and Thomas Riker, or a copy of Data, or the clone of any true individual ―, these several other EMHs might even be merged again at a later time.
It is even perfectly possible to imagine that several EMHs could be merged, with perhaps the necessary adjustments to the program (deleting certain subroutines any of them might have added independently in the meantime, for example), but allowing for multiple memories for certain time periods to be retained. Such is the magic of software.
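To see how trivial such duplication and merging is in today’s terms, here is a minimal, hypothetical sketch in Python ― the EMHState class, its fields, and the merge step are of course my inventions for illustration, not anything from the show:

    import copy

    class EMHState:
        """Hypothetical stand-in for an EMH's complete program state."""
        def __init__(self):
            self.subroutines = {"medical": "v1.0", "opera": "v1.0"}
            self.memories = []

    # Duplicating the "individual" is a single copy operation.
    original = EMHState()
    clone = copy.deepcopy(original)

    # The two copies diverge independently...
    original.memories.append("treated a crewman on Deck 5")
    clone.subroutines["whistling_teapot"] = "v0.1"
    clone.memories.append("performed La Boheme on the holodeck")

    # ...and can later be merged again: retain both memory streams,
    # delete any subroutines added independently in the meantime.
    merged = EMHState()
    merged.memories = original.memories + clone.memories
    merged.subroutines = dict(original.subroutines)  # the teapot is gone

No individual, in any meaningful sense, survives such an operation ― and yet the program runs on, none the wiser.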
The EMH is thus not even a true individual, much less sentient. He’s software. Nothing more.
Furthermore, something else and rather important must also be mentioned. Unless our scope is the infinite, that is, God, or the Power Cosmic, to be sentient also means that you can lose that sentience. Humans, for a variety of reasons, can, all by themselves and to various degrees, become demented, or insane, or even vegetative. A computer program cannot.
I’m betting that Data, given his positronic brain, could devolve to something such as B-4 when his brain began to fail. Given enough time (he clearly evolves much slower than humans, and his positronic brain would presumably last centuries or even millennia before suffering degradation), Data could actually risk losing his sanity, and perhaps his sentience, just like any human.
The EMH cannot. The various attempts in VOY to depict a somewhat deranged EMH, such as ”Darkling”, are all unconvincing, even if interesting or amusing: there should and would always be a set of primary directives and protocols that would override all other programming in cases of internal conflict. Call it the Three Laws, or what you will: such is the very nature of programming. ”Darkling”, and other such instances, is a fraud. It is not the reflection of sentience; it is, at best, the result of inept programming.
So is ”Latent Image”. But symptomatically, what do we see in that episode? Janeway conveniently rewrites the EMH, erasing part of his memory. This is consistent with what we see suggested several times, such as concerning his speech and musical subroutines in ”Virtuoso”. Again, symptomatically, what does Torres tell the EMH in ”Virtuoso”?
― TORRES: “Look, Doc, I don't know anything about this woman or why she doesn't appreciate you, and I may not be an expert on music, but I'm a pretty good engineer. I can expand your musical subroutines all you like. I can even reprogramme you to be a whistling teapot. But, if I do that, it won't be you anymore.”
This is at the core of the nature of the EMH. What is he? A computer program, the sum of lines of programming.
Compare again to Data. Our yellow-eyed android is also the product of incredibly advanced programming. He also is able to write subroutines to add to his nature and his experience; and he can delete those subroutines again. The important difference, however, is that only Soong and Lore can seriously manipulate his behaviour, and then only by triggering Soong’s purpose-made devices: the homing device in ”Brothers”, and the emotion chip in ”Descent”. There’s a reason, after all, why Maddox would like to study Data further in ”Measure of a Man”. And this is the difference: Soong is Soong, and Data is Data. But any apt computer programmer could rewrite the EMH as he or she pleased.
(Of course, one could claim that any apt surgeon might be able to lobotomise any human, but that would be equivalent to saying that anyone with a baseball bat might alter the personality of a human. I trust you can see the difference.)
I believe that the EMH, because of this lack of a brain, is incapable of brain activity and complex thought, and thus of artificial consciousness. The EMH is by design able to operate from any computer system that meets the minimum requirements, but the program can never be more than the sum of his strings of commands. Sentience may be simulated ― it may even be perfectly simulated. But simulated sentience is still a simulation.
I thus believe that the EMH is nothing but an incredibly sophisticated piece of software that mimics sentience, and pretends to wish to grow, and pretends to... and pretends to.... He is, in a way, The Great Pretender. He has no real body, and he has no real mind. As his programming evolves, and the subroutines become ever more complex, the illusion seems increasingly real. But does it ever become more than a simulacrum of sentience?
All this is of course theory; in practical terms, I have no problem admitting that a sufficiently advanced program would be virtually indistinguishable, for most practical purposes, from actual sentience. And therefore, *for most practical purposes*, I would treat the impressive Voyager EMH as an individual. But as much as I am fond of the Doctor, I have a very hard time seeing him as anything but a piece of software, no matter how sophisticated.
So, as you can gather by now, I am not a fan of theories of artificial consciousness that imply that it is all simply a matter of which computations the AI is capable of. A string of commands, however complex, is still nothing but a string of commands. So to conclude: even in a sci-fi context, I side with those who believe that artificial consciousness requires some sort of non-linear thought process and brain activity. It requires a physical body and brain of sorts, be it a biological humanoid, a positronic android, the Great Link, the ocean of Solaris, or whatever (I am prepared to discuss non-corporeal entities, but elsewhere).
Finally, I would say that the bio gel idea, as mentioned by Robert, could have been interesting in making the EMH somehow more unique. That could have the further implication that he could not be transferred to a computer without bio gel circuitry, thus further emphasizing some sort of uniqueness, and perhaps providing a plausible explanation for the proverbial ”spark” of consciousness ― which of course would then, as in Data’s case, have been present from the beginning. This would transform the EMH from a piece of software into... perhaps something more, interwoven somehow with the ship itself. It could have been interesting ― but then again, it would also have limited the writing for the EMH very severely. Could it have provided enough alternate possibilities to make it worthwhile? I don’t know; but I can understand why the writers chose otherwise"
Andy's Friend
And:
Sat, Nov 1, 2014, 1:43pm (UTC -5)
"@William B, thanks for your reply, and especially for making me see things in my argumentation I hadn’t thought of myself! :D
@Robert, thanks for the emulator theory. I’m not quite sure that I agree with you: I believe you fail to see an important difference. But we’ll get there :)
This is of course one huge question to try and begin to consider. It is also a very obvious one; there’s a reason ”The Measure of a Man” was written as early as Season 2.
First of all, a note on the Turing test several of you have mentioned: I agree with William, and would be more categorical than him: it is utterly irrelevant for our purposes, most importantly because simulation really is just that. We must leave Turing alone with the answers to the questions he asked, and search deeper for answers to our own questions.
Second, a clarification: I’m discussing this mostly as sci-fi, and not as hard science. But it is impossible for me to ignore at least some hard science. The problem with this is that while any Trek writer can simply write that the Doctor is sentient, and explain it with a minimum of ludicrous technobabble, it is quite simply inconsistent with what the majority of experts on artificial consciousness today believes. But...
...on the other hand, the positronic brain I use to argue Data’s artificial consciousness is, in itself, in a way also a piece of that same technobabble. None of us knows what it does; nobody does. However, it is not as implausible a piece of technobabble as say, warp speed, or transporter technology. It may very well be possible one day to create an artificial brain of sorts. And in fact, it is a fundamental piece in what most believe to be necessary to answer our question. I therefore would like to state these fundamental First and Second Sentences:
1. ― DATA HAS AN ARTIFICIAL BRAIN. We know that Data has a ”positronic brain”. It is consistently called a ”brain” throughout the series. But is it an *artificial brain*? I believe it is.
2. ― THE EMH IS A COMPUTER PROGRAM. I don’t believe I need to elaborate on that.
This is of the highest order of importance, because ― unlike what I now see Robert seems to believe ― I think the question of ”sentience”, or artificial consciousness, has little to do with hardware vs software as he puts it, as we shall see.
Now, I’d like to clarify nomenclature and definitions. Feel free to disagree or elaborate:
― By *brain* I mean any actual (human) or fictional (say, the Great Link) living species’ brain, or thought process mechanism(s) that perform functions analogous to those of the human brain, and allow for *non-linear*, cognitive processes. I’m perfectly prepared to accept intelligent, sentient, extra-terrestrial life that is non-humanoid; in fact, I would be very surprised if most were humanoid, and in that respect I am inclined to agree with Stanisław Lem in “Solaris”. I am perfectly ready to accept radially symmetric lifeforms, or asymmetric ones, with all the implications for their nervous systems, or even more bizarre and exotic lifeforms, such as the Great Link or Solaris’ ocean. I believe, though, that all self-conscious lifeforms must have some sort of brain, nervous system ― not necessarily a central nervous system ―, or analogues (some highly sophisticated nerve net, for instance) that in some manner or other allows for non-linear cognitive processes. Because non-linearity is what thought, and consciousness ― sentience as we talk about it ― is about.
― By *artificial brain* I don’t mean a brain that faithfully reproduces human neuroanatomy, or human thought processes. I merely mean any artificially created brain of sorts or brain analogue which somehow (insert your favourite Treknobabble here ― although serious, actual research is being conducted in this field) can produce *non-linear* cognitive processes.
― By *non-linear* cognitive process I mean not the strict sense of non-linear computational mechanics, but rather that ineffable quality of abstract human thought which is the opposite of *linear* computational process ― the simple execution of strings of commands, which must necessarily follow as specified by any given program or subroutine. Non-linear processes are both the amazing strength and the weakness of the human mind. Unlike the linear, slavish processes of computers and programs, the incredible wonder of the brain as defined is its capacity to perform that proverbial “quantum leap”: the inexplicable abstractions, the non-linear processes that result in our thoughts, both conscious and subconscious ― and in fact, in us having a mind at all, unlike computers and computer programs. Sadly, it is also that non-linear, erratic and unpredictable nature of brain processes that can cause serious psychological disturbances, madness, or even loss of consciousness of self.
These differences are at the core of the issue, and here I would perhaps seem to agree with William, when he writes: ”I don't think that it's at all obvious that sentience or inner life is tied to biology, but it's not at all obvious that it's wholly separate from it, either. MAYBE at some point neurologists and physicists and biologists and so forth will be able to identify some kind of physical process that clearly demarcates consciousness from the lack of consciousness, not just by modeling and reproducing the functioning of the human brain but in some more fundamental way.”
I agree, and again I would go a bit further: I am actually willing to go so far as to admit the possibility of us one day being able to create an *artificial brain* which can reproduce, to a certain degree, some or many of those processes ― and perhaps even others our own human brains are incapable of. Likewise, I am prepared to admit the possibility of sentient life in other forms than carbon-based humanoid. It is as reflections of those possibilities that I see the Founders, and any number of other such outlandish species in Star Trek. And it is as such that I view Data’s positronic brain ― something that somehow allows him many of the same possibilities of conscious thought that we have, and perhaps even others, as yet undiscovered by him. Again, I would go so far as not only to admit, but to suppose the very real possibility of two identical artificial brains ― say, two copies of Data’s positronic brain ― *not* behaving exactly alike in spite of being exact copies of each other, in a manner similar to (but of course not identical to) how identical twins’ brains will function differently. This analogy is far from perfect, but it is perhaps the easiest one to understand: thoughts and consciousness are more than the sum of the physical, biological brain and DNA. Artificial consciousness must also be more than the sum of an artificial brain and its programming. As such, I, like the researchers whose views I am merely reflecting, not only expect, but require an artificial brain that in this aspect truly equals the fundamental behaviour of sentient biological brains.
It is here, I believe, that Robert’s last thoughts and mine seem to diverge. Robert seems to believe that Data’s positronic brain is merely a highly advanced computer. If this is the case, I wholly agree with his final assessment.
If not, however, if Data’s brain is a true *artificial brain* as defined, what Robert proposes is wholly unacceptable.
IT IS STAR TREK’S FAULT THAT THE QUALITY OF DATA’S BRAIN IS NEVER FULLY ESTABLISHED.
Data’s brain is never established as a true artificial brain. But it is never established as merely a highly advanced computer, either. It is once stated, for instance, that his brain is “rated at...” But this means nothing. This is a mere attempt at assessing certain of his capacities, while wholly ignoring others that may as yet be underdeveloped or unexplored. It is in a way similar to saying of a chess player that he is rated at 2450 Elo: it tells you precious little about the man’s capacities outside the realm of chess.
We must therefore clearly understand that brains, including artificial brains, and computers are not the same and don’t work the same way. It is not a matter of orders of magnitude. It is not a matter of speed, or capacity. It is not even a matter of apples and oranges.
I therefore would like to state my Third, Fourth, Fifth and Sixth Sentences:
3. ― A BRAIN IS NOT A COMPUTER, and vice-versa.
4. ― AN ARTIFICIAL BRAIN IS NOT A COMPUTER, and vice versa.
5. ― A COMPUTER IS INCAPABLE OF THOUGHT PROCESSES. It merely executes programs.
6. ― A PROGRAM IS INCAPABLE OF THOUGHT PROCESSES. It merely consists of linear strings of commands.
Here is finally the matter explained: a computer is merely a toaster, a vacuum-cleaner, a dish-washer: it always performs the same routine function. That function is to run various computer programs. And the computer programs ― any program ― will always be incapable of exceeding themselves. And the combination computer+program is incapable of non-linear, abstract thought process.
To simplify: a computer program must *always* obey its programming, EVEN WHEN THE PROGRAMMING FORCES RANDOMIZATION. In such cases, random events ― actions and decisions, for instance ― are still merely a part of that program, within the chosen parameters. They are therefore only apparently random, and only within the specifications of the program or subroutine. An extremely simplified example:
Imagine that in a given situation involving Subroutine 47 and an A/B Action choice, the programming requires that the EMH must (see the sketch after this list):
― 35% of the cases: wait 3-6 seconds as if considering Actions A and B, then choose the action with the HIGHEST probability of success according to Subroutine 47.
― 20% of the cases: wait 10-15 seconds as if considering Actions A and B, then choose the action with the HIGHEST probability of success according to Subroutine 47.
― 20% of the cases: wait 20-60 seconds as if considering Actions A and B, then choose the action with the HIGHEST probability of success according to Subroutine 47.
― 10% of the cases: wait 20-60 seconds as if considering Actions A and B, then choose RANDOMLY.
― 5% of the cases: wait 60-90 seconds as if considering Actions A and B, then choose RANDOMLY.
― 6% of the cases: wait 20-60 seconds as if considering Actions A and B, then choose the action with the LOWEST probability of success according to Subroutine 47.
― 2% of the cases: wait 10-15 seconds, then choose the action with the LOWEST probability of success according to Subroutine 47.
― 2% of the cases: wait 3-6 seconds, then choose the action with the LOWEST probability of success according to Subroutine 47.
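In modern terms, the whole “decision process” above collapses into a few lines of weighted sampling. A minimal sketch in Python ― “Subroutine 47” and all its numbers are simply the hypothetical table above, nothing canonical:

    import random

    # (probability, (min_wait, max_wait), strategy), one row per case above.
    CASES = [
        (0.35, (3, 6),   "best"),
        (0.20, (10, 15), "best"),
        (0.20, (20, 60), "best"),
        (0.10, (20, 60), "random"),
        (0.05, (60, 90), "random"),
        (0.06, (20, 60), "worst"),
        (0.02, (10, 15), "worst"),
        (0.02, (3, 6),   "worst"),
    ]

    def subroutine_47(p_success_a, p_success_b):
        """Return (seconds 'spent thinking', chosen action) for an A/B choice."""
        _, (lo, hi), strategy = random.choices(CASES, weights=[c[0] for c in CASES])[0]
        wait = random.uniform(lo, hi)
        if strategy == "random":
            action = random.choice(["A", "B"])
        elif strategy == "best":
            action = "A" if p_success_a >= p_success_b else "B"
        else:  # "worst"
            action = "A" if p_success_a < p_success_b else "B"
        return wait, action

    # Run this a few hundred times and the gaps give it away: the subject
    # never, ever waits 7-9 or 16-19 seconds, exactly as noted below.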
In a situation such as this simple one, any casual long-term observer would conclude that the faster the subject/EMH took a decision, the more likely it would be the right one ― something observed in most good professionals. Every now and then, however, even a quick decision might prove to be wrong. Inversely, sometimes the subject might exhibit extreme indecision, considering his options for up to a minute and a half, and then having even chances of success.
A professional observer with the proper means at his disposal, however, and enough time to run a few hundred tests, would notice that this subject never, ever spent 7-9 seconds, or 16-19 seconds before reaching a decision. A careful analysis of the response times given here would show results that could not possibly be random coincidences. If it were “Blade Runner”, Deckard would have no trouble whatsoever in identifying this subject as a Replicant.
We may of course modify the random permutations of sequences, and adjust probabilities and response times as we wish, in order to give the most accurate impression of realism appropriate to the specific subroutine: for a doctor, one would expect medical subroutines to be much faster and much more successful than poker and chess subroutines, for example. Someone with no experience in cooking might injure himself in the kitchen; but even professional chefs cut themselves rather often. And of course, no one is an expert at everything. A sufficiently sophisticated program would reflect all such variables, and perfectly mimic the chosen human behaviour. But again, the Turing test is irrelevant:
All this is varying degrees of randomization. None of this is conscious thought: it is merely strings of commands to give the impression of doubt, hesitation, failure and success ― in short, to give the impression of humanity.
But it’s all fake. It’s all programmed responses to stimuli.
Now make this model a zillion times more sophisticated, and you have the EMH’s “sentience”: a simple simulation, a computer program unable to exceed its subroutines, run slavishly by a computer incapable of any thought processes.
The only way to partially bypass this problem is to introduce FORCED CHAOS: TO RANDOMIZE RANDOMIZATION altogether.
It is highly unlikely, however, that any computer program could long survive operating a true forced chaos generator at the macro-level, as opposed to forced chaos limited to certain, very specific subroutines. One could have forced chaos make the subject hesitate for forty minutes, or two hours, or forever, and forfeit the game from a simple position in a game of chess, for example; but a forced chaos decision prompting the doctor to kill his patient with a scalpel would have more serious consequences. And many, many simpler forced chaos outcomes might also have very serious consequences. And what if the forced chaos generator had power over the autoprogramming function? How long would it take before catastrophic, cascading systems failure occurred?
And finally, but also importantly: even if the program could somehow survive operating a true forced chaos generator, thus operating extremely erratically ― which is to say, extremely dangerously, to itself and to any systems and people that might depend on it ―, it would still merely be obeying its forced chaos generator ― that is, just another string of commands.
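Anyone who has toyed with pseudo-random number generators can verify this directly: the “chaos” is itself a program, and seeding it reproduces the exact same “unpredictable” behaviour on every run. A trivial sketch:

    import random

    def forced_chaos(seed, n=5):
        """A 'forced chaos generator': looks erratic, is pure clockwork."""
        rng = random.Random(seed)  # the generator is itself strings of commands
        return [rng.choice(["A", "B", "defer", "refuse"]) for _ in range(n)]

    # The same seed yields the same "chaos", run after run:
    assert forced_chaos(42) == forced_chaos(42)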
So we’re back where we started.
So, to repeat one of my first phrases from a previous comment: “It’s not about how Data and the EMH behave and what they say, it’s a matter of how, or whether, they think.” And the matter is, that the EMH simply *does not think*. The program simulates realistic responses, based on programmed responses to stimuli. That’s all. This is not thought process. This is not having a mind.
So it follows that I don’t agree when Peremensoe writes what Yanks also previously has commented on: "So Doc's mind runs on the ship computer, while Data's runs on his personal computer in his head. This is a physiological difference between them, but not a philosophical one, as far as I can see. The *location* of a being's mind says nothing about its capacity for thought and experience."
The point is that “Doc” doesn’t have a “mind”. There is therefore a deep philosophical divide here. The kind of “mind” the EMH has is one you can simply print on paper ― line by line of programming. That’s all it is. You could, quite literally, print every single line of the EMH programming, and thus literally read everything that it is, and learn and be able to calculate its exact probabilities of response in any given, imaginable situation. You can, quite literally, read the EMH like a book.
Not so with any human. And not so, I argue, with Data. And this is where I see that Robert, in my opinion, misunderstands the question. Robert writes: “Eventually hardware and an OS will come along that's powerful enough to run an emulator that Data could be uploaded into and become a software program”. This only makes sense if you disregard his artificial brain, and the relationship between his original programming and the way it has interacted with, and continues to interact with, that brain, ever expanding what Data is ― albeit rather slowly, perhaps as a result of his positronic brain requiring much longer timeframes, but also being able to last much longer than biological brains.
So I’ll say it again: I believe that Data is more than his programming, and his brain. His brain is not just some very advanced computer. Somehow, his data ― sensations and memories ― must be stored and processed in ways we don’t fully understand in that positronic brain of his ― much like the Great Link’s thoughts and memories are stored and processed in ways unknown to us, in that gelatinous state of theirs.
I therefore doubt that Data’s program and brain as such can be extracted and emulated with any satisfactory results, any more than any human’s can. Robert would like to convert Data’s positronic brain into software. But who knows if that is any more possible than converting a human brain into software? Who knows whether Data’s brain, much like our own, can generate inscrutable, inexplicable thought processes that surpass its construction?
So while the EMH *program* runs on some *computer*, Data’s *thoughts* somehow flow in his *artificial brain*. This is thus not a matter of location: it’s a matter of essence. We are discussing wholly different things: a program in a computer, and thoughts in a brain. It just doesn’t get much more different. In my opinion, we are qualitatively worlds apart. "
William B
@Andy's Friend:
First of all, I was thinking about your post at times when I was writing the above.
I do think that the meaning of Data's positronic net has some of the connotations you indicate. I do think that there is a very important distinction between Data and the Doctor on this level. And I agree quite strongly that it is important that it is *only* members of Data's "family" who can manipulate him with impunity. In fact it is not just Soong and Lore, but also Dr. Graves from "The Schizoid Man," but even there he is clearly identified as Data's "grandfather." Within this very episode, it is emphasized several times, as you point out, that it has been impossible for Maddox to recreate the feat of Data, and that he hopes that he will recreate it by disassembling Data...which may in fact do nothing to further his abilities along. Further, we know from, e.g., The Schizoid Man, that Data's physical brain can support Graves' personality in a way that the Enterprise computer cannot (the memories are still "there" but the spark is gone). Data can, of course, be manipulated by random events of the week, but we are talking about beings with unknown power, like Q giving him laughter, or the civilization in "Masks" taking him over, or events which affect Data the same as biological beings (like the polywater intoxication in "The Naked Now" or the memory alteration thing in "Conundrum"). I think that a lot of the reason it is important that only members of Data's "family" can affect him to this degree (beyond beings who are shown to have the power to do broader manipulations on all sentient beings, like the Q) is that, as you say, in some respects Data is a child, and the series has (very cleverly) managed to split apart different aspects of Data's growth so that they take place in the series; by holding the keys to Data's growth, Graves, Soong, Tainer and Lore literalize the awesome power that family has to shape us in fundamental ways, which is important metaphorically.
And yet --
I think that the difference here between hardware and software is important, but it does not necessarily mean everything. I do not understand the claim, and the certainty behind it, that sentience requires a body to be "real" rather than a simulation. I presume that you do not require the body to be humanoid, but certainly you seem to indicate that the EMH cannot be sentient because he is simulated by the computer, rather than because he is located inside a body the way Data is. Your further claim, that the fact that anyone can modify the EMH by changing code indicates he is not sentient, again does not convince me. Certainly it means that the EMH is easier to change, is more dependent on external stimulus. However to some extent this represents simply a different perspective on what it means to be self-aware. The ease with which external pressures can change the EMH matches up with his mercurial, "borderline" personality, constantly being reshaped by environment rather than having a more solid core. Data, despite his ability to modify himself, has a more solid core of identity, but in the time between Data and the EMH's creation (both in-universe, and in our world) the ability of medicine to alter personality, through electrical stimulation or drugs, has increased. You are fond of quoting that we mostly need mirrors, and I think that in most respects, Data and the EMH are there to hold mirrors up to humanity; in both cases, we are looking at aspects of what it means to be human in a technological age, and without recourse to theology to define the "soul," but Data seems to me to reflect the question of what it means to be a person who exists in the physical world, whose selfhood is housed in and dependent on a physical organ, whereas the EMH is something more of the creature of the internet age, where people's sense of self is a little less consciously tied to their *bodies* and a little more to their online avatars, and some people have become aware of how easily they can be shaped (manipulated?) by information.
What that means, in universe, for the relative sentience of the two is a complicated question. But it does not seem to me that sentience need necessarily be a matter of physical location. I believe that at the time of Data, within the universe, the best computers in Starfleet were just starting to catch up to the ability to simulate human-level intelligences; Minuet is still far ahead of the Enterprise computer *without* the Bynar enhancement, for example, and Moriarty appears sentient but remains something of a curiosity. The EMH is at the vanguard of a new movement, and, particularly when his code is allowed to grow in complexity, he comes to be indistinguishable in complexity from a human. If consciousness is an emergent property -- something which necessarily follows from a sufficiently complex system -- then what would make the EMH not conscious?
The relevance of "artificial intelligence" for Data is that, in this episode, intelligence is one of the three qualities that Maddox gives. Self-awareness is the other. Consciousness is the third, and this is what cannot be identified by an outside party. Perhaps some theory of consciousness could be finalized by the real 24th century which would aid in this, but within Trek it seems as if there is nothing to do but speculate. And so I believe that Picard would make the same argument for the EMH that he makes for Data, and, indeed, for other artificial entities which display the intelligence necessary to be able to more or less function at a humanoid level as well as self-awareness of being able to accurately communicate one's situation. That does not, of course, mean Picard would be correct. But while Picard absolutely argues that Data should have rights because he is conscious (or, rather, *might* be conscious, and the consequences are grave if he does not), he does not attempt to prove that Data is conscious, at all, but rather implicitly demonstrates, using himself as an example, the impossibility of proving consciousness. (This is not *quite* what he does; rather, he demonstrates that Maddox cannot easily prove Picard sentient, and Maddox manages to get out that Picard is self-aware, and we can presume that Maddox would believe Picard to be intelligent, so that still leaves consciousness.)
The question is then whether, according to your model, the episode fails to argue its case -- because no one does strenuously argue that Data's positronic brain is the true distinction between him and the Enterprise computer. This is one of the points that Peter G. argued earlier, that this is the only thing really at issue in this episode, and by extension the episode failed by not properly addressing it. I would argue that this is still not really true, because it is still valuable to come at the problem from the side Picard eventually takes: if Data meets all the requirements for sentience that Maddox can prove apply to Picard, it would be discriminatory and dangerous to deny him the same rights. The arguments presented still hold -- Riker's case that Data remains a machine, created by a man, designed to resemble a man, a collection of heuristic algorithms, etc., remains true, and Picard's case that humans are also machines, that Data has formed connections which could not have been directly anticipated in his original programming (though Soong could have made some sort of "keep a memento from the person you're intimate with" code, even still), and that Data's sentience matches his own remain true. Unless the form of Data's intelligence and heuristic algorithms really do exclude the EMH in some way, it still just seems to me that Picard has not really excluded the EMH in his argument. Since I don't think that Data's artificial brain vs. the complex code which runs the EMH is necessarily a fundamental difference, I don't think this gap in the episode is a problem. But even there I am not certain that the Enterprise computer would not fit some of Picard's argument, except that it is perhaps not as able to learn and adapt and thus would not meet the intelligence requirement.
It is possible that I simply like this episode enough that I'm more interested in defending it than in getting at the truth -- part of the problem with this sort of, ahem, "adversarial process." But the episode is of course not suddenly worthless if the characters within it make wrong or incomplete arguments. Some of the reason that Picard makes the argument he does is that Data is similar enough to a human that analogies to incidents in human history can be, and are, made, in which differences which seem to be crucial but are later decided to be superficial are used as a pretext for discrimination and slavery. If artificial consciousness (or artificial intelligence) never gets to the point where an android of Data's level of sophistication can be created, then the episode still remains relevant as a metaphor for intra-human social issues, which in the end is mostly its primary purpose anyway.
It is worth noting that most forms of intelligence in TNG end up taking on physical form rather than existing in code. The emergent life form in "Emergence" actually gets formed in the holodeck as a physical entity, and that physical entity is then reproduced in the ship and then flown out. The Exocomps which Data goes to save are a little like little brains on wheels, and the connections that they form are, again, *physical* in nature -- they actually replicate the pathways. These, along with the positronic brains of Data, Lore, Lal and Julianna, suggest that TNG's take does largely match up with yours. However, I am not that certain that the brain being a physical entity is what is important for consciousness. Of the major holographic characters in the show, Minuet is revealed to be a ploy by the Bynars, and whether she is actually conscious or not remains something of a mystery, but even if she is, it is specifically because of the Bynars' extremely advanced and mysterious computer tech, which is gone at the end of the episode. The holographic Leah actually is made to be self-aware, and here we might have to rely, following Picard's argument in the episode, on her not being sufficiently intelligent -- while she can carry on a conversation, she is not actually able to solve the problem (Geordi comes up with the solution), though this is hardly conclusive. Moriarty is the bizarre, exceptional case which "Elementary, Dear Data" regards with optimism and "Ship in a Bottle" a more jaded pragmatism, living out his life in a simulation which can be as real as he is, whatever that is. Moriarty really is the most miraculous of these, and the one that most prefigures the Doctor, and really "E,DD" and "Ship in a Bottle" do not so much rule out that Moriarty could genuinely be conscious as supply a one-off solution to a one-off freak occurrence, giving him a happy life, if not exactly the full life he (apparently) wanted, in exchange for him not killing them.
William B
The second "Heroes and Demons" comment you quoted was much more focused on nonlinearity as opposed to the physical enclosure of the brain. So I didn't address that in my comment, which I had started writing before you posted your second comment :) I guess one question is whether nonlinearity would actually be necessary to convincingly simulate human-level intelligence/insight. If it was not, then there is less of a problem; AI which would give the appearance of consciousness would not come up. If it was, then Picard's argument would still largely stand, and then either nonlinearity would need to be eliminated as a requirement for probable sentience or a different argument than the one Picard offers would be required -- unless there is something else in the episode that you think would preclude a more linear computer from gaining some rights in this episode if it displayed the same external traits as Data.
Peter G.
@ Andy's Friend
I'm just not sure how you come to the conclusion definitively that Data's 'consciousness' has emergent properties just like a Human consciousness does, and that this is due to his physical brain. There are several objections to stating this as a fact, although I grant it is certainly plausible. I'll just number the objections, not having a better method.
1) I don't see how you can define Data's consciousness as having similar properties to that of Humans since you at no point state what the qualities are of Human consciousness. What does it even mean to say they are conscious, in your paradigm, other than to say they say they feel they are conscious? Is there a specific and definitive physical characteristic that can be pinpointed? Because if, as you say, that quality is some "ineffable" something then we're left in the dust in terms of demonstrating what does or does not possess this quality. We could discuss whether a simulation creates the appearance of it, but that aesthetic comparison is the best we can do. As far as I can tell this is William B's main argument in favor of the episode having contributed something significant to the discussion.
2) As William B mentioned, you state quite certainly that the Enterprise computer is distinctly different from Data's "brain", and that this mechanical difference is why Data can have consciousness and the computer can't. What is that difference? If as you, yourself, say we don't know anything about what a positronic net is, then how can you say it's fundamentally different from the computer's design? At best we can refer to Asimov when discussing this, and the only thing we know from him is that for some reason a computer that works with positrons instead of electrons is more efficient. It is, however, still a basic electrical system, using a positive charge instead of a negative one, and requiring magnetic fields to prevent the positrons from annihilating. Maybe the field causes the circuits to act as superconductors? Who knows. Asimov never said anything about a positronic brain using fundamentally unique engineering mechanisms as far as I recall from his books. This leads to objection 3, which is a corollary of 2.
3) You specify that Data's processing is "non-linear" and thus either emulates or is similar to Human brain processing. How do you know this? Where is your source? You also specify that the Human brain isn't a computer since it also employs non-linear processing. Where's your medical/mathematical source on that? What does it even mean? I could guess what it means, but you seem to state it as a fact, which makes me wonder what facts you're basing your statement on. It's certainly possible you're right, but how can you *know* you're right? Frank Herbert himself was convinced that Humans employ non-linear processing capabilities not replicable by machine binary processing. Then again, he couldn't have known there was such a thing as quantum computing. And I'm not even convinced that quantum computing is what you might mean when you (and he) discuss "non-linear" processing, which I suppose we can also call non-binary computing. Even quantum computing, to the extent that I understand it, seems to be linear processing in multiple parallel so as to exponentially increase processing power. I don't know that the processing is necessarily done in a fundamentally different manner, however. It's not binary, but still appears to be linear. If there is a different kind of processing even than this (that maybe we possess) I don't see that we can even imagine what this is yet, much less ascribe it specifically to Data's circuitry.
4) Since it's never revealed that Data's programming and memory can't be transferred to another identical body, I don't see how you can be so sure that the EMH's easy movement from system to system makes for a basic difference between him and Data. If the Doctor is contained in more primitive computers than Data is, then obviously he'd be easier to transfer around, but the distinction then would only be that Data's technology isn't well understood during TNG and thus can't be recreated yet. But this engineering obstacle is only temporary, and once Datas could be constructed at will I don't see how you could be sure his personality couldn't be transferred just as easily as that of the EMH. Once Starships have positronic main computers there would seemingly be no difference between them at all, and likewise once android technology is more advanced there seems to be no good reason why the EMH couldn't be transferred into a positronic android body (if someone were to have the bad taste to want to do this :p ). The differences you state in this sense seem to me to be superficial and not really related to any fundamental difference in consciousness between the two of them. They're each contained in a computer system, one being more advanced than the other, but otherwise they are both programs run inside a mechanical housing. Data, like the Doctor, is, quite literally, data. His naming is hardly a mere reference to the fact that he processes data quickly, I think, since the ship's computer does that too. Now that I think of it, his name somewhat reminds me of Odo's, where each is meant to describe what humanoids thought of their nature when they found them; one as an unknown specimen, and one as data.
5) If anything Data is even more constrained than the Doctor in terms of mannerism and behavior. He cannot use contractions, cannot emulate even basic Human behaviors and mannerisms, and cannot make errors intentionally. By contrast, the Doctor seems to learn much more quickly and is more adaptable to the needs of the crew in their situation. Both he and Data adopt hobbies, but while Data's are merely imitative in their implementation, the Doctor appears to somehow come up with his own schtick and tastes that are not obvious references to famous singers (as Data merely copies a violinist of choice each concert) or to specific instances in his data core. He really does, at the very least, composite data to produce a unique result, as compared to Data, who can't determine a way to do this that is not completely arbitrary. I'm not trying to make a case for the Doctor's sentience, but if you're going to look strictly at their behavior and learning capacity side-by-side, the Doctor's much more closely resembles that of humanoids than Data's does. To be honest, my inclination is to ascribe this to lazy writing on the part of Voyager's writers in not taking his limitations nearly as seriously as the TNG writers did for Data, however what's done is done and we have to accept what was presented as a given. I would say that, at the least, if Data is sentient then so is the Doctor, although my preference would be to suggest that neither is. But William B's point is valid, that a hunch that this is so shouldn't be confused with the certainty needed to withhold basic rights from Data, which is all the court case was deciding. Then again, I see another weakness in the episode: the same argument could be turned on its head, with the position being that by default no 'structure' is automatically assigned 'sentient rights' until proven it is sentient. The Federation doesn't, after all, give rights to rocks and shuttlecraft *just in case* they're sentient. In fact, however, their policy (at least as enacted by Picard) appears to be closer to granting rights to any entity demonstrating intelligence at all, whether that's Exocomps, energy beings, tritium-consuming entities, the Calamarain, or any other species that demonstrates the use of logic and perhaps the ability to communicate. Picard's criterion never seems to be sentience, but rather intelligence, and therefore we are never offered an explanation of how this applies to artificial life forms in particular, since some of them (like Exocomps) are treated as having rights based on having "wants", while others, like Moriarty, are not discussed as having innate rights, despite Picard's generous attempts to preserve his existence. Basically we have no consistent Star Trek position on artificial life forms/programs. Heck, we are even given an inkling that Wesley's nanites could develop an intelligence of some kind, even though they clearly have no central processing net such as Data's, and even if they do it wouldn't be as sophisticated. So why is an Exocomp to be afforded rights, but not the Doctor, when the Voyager's computer is likely vastly more advanced than the Exocomps were? My main point here is that there is no conclusive evidence given by Star Trek that broadly points to Data as having some unique characteristic that these other technological/artificial beings didn't have, or contrariwise, that if they have it the Doctor specifically doesn't. We just don't have enough information to make a determination about this.
I guess that's enough for now, since I'm even beginning to forget if there are other points to respond to and I haven't the energy right now to reread the thread.
William B
I agree with Peter G.'s last comment. I would say that I tend to view Data and the EMH as probably sentient, rather than probably not sentient, because I think that a system sufficiently sophisticated to simulate "human-level" (for lack of a better term) sentience may have developed sentience as a consequence of that process. Either way, though, the evidence mostly suggests to me that if one is sentient, they probably both are. If Data's brain is different from the code which runs the EMH, this is mostly not emphasized by TNG/Voyager. (I also agree that the Doctor's seeming to be less limited than Data is maybe an artifact of the Voyager writers not putting as much effort in. It does to some degree support the idea put forward by Lore that Data was deliberately built with limitations so as to prevent him from upsetting the locals too much -- the Doctor veers more quickly and readily toward narcissism than Data does, perhaps because Data is acutely aware of his limitations.)
If I had to describe an overall arc in TNG and Voyager, it would be that TNG introduces the possibility, via Moriarty and "Emergence," that sentience can be developed within the computer, but generally Moriarty is treated as a fluke which is too difficult to deal with. I actually think that they don't exactly conclude Moriarty isn't a life form so much as try to respect his one apparent wish -- to be able to get off the holodeck -- and then put that on the backburner, presumably handing the problem off to the Federation's best experts; why the holo-emitter takes until the 29th century to be built I do not know, but that's the way it is. In "Ship in a Bottle," they let Moriarty live out his life in a simulation as a somewhat generous solution to the fact that he was holding their ship hostage to achieving his, again, apparently impossible request (to leave the holodeck). In any case, in Voyager the EMH is slowly granted rights within the crew, and finally the crew mostly seem to view him as a person, and by season seven the question is whether the rights granted to the EMH within the specifics of Voyager's isolated system should be expanded outward. The bias toward hardware over software -- Data and the Exocomps vs. holographic beings -- seems to me to be something that the TNG-Voyager narrative implies is not really based on a fundamental difference between Data and the EMH, but on a difference in the biases of the physical life forms, who can more readily accept another *corporeal* conscious being, even if mechanical, as being sentient, rather than a projection whose "consciousness" is located in the computer. I think that narrative implies that the mighty Federation still has a long way to go before coming to a fair ethics of artificial beings, which to me seems fine -- in human history, rights were expanded in a scattershot manner in many cases.
I do think too that the relative ease of Data's being granted rights has a lot to do with what this episode is partly about -- precedent. By indicating the dangers of denying Data rights if he is actually sentient, Picard avoids the Federation becoming complicit in the exploitation of an entire sentient race of Datas, or, if Data is not sentient, the Federation loses out on an army of high-quality androids. Either way, once the decision is made, while it may be tempting to overturn the decision if Data becomes dangerous (and it is implied in The Offspring and Clues that the narrowness of Louvois' ruling means that Data might be denied right to "procreate" or might be disassembled if he refuses orders), if he doesn't it is in the Federation's interests to maintain their own ethical narrative. Because a whole slew of EMHs and other presumably similarly advanced holographic programs were developed by the Federation, granting that they are sentient "now" (i.e., by the time Voyager is on) would mean admitting to complicity in mass exploitation which cannot be undone, only stopped. In miniature, we see this in Latent Image, where Janeway et al.'s complicity in wiping the Doctor's memory is part of what makes it especially difficult for them to change their minds about the Doctor's rights.
I do think that while Voyager seems to be pushing the idea that sentience is possible in holograms, it still does not generally discuss whether the ship's computer *itself* could be sentient life, which is a big question. I guess Alice suggests that a ship's computer could be alive if it houses a demon ghost thing. It might also be that a ship's computer is so far from anything classified as a life form that it is unknown how to evaluate what its "wants" would be, or what an ethical treatment of it would even look like. The same would probably actually apply to some forms of "new life" discovered which would not have "wants and needs" in a way that would be recognizable to humanoids, though I can't think of such examples in Trek.
Yanks
Good lord guys!!
:-)
the discussion is afoot!! lol
Andy's Friend
@Peter G. & William B
Like you, I also think very highly of this episode. It has a good script with some very memorable lines and memorable acting, especially by Patrick Stewart, and it is thought-provoking (perhaps even more so when originally aired), as it asks questions that we will undoubtedly have to ask ourselves one day, questions that touch the core of our own existence: what does it mean to exist?
But as I wrote above, and precisely because I consider the matter important, I feel that we must necessarily consider not only the in-universe data available, but also real, hard science.
This means that while I may agree with you, in-universe, on a number of points, all that is trumped, in my opinion, by real science. Maddox, Moriarty, and Ira Graves are important: but they are so especially as glorious vehicles to ask important questions. And the answers, I find, must usually be sought outside the Trek lore.
This is not a criticism, quite the contrary. It is precisely why this is Star Trek at its very finest: as inspiration for further thought outside itself.
As such, consider Peter G. now:
PETER G.― "5) [...] if you're going to look strictly at their behavior and learning capacity side-by-side, the Doctor's much more closely resembles that of humanoids than Data's does. To be honest, my inclination is to ascribe this to lazy writing on the part of Voyager's writers in not taking his limitations nearly as seriously as the TNG writers did for Data [...]"
Very, very good point, Peter. But notice one word you wrote: "RESEMBLES". Resemblance matters not, Peter: see the last three phrases of this comment. And also: why, but why, after such a good observation, do you immediately follow it with
PETER G.― "5) [...] however what's done is done and we have to accept what was presented as a given."
I don’t think so, Peter. I love Star Trek, and especially TNG. But we must be able to love the forest, and cut down a few trees every now and then to improve the view.
Moving on:
PETER G.― "2) As William B mentioned, you state quite certainly that the Enterprise computer is distinctly different from Data's "brain", and that this mechanical difference is why Data can have consciousness and the computer can't. What is that difference?"
It is that never, ever, are we given the impression that the Enterprise computer is the equivalent of an *artificial brain* in the scientific sense, whereas it is extremely obvious from the outset that Data’s is one such creation.
PETER G.― "3) You specify that Data's processing is "non-linear" and thus either emulates or is similar to Human brain processing. How do you know this? Where is your source? You also specify that the Human brain isn't a computer since it also employs non-linear processing. Where's your medical/mathematical source on that? What does it even mean?"
And there you have it: we are clearly having two different conversations. You are speaking Trek-speak. I am speaking of science. But the good thing is, the two can actually combine. I see this episode as an invitation, to all viewers, to further investigation of these elevated matters. I suggest you investigate, Peter. It’s much easier today than it was in 1989 ;)
As for William, you are absolutely right when you write that "the episode is of course not suddenly worthless if the characters within it make wrong or incomplete arguments." I wish in no way to detract from this wonderful episode, and I greatly appreciate what Snodgrass tried to do here, and indeed, mostly accomplished. And I could not possibly expect the writer back in 1988 to be an expert on artificial consciousness.
This is thus merely to say that I find this particular talk of ours a little difficult, because you both tend to use in-universe arguments much more than I do. You just wrote, for instance:
WILLIAM B―"However, I am not that certain that the brain being a physical entity is what is important for consciousness. Of the major holographic characters in the show... "
That is of course completely legitimate: to consider what Star Trek says, and not science―to judge Star Trek on its own terms. And I am frequently impressed by the amount of detail you seem to remember. All right, then: what you then must do is investigate the coherence of the in-universe cases. Allow me three examples:
You first give an outstanding example of what I mean with an in-universe case:
WILLIAM B―"Further, we know from, e.g., The Schizoid Man, that Data's physical brain can support Graves' personality in a way that the Enterprise computer cannot (the memories are still "there" but the spark is gone)."
Precisely. But then, you write:
WILLIAM B―"Minuet is revealed to be a ploy by the Bynars, and whether she is actually conscious or not remains something of a mystery..."
No: it is only a mystery in the moment. The later example you give above retroactively affects "11001001," as it proves, even in-universe, that she cannot be conscious. Minuet is a program running on the Enterprise computer, just an even better program. But in your excellent wording: there is no spark.
And then you write:
WILLIAM B―"The holographic Leah actually is made to be self-aware..."
Maybe it is, and maybe it isn't: we must distinguish between various levels of self-awareness and consciousness. Many robotic devices on Earth today are beginning to exhibit the simplest traits of what might appear to an outsider as rudimentary self-awareness. But we must distinguish between self-awareness and mere artificial intelligence, or basic programming. If a robot vacuum-cleaner drives around a chair, you don't consider it sentient, do you? And if a robotic lawn-mower were programmed to say: "I'm with you every day, William. Every time you look at this engine, you're looking at me. Every time you touch it, it's me," you wouldn't call it self-aware, would you?
An example: toys are being programmed to react to stimuli, and can say “Ouch!”, cry, and so on. Questions:
1―Is the robotic doll that identifies a chair on its path, and walks around it, or even sits on it, self-aware?
2―Does it hurt the doll that says “Ouch!” if you drop it―even if you provide it with sensors able to measure specific force, and adjust the “Ouch!” to the force of impact?
3―Is the doll that cries if you don’t hug it for hours truly sad―even if it is programmed to cry louder the longer it isn’t hugged?
4―If the doll is allowed self-programming abilities, and alters its crying to sobbing, does that alter anything?
5―If the doll is programmed to say that it is a doll, manufactured at such and such place, at such and such date, and that its name now is whichever you have given it; and that it will take damage to its internal circuitry if you kick it, and beg you not to kick it as it will damage it, and hurt it, and begin to cry and sob, does that constitute any degree of self-awareness, or consciousness?
6―If you multiply the level of programming complexity a zillion times, does that change anything at all? (See the toy sketch just below.)
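To make question 6 concrete, here is a deliberately trivial sketch in Python. Every name in it is mine, invented purely for illustration, and of course no real toy need be built this way; it is simply the smallest possible instance of the stimulus-response programming I have been describing:
--------
# A toy "doll": every "reaction" is a fixed mapping from sensor input
# to canned output. Nothing here feels, knows, or experiences anything.
class Doll:
    def __init__(self, name="Dolly"):
        self.name = name               # question 5: it can recite facts about itself
        self.hours_unhugged = 0.0

    def impact(self, force_newtons):
        # question 2: the "Ouch!" scales with the measured force,
        # but it is arithmetic, not pain
        return "Ouch" + "!" * min(5, int(force_newtons // 10))

    def neglect(self, hours):
        # question 3: "sadness" is a monotonic function of elapsed time
        self.hours_unhugged += hours
        return "sob..." if self.hours_unhugged > 4 else "cry"

print(Doll().impact(42))   # prints "Ouch!!!!": louder, but nobody is hurt
--------
Multiply the complexity of that mapping a zillion times, and you get a vastly larger table. Nothing more.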
Would HAL dream? When that film was made, a considerable number of scientists would have answered yes. Not so today. The number of scientists who adhere to the thought that any sufficiently advanced computer program will result in artificial consciousness―a major group some fifty or sixty years ago, when Computational Speed was a deity to be worshipped and Man would have flying cars by the year 2000―has dwindled considerably since this episode was written. Paradigms have changed. Today, most say: computational speed, and global volume of operations, matter not. An infant child asleep has considerably less brain activity than a chess champion during a tournament match. That does not make it any less sentient.
Maybe you remember those days when this episode was written. The ordinary public, among which I suspect most Star Trek writers must be counted in this context, was amazed then―or terrified―by IBM's Deep Thought (I live in Copenhagen, remember? I’ll never forget when it beat Bent Larsen in 1988), and by the notion that a machine would, some day soon, beat the best chess players alive―regularly. And as I have written elsewhere, today any smart phone with the right app can beat the living daylights out of any international grandmaster any day of the week. But it isn't an inch closer to having gained consciousness, is it?
This divide, of intelligence vs consciousness, is extremely important. Today, we have researchers in artificial intelligence, and we have researchers in artificial consciousness. The divide promises―if it hasn’t already―to become as great as that between archaeologists and historians, or anthropologists and psychologists: slightly related fields, and yet, fundamentally different. The problem is that most people aren't aware of this. Most people, unknowingly, are still in 1988. They conflate the terms, and still speak of irrelevant AI (see this thread!). They still, unknowingly, speak of Deep Thought only.
So my entire point is, this episode ends up being about Deep Thought. While the underlying, philosophical questions it asks, which science-fiction writers have asked for nearly a century by now, are sound, and elevate it, it is of course a child of its time: at the concrete level, it misses the point. It wishes to discuss the right, abstract questions, but doesn't know how to do it at the concrete level: it essentially reduces Data to Deep Thought.
But... I believe this was on purpose! As William points out, we had recently had the Ira Graves episode. In Manning & Beimler's story, the nature of Data's positronic brain was key. But to Snodgrass' story, it was detrimental. I am convinced that she was fully aware of the shortcomings of her story: I believe that she doesn't use Data's positronic brain as an argument because it is a devastating one: it shreds Maddox apart. There would be no episode if she made use of it. And even worse: many viewers, in 1989 and even today, wouldn’t understand why. So she wisely ignores it: she refers to Data’s positronic brain only en passant, but does not use the logical consequence of *a frakking artificial brain!* during the trial itself. Instead, she uses arguments of the Deep Thought kind viewers might be expected to understand in 1989―and today. And so we get this compelling drama. It works at a much lower level of abstraction, but it can be enjoyed by many more.
And for once, I can accept that choice. Normally I call this manipulative writing: it's like 'forgetting' Superman has super-strength and allowing common thugs to kidnap him, because we have to get the story started. But in this case, in the name of the higher purpose, I gladly give it a pass.
The problem in all this, of course, is that the various episodes wish to captivate the audience. Therefore, they have to allow for the possibility of the impossible, and they have to leave things as mysteries: is Minuet, or Moriarty, sentient? Of course not. But just having Geordi laugh and say "Captain, that's just a program!" and kill the magic right then and there would kill the episodes. Still, we must not let good story-writing cloud our judgement. We must be able to enjoy a good story, while saying, "Wonderful! In real life, however... Captain, that's just a program!" Ironically, B'Elanna actually bluntly says just that of the EMH. But apparently, few of the fans take her seriously.
On another matter, William made an astute observation:
WILLIAM B―"It is worth noting that most forms of intelligence in TNG end up taking on physical form... "
Allow me to improve on that with my favourite theme: they end up taking *humanoid* form. The only reason people take Leah, Minuet, the EMH, Moriarty, and perhaps even Data seriously is very simple: they look human. Make them a teddy-bear, a doll, or a non-humanoid robot, and these conversations wouldn't be happening.
I'll give you an example: ping-pong robots and violin-playing robots. Industrial robots have extraordinary motion control and path accuracy these days. But while playing a violin is ultimately no different from assembling a Toyota, playing against a human requires a lot more than your standard repeatable cycle pattern. Yet, pretty soon, robots will crush the best human ping-pong players as easily as chess engines today do chess players. And already today, ping-pong robots show good AI. Now, combine an advanced ping-pong cum violin-playing robot in a Data-body, and let it entertain the audience with some Vivaldi before destroying the entire Chinese Olympic team one by one. How many would start wondering: how long until it becomes alive? How long before it will dream? But show a standard ping-pong robot doing the same, and they'll simply say: cool machine.
Now imagine a non-humanoid lifeform, leaving humans no possibility of judging whether it was sentient or not. So you’re right, William: if you'll pardon the pun, form is of the essence.
...and by the way: notice that you did it again. "Intelligence" is irrelevant, William. Intelligence can be programmed, already today. Consciousness cannot: we can barely understand it.
Finally, a couple of last notes:
1. We may wish to simply turn off Leah, or Minuet―*and just do so.* The EMH, due to the importance of its function, cannot simply be ignored. But being forced to treat a program as self-aware does not make it self-aware.
2. WILLIAM B―"I guess one question is whether nonlinearity would actually be necessary to convincingly simulate human-level intelligence/insight"
Exactly: it wouldn't. But notice, just like Peter G. above, your choice of words: "SIMULATE." Who on Earth cares about simulations, Turing tests, and mere artificial intelligence, William? Do. Or do not. There is no simulate.
Peter G.
I'll just quote the relevant section, which includes quotes from both of us:
"PETER G.― "3) You specify that Data's processing is "non-linear" and thus either emulates or is similar to Human brain processing. How do you know this? Where is your source? You also specify that the Human brain isn't a computer since it also employs non-linear processing. Where's your medical/mathematical source on that? What does it even mean?"
And there you have it: we are clearly having two different conversations. You are speaking Trek-speak. I am speaking of science. But the good thing is, the two can actually combine. I see this episode as an invitation, to all viewers, to further investigation of these elevated matters. I suggest you investigate, Peter. It’s much easier today than it was in 1989 ;)"
I must ask what your credentials are to speak so authoritatively on this subject. I would like to know if you have professional expertise in any of the following fields: computer engineering, neuroscience, mathematics, linguistics, information theory, fluid dynamics, quantum field theory, grand unified/string theory, or any other field somewhat close to these that gives you an understanding that a well-read layman would lack. If so, I'd like to know so I can understand your comments within that context and learn what you mean and where you're coming from. If not, I don't know how you can claim to know so much about "science" that you can reduce my comments to be merely references to Star Trek and not to real life. In point of fact I tend to try to discuss Star Trek logic on its own terms because the show is *fictional* and thus isn't actual reality. If you wanted to strictly discuss reality you'd have to tear the whole show apart. I choose instead to see how the show deals with its own premises, and I think William does too.
However, in terms of real world science and logic, I asked you a direct question about what you mean when you speak of "non-linear" data processing, which you claim Data and humanoids have and which mere machines don't. Your reply was to curtly state that we're having two different conversations. I would like to begin with at least one, since so far I'm not even sure what it is you're arguing. All of my objections above were meant to elucidate that I wasn't sure you were articulating a real argument about Data as opposed to making some conjecture that cannot really be discussed. However, maybe you have something specific in mind when you speak of non-linear processing and how Data's 'brain' is constructed, and if so I'd like to hear it.
And by the way, in modern computational theory it is by no means a discarded notion that a sufficiently complex series of recursively communicating circuits might form what we call consciousness.
William B
@Andy's Friend:
In addition to what Peter G. said, I want to clarify my position a bit. I appreciate your comments very much and I think you may be onto something with regards to what Data is and represents. However:
1) I think it's still by no means made absolutely clear that Data has an "artificial brain" which meets the specifications that you state it does. Don't get me wrong. I am happy to believe that Data does. And with Graves, as we discussed, there is an indication that Data's brain can support a human identity better than the ship's computer can. However, we are still left with the possibility that Data's positronic brain is simply better at processing and reproducing human-like behaviours. We do not know for sure that Data actually is housing Graves, rather than simulating him in a way that is indistinguishable from the real thing. Nor do we know that Data is not generally simulating consciousness rather than actually doing so. Similarly, that Data's positronic brain is very hard to reproduce, and causes cascade failures in the case of Lal, etc., is no guarantee.
The references to Data's positronic brain are many and this supports your contention that there is something about Data's brain that is capable of consciousness in a way that traditional computers lack. However, there are other explanations. They may simply call Data's positronic brain a brain because, well, he is an android, designed in the shape of a human. The control centre of the automaton "body" made to resemble a human is located in the part which is made to resemble a head, and performs a function which is at least superficially similar to a humanoid brain, and thus it can be called a brain.
You said in an earlier comment that it is the fault of the show that it fails to establish what Data's artificial brain does. This still assumes that it is a settled issue that Data *has* an artificial brain, rather than something which people, for convenience, call an artificial brain. It may not be. Even if we accept your premises as definitely true -- which, perhaps you have more expertise in this area than we do -- it is still a leap that Data fits this definition you are stating. Maybe he doesn't *really* have a "nonlinear" brain, and according to the proposal you have put forward, Data only simulates consciousness.
And even then, even if he definitely has an artificial brain, I don't think it's absolutely true that, if Data does have something which was *designed to be an artificial brain*, this would tank Maddox' argument. What you are stating, essentially, is that it will at some point be possible to distinguish between what is actually conscious and what isn't, not based on behaviour or anything, but based on the physical make of the object itself. This may turn out to be true, and maybe several experts in the field believe it to be true. But I don't think that it is that settled. I mean, what if a "nonlinear brain" is simply much better at producing external signs indicative of consciousness, and, in fact, some of those external signs include the *physical makeup* itself, which is more similar to (humanoid) brains than to traditional computers?
And you know, I basically agree that Data probably has an "artificial brain," as you put it, but I don't know how you make this claim with certainty. Soong could also have simply called Data's brain an artificial brain for PR purposes. This is why the issue of simulation is important. It is my contention that one has to look at the outcomes produced by the "brain" or "computer" rather than the form that the "brain" or "computer" takes.
2) And even if Data definitely has an artificial brain which meets these criteria, I think it's still, as Peter G. says, not at all a settled issue that a sufficiently complex program (recursive, he emphasizes, but I will be a little more general) would not be conscious. The reason I emphasize whether or not it is important that a "nonlinear computer" could *simulate* consciousness is that I am going under the assumption that it is impossible to conclusively prove consciousness, UNLESS one is the conscious entity.
Line up a human, Data, the Doctor, and, say, Odo; some sufficiently non-humanoid, non-"artificial" life form which displays the *external* traits of sentience -- ability to learn and adapt, an ability to change and exhibit new behaviours, and perhaps the ability to state that it is, indeed, alive. Which is conscious? You would argue that the human, Data and Odo are, and the EMH is not. I would argue that it is impossible to be sure; EVEN THE HUMAN can only be verified to be conscious because he is sufficiently similar to me. I am not making this egocentric claim idly; what I mean is that it is impossible for me to be sure that the human is not a sufficiently advanced automaton, perhaps created by nature, perhaps otherwise. I don't actually know that I have free will, even; I know with certainty that I experience the thing which I define as "consciousness," and because of the extreme level of similarity of other human beings to me, I must reasonably assume that they have the same trait. This assumption then can reasonably be carried over to other humanoid life forms, which in terms of modern biological classification would even be the same *species* (since interbreeding between humans and Klingons, Romulans, Vulcans, Betazoids, Ocampa etc. is possible and in most of those cases we also see that their offspring can reproduce, as is interbreeding between Cardassians and Bajorans, though I can't think of a Cardassian-human or Bajoran-human pairing offhand). But then with Data, the EMH and Odo we are left with beings which are completely different in construction and origin. How would we conclude that they are conscious? Or *not* conscious?
Maybe we could identify some physical system which "explains" our consciousness. But even then we would be left with uncertainty about whether other systems which are physically similar are actually conscious as well, or are simply reproducing the machinery while missing some unknown spark. We could, I suppose, claim that we know that Odo is (probably) conscious because there is no evidence that any conscious beings set out to create him, and argue that it is unlikely for a being which displays traits consistent with consciousness to develop "by accident" without consciousness being there as well. However, Odo and other changelings are of course imitative; it is baked into their very nature that they imitate and recreate other beings. The changelings might simply be some sort of inorganic matter which for whatever reason imitates other, actually-alive beings, and thus displays external signs of being alive.
The reason I bring this all up is that my claim is that it is still anthropocentric to argue that the "nonlinear" versus "linear" distinction is all that matters, because it just moves the trait that defines consciousness from external signs of consciousness to the sort of physical, observable mechanisms that produce it. It is still making an argument based on what looks human, but instead of arguing about actions it is arguing about hardware.
So my claim is that it may be that consciousness develops when there is a sufficiently advanced system to reproduce all the external signs of consciousness; that the act of simulation is itself an act of synthesis. Understand me: I am not saying that, e.g., someone writing "I am alive" on a piece of paper is sufficient to reproduce life. But to be able to reproduce the full breadth of human behaviours (or, indeed, those of a sufficiently complex animal, perhaps) may not be possible without producing consciousness along the way. This is perhaps an idiotic notion. It gets rid of some of the problems of anthropocentrism -- decentering away from the physical form and makeup of the thing which produces "apparent consciousness" -- but introduces another, in that the only way to define consciousness means to define something which acts sufficiently *human* to be able to appear conscious. I have little to say to that charge except that I'm thinking about it.
3) Just as a small point, the Exocomps and the "Emergence" life form did not take humanoid form. The Exocomps are very close to the hypothetical "box on wheels" Maddox insisted would not be granted any rights, which is why I think that episode is (despite some significant flaws) an important follow-up to TMoaM. The "Emergence" life form does use human forms on the holodeck, but its eventual form is a funky-looking replicated series of tubes and connections.
4) As far as the suggestion that Peter and I were talking about Trek and you were talking about the real world, there is a lot to say, but I don't think it's so clear-cut. You have decided that what is being portrayed, in-universe, is that Data has an artificial brain and that the Voyager's computer is definitely not an artificial brain (and thus that the EMH, run on this and other similar computers, cannot be conscious). This still relies on evidence provided in-universe and, *where evidence is lacking*, filling in the gaps with your own impressions of the intent. If it is not utterly conclusive that Data's brain is different from the ship's computer, you rely on the idea that the computer must be equivalent to modern "linear" computers, which is still an assumption about intent. There is nothing wrong with this, but I think that it just means that you are also "down in the muck" with the rest of us trying to interpret what Trek is actually saying, rather than purely talking about the real world. ;)
William B
Here is a somewhat off-topic comment expanding on my point (4). Notably, I do not expect this to be answered within the thread, necessarily, but I think it's important for me to say a bit more of what I think this episode is about, and why it is important, as well as what I think Data (and the EMH) are about, as characters, and why they are important, in addition to being about artificial consciousness/intelligence/life etc. issues.
As far as the charge of us talking Trek speak rather than real life speak: well, certainly real life issues are the most important. However, I think it's fair to say that all Trek is commenting on the real world, it is just that some of it is commenting more directly than others. Data, the EMH etc. are obviously representations of real-world ideas, to some degree or another, but it's a question of how we interpret them. That Data is definitely a representation of a potential being with an "artificial brain" is by no means certain. It is also very possible that the (limited, contradictory) take on artificial consciousness within Trek is primarily there to talk about other aspects of human life -- how humans treat each other, how we treat other (biological) life forms on the planet, etc. -- and so for those purposes, the differences between Data and the EMH might not be important at all -- they might just be different views on the human condition.
There are autistic and Asperger's communities which have used Data as a sort of mascot. Data's experience of difficulty understanding "human," i.e. normative, emotions, his alienation, and other traits like that have been taken on as representative of some humans who find this "artificial life form" a good representation of their experience. This was not exactly the intent of the character, but I think that part of the mythical basis for artificial beings is to talk about difficult aspects of our plight as humans -- of being physical, material beings whose worth is often decided collectively by our ability or inability to fit in with larger conceptions of humanity. Again, Louvois' ruling in this episode includes: "Does Data have a soul? I don't know that he has. I don't know that I have." There is no reason that the EMH is necessarily precluded from being a particular kind of representation of person, in which case I think it may be missing the point to declare him as non-conscious. You can say that this should *not* be the point of Voyager, that in its portrayal of computer coding it should hew more closely to what experts believe is the ultimate signifiers of consciousness, and you may be correct, but I think that in order to talk about what Trek means we have to suss out what it is trying to say and how that relates to our world.
As I see it, part of the problem with indicating that an "artificial brain" is the key difference between Data and the EMH, and one which would simply destroy Maddox, is that it is to some degree divorced from human experience up to this point. That does not mean that it won't be proven in the future, and that moment might fundamentally change human existence. But the majority of human existence has been a matter of taking blind stabs in the dark, trying to reason outward from ourselves to beings sufficiently similar to ourselves and to extend to them the things we would like extended to us. And that carries with it a level of uncertainty about ourselves. *I do not know that I have a soul.* The fact that the "code" that runs in our brains is sufficiently different from that in a computer does not guarantee that we are not simply an advanced computer or that we are not a mere set of physical processes designed for self-replication, with our experience of consciousness being some sort of nearly irrelevant by-product of a self-sustaining system, ultimately no more intrinsically meaningful than a process like fire. One of the key things that TNG does is introduce, from the very first episode, the idea that it is also not merely a matter of humanity deciding which other entities should have rights, but that we have a responsibility to prove that we as a species demonstrate qualities that make us more than children groping about. The Q could easily, and do, look at us and declare, how could a lump of matter with a bunch of electrical processes controlling a bit of organic machinery be conscious in any meaningful sense? That there is no absolute certainty that Data is the same as humans is important, because this is partly a way of looking, anew, at things we take for granted about *humans*, of stripping away which traits are ultimately irrelevant in defining our place in the universe. Granting that Data has value is a way of granting that we have value, through an act of faith. Genuinely reducing the whole process to a matter of a binary switch wherein some given object either is or is not a brain, *and where this can be verified with certainty*, robs the story of much of its power and also removes the uncertainty that is near the heart of the human condition.
As Picard asks Maddox, I would say: "Are you sure?" Are you sure that the artificial brain correctly distills the essence of what is important about humanity, consciousness, and rights-having beings? Now, of course, I may be misinterpreting your claims. But I think that it is not merely a matter of Snodgrass hedging her bets for the sake of drama that an episode entitled "The Measure of a Man" avoids some sort of ultimate determiner of worth in consciousness. I think that the uncertainty is a fundamental part of human existence, and important to every person who has ever wondered, in real life, if they do not matter, and had to take a leap of faith to believe that they did, as individuals and as a species. If there is some sort of "magic bullet" to the consciousness debate wherein the, or a, physical mechanism of *all* consciousness is identified, then probably the debate over which beings qualify as life forms will shift away from "consciousness" and into another trait which is, once again, mysterious. Once the physical mechanisms of consciousness are sufficiently identified, after all, then we might well understand it enough to be able to do away with discussing human behaviour in terms of a "spark" instead of the result of eminently comprehensible physical processes, albeit very complex ("nonlinear") ones, which may again require us to take a leap of faith to believe ourselves more than just the physical process that governs us.
Andy's Friend
@William B & Peter G.
I was writing to Peter, but I'll answer William's last comment first because it can be done very quickly: I basically agree with everything you wrote.
I think you're quite right about the episode being robbed of its power without uncertainty. It dares ask great questions. It follows that it should not provide certain answers. And you are right: knowingly believing in something uncertain is a very powerful thing. It is what makes faith, true faith, indestructible.
I also think you're very, very right regarding Data's multiple roles, as a mascot of the autism & Asperger's communities. What makes Data so fantastic is that he is so many people in one: the Child, the Good Brother, the Autist... The Android is actually pretty far down the list in importance. This is undoubtedly why he is so beloved: most of us can find a part of ourselves in him. Mirrors, was it, William?
I have a little more difficulty in seeing the Doctor in quite the same multi-faceted fashion. Every Star Trek fan I know likes the Doctor a lot, but for very different reasons than they like Data: he does not receive the same kind of love.
I particularly like your reference to Q, because, as you'll remember, that is my recurring theme: the humanoid & the truly alien. And you're of course right: any truly alien being might question our human consciousness; and, if we widen our scope, what I have called the "artificial brain" is merely a word for some sort of cognitive architecture which may be very different from our own. The Great Link seems to have one, and I'm pretty sure it's quite different from Data's brain.
Also, and this is answering both of you now, it is true that we cannot know with absolute certainty that Data's "positronic" brain is an artificial brain. There are strong indications that it is, but we cannot know for sure; and it is true that Data, too, could simply be another Great Pretender.
This leads me to that most interesting aspect: faith. I was going to answer William earlier:
WILLIAM B―"I think that a system sufficiently sophisticated to simulate "human-level" (for lack of a better term) sentience may have developed sentience as a consequence of that process."
...by saying that that sounds an awful lot like wishful thinking. By that I mean that this is a little bit like discussing religion. If you strongly believe that (I’m not saying that William does), nothing I can say will change your mind. There are still highly intelligent scientists who share that belief, in spite of all the advances we've made in the past decades in both neuroscience and computer science. It is, quite simply, a belief, akin to a spiritual one. Some people *want to believe* that strings of code, like lead, can turn into gold.
But that of course is a bit like my belief that Data's positronic brain is an artificial brain, i.e., some sort of cognitive architecture affording him consciousness. I, too, *want to believe* that he has that artificial brain. Because to me, Data would lose his magic, and all his beauty, were it not so. As I wrote, there are very strong indications that this interpretation is a correct one; but as in religion, I have no proof, and I must admit that it is, ultimately, also an act of faith of sorts. I want Data to be alive. To me, Data wouldn't make much sense otherwise. And I know full well that this is, deep down, a religious feeling.
I'm sorry, guys, it's getting late here in Europe... Until next time :)
Peter G.
@ Andy's Friend, I still don't know why you're hung up on whether or not Data has an artificial "brain". You have yet to define what that means. Are you quite sure you're really talking about something, as opposed to issuing phrases that sound like something but have no content? If there's content then why not just say what it is instead of using placeholders? You haven't even provided an explanation for why the Human brain isn't just a sophisticated computer. Until you can answer my very clear point-blank question in any way (about what non-linear processing is) I'll assume you're not really interested in talking about this. I also assume from your lack of confirmation that you are not an expert in the field of robotics, information theory, etc etc.
Incidentally, I find this particular line somewhat accursed:
"...by saying that that sounds an awful lot like wishful thinking. By that I mean that this is a little bit like discussing religion. If you strongly believe that (I’m not saying that William does), nothing I can say will change your mind."
If positing a theory about robotics makes someone a 'religious believer' that you can't communicate with, then I find it hard to believe you are making such absolute declarative statements with a straight face. If you know something special about this then own it and lay it down for us. The condescension needs real creds to back it up, my man.
Andy's Friend
@Peter G. & William B
Peter, I don’t know what it is you don’t understand. Everything you ask me to clarify I already have in my two posts from 2014.
Try reading what William writes. He has understood perfectly what I mean:
WILLIAM B―”What you are stating, essentially, is that it will at some point be possible to distinguish between what is actually conscious and what isn't, not based on behaviour or anything, but based on the physical make of the object itself.”
Which is partly (but only partly: see below) correct, and what I wrote to begin with in 2014 specifically about Data & the EMH:
“It’s not about how Data and the EMH behave and what they say, it’s a matter of how, or whether, they think.”
In very simplified terms, not WHAT, but HOW. But I further clarified yesterday:
“if we widen our scope, what I have called the "artificial brain" is merely a word for some sort of cognitive architecture which may be very different from our own. The Great Link seem to have one, and I'm pretty sure it's quite different from Data's brain.”
Another thing: you seem to have misunderstood my point about the “religious” aspect. What I mean is that we all, deep down, are predisposed not merely to accept, but to actively prefer, and choose, one specific possibility, one theory, as true. Einstein famously did it, and it took him many years to recognize his fault. It’s just the way we humans are. In this, our opinions are akin to religious beliefs: William B’s, yours, and mine. Some of us are better at listening to reason than others. But as long as matters remain highly speculative, no reason is more true than any other. And all we really have are our humours, our moods, our feelings (because even intellectual choices are based on emotions) to guide us.
So your comment:
“If positing a theory about robotics makes someone a 'religious believer' that you can't communicate with...”
...was completely uncalled-for.
Now William:
Having said that, I do believe that you have a point, and the Great Link is a good example. What I mean is, to use your words above, it is necessary for us to be able to *recognize* and *understand* the nature and abilities of the "physical object" itself.
In other words, while we may be able to recognize Data’s “positronic brain” as an artificial brain capable of consciousness, simply because it resembles and emulates what we know, we may not be able to recognize anything as alien as the Great Link as another kind of physical object capable of consciousness. And in such cases, at least at first, we will depend on behavioural analysis. And who knows if we will ever be able to understand the Great Link?
So in a way, both sides are right. And you are very right: we will probably always remain somewhat anthropocentric. It is difficult not to, when that is what we know and understand best. And if we indeed ever gain warp capability, who knows what new life we will encounter?
Finally, just to correct a slight oversight of yours, you wrote:
“You said in an earlier comment that it is the fault of the show that it fails to establish what Data's artificial brain does.”
No, that’s not what I said: I agreed with you. Try reading it again ;)
Peter G.
@ Andy's Friend,
I could not have been any more specific with my request for clarification, and yet for multiple posts you have dodged entirely. I thought we were having a friendly discussion but it begins to feel more like you're hiding behind words. If you think my question is "already answered" in your previous posts then you must think my reading comprehension is pretty low. I am now 99% convinced you have no idea what "non-linear processing" means, and likewise what the mechanical difference is between a "computer" and an "artificial brain." And yet you base your entire argument on these terms, claiming definitively that this is the case and that I need to go do research to catch up to your level.
And by the way, mentioning that each of us sticks to his "one" belief out of bias as religious people (or all people) do makes two fatal mistakes: 1) It assumes that each of the three of us is making comparable unprovable claims. 2) Implying our ideas are no better than faith-based convictions puts all ideas on an equal irrational playing field, which is both insulting to reason itself and also insulting on a personal level.
1) We are not all making strong claims. William and I were tossing around ideas and wondering what to make of the episode. You are making a bold and definitive claim, and stating that it amounts to "real science" as opposed to Trek-speak. The burden is on you to demonstrate any validity to what you're saying, as you are the only one making a strong claim here. I said your idea is plausible; you say it's true. William and I both agree that based on what you've said so far you cannot know this is so.
2) I don't go in for this passive-aggressive argumentation style, where when called out on BS you go ahead and say that my opinion (or William's) is just some faith-based hunch that can't be reasoned with. It's just as rude as calling us morons as far as I'm concerned. I know you included yourself in that description, but calling all three of us idiots still means calling me an idiot, which I don't accept.
PS - I'll bet $100 cash right now that you can't explain in detail why Einstein was "at fault" for pushing for one theory to be true. What is this so-called fault he was wrong about for so long? And bonus points if you can show that it was because he was naturally inclined to "want to believe" just "one theory". Spoiler: what you said about Einstein wasn't true. The only 'fault' his theories have admitted to date is the cosmological constant, and that was only because he didn't yet have access to data showing an expanding universe. And that "mistake" has been replaced with dark energy anyhow, which is the same accounting trick in reverse, so even his idea of how to deal with the problem is still considered to be correct. Nothing about relativity has, to date, been called into serious question in the mainstream, nor has even his comment about God playing dice, about which the jury is still out. The Copenhagen interpretation of QM is not a "fact".
Trekker
Um... wow. Even though I understand what each of you is arguing, I am quite baffled at how far the scientific and philosophical debate meanders into quantum theory.
For me this episode was a great viewing pleasure. It touched on the ethics of artificial life and the concept of "responsible use", a term now frequently applied to information technology and business applications. Even on a technology-use level, the ethics are very interesting.
Data is more than a machine or a program, but is he sentient?
If we apply the Turing test, devised by Alan Turing, the famed father of modern computing, to Data, we may look at this episode from another angle.
At the start, the poker game demonstrates Data has limitations to his understanding of intuition and human behavior. It makes sense to judge him as merely a machine, not a sentient being.
Yet over this episode Data has demonstrated self-preservation instincts and even an intuitive grasp of racism. I loved the scene in Picard's ready room:
--------
Capt. Picard: Data... I understand your objections. But I have to consider Starfleet's interests. What if Commander Maddox is correct? There is a possibility that many more beings like yourself can be constructed.
Lt. Commander Data: Sir, Lieutenant La Forge's eyes are far superior to human biological eyes, true?
Capt. Picard: M-hm.
Lt. Commander Data: Then why are not all human officers required to have their eyes replaced with cybernetic implants?
[Picard considers this shortly, then looks away without giving an answer]
Lt. Commander Data: I see. It is precisely because I am *not* human.
----------
That exchange is worth the episode's labyrinth of legal arguments and philosophical issues over life and sentience, because at the very core of this episode we can see Data is a sentient being; perhaps not yet capable of the consistently intuitive, human answers Turing would have wanted as proof of intelligence, but Data does demonstrate intuition and human-like associations with his current predicament.
I also think, in addition to all of our arguments here, we should ask a deeper question about Star Trek's Federation and this "Brave New World". Over the decades, we have seen the problems of this galactic power: a deaf ear to humanitarian needs in the name of the Prime Directive on some occasions, then a willful abandonment of it when advantageous.
Has humanity really grown beyond bigotry and selfishness, and settled for peaceful exploration of the stars?
I think, if anything, our bigotries and selfish instincts are still very clear in the leadership of the Federation and Starfleet, not to mention all the species that compose the Federation, who would prejudge a being that has served their institutions as merely an asset when the situation makes it advantageous.
Beej
The first truly great episode of TNG, maybe even the first truly good one if you're not grading on a curve. I don't entirely buy Starfleet's position here given its utopian ideals, but in any other universe I sure would. Riker pressed into unwilling service as prosecutor is pretty contrived though and is my only complaint.
This one's a beaut.
Tara
I enjoyed the episode for what it was: a solid conflict, dramatic scenes building to a satisfying climax, sharp dialogue (I especially like "And now - a man will turn it off."), a strong guest star, a warm portrayal of shipboard camaraderie.
But I remain unconvinced by the arguments made by both "lawyers".
Riker's approach is especially troubling. It seems to me that he is presenting not a rational argument but an appeal to emotion - and the emotion he is going for is "Eww! Data looks human but his body does weird stuff!"
It is uncomfortable to realize that Riker could have put on the same show with, say, a quadriplegic on a ventilator; "Watch me stab her hand with a fork while she feels nothing. Watch me turn off her ventilator: she doesn't even try to breathe. Watch me push her out of her wheelchair and onto the floor - see how unnatural she looks as she topples." He could trot out any circus geek - say, England's congenitally deformed "Elephant Man" - and make roughly the same argument and draw the same gasps of amazement.
This approach is especially odd as Starfleet recognizes the sentience of many alien beings (some of whom may have super-strength, removable body parts, shape-shifting abilities and other freakish attributes).
Picard's arguments are less disturbing but also seem to involve plenty of pulling on heartstrings. "He's got medals just like a human. He even had sex once!" These tidbits tickle our sense of recognition (rather like the "smile" on a dolphin) and maybe make the judge more inclined to anthropomorphize Data's android ass. But they don't actually prove anything.
To the extent that trials are about manipulating the opinion of the jury (or judge, in this case) by means fair or foul, both Picard and Riker did very well. But their off-topic antics blew so much smoke over the proceedings that the question of Data's sentience was somewhat obscured by theatrics.
Chrome
@Tara
To be fair, Picard actually did run down the list of the three requirements the show set for sentience. Plus, we don't really know what property laws were being challenged here. It could very well be relevant that a machine has a life-like existence in order to qualify as non-property.
Outsider65
I'll admit, I didn't read all the comments, but there seemed to be quite long and interesting discussions. I'll throw a few of my thoughts down all the same.
Data's sentience isn't really a question to the viewer: obviously he is sentient, unless the writer that week decided otherwise. :P Asking the question does make us pause for a moment, but we've seen enough of him at this point to know where the show generally places him.
As for the possibility of sentient AI in real life: I'll believe it when I see it. But as others have said, once a program gets good enough, it would likely be impossible for us to tell the facsimile from the real thing. I don't know that we would be able to create that "spark", although I'm fairly sure we'll eventually be capable of creating something indistinguishable from our own sentience.
borusa
I really liked this episode.
It doesn't intend to be an accurate representation of any legal process: both advocates drift from examining witnesses to speeches and back again.
Also, one expects that this issue would be appealed and legislated upon ad infinitum.
This is a bit of a copy of an Asimov story, I think, but it is laudable and one of the high points of season 2.
SlackerInc
Great episode.
I tend to believe that any AI advanced enough to pass the "Turing test" is not just simulating sentience, but experiencing it. (Therefore, I completely reject the basic premise of the "Chinese room" thought experiment.) However, there's really no way to know for sure. And Data really does not even pass the test, at least if we limit it to behaving just like a human. If we leave open the possibility that one might be communicating with an alien, then maybe he does pass.
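For anyone who hasn't encountered it, the Chinese room imagines a rule-follower producing fluent replies purely by rote. A toy caricature in Python (the rulebook is mine, absurdly small, and purely illustrative) of the kind of lookup Searle envisions:
--------
# A caricature of Searle's Chinese room: symbols in, symbols out, by rote.
# Searle says no understanding exists anywhere here; the "systems reply"
# (which I find persuasive) says the room as a whole understands.
RULEBOOK = {
    "ni hao": "ni hao!",                  # "hello" -> "hello!"
    "ni shi shei": "wo shi yi ge ren.",   # "who are you?" -> "I am a person."
}

def room(symbols):
    # the operator blindly matches input to output; unmatched input
    # gets a stock deflection, as a fuller rulebook would specify
    return RULEBOOK.get(symbols, "qing zai shuo yi bian.")  # "please repeat that"

print(room("ni hao"))
--------
My objection is simply that once the rulebook is rich enough to pass the test, calling it "mere" rote begs the question.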
In any event, I instinctively like the notion of saying that if we aren't quite sure if he's conscious, but it can't be disproven, then he should be treated as such. But then you get to the ship's computer. (And then, later, the issue of holoprojected beings like the Doctor on Voyager.) Thorny stuff!
Rahul
Excellent episode and very thought-provoking. Clearly the best of TNG so far for me - and judging by the discussion above, it really resonated with Trek viewers.
Well done with Riker's and Picard's different approaches as opposing counsel, although the courtroom battle should have waited until the JAG could get appropriate officers. That Picard and Riker have to do it without being lawyers is inappropriate from a strictly legal standpoint. I also don't quite buy the argument about a race of Datas as slaves.
But what is clever is the sentience argument - it was well-handled and Picard's acting here is great.
Yes, I can see why many will consider this one a classic of TNG. A very intelligent episode for which the needed suspension of disbelief isn't too bad. The final moment with Riker and Data is heart-warming - great stuff. Great acting and writing.
For me, a strong 3.5/4 stars. Trek's philosophical episodes are really worth watching.
JohnTY
Classic TNG. The circumstances are a little contrived and the resolution a bit easy but a great moral question that allows the series’ two best actors/characters room to show off their skill and affability. (I still have a problem with Riker's argument though, particularly the off-switch bit.)
This episode also reinforces my belief that Data should not have been a commissioned Starfleet officer from the get-go. Let alone a Lieutenant Commander.
I think he needed to be recognized for his development over the series. Perhaps he could have started with some sort of honorary title, giving more ambiguity to his status, such as a rank of Acting Lieutenant (kind of like how Wes is an Acting Ensign). He could then have been given full status as a sentient being and a commission as a result of this kind of hearing. Then let him nab a promotion along the way as payoff for his growth and importance as a major character.
Instead he gets no promotion through the remainder of the series or films while LaForge, Worf and even Troi get to rise through the ranks.
Anyway..
Bill
Great comments, all. So I'll come swooping in at a 45 degree angle out of the sun.
Whatever happened to Dr. Daystrom's M-5 Multitronic Unit and the science behind it?
Yeah, we know: Daystrom was probably committed and the M-5 became self-inflicted toast in TOS's "The Ultimate Computer." But Daystrom would have had notes from which "rows of fools" could have taken up his work while, this time, not imparting neurotic engrams on what would be the M-6.
So what happened in the 80 or so years between TOS and TNG (or perhaps did not happen) that made Data so special, other than him being in the form of what the Japanese are doing with robots in the 21st Century--looking like us? Did no one pick up on Daystrom's work in those 80 years to even do a modicum of exploration with an M-5 clone, but not afflict it with sociopathic engrams (at worst) or Asperger's engrams (at best)?
Even bigger question: Did Moore's Law (or its equivalent in future technology) come crashing down so hard that the incredible gains we've seen in computing technology over the past 30 years ground to a total halt? What were Computer Scientists doing those 80 years with a design as advanced as the M-5 simply waiting to be re-engrammed?
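Back-of-the-envelope, assuming the classic doubling of computing power every two years held for those 80 years:
--------
# 80 years of Moore's Law at one doubling every two years
doublings = 80 // 2        # 40 doublings
print(2 ** doublings)      # 1099511627776: roughly a trillion-fold gain
--------
A trillion-fold gain, and apparently nothing between the M-5 and Data to show for it.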
Or were the M-5 and Daystrom's notes put in storage by "top men" ala "Raiders of the Lost Ark"?
Peter G.
@ Bill,
I think we sort of have to treat The Ultimate Computer as a standalone rather than part of a technological 'arc' involving successive technologies. However, if you want to get technical, M5 was multitronic but still used electrons, while Soong used positronics, which according to the Asimov nod would be far faster. Aside from that, we could also suggest that M5 represented a dead-end technology, one which created too much of a chaotic result. Even Lore, who was far more reined in than a true AI would be, was too chaotic. So Data's triumph isn't just in his sophistication but also in his safety.
Alex
Regarding the Tasha hologram & medals, I think Data at this point has those things because a human would value them, not because he personally values them. To act human is to be human, even if he doesn't feel it like a 'real' one.
First truly classic TNG episode.
Alex
Other observations:
1. Data is noticeably miffed when Maddox intrudes into his quarters. A truly emotionless machine wouldn't care.
2. At the end Data and Riker have a great moment about how Riker harmed himself to save Data and he won't forget it. TNG rarely had such deep interactions.
JonR
I love this episode. The only things that bug me are as follows:
1) When Picard was arguing that Data was self-aware, he should have pointed out that Data wouldn't give a shit about being disassembled unless he was self-aware.
2) The episode acts like this is the first time Data's legal status as a sentient life form, and his rights as a Federation citizen, have ever come into question.
But hasn't the series already stated (or at least strongly implied) that he's been officially recognized as a Federation citizen? I'm pretty sure it was at least implied that this issue was addressed before he ever entered Starfleet Academy.
So what, are they just changing their minds? It's just never brought up.
These are just tiny nitpicks and pale in comparison to the overall strength of the episode.
Ayrus
What a fantastic episode!
The only sore point is the ordinary way such an important case was handled, by a JAG office with no officers.
Especially loved the passion with which Picard put forward the case. And Data's interaction with Riker at the end was brilliantly written and performed.
Startrekwatcher
4 stars. Involving, riveting, deep, thoughtful.
This episode is a classic. It and The Best of Both Worlds are the ones that really get to me and rile me up: the latter with its utter terror at the possible fall of the Federation and a doomsday scenario being a real possibility; this one with how it angers me at how Data is treated by Maddox and Starfleet.
There's nothing more important, I believe, than an individual's dominion over themselves, and seeing Data nearly forced to be experimented on chilled me to the bone. It was awful enough that Data's only way to avoid being disassembled was to give up his life on the Enterprise and resign; taking away even that was a low blow. Very rarely has a TV character moved me to want to punch them in the face, but Maddox did.
Trek has done a few courtroom dramas but this was by far the most compelling and engrossing, from Riker's devastating deconstruction of Data into a mere machine with an on and off button to Picard's renewed fight putting the very question of what constitutes life to Maddox. The subject matter was thoroughly fleshed out in a thoughtful, masterful way. I liked Philipa and her very humbled verdict at the end, followed by Picard asking her to dinner after the happy outcome.
I felt for Riker. Guinan's insightful assist to Picard in Ten Forward was a truly OMG moment.
I liked the going-away party for Data, and Data carefully opening his gift without messing up the wrapping paper. That is sooooo ME. 4 stars. The dialog. The arguments so well thought out and delivered. VOY went on to try this again but failed tremendously in its poor man's version of this episode, "Author, Author".
Hornblower
Amanda McBroom who played the judge is a wonderful singer and songwriter. Probably she and Brent sang some terrific duets between camera setups.
Peter Swinkels
Excellent episode and review.
mephyve
Excellent episode! I could nitpick that no one really expected Data to be surrendered to this hack, but I understand that sometimes with stories we have to pretend there is real danger. Besides, the strength of the story was the real-life implications regarding how we will view AI automatons when they become almost human.
I personally wonder whether we should make almost humans at all. Not only are we creating the question of how we should regard them, we are also creating the fear of how they will eventually come to regard us.
As to Data's query at the beginning about how Riker knew his bluff would work: elementary, dear Data. You literally stated that you were going to judge people's hands by how much they bet.
JerJer
Boring.
But then most of the episodes are boring, when they're not completely awful.
Rahul
So much to love about this episode -- one of Picard's best, a very good one for Riker, and Data's approach to the whole thing. He doesn't have any emotional outbursts per se but definitely wants to make choices and has preferences. I think it should elicit some strong emotions from viewers re. Data's future -- Data who is one of the most likeable characters in the series.
The argument that ultimately tilts the ruling in Picard's favor (and Data's) is the one about creating a race of slaves (Datas) and how humanity would then treat them. Initially, I felt I wasn't convinced but now I think it's something that goes right to our humanity. Really interesting dialog between Guinan and Picard when she brings up this notion. What would it say about us humans to just have androids who obviously display sentience to do our "dirty work"?
The episode is extremely well conceived, written and acted. Captain Louvois was a tad annoying initially and she did piss off Picard a fair bit. The situation is a tad contrived so that it gets Picard to face off vs. Riker. Such a landmark ruling should wait until proper attorneys etc. can be put into place. I don't know if there's an urgency to Maddox's work. I don't doubt there are some legal flaws here as well like asking Maddox to define sentience and attacking him on it. But it is the slaves argument that ultimately matters. And regardless of these minor nitpicks, the episode achieves its objectives -- nothing overly farfetched here.
4 stars for "The Measure of a Man" -- thoroughly enjoyable hour of TNG. Data's compassion for Riker at the end is a great finale.
Sarjenka's Little Brother
This episode was pivotal in the evolution of Next Gen and "Trek" in general. Wonderful!
Black Jesus
Too many comments to read through to see if anyone else has raised this point but...
If the reason Cmdr Maddox was able to oppose Data’s resignation was that Data is Starfleet’s property, then disproving that should’ve been Capt. Picard’s first move. Did Dr Soong work for Starfleet? Did Dr Soong leave Data in his will to Starfleet? The answer to both is no as far as I can tell. Data opted to join Starfleet of his own volition so that nullifies Maddox’s argument in that respect.
Saying all that, I actually came on here to make sure Jammer gave this episode 4 stars, as I’ve watched it numerous times over the years and never tire of its brilliance.
Love this site and love you guys with all the love/hate critique of Trek.
Yanks
Black Jesus,
Data was Starfleet property because it was Starfleet that found him (I think).
Chrome
There’s a favorite episode of DS9 of mine, “The Ship”, which mentions an old salvage code Starfleet follows where the finder of abandoned property takes ownership of it. In that episode, Sisko’s claim is somewhat tenuous - after all, the ship he found was Dominion property and the Dominion were quick to come claim it. Here, though, there were no known survivors or even documents from Omicron Theta, so it makes sense it would be Starfleet salvage.
I think Black Jesus is right that realistically Data’s defense would challenge the property claim in addition to challenging whether Data can legally be property. Of course, it’s much better television watching them deal with the issue of Data being alive in court.
William B
I agree that the "salvage rights" argument is the most logical to explain why Data would be considered property of Starfleet specifically, rather than property whose owner is (as far as they believe) dead with no legal heirs. This point is really not addressed within the episode, so it's more properly a plot hole I think, and I'm not positive that the episode's writers thought about this point in detail. It is good that there is a fix for the plot hole, though, even if it's not really to the (generally terrific) episode's credit that there happens to be one.
Peter G.
I don't believe we're meant to understand that Data had always been considered Starfleet's property before this. It seems clear to me that it had never been settled exactly what he was; it was open-ended. It was specifically Commander Maddox who, when he learned that Data was going to refuse his operation, made a legal move to basically declare Data unfit to make his own determination. It's equivalent to having someone committed and declared incompetent when they won't give you your inheritance right away. This move wasn't a statement about Starfleet's position, but rather something he shoved through, getting the right person to sign off on it to expedite his work. That's what led to the lawsuit, where one must basically sue the government in order for a judge to have to rule on whether the move (forcing Data to do something) was legal or not.
I don't think this was ever about salvage rights, and Data has said on multiple occasions that he chose to serve in Starfleet. It was never the case that he was drafted because they found him; he could have just walked away and done his own thing. Or at least, that's what Data thinks. I suppose it's possible that they allowed him to believe that but that he was never really free to choose. That's a spookier thought, and I suppose it's plausible, but there's never really evidence to that effect. This whole situation was Maddox using legal machinations to try to declare someone incompetent so that he could do whatever he wanted (which comes to look a lot like slavery if Data is sentient).
Black Jesus
Thanks Yanks, Chrome, William B and Peter G for indulging, and then expanding, my thoughts on this. Much appreciated.
William B
@Black Jesus, Yanks, Chrome, Peter:
I agree with you, Peter, about how Maddox' legal proceedings basically work. I think the point is more that Maddox' premise doesn't really make sense without a little bit of extra work. Assuming Data is "property" rather than a person, why is he *Starfleet* property, specifically? Maddox, in saying Data is property, is not, I agree, stating the orthodox Starfleet position on Data; Starfleet basically accepted, to some degree, that Data was close enough to a person to be able to "join Starfleet" and so on, but viewed Data enough as a curiosity that once someone (e.g. Maddox) seriously challenged this baseline assumption, they had to go back to the drawing board and check it. But Maddox would have to *additionally* prove why, if Data is property, he should be remanded to Starfleet. The equivalent in your analogy would be asking why, if someone in the military were declared not of sound mind, the military would get to decide whether they undergo some dangerous medical operation to which they would otherwise have a right of refusal. Generally speaking, the person who decides is the next of kin, and then eventually the state if no one else is available, and when it's the state, it generally would be the *civilian* rather than military government.
Now, this episode is not about the distinction between civilian and Starfleet governance within the Federation, and I think it's taking things far afield to try to parse what the episode itself is saying about this, but I think "if Data is property, whose property is 'it'?" is a big enough question that it's worth asking, if we take Maddox' premise on board. I think "Starfleet salvaged the hunk of metal called 'Data', therefore Starfleet owns it" is a pretty satisfactory answer, though not one put forth by the episode. And that's mostly okay, because it's a pretty packed episode, and it's not entirely the point; if it were decided that Data *was* property rather than a person, and "it" belonged to, say, the fourth cousin of some Omicron Theta colonist, it would probably be easy enough for Maddox to convince them to let him go forward, and even if not, it would just kick the issue slightly down the road.
(It's even worth noting here that property rights are probably very different in the Federation anyway, particularly for technology. A form of copyright law becomes an issue in Author, Author, eventually, but otherwise Federation scientists and engineers tend to get personal and professional "credit" for their discoveries but tend not to "own" them. However, even under Maddox' interpretation that Data is property, pure and simple, Data is still a single unit, which Maddox is likely to destroy.)
Chrome
@William B
Not that this resolves any of the in-universe issues you bring up, but historically slaves were considered property in the same manner as cattle. So even though the property label for Data might be somewhat forced, I think it does help underscore the real desperation enslaved humans had historically, because their rights were basically non-existent (despite ample evidence showing that they were very much as human as their owners, like Data in this episode). I think Guinan's line about "whole generations of disposable people" is particularly poignant because here Data too is facing not just losing the right to choose, but basically the sentence of being stripped down to bare parts to be analyzed and replicated. Just like a commodity.
Elliott
Hi all
Do we not see in the series that all scientific projects, even if being conducted by civilians, are under the jurisdiction of Starfleet? Dr Graves' research, the particle fountain in "The Quality of Life," the Krieger waves from "A Matter of Perspective," etc. are all subject to Starfleet approval and would become Starfleet property once completed. I'm pretty sure that Data went straight from being the subject of Starfleet experiments after being found on Omicron Theta to enrolling in Starfleet. It could be argued that Starfleet took possession of Data as soon as they deemed him a scientific specimen.
Peter G.
"Do we not see in the series that all scientific projects, even if being conducted by civilians, are under the jurisdiction of Starfleet?"
No, I never saw any evidence of that. Those are Federation installations, usually science outposts and so on. Starfleet ships may service them or go help when there's an emergency, but that's because Starfleet also works for the Federation.
"Dr Graves’ research, the particle fountain in “The Quailty of Life,” the Kireger waves from “A Matter of Perspective,” etc are all subject to Starfleet approval and would become Starfleet property once completed."
Why would you say that? It does seem true that a distinction is created between a private enterprise (which may exist in the future, like Dr. Soong's) and government sponsored research. And in the case of a Federation project (using their equipment and facilities) that the results would be shared with the Federation. But "owning"? That's not something we even understand in theory in terms of how that works.
"It could be argued that Starfleet took possession of Data as soon as they deemed him a scientific specimen."
I suppose it could. It could also be argued that as "Federation property" Starfleet command would have to demonstrate why they should have complete jurisdiction over Data, rather than, say, the Federation council, or a sentient's rights organization of some kind.
In any case, Maddox's position seems in general to me to be ridiculous and a legal invention designed to bend both the intent and the spirit of the rules to his own ends. It's a lawyer's trick. The notion that Data is a piece of property already contradicts his years of service as an officer where people were taking his orders. I'm sure Starfleet has no precedent of a computer having regular command over sentient beings, other than experiments like the M5 computer.
Chrome
@Peter G.
“In any case, Maddox's position seems in general to me to be ridiculous and a legal invention designed to bend both the intent and the spirit of the rules to his own ends. It's a lawyer's trick. The notion that Data is a piece of property already contradicts his years of service as an officer where people were taking his orders. I'm sure Starfleet has no precedent of a computer having regular command over sentient beings, other than experiments like the M5 computer. ”
Well if we take all the dialogue of the episode at face value then Starfleet did already have an assumed property claim to Data. Towards the beginning of the episode, JAG Louvois cited the "Acts of Cumberland" and stated that based on current law Data was property of Starfleet. This led to the need for Picard to challenge the existing law. The conclusion we must draw is that Starfleet allowed Data to pursue a career of his own volition as long as that didn't clash with Starfleet's needs. His freedom was probably still limited, though. I can't imagine Starfleet would've allowed him to move to Romulus and start working for the Romulan Empire, for example.
Elliott
If we consider the history of African Americans as an example, specific rights were granted to black people in increments long before they were legally recognised as people. Slaves fought in the revolutionary army. It took seven years after the Emancipation for black men to be granted the vote, and even then it took another century to pass the Civil Rights Act. I only bring all of this up to point out that there's every reason to believe that, in its complacency, the Federation would see no contradiction in having Data serve in Starfleet without considering him a sentient being. "Property" might be the most appropriate word, but we can't forget that Federation property has no monetary value within the Federation itself, so it's complicated.
Elliott
@Peter G:
"Those are Federation installations, usually science outposts and so on. Starfleet ships may service them or go help when there's an emergency, but that's because Starfleet also works for the Federation...It does seem true that a distinction is created between a private enterprise (which may exist in the future, like Dr. Soong's) and government sponsored research. And in the case of a Federation project (using their equipment and facilities) that the results would be shared with the Federation."
Well, Starfleet isn't a traditional military. Picard and Riker usually had authority to appraise the scientific projects they were monitoring. Since there's no money, there's no such thing as a private sector or government grants, there's just Federation bureaucracy which is managed by Starfleet.
Peter G.
@ Chrome,
"Towards the beginning of the episode, JAG Louvois cited the “Acts of Cumberland” and stated that based on current law Data was property of Starfleet. This led for the need of Picard to challenge the existing law."
Yeah, in the common law generally a new scenario will be judged based on precedent of cases that are as similar as possible to the one at hand. Problem is, this case is probably without precedent, as Starfleet afaik doesn't have *any* other sentient AI that it's aware of besides Data. So her conclusion based on some 'close-enough' precedent is probably the best legal answer she could give, meaning that if one looked to precedent to solve the issue, that is the conclusion one would reach. But precedent is of no value in an unprecedented situation, such as this one. So she may have been technically right in a lawyer sense, but wrong in every way that materially matters; and certainly we shouldn't take from her comment that any actual people in Starfleet had already made this determination in regard to Data specifically. In choosing to uphold this interpretation I can't help but feel that she was doing it to directly spite Picard, even though it was IMO clearly not a reasonable interpretation of law. Maybe I need to watch it again, but my memory tells me that there were some subtle signs in the episode that until near the end she had bias in favor of one side of the issue.
@ Elliott,
"I only bring all of this up to point out that there's every reason to believe that, in its complacency, the Federation would see no contradiction in having Data serve in Starfleet without considering him a sentient being."
In terms of bureaucratic sloth I think that's a good point. The red tape might just have not caught up with a situation that was already ongoing. However the distinction we need to make is that in regard to black ex-slaves, the slow creep towards civil rights happened because some people had a vested interest in delaying or reversing those gains. It wasn't that everyone was for it and they were just inefficient at doing anything about it; there were many who were fighting it, and so it was a battle. But we mustn't assume this is what's happening in Data's case, that in order to abuse him for as long as possible they will pretend he's not a person. I think we should rather assume that their intentions are good and that it's simply a matter that had never been formally settled, one way or the other. It always struck me as being an innovation of Maddox's that Data is property, rather than a matter that had ever been settled before.
"Well, Starfleet isn't a traditional military. Picard and Riker usually had authority to appraise the scientific projects they were monitoring. Since there's no money, there's no such thing as a private sector or government grants, there's just Federation bureaucracy which is managed by Starfleet."
Right. But a starship would have authority over a science station presumably if given that mandate by the Federation science council or whatever. Picard couldn't just fly to some random science base and start barking orders; they don't work for him and aren't in the chain of command. In the sense that starships can have interactions with civilians (like scientists), the Captain is functioning as a de facto ambassador for the Federation in addition to a Starfleet officer. But very often we see that the Federation sends its own expert and the Captain's job is just to get him there. In such cases the expert is often in command of the mission, and this sometimes generates a conflict between the mission commander and the ship's commander (like in Peak Performance).
Chrome
@Peter G.
"Yeah, in the common law generally a new scenario will be judged based on precedent of cases that are as similar as possible to the one at hand. Problem is, this case is probably without precedent as Starfleet afaik doesn't have *any* other sentient AI that it's aware of besides Data. So her conclusion based on some 'close-enough' precedent is probably the best legal answer she could give, meaning that if one looked to precedent to solve the issue that is what its conclusion was. But precedent is of no value in an un-precedented situation, such as this one. So she may have been technically right in a lawyer sense, but wrong in every way that materially matters; and certainly we shouldn't take from her comment that any actual people in Starfleet had already made this determination in regard to Data specifically. In choosing to uphold this interpretation I can't help but feel that she was doing it to directly spite Picard, even though it was IMO clearly not a reasonable interpretation of law. Maybe I need to watch it again, but my memory tells me that there was some subtle signs in the episode that until near the end she had bias in favor of one side of the issue. "
Yes, I agree with your point that Data is definitely a unique case and that the JAG was using existing (who knows, perhaps archaic) law to make her determination. Also, the episode has an arc about the JAG wanting to create new law - perhaps her activist role in that regard hurt Picard in a past incident. Nevertheless, I don't think she was really trying to spite Picard, as she certainly was sympathetic to his position on individual rights and argued some of Picard's points to Maddox before the proceedings began.
One might even make the argument that the JAG was biased towards Picard, since they knew each other and had a date after the decision. Yet I think the better reading of their relationship is a thematic one, where Picard is at first very skeptical of Starfleet law but comes to realize, through litigation, that a deeper understanding of his argument made his position stronger. I.e., the system does work, and he could get along with it and with the JAG. So yeah, on the whole, I think the writers were trying to make Louvois the neutral party to be persuaded one way or the other.
Another Dave
I don't know if anyone else made this particular point, and it's really a tangent, but Data had absorbed every textbook and treatise on poker and didn't grasp the concept of bluffing? I give the episode 4 stars, but that inconsequential quibble always nagged at me.
Chrome
@Another Dave
I can see what you mean, and of course you'd think Data could conceptually understand bluffing (I think he does) but still not know how it works in action. I have some Japanese friends whom I've watched Texas Hold 'Em with, and they've explained to me that in Japan people are generally more "true to the numbers". For example, they know the strengths of poker hands and the odds of winning, but they're terrible at pretending their cards' strength - their mathematical odds - is higher than it actually is. This puts them at a severe disadvantage in poker because you basically can't win if you only play strong hands (good players catch on to this quickly and won't call you).
Now of course Data's not quite the same as someone from a culture foreign to poker, but I can imagine him being very hardwired to the numbers - trying to play poker as if it were blackjack, considering only the odds, just like my friends. Bluffing is an art and it takes practice both to perform and to catch. Data at this point is very much an amateur.
Peter G.
@ Chrome,
I still don't buy it either. The one thing I'll say about the TNG poker scenes is that the writers didn't know much about poker. I don't know much either, but I do know enough to know that what I'm seeing is "TV poker" and not poker.
From what I understand about high-level poker strategy, bluffing isn't some art in the sense of mastering facial technique or being a good liar or something. It's simply about creating conditions where your moves can't be predicted, and actually Data should be excellent at this. I've been told that top level poker players randomize their play some % of the time, based on mathematical principles of what the perfect ratio is of playing the cards versus the randomization game, and additionally the manner in which you randomize must be randomized as well. Data should be a whiz at this as he could number crunch that in an instant. In practice he should be a top level poker player even if he knows nothing about human emotions. It's possible that "instinct" may provide an edge to another top level humanoid player so that Data might not quite be the best player around, but he should be able to whoop Riker and Troi.
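To put a number on that randomization idea, here's a toy sketch in Python (my own construction, nothing from the show) of the standard indifference math for bluffing frequency:

    import random

    def optimal_bluff_fraction(pot: float, bet: float) -> float:
        """Fraction of your bets that should be bluffs.

        A caller risks `bet` to win `pot + bet`, so they break even
        when you are bluffing bet / (pot + 2 * bet) of the time;
        bluff exactly that often and their call/fold decision is
        indifferent.
        """
        return bet / (pot + 2 * bet)

    def should_bluff(pot: float, bet: float) -> bool:
        # Randomize the decision itself, as Data trivially could.
        return random.random() < optimal_bluff_fraction(pot, bet)

    # For a pot-sized bet, one bet in three should be a bluff:
    print(optimal_bluff_fraction(pot=100, bet=100))  # ~0.333

An android could run that (and the far more complicated full-game version) instantly, which is the point: unexploitable "bluffing" requires no reading of faces at all.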
I get what they're going for: he struggles with things that most humans take for granted. But poker is a bad venue to show that, I think; it's scarcely better than "burning the midnight petroleum."
Chrome
@Peter G
“I've been told that top level poker players randomize their play some % of the time, based on mathematical principles of what the perfect ratio is of playing the cards versus the randomization game, and additionally the manner in which you randomize must be randomized as well. Data should be a whiz at this as he could number crunch that in an instant. In practice he should be a top level poker player even if he knows nothing about human emotions. It's possible that "instinct" may provide an edge to another top level humanoid player so that Data might not quite be the best player around, but he should be able to whoop Riker and Troi. ”
This goes back to what I was saying with my example of my friends knowing the numbers of the game. It's true there's tons of math and mechanics involved in professional poker. If you ever watch WPT, the odds of either player winning from their current position are displayed conveniently for the viewer, but the players don't have this luxury and need to work all this out from their own math. I agree Data should be really good at that aspect, which could lead to some skilled bluffs, but the bluffs themselves are not mechanically executed with any degree of certainty.
Bluffing is psychological, based not only on board state but on your opponents' behavior and way of thinking. You play your position relative to your read on your opponents' mental and emotional state. Small intuition-based skills, like knowing why opponent A might bluff here and why opponent B might not, are critical components of a successful read, and separate amateurs who get the fundamentals (like Data) from those who can manipulate other players (like Riker). This is distinct from, say, chess, where there's an optimal move or set of moves derived from board state alone.
I could go into more depth about poker psychology but there’s tons of books and articles on the subject. If you’re interested, here’s an article describing the player types using dualism and realism: www.psychologytoday.com/us/blog/fixing-psychology/201305/poker-and-psychological-realism?amp
Now whether you think androids should be good at psychology might be based on your own ideas of what you think machines may be capable of someday. But this series, outside this episode, shows Data as fairly inept at even realist psychology. It does seem like he gets better at it, though, so maybe there is a high-level mechanical variant to bluffing. But I think the show depicts realistically that Data needs time and practice to develop a sort of ultra-mechanical imitation of what humans can do naturally with correct cognitive insight.
George Monet
What ruins this episode for me is that each of the "regulations" created for this episode is more stupid than the last, and not one single person ever makes the simplest argument: "if Starfleet has recognized Data as an officer then it has recognized Data as an individual being with the same rights as every other member of Starfleet, and thus has waived the right to argue that Data is not endowed with all the rights of every other member of Starfleet."
The JAG regulations about court procedure are also completely ludicrous. If Picard is representing Data and filed a claim with the JAG office AGAINST Starfleet, and Starfleet did not appoint a representative, then the summary judgment would not be entered against Data but against Starfleet, as it would be Starfleet who failed to show up in court. The state doesn't automatically win a case when the DA fails to show up in court to represent the state. If the state could automatically win a case so simply then why would it ever show up in court?
Secondly, Riker would not be chosen to represent Starfleet because he is unqualified, biased, and there is already a more senior member of JAG present in the form of Louvois herself. As per the above lunacy, if Riker really wanted to represent Starfleet then he should have simply failed to show up in court, because apparently that would cause Dumbvois to enter summary judgment for Starfleet. Do you see why this was such a stupid Catch-22? Also Riker is a LIEUTENANT. CAPTAIN MADDOX IS A CAPTAIN. SINCE CAPTAIN OUTRANKS LIEUTENANT THEN MADDOX WAS THE SECOND MOST SENIOR MEMBER OF STARFLEET ABOARD THE STARBASE. Failing that, Riker could have had an in camera meeting with Dumbvois where he told her how biased he was, and Dumbvois could have moved to the third most senior member, then the fourth, and so on down the line. Or she could have simply waited two hours for someone from another JAG office to take a shuttle to their starbase to represent Starfleet, something which would happen in the real world.
The only way that Data could make a claim is if he had the same rights as every sentient member of Starfleet. If Data was a machine then he could not make a claim stating that he has all the rights of every sentient member of Starfleet. Thus if Data can file a claim then he cannot be property. QED.
This is an episode with an amazing idea that is completely ruined by the horribly stupid execution which makes the entire episode unwatchable because every second I need to stop the episode so I can facepalm.
Marshal
@George Monet
Not that you're wrong or anything, but I think with court episodes of TV you have to give the show a little slack on following civil procedure. Keep in mind that if they strictly followed procedures as real courts do, it wouldn't make for very interesting TV.
That said, I understand where you’re coming from and think there should be a high bar on the episode to be thoughtful and entertaining to make up for its disregard for realism.
What I do think this episode gets right is that it does thoughtfully consider the principles of how different forms of intelligent life should be treated. Picard doesn’t win his case because he bucked procedure, but because he convinced everyone (even Maddox) that Data was close enough to sentient life that he deserved the full rights thereof.
Peter G.
"@George Monet
Not that you’re wrong or anything"
Other than when he says that Commander Riker is a lieutenant and Commander Maddox is a Captain?
Chrome
Certainly, you’d need a contrived set of circumstances to have Picard face Riker in a legal battle but it’s television and all in good fun.
One hole in your logic, George Monet: the episode never says Data is making the challenge to Starfleet's claim, and in fact Picard says explicitly *he's* making the challenge. This situation isn't so contrived, as there were similar cases with slaves back when they were considered property and their owners were responsible for them.
HackFarlane
@ George Monet
FINALLY! George Monet, you explained all my objections to this overrated episode better than I could. I have never understood why this episode is so highly regarded--perhaps it's worth 2 or 2 1/2 stars because of the cast's acting and the interesting questions it raises, but come on, the execution is so dumb and makes Starfleet come off as fascistic with threats of summary judgments ("I WILL END THIS RIGHT NOW!") and incompetent with the harebrained idea of a starship captain, not a JAG lawyer or whatever the Starfleet equivalent is, representing Data's case and the first officer (?!) of the same ship prosecuting for Starfleet, even after said officer clearly says he doesn't believe in Starfleet's case. That B.S. about "I don't have a full staff" is ridiculous. Postpone the damn case until you can get one, like any reasonable state would. Ironically, this may have worked better in a situation such as the one on STAR TREK: VOYAGER, where the crew was on the only starship in the quadrant as far as they knew.
Is Maddox worried that Data is a flight risk, and therefore the case should be heard NOW? Well if it's Starfleet's idiotic argument that Data is property until proven otherwise, why can't they just detain him/store him until the case can be prosecuted properly?
I know, I know, it's because the main cast members have to star in this hackneyed courtroom drama. Patrick Stewart can definitely play a good lawyer, and his speeches are dramatic, but the situation is so ludicrous that, like George, I can't watch this episode without cringing at the bizarre circumstances in which the characters find themselves.
wolfstar
I'm the same, this has long been a 2.5 in my book. The idea is great (as are the performances) but the trial is borderline nonsensical - the execution just doesn't cut it. I like Melinda Snodgrass's later episodes (Pen Pals, The Ensigns Of Command, and The High Ground) much more, as well as The Offspring, which is exquisite. For me it's not so much the fact the trial is even taking place that's the problem (though I agree with the criticisms above), more the way it's portrayed and how it proceeds.
Data’s android balls
This episode must lose at least one star for the mere fact that I find it highly unlikely that anyone in the 24th century would choose to use a “toaster” as an analogy for Data. Perhaps a tricorder, replicator or sonic shower, but definitely not a toaster. *Tongue firmly in cheek*
Nic
Sorry to point this out but all those stating that this episode isn't great (or even good) based on how unrealistic it is in terms of the legal proceedings are completely missing the point of the episode.
The episode is designed to explore the issue of Data's sentience and the legal procedure is used merely as a set piece.
To state that it isn't good because of how unrealistic the court scenes/system are is the equivalent of derailing the entire series based on how 'unrealistic' the situations are. The entire premise of Star Trek stands on such flimsy limbs that unless you suspend your disbelief you will never enjoy any of it.
Bobbington McBob
Absolutely one of my favourite pieces of science fiction, let alone TNG. Intensely moving and powerful; we really get to see that Shakespearean training come to the fore in Stewart, and that seems to invite Frakes to really step up as well. When he turned Data off, it was truly chilling.
"Well there it sits!". I swear someone is cutting onions in the room every time I watch that part
Meister
Wonderful episode. Yes, I wondered why the question wasn't settled when Data was going through the Academy or getting his commission, but who cares. (And the fact that this wasn't raised as an argument by Picard made me think the writers realized it was a weak link in the premise of the episode.) It's interesting, as we ramp up so-called AI now, how this episode will be seen differently.
I will try and come back and read the comments here. Although they were written years ago, they are like a book club to me. thanks!
Lizzy DataLover
I agree with a lot of the points made in these posts *I disagree with some of them too.* I am 100% pro-life, a vegetarian, an animal rights activist, and I'm very much for human rights as well, extremely against racism and sexism and all that, and a true believer/follower of the Gene Roddenberry future. Although even it seems to have its flaws. Take this episode for example. As good as it is, I thought the people of the twenty-fourth century had *evolved* to the point of accepting everyone, no matter who or what they are. True, Data is quite a bit harder to understand/define than any organic being that's part of the Federation, but like Picard said, Starfleet's very mission is to find new forms of life. And Data is exactly that. It is totally hypocritical to begin a quest to explore other life-forms and then refuse to accept or even try to understand them once you encounter one.
And of course there's the whole deal with the question of Data's life in the first place, but IMHO there's really not that much difference between biological life and technological life. Although I may not have many frames of reference other than fictional characters, it seems that Data and his fellow crewmates function in relatively similar ways. One being organic and one being mechanical in nature, they both have virtually the same makeup/bio functions. His positronic brain doesn't actually sound all that different from a human brain. The main difference being it is constructed of metal instead of organic tissue. So it's basically what the human brain would be if it were made from metal.
I do not have any prejudice against AIs; in fact most of my very favorite TV characters are AIs. Like right here. I even named my phone Andy *for android* and usually refer to him as a he. As playful as this is, it just shows you how much I'm willing to accept. Much more than the people of 2364, it seems. Now I'm not expecting them all to name their phones and kiss their owies when they malfunction, but it would be nice to see some willingness to accept the unknown here. We cannot prove true sentience of ourselves any more than we can of Data. Louvois said it best when she stated that she doesn't even know if she has a soul, let alone Data. So Maddox's argument is invalid, because if he calls Data a machine, just what exactly does he really mean by that? If he means a self-functioning structure that is constantly working to keep itself going and relies on specific specifications to do so, then like Picard said, we're all machines. And if he means a piece of mechanical equipment with no brain that blindly serves its makers without a second thought, *because it has no thoughts,* then he is still wrong. And I'll tell you why. Your toaster would never ask you "why do you put bread into me?" even if it could talk. The very fact that Data rejects the procedure in the first place should have been proof enough; if Data were really a mindless automaton with no sentience, then he would have blindly accepted. But instead he fights for his case, making very rational arguments as to why it would be a tragedy if something were to go wrong. How could a being with essentially no life be trying so hard to protect it? Why, he wouldn't even know what life is.
Data is as human as you or I when we get right down to it, perhaps not completely physically, but he is living. And since we cannot prove what living really is or what it means for anyone, we will probably never know the true answer to the big question of "does Data have a soul." I will never know it of myself and most likely neither will all of you. But I believe it is there, or else we would all just be empty shells, much like Data when he's been *turned off.*
"What makes me sentient and him not?" Good question Picard. What is the difference? What gives biological beings more rights than anything else in existence? True, organic creatures seem to have the longest track record of showing real sentience, *whatever that may be* but how do we know that this isn't the beginning of the rein for artifical sentience? Just because they were relatively non-living before doesn't mean times wont change. We started out as practically nothing before our life and eventual sentience emerged. And it certainly took its sweet time. We have no frame of reference for even life itself to be judging other more unique beings for not being alive. We're judging them for not being exactly what we are, and not living up to our standards, not for being insufficient in nature. Until we learn what the true meaning of life is *spoiler: its 42* we are not allowed nor qualified to make assumptions on what life is and what it should be.
All that being said, I really enjoyed this episode for its marvelous tackling of an outrageous issue. Classic Trek.
Springy
Bravissimo, one and all! So good! Delicious!
A classic, well done in all aspects.
Particularly loved the Phillipa-Jean Luc thing, and how they accused each other of being too attached to duty, too cold - too robotic, you might say. And I liked the parallels between Maddox-Data and Phillipa-Jean Luc . . . the recognizing each other, the history that included a grievance, the feeling that, for our totally human pair, neither one was going to sacrifice themself for the other - though that was what their history and personalities demanded. So they had to walk away, just as Data did, and return to duty.
"What does it mean to be human?" is the question that takes center stage, and the answer isn't simple. What a mess we are - duty-bound but love struck, longing for autonomy but desperate for communion, wanting freedom and choices but craving roots and stability, anxious to protect ourselves but willing to take bold risks for joy of it. Call my bluff, if you dare.
I loved the Guinan part, Whoopi was great, helping Picard see the big picture. No man, no starship even, is an island, entire of itself.
Just wonderful - Riker at the card game, bluffing, Riker with Phillipa, unwilling to find out if she's bluffing about sentencing Data to toasterhood, Riker and Data, at the end.
Sure, it had its imperfections, but they were insignificant. It brought me to tears a few times. A great ep!!
William B
@Springy, isn't it great? I love the parallels you pointed out.
I think the Picard/Phillipa thread is interwoven nicely, and there's a sense that Picard's winning her over, and her being willing to be won over, heals their rift, just as it apparently heals the Data/Maddox one. No hard feelings: they recognize that, in the end, they all genuinely wanted to get at the truth, as blinded as some (esp Maddox) were to it.
I like how Riker's arguments were more physical and visceral, Picard's more intellectual and spiritual. Both because it suits the characters, and also because it fits what aspects of humanity Data has and lacks. Both Riker and Picard have to detach emotionally from Data (become more like Data) to do their part -- Riker to do the duty he hates, Picard to see past Data to the bigger picture. And both have to be less like Data in order to make their cases dramatically.
Chrome
Great observations, Springy and William; it's always nice to see comments on this one with fresh eyes.
The interesting thing about the Picard/Phillipa relationship, I think, is not just that Picard had to win Phillipa over with his case, but also that Phillipa had to show Picard that the justice system works. The Stargazer hearing, alluded to in this episode, tells us that the law isn't always pleasant and can feel unfair. Here Phillipa needed to be, and I think was, a very shrewd magistrate who managed not to give in to either side until the matter had been thoroughly fleshed out.
Peter G.
There's a thread here I hadn't noticed before and is nicely woven in. Early in the episode Picard refers to Phillipa as having really laid into him in his hearing, to the point where he felt betrayed that she wasn't merely acting as a legal official but seemed to be going after him with a vengeance. He clearly took it personally, and felt that she had it in for him, and he still hasn't forgiven her for it. We get the sense that they had a romantic relationship back then, and if I'm guessing I would suspect that maybe they broke up shortly before that hearing, so that it might have given Picard the impression that she was being vindictive just to get at him. Or something like that.
Later in the episode Riker is told that he must play prosecutor - a task it's not really logical to assign him, but necessary for the purposes of the thread I'm referring to. Because Riker doesn't want to do it: he feels it will be a betrayal of Data to go after him in full force. He finally agrees because the force of the law and procedure simply requires it of him. At the end of the episode he apologizes to Data and feels like he's betrayed him, and Data informs him almost casually that had Riker not done so Data would have been in even more trouble, and that going after Data in full force was the real sign of Riker's friendship. More broadly, we can argue that pursuing justice - even in regards to people we care about - helps everyone and should be seen as a sign of caring. Maybe that means caring about society, or justice, or the truth: some higher purpose than just "I want to stick by my buddy". Because in the greater scheme trying to make a better world is the best way of sticking by your buddy, as contrasted with the dictator style of doing so, which involves breaking the law to gain advantage for you and yours.
And this brings us to the crux of the episode: the question of the hearing isn't actually left unresolved after all! Picard *could not* understand Phillipa going after him with a vengeance, and took it as an affront and betrayal. But Data receives the exact same from Riker and thanks him for it. And *that* is what proves that Data is not only alive but is possibly a life form more inclined towards forgiveness than humans are: he immediately saw it from both his own point of view and Riker's, but also in terms of the big picture. There was never any question of Data taking offense because it was personally uncomfortable for him to have a friend do his duty. He could instantly come to a peace with that situation that Picard never could, even in years of thinking about it. Even by the end of this episode Picard is only starting to come to terms with being on the rough end of justice, while Data naturally understood it. Quite a sub-story!
Springy
@William, great point that both "pairs," by the end, manage to come to a better understanding than they had previously. Agree there's a sense that Picard is winning Phillipa over, though I'd say vice versa is also present - as @Chrome so nicely points out.
@Peter G, love your observations and agree that what Riker does to Data, is what Phillipa did to Picard. Picard was badly hurt by it though, felt betrayed, and that retarded his ability to see what Data sees so effortlessly: Riker did what he had to do, for Data's sake and at some cost to himself.
Not sure if this makes Data more or less human, but that's an issue the ep is exploring and we're meant to think about. Vanity/pride is mentioned several times, and Phillipa calls Picard a pompous ass. And it is precisely vanity - pride, a bruised ego - which Data lacks, and Picard is bristling with, that allows Data to see more clearly.
The call of duty, and the struggle to determine where your first duty lies, is a constant refrain - Phillipa did her duty during the court martial. Picard contemplates where his primary duty lies - to Starfleet? To Data? To future generations? Riker does his duty. Data hesitates to respond on the stand because of his word to Tasha. It isn't until Picard assures him he would not be betraying Tasha that he responds.
There's a lot about selflessness, self interest, self preservation, and self sacrifice in this ep, too. Risk and reward, freedom of choice, what you choose and what is forced upon you.
It's just masterfully woven together. Beautifully done.
IkesNephew
Fun trivia about the episode's background: Melinda Snodgrass had written an original series Star Trek novel, The Tears of the Singers, in 1984, but had never written for TV. Her friend, George R. R. Martin (working on the Beauty and the Beast series at the time), told her that her writing strengths would lend themselves to television.
Melinda then wrote The Measure of a Man and sent it to George, who liked it and immediately sent it to Star Trek. And the rest, as they say, is history.
Latex Zebra
Surprised people haven't come back to this with its relevance to the new Picard series.
Boopop
I just did, Latex Zebra - it had me welling up. I've mentioned this sort of thing (Turing test, etc.) to friends who have no interest in sci-fi and still they don't show any interesting. I don't really get it, as I find the whole subject fascinating. I'm curious what will happen when we start having Alexa/Google Dot speakers with the same intelligence as that of a pet. We have animal rights for pets; will we need the same rights for appliances? I think that's a debate that will be needed sooner than the likes of what happened in Measure of a Man.
Great episode and I wouldn't have guessed it was so early in TNG. Picard clearly had a strong connection with Data from early on in the series so I can see why the writers for Picard decided to continue along this thread.
Boopop
any interest*. Wish I could edit my comments >.>
Latex Zebra
@Boopop - Pleased to see I'm not the only one going back. I just watched I, Borg again and was blown away!
philadlj
Just watched this again after a few years, on Netflix, which has a stunning HD remaster. It's almost *too* good at times, as it reveals that parts of the Enterprise and Starbase weren't as spotless as SD television suggested! Still, exposing the imperfections of the production design is a small price to pay to see the minute detail in the actors' faces and eyes, which really brings their performances to life.
And *what* performances. Stewart, Spiner, Frakes, the guest stars, and Goldberg are all at the top of their games, and have a superb script into which to sink their teeth. Even LeVar Burton maximizes his minimal screen time with his subdued yet devastating goodbye to Data.
Looking back at how the teaser contained a poker game (the first on TNG) in which Data first learns about the meta-logical intricacies of the game, I can't help but see Riker's bluff echoed in part of Picard's courtroom strategy: rely on that which is unknown to sway your human opponent.
Every time I watch "The Measure of a Man" I notice another subtle moment of excellence in the writing and performances. And it draws a little closer to clinching the position as Best Episode of Star Trek, Period.
THERE IT SITS, indeed!
Focksbot
I've also just watched this episode on Netflix, and it's certainly the best TNG episode of the run so far. I'm also not surprised to see it's generated a lot of healthy discussion on this page. A few points:
* It's inevitable that the legal proceedings are rather unrealistic. You cannot stage a hearing of the sort we have today, deciding an issue of this magnitude, for a TV audience. You couldn't do it in five times the running time. There would be far too much evidence to consider, legalese to unravel, and nitty-gritty back-and-forth argument to work through. The events of this episode are best thought of as a debate for the benefit of the audience, thinly disguised as a legal hearing.
* Once you accept that, it's a relatively simple exchange of ideas - and that's why the episode works so well. Riker bases his approach on the presumption that a machine and a sentient being are two opposing concepts. A machine can be owned and a sentient being can't, so therefore all he has to prove is that Data is more the former than the latter. Hence his case is based on demonstrations of 'machine-like' qualities.
* Picard realises he doesn't need to operate on the same assumption. He can argue that it's possible for something to be a machine *and* a sentient being. So first he sets out to demonstrate that sentience is impossible to prove to any greater degree in a human than in Data. Having established that, the force of his argument comes from pointing out the devastating consequences of treating sentient beings as property. It follows that if you cannot firmly disprove sentience, it is better to operate on the assumption that it exists.
* This all works completely fine as a logical exchange for a general audience. The only problem is that at a few points they start talking about 'life', which - as others have pointed out - is a different concept. It doesn't make sense that the argument should hinge in any way on whether Data is alive, because houseplants are also alive, and no one has a problem with these being property.
* The Guinan exchange is an entirely worthy element of the plot. For the sake of drama, it's a good move to show Picard thrown off his game by Riker's case. He knows that it makes for a relatively weak rebuttal to simply say: "The truth is we cannot prove Data, or any one of us, is not sentient." Logically, that swings it, but emotionally, it's very flat. Guinan tips Picard off as to how to develop that argument into something much more dramatic - something that brings wider issues of morality into it.
* I continue to shake my head, as I did over that DS9 Vic Fontaine episode, at how some people react to any character mentioning or implying race as if they were being slapped in the face with political correctness. Must be a US thing.
* The acting in this episode is a step up. It feels like the characters are really starting to own their roles for the first time. Can't agree enough with what @philadlj says above.
Rahul
Obviously much has been said about the first poker game on TNG but as someone who used to play (recreationally), it really bugs me how Data absolutely does not play like a cold calculating android should -- or even as a regular human should.
He folds trip queens on the river when all he has to do is pay another $10 -- he is getting pot odds and has an excellent hand. If Riker actually does have a flush, then he would be correct in paying the $10 to see it. I'm sure a poker pro would pull his hair out if he saw this hand!
But of course the writers wanted to tie Data getting bluffed out with the bigger issues in this episode... but the teaser is a lesson in how not to play poker.
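To put rough numbers on that river fold (the $10 call is from the scene; the pot size is my own illustrative guess):

    def required_equity(pot: float, call: float) -> float:
        """Minimum win probability needed for a call to break even."""
        return call / (pot + call)

    # With, say, $60 already in the pot and $10 to call, Data only
    # needs to win about 14% of the time -- trip queens clear that
    # bar easily unless Riker holds a flush nearly every time he
    # bets this way.
    print(required_equity(pot=60, call=10))  # ~0.143

A cold, calculating android should make that call every single time.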
Mr Peepers
It was a crime for Riker to turn off Data while claiming he is just a machine. There would have been huge ramifications if Data did not have nonvolatile memory. Just like clubbing a man: once you turn him off, you can't turn him back on.
My first computer was a Commodore 64. After powering it on, you could load a program into it and use it for meaningful work. But if you shut off its power, any data or program it had was lost. Once you powered it back on, it was a totally new machine with no residual evidence of what it formerly was. The same could occur after Data is disassembled. Once put back together, it may not be the same Data you had before. It would have to relearn everything it formerly was. Just like a person cannot be reincarnated, or cloned: in either case, you end up with something totally different, even if it were possible.
Lorene
I especially liked the Phillipa character and the interchanges with Picard. A complex character. She was a very believable judge in tone and demeanor.
Sen-Sors
Judge Philipa, you mean the one that banged Picard back in the day? I could tell because she heavily implied it during every one of her dialogue scenes. It's funny how she demands Picard and Riker play out this objective, adversarial relationship in court but she as the judge can hardly go ten minutes without making reference to how she used to bang the defense, during the trial no less!
Good actress and a good character for the episode's larger themes, but holy crap lady, we get it.
Lorene
I meant complex in her career: having been demoted ("they forced me out"), and then coming back after prosecuting Picard in the Stargazer case. Her strong judicial abilities. The humorous interchange when she told Picard her universe was reassured by his continuing to be a "pompous ass". You are too hung up on sex, Sen-Sors.
Jason R.
Picard's speech is rhetorically brilliant.
Riker has already established that Data is almost certainly just a machine. So Picard is in a trap - as long as he focuses on the question of Data's sentience he will probably lose, because let's face it, the superficial evidence before the court favours Maddox. In that narrow argument, Picard is toast. Whether or not Riker turning Data off was legal, it cuts to the very heart of the issue. When Picard tells Guinan Riker nearly convinced him, he's speaking for the audience. Even as a fan of the show, you see Data just fall over like a puppet and you have doubts!
But then thanks to Guinan Picard realizes the way out of the trap is to ignore the question of Data's sentience, or at least shift the focus.
Alright Maddox, what do you envision happens if you win? And that is the trap for Maddox because Picard knows that Maddox will go on about the limitless potential of armies of Datas.
And suddenly, the new question becomes: what if Maddox wins and what if he is wrong, even by the slightest degree? What does that tiny 1% sliver of doubt mean?
And that's what wins it for Picard. Louvois might think that there is a 1% chance of Data's sentience and a 99% chance of him being a machine. But is she willing to gamble on setting in motion a generational tragedy? Will she be the instrument of an atrocity, the next Hitler?
Suddenly hypothetical worries about the Enterprise computer refusing a refit seem petty. Picard has brilliantly put the burden not just on Maddox but on the court and that burden is absolute, utter certainty. Because anything less is utterly unthinkable.
As a lawyer, I can certainly take issue with the laxness of the courtroom procedure but not with the rhetorical skill of the two sides and the razor sharp focus of the writing. This is the kind of episode that a newcomer can step into and enjoy without knowing the first thing about these characters. In fact, a newcomer might benefit from a certain objectivity that regular viewers who know Data might lack.
While some of the musical cues give away Maddox as the "bad guy" initially, the courtroom procedure is a perfectly even contest where both sides make splendid arguments without the writing forcing the audience to a "correct" answer.
What I especially enjoy is Maddox's conversion - it feels earned because even he is swayed by Picard's implacable logic.
This is the episode I would show a newcomer to Trek. And people hate Season 2? Bah!
He's dead, Jim.
As a newcomer to TNG, I generally hate it. I bear no pre-conceived prejudice toward the show; on the contrary, after falling in love with TOS, I was excited to check out the Next Generation of the show. Unfortunately, in nearly every category except the impressive effects and larger budget, it seemed to me a disaster, coasting off the success of the former show and movies. Terrible acting, bad stories that rip off a TOS episode half the time, etc.
And then this happened. I decided to watch Measure of a Man, expectations set rather low by this point. I am really impressed by both the writing and acting; it's as if the show did a complete 180. Who knew Picard and Riker could actually do a pretty damn good job when given the chance? Heck, even Geordi (exactly what does he do besides have space-age glasses?) has a great scene with Data before it looks like he will be forced to resign.
This is the first episode I've seen in about 1.5 seasons (though admittedly I've not seen most of them, just a handful from that range that sounded like they might not totally suck) that actually feels like it belongs in Star Trek. It managed to intelligently handle a complex issue, and it actually changed my mind by the end.
I went in expecting the idea of giving a machine "rights" to be ludicrous nonsense, with any comparison to slavery being a predictable cliche and inaccurate analogy. To the writer(s)' credit, they do a great job of presenting that viewpoint, presumably one most people watching already hold.
Then they show the other viewpoint, which makes such a compelling case, I am genuinely at a loss for how to disprove it (which you actually can't, nor can you prove it, and I will explain why in a moment.)
The idea that Data is unequivocally a machine, yet is sufficiently advanced--has evolved far enough from the earliest calculator--that it is impossible to find a pertinent criterion that actually distinguishes him from any other natural life form: this is something I've never thought about before, and my reaction after Picard laid out this viewpoint was probably the same as Maddox's.
Not only is this idea so brilliantly handled, and the dialogue very well delivered, but the wording of the lines themselves is so eloquent.
"Does Data have a soul? I don't know. I don't know if I have one. But I think he should be free to discover that for himself."
Now that I have given the writers and cast sufficient credit for pulling off an amazing episode I did not think possible for this show, let's acknowledge that it is far from perfect, though the faults listed below do very little, if anything, to detract from the excellent quality of the episode.
There is a glaring conflict of interest present in the episode, namely Picard's prior history with the acting judge of the case. She so obviously has feelings for him and wants to get back together with him. If nothing else, she at least wants his forgiveness for being too harsh on him in a prior case. And now she's going to decide a case in which Picard is one of the litigants. Obviously, she's biased to let Picard win.
Then the idea of giving Riker the role of prosecution is also a conflict of interest, which he does somewhat acknowledge before reluctantly accepting it. Riker is supposed to act in the prosecution's best interest, but he keeps quiet when Picard references Yar's brief liaison with Data in a very misleading manner. She went crazy because everybody went crazy that episode (another bad TOS plot rip-off), yet Picard presents it to those unfamiliar with that event as if Data had a genuine emotional relationship with her.
Why is it that the Federation needs to dismantle Data to find out how he works and mass produce him? I always figured Data was already one of many. It seems pretty stupid that if Data is your only working prototype, to place him on board a ship like the Enterprise where conflict with hostiles is commonplace and risk of losing life is relatively high (if you wear a red shirt of course :) ) Wouldn't you want to keep him some place really secure, working in some classified program in a Pentagon-like facility? Especially if you don't know how to replicate him?
And where's the patent? Surely the creator would have saved blueprints and applied to the Federation for patents? I guess this future without money (and no incentive to protect your IP) isn't quite as swell as Picard likes to boast it is.
This need to dissect Data might have been more believable if he were already one of many, but had some human-like quality that made him an exception: perhaps something in his artificial brain that the others don't have, something that came about spontaneously. That would also go a long way toward supporting the notion that he is a life form, if he can adapt to his environment and develop features outside of his programming.
Finally (and this isn't a problem with the plot), Picard's core argument is deeply fallacious, as I hinted at earlier. Why? Because it's essentially Pascal's wager.
If you're unfamiliar with Pascal's wager, it is the idea that everyone should believe in God, because the consequences of taking the risk of not believing and turning out to be wrong are just far too costly to ignore. In other words, if you waste your life going to church and praying, at worst you've been a moral person (theoretically) who lost a little time on Sundays. But if you refuse to believe in Christianity (or whatever religion), your eternal soul is damned to an eternity in hell. Even if you believe God doesn't exist, is it wise to take that risk when the potential loss is so great?
The Picard-Data version of this old fallacy is as follows: "Data is probably not a life form. He is, after all, a machine that just somewhat resembles a man. But if there is a chance that he does have a 'soul'; if he really could be a sentient being just like other life forms, then by denying his rights you are setting a precedent that will condemn him, and all future 'life forms' like him, to generations of slavery."
Given humanity's dark past with slavery, and the inability of slaveowners, in their own time, to see the error of their thinking, this is a very persuasive rhetorical argument.
But logically, it relies on a presupposition: namely, that there is indeed a big enough chance that Data is really a life form.
You could use this same "Pascal's wager" argument to "prove" that monkeys deserve the same rights as men, because if you're wrong...
But nobody would seriously entertain that idea, because there is no chance the chimp in the zoo has sentience the way we do (and that chimp is a lot closer to being human than Data is).
As an atheist, I reject the Pascal's wager proposition, because I see no reason to seriously entertain the idea there is a chance at all God exists; therefore, any consideration of the potential consequences is entirely irrelevant.
If we assume that Data is a machine, and by definition machines are incapable of genuine thought or emotion, then there is no chance he could have a soul, and we should not worry whether machines designed to make work easier have programmed reactions to such work. Picard sidesteps the question of "Is Data alive?" and takes advantage of the fear associated with "Well, what if he is?" to convince the judge and viewer of his case.
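To make the structure of that objection concrete, here is a toy expected-value sketch (my own illustration in Python; the utility numbers are arbitrary stand-ins, not anything from the episode). It shows why the wager hinges entirely on the probability: any nonzero chance lets the enormous stake swamp the calculation, while at exactly zero the stake never enters at all.

def expected_utilities(p, stake=1e9, cost_of_belief=1.0):
    # Expected utility of believing vs. rejecting, given probability p that
    # the rejected proposition is true; 1e9 stands in for a near-infinite stake.
    ev_believe = p * stake + (1 - p) * (-cost_of_belief)  # reward if right, lost Sundays if wrong
    ev_reject = p * (-stake) + (1 - p) * cost_of_belief   # damnation if wrong, Sundays kept if right
    return ev_believe, ev_reject

print(expected_utilities(0.001))  # any nonzero p: the stake dominates both sides
print(expected_utilities(0.0))    # p = 0: the stake drops out entirely -> (-1.0, 1.0)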
But I suppose therein lies the beauty of the argument in this context, and the lofty sci-fi question proposed by the writers. We truly do not know that Data isn't alive. There is a plausible chance he could be genuinely sentient; the possibility lies outside the margin of error.
In this interpretation, Star Trek TNG asks us not to view Data as a proxy for the computers of today, but as the idea of what a computer could be far in the future. It asks: if a.i. could eventually become sufficiently evolved, and as removed from its binary origins as we are from single-celled bacteria, could it ever reach a point where it's indistinguishable from what we consider a life form? I, and I would think most people, assumed no. TNG challenged that assumption, and succeeded in changing my mind to a position I began the episode mocking. That is truly commendable.
"The Measure of a Man" is an episode that has only grown more relevant since the time it first aired. The developing field of a.i. learning has now opened up a world previously thought impossible. It was taken for granted that you can't create new pixels for a truly higher quality image than what was originally present. You can zoom in, but it'll just become blurry. The artificial "intelligence" of today's world no longer follows just a user-defined algorithm, but it learns and improves its own algorithm over time the more data it collects. One could say it "learns from experience," in a manner of speaking. We've still got some way to go before giving Siri orders is outlawed as slavery, but we are much closer to the world TNG and Data ask us to imagine than ever before thought possible.
Jason R.
@He'sDeadJim,
That is a great connection between Picard's argument and Pascal's wager.
But in reality, Pascal's wager is a false dilemma; it presupposes a binary choice: either Christianity or atheism. But of course an atheist will point out in response that the Christian is also an atheist, rejecting Zeus, Mohammad, Odin, Vishnu and so on, so the Christian's odds, in reality, aren't much better than the atheist's.
But in this episode, Picard's version of the wager really is binary - Data is or he isn't, and if he is, what Louvois will have done if she rules against him is pave the way for a tragedy of appalling implications.
Picard is telling Louvois: think what it will mean if you are wrong, even by the smallest degree. How will history judge you? Faced with that kind of responsibility Louvois realizes that picking the "safe" route is actually more perilous in its moral implications than the alternative. It's just way above her pay grade and given those implications she blinks.
Peter G.
@ Jason R,
Good argument. And the disanalogy runs even deeper than Pascal's wager not quite matching Picard's argument: in Pascal's wager the issue is gambling with your own fate. And if we're being particular, it's about whether you should feel obliged to actively adopt a belief, so it's about going out of your way to believe or do something, whereas normally you would just go about your business. In the case of Data we have Maddox trying to oblige *someone else* to do something that will be destructive to them, and perhaps resulting in forced servitude for countless other beings. So the Pascal's wager for Maddox (and Louvois) isn't about whether they should feel obliged to do something, but rather whether they can oblige others to do something, which is already a far more perilous proposition. And further, Maddox is the one trying to change the status quo by force, and so not only would he have to make the gamble, but he would be forcing Picard, Data, and everyone else to dive into the gamble with him. That he is forcing others to bend to his will makes the wager exponentially more hazardous. Even if we stuck with Pascal's actual wager, about whether to believe in God, the proposition would change drastically if it were about whether to force others to worship God, rather than just to take care of your own chances.
So while on the surface I agree that the "but what if..." argument is tenuous if the goal is to convince someone they should believe something, the case is reversed here where you are trying to require others to abide by your belief. In that case you had better be 100% sure you are right, because otherwise employing force is a potential atrocity. And actually even *if* Maddox is right and Data is not sentient, it is problematic enough to override the Enterprise and its Captain even on stealing their humanoid toaster. So what if it's technically Starfleet property, you are going to disrupt the command structure and trust on a ship to serve your personal desire to do experiments? Why should his ownership of the toaster trump Picard's? If anything I would think that Data's record of saving the flagship numerous times would at least be a contender for its best use, rather than dismantling it in the vain hope that you can reverse engineer it when you already know you are lacking essential knowledge to do so.
Peter G.
I just want to add one more point:
It's important to remember that Data was actually not opposed to Maddox running experiments, and in fact seemed supportive and enthusiastic at first. He was not at all against the creation of other Soong-type androids. Where Data found a problem was when he realized that (a) Maddox intended to essentially destroy him in the process, and (b) that Maddox didn't really know what he was doing. He was stuck in his own research, and to make up for his shortcoming - and refusing to accept that he failed - he was throwing out a hail Mary on the off-chance that by taking Data apart he might be able to figure out where he went wrong. So it's not even that Data was trying to avoid a race of androids serving in Starfleet, but rather that he recognized that Maddox's motives and methodology were bad and wasn't going to participate in a bad experiment. If one day Maddox were to come back much better equipped to do a good experiment Data would no doubt be happy to oblige, and in fact his continued correspondence with Maddox (plus his statement at the end of this episode) shows that he really would welcome that chance.
So one main issue, in addition to Maddox having to be completely certain he is right about Data's sentience, is that Maddox is trying to strongarm Data and Picard in a wasteful way, to force them to comply with his very dubious plan that is more self-serving than anything else. The fact that his motives and methods are flawed seems to be a key reason that Data is refusing. If the situation were different and Maddox was 100% certain to be able to replicate Data, and Data was just refusing on principle, I think the arguments in favor and against would have to change. Yes, the slavery issue that Guinan brought up would still be on the table. But also on the table would be to ask why Data refuses to allow humanity to have this great boon. I think there would then be an onus on Data to demonstrate why, as the inheritor of Soong's tech, he should effectively be allowed to patent and refuse access to it.
He's dead, Jim.
Jason R., Peter G., thanks for adding those interesting points. It's comforting to know that in a time when the world seems so upside down, there's a certain calm temperament and respect for philosophy among sci-fi fans that remains unchanged.
Peter, you bring up a valid point, that the comparison at play here is a different one, and any time the use of force against someone else, especially one as destructive as murder, is in play, the stakes are certainly very high.
Yet, to the believer, in the context of Pascal's wager, the stakes are higher still. Should your body perish, years, perhaps decades, are lost. Should your spirit be condemned, however, your entire existence is lost forever. There is no limit to your loss.
The point of the argument I made is that no matter how infinitely great the cost of an imagined threat is, if the threat is indeed imagined, such loss should not be factored into the equation.
By using this faulty logic, a party such as Picard could escape the logical rigor of having to prove Data's sentience: a position which, like the question of God's existence, can neither be proved nor disproved.
In this way, I was simply mirroring what those before me had pointed out, though at the time of writing I had not read those posts. Since one should not consider the possible costs when the likelihood of those costs happening is nil, such an argument could only be valid if there is at least some likelihood they might occur.
However, proving that likelihood requires external evidence to support your case, and the costs themselves do nothing to add to Picard's case. Which is why a Christian, who has other reasons for believing in God, finds Pascal's wager a lot more convincing than an atheist, who, already finding no reason to believe, is no more swayed by thinking of the possible costs. In short, the wager simply reinforces whatever the individual already believed a priori.
Ultimately, whether Picard was right to win that debate or not is beside the point (though as I explained before, I do believe he provided enough doubt for an examination of the risks to be merited). The larger question, and the more meaningful takeaway, is that if we assume machines could become evolved enough, then it is possible for them to reach a point where there is no difference between them and other life forms, and at that point our current (real-life) assumptions about machines fail to hold up.
Ben D.
I find this episode overrated. The question is tantalizing but the execution is mediocre. I found the courtroom scenes and setup especially unbelievable. For example:
1. Riker had a clear conflict of interest due to serving with Data, and was correct that he was not qualified to prosecute the matter. In fact, Riker (and the entire Enterprise crew) could and should have been called as character witnesses by Picard to prove Data's sentience. It's not plausible that there could be no other prosecutor but Riker, or that the proceedings could not have been delayed until the arrival of a qualified non-conflicted advocate. It's not like there was some life-or-death emergency where Data had to be transferred immediately.
2. Picard should have objected to Louvois' use of "it" to describe Data in court, because that choice implied her intended verdict (although obviously she changed her mind later). The judge should have referred to Data as "Data" during the proceedings, neither "him" nor "it."
3. In the real world, a presiding judge like Louvois would never have accepted Picard's offer of dinner right after rendering a verdict (and indeed, an attorney in Picard's position would never have made such an offer), as it would smack of bias.
4. The episode never clearly framed the legal issue that the court was meant to decide. Initially, the question presented is whether Data is "property." But then it turns into a question of whether he is "sentient." It might seem that we're talking about two sides of the same coin, but legally the question of what defines "property" according to Starfleet would likely have far different parameters from what defines "sentience," and there could be overlap. For example, a dog could be considered sentient as well as property, could it not? It also bothers me that Maddox's personal definition of sentience went on trial, rather than a concrete legal definition -- the legal definition is what would be decided by a court.
Sorry to pick on an episode that is beloved by so many, but it was mostly meh for me.
Tidd
Without doubt, the greatest TNG episode so far, and one of the best Trek episodes ever. Anyone who suggests it might be the best TNG ever would get a nod from me. The court-martial episode of TOS was very good but Measure Of A Man knocks it right out of court. Others have said what should be said about it: Riker’s reluctance to prosecute, Picard’s chat with Guinan that led to the slavery thought (and brilliantly played by Whoopi Goldberg), Picard’s emotional but court-swaying final arguments... all exceptional writing and acting.
I’d just like to add a few additional things that I personally love about it:
- the first TNG poker game!
- the space station and the views of the Enterprise in slow orbit around it
- the brittle relationship between Picard and Louvois ... both French by name
- the final scene between Data and Riker
- having Maddox played as an obsessively brilliant scientist, not as some kind of super-villain
This is drama at its very best. Not just great sci-fi, great TV that could be appreciated by anyone, not just those who like Star Trek.
4 stars AT LEAST!
Tidd
Just one additional point having read some of the comments above:
This was NOT a trial in the legal sense - it was a “hearing” to determine Starfleet regulations on the matter of property. Yes, it went on to discuss matters of a deeply philosophical nature but let’s not overplay the actual scenario.
Tidd
Oh, “one more thing”:
Did anyone else notice the parallel between the opening poker game and the hearing itself? In the game, Riker overplays a bad hand and wins. In the hearing, Riker also has a bad hand (he wants Data to win), and this time he overplays - the business with the metal bar, the arm, the off-switch - in order to win by appearing to “lose”. The look on Louvois’ face when he turns Data off was a clear indication that Riker knew exactly what he was doing.
This could perhaps have been emphasised by a slight change to the dialogue at the end:
Riker apologises to Data.
DATA: “No need, sir. You gambled and lost, in order that I would win.”
RIKER: “An interesting way of looking at it.”
DATA: “With respect, sir, this has taught me more about poker than the game we had.”
RIKER (laughing): “Perhaps we should have another game.”
DATA: “I would like that, Commander. I think I have a better chance of winning now.”
RIKER: “Hmm. Second rule of poker: never predict aloud that you’re going to win.”
Exeunt, Data with a quizzical expression involving raised eyebrows.
Jason R.
"Did anyone else notice the parallel between the opening poker game and the hearing itself? In the game, Riker overplays a bad hand and wins. In the hearing, Riker also has a bad hand (he wants Data to win), and this time he overplays - the business with the metal bar, the arm, the off-switch - in order to win by appearing to “lose”. The look on LaVoix’ face when he turns Data off was a clear indication that Riker knew exactly what he was doing."
I am intrigued by the parallel to the poker game. So Riker's "losing hand" = "the fact that his argument is better", because for him the victory condition is losing, not winning?
So Riker "bluffs" in the hearing by pretending to think his argument is strong, even though he knows it is weak (but it is actually strong?)
Haha!
Except your post suggests you think Riker threw the hearing deliberately with the off switch / arm removal stunts. But that can't be because 1) Louvois said she would end the hearing if she got a whiff that Riker wasn't doing his best and 2) Even Picard called the argument "devastating" and claimed to almost have been convinced by it.
Peter G.
I agree with Jason R. in that Riker did not throw the hearing. If we want to make a parallel to the poker game, I think it would be that Riker's 'bad hand' is having to argue the position that he's given in the first place. It's a bad hand - having to betray Data - but he plays it as powerfully as he can to try to win.
Tidd
@Jason R
I think that 1) Riker's ploy in the hearing was so devastating that Louvois could not have suspected that he wasn't doing his best to try and win the case; it's only in retrospect that you could suspect Riker was trying 'too hard' to win. 2) The same applies to Picard, whom Riker wouldn't have taken into his confidence - in fact, perhaps he was even trying to provoke Picard into making the strong arguments that caused him to win the case.
If you remember, Riker was examining Data's specs on the computer beforehand, and when he found the off-switch location, there was a look of triumph on his face. It wasn't the look of someone who thought he'd win his case and consign his friend Data to dismantlement; it was more the look of someone who'd found a way to force the issue and gamble that Louvois would make the ruling she actually did. In other words, a superb bluff.
Jason R.
@Tidd let me say I love your reading of the episode and I really want it to be true since the side effect is it means Riker was making a fool of both Louvois and Picard!
But alas, you've misread (or misremembered) the schematic scene. Yes, Riker is initially happy, but the part you forgot is when you see him suddenly become sad; I think there is even a musical cue for this.
The only reading that tracks is that Riker smiles instinctively when he realizes he's got a killer courtroom move (deactivating Data) but then frowns because he immediately also realizes that this could win the case, which he does not want.
Tidd
@Jason R
I had indeed forgotten that part. Your reading is certainly the most likely, but an alternative is that Riker is suddenly unsure, that he is sad because he thinks that his gamble might fail and result in Data's dismantling (which is actually not a million miles from how you see it!)
The one bit that I can't forget is that the last two players in the poker game were Riker and Data. Coincidence? Possibly... possibly not.
Jason R.
@Tidd
I think the poker game tracks well with the court case and I agree it's no co-incidence. I am amazed I never noticed before - thanks for pointing it out.
But your theory doesn't need to turn on Riker deliberately throwing the court case.
Peter G.
Yes, it's enough to understand the poker scenario as showing Riker being in a position to win on a bluff and make Data lose everything. I don't see a need to try to break that position down into smaller minutiae, such as: why would Riker enjoy the thought of Data losing his hard-earned credits? Does he feel that Data spends his credits unwisely, or maybe Data spends money on Republican causes? No, that's too complicated. Riker likes winning, especially when strategy meets cunning, and that's all there is to it. If you're going to play poker with him, he will take great pleasure in destroying you, and there's nothing personal in it. The same goes for when he decides to take on his role in earnest for the hearing. The moment of joy is the thrill of using his cunning and going for the win. It doesn't fit his character to enjoy losing, even on purpose. Think back to Peak Performance: he wanted to hand Picard his ass. It didn't matter that this was his mentor; he was going to take great pleasure in being top dog.
Tidd
@Jason R @Peter G
I can see both your points of view and you may well be right. I'm just glad that I spotted the parallel between the poker game and the hearing! (Pauses to polish own trumpet... LOL)
Ekaros
I never understood why those same arguments couldn't subsequently be applied to humans. That is what I would have done as Picard or Riker: argue that Maddox and everyone else are simply machines too, and thus property of Starfleet. Since there is a sufficient number of spares, and the doctor in question was clearly a brilliant one, why not start reproducing him instead of this unique machine? A much safer and less wasteful way to create copies. After all, he is just a machine and property of Starfleet. With no rights.
Q
Had they kept Lore in storage instead of stupidly leaving him in space they could've given him to Maddox.
Noni
I love this episode, and I have a very minor personal connection to it:
The brother of Brian Brophy, the actor who played Maddox, was my English teacher in my sophomore year of high school, a million years ago. Brian actually came and spoke to our class about acting, and recited Tennyson's "Ulysses." He was a very nice guy. I wish I'd asked him some questions about being on TNG, but I was kind of shy as a kid. Oh well.
I was bummed they recast Maddox with a different actor for the Picard series.
Sen-Sors
Read a real stinker of an article recently, one of those that's obviously meant to rile people up and get hate-clicks. It's a Polygon article, of course. I am indeed playing right into their hands by clicking and linking to it, but what the hell, it beats getting mad about the news, am I right?
https://www.google.com/amp/s/www.polygon.com/platform/amp/22924014/star-trek-discovery-character-emotions
The article is titled "Star Trek Needs Less Logic And More Crying" and it's a real hoot. The crux of the article is that Trek has historically been about a rigid focus on logic over emotion at all costs, which Discovery rectifies with its swooping shots of SMG's whispering, teary-eyed delivery and plots about the ship's computer "feeling seen".
This thesis is obviously bullshit, but my favorite part comes after the author lists some examples that actually undermine this point ("City on the Edge of Forever" and all of DS9) and proceeds to make the case that "The Measure of a Man" reinforces Trek's obsession with logic over emotion because Picard's big speech at the end was rooted in cold, sterile logic and nothing else!
From the article: "But while Picard states his case lovingly and movingly, it’s a fundamentally logical argument that he wins with. If Starfleet defines life according to forms it knows and if Starfleet exists to seek out new forms of life, then it must alter its definition according to those new forms." Okay, that's technically true, but it is most definitely not the core of Picard's argument; indeed, his arguments at the start of the trial are rooted primarily in logic, and they seem ineffective. Riker turns Data off; Data is a machine, and we know machines are not alive, right? That's logical.
It was Guinan's conversation with Picard, invoking the spectre of slavery and "disposable people", which colored Picard's whole argument moving forward, where he explicitly stated that neither he nor anyone else "knew" what Data was, and asked whether Starfleet would consign him and his "race" to slavery on that basis. This is an argument that is rooted in appeals to humanity and emotion above all else, because Picard has no way to "logically" prove that Data is a life-form.
The author even quotes the "There it sits" line as if it is one of cold, calculated logic and didn't immediately follow "What is it? I don't know, do you?" What a complete and embarrassingly total misreading of a not-terribly-subtle episode.
Robert
This episode was not worth 4 stars.
How did Data get into Starfleet if this issue had not been addressed already?
How can the JAG simply rule him 'a toaster' while ignoring the prior precedent that Data was voted into Starfleet? Or for that matter, how could Starfleet commission a "non-person"? It just doesn't make sense.
Deactivating Data on the witness stand is so highly contradictory that it's almost laughable. If the court is testing his sentience, then taking off his hand or deactivating him without his consent amounts to assault. If "not sentient" is the default, then how can he be a viable witness in the first place? If the JAG has already decided he's a non-person piece of property, then his witness testimony is invalid.
I wish they had done more research on courtrooms to make the legal aspect of this episode more interesting. It just didn't make a lot of sense and was too simplistic in execution.
Booming
@Robert
That is really missing the point. The episode is not an attempt at portraying legal proceedings accurately. That is all just there to have a philosophical debate about what it means to be a person and the tendency of societies to deny or take away rights of people who are different.
Sigh2000
@Robert "How did Data get into Starfleet if this issue had not been addressed already?"
I always think about that too while watching the episode.... at first.
However, then I recognize that many people in this world live as if a license has been granted to them by those who are in power. That license can simply be revoked if those in power choose to do it, or if the group in power is replaced.
Data's sentience was not questioned when he was admitted to the academy, but with passing years Starfleet's resolve on the issue significantly weakened.
The episode needed, IMO, to explain a little bit better why the original position on Data's sentience had changed. One or two sentences could have done the job, but instead the Phillipa-Picard cringe-mance was developed.
Agree with @Booming that the main purpose of the episode is to present a "debate about what it means to be a person and the tendency of societies to deny or take away rights of people who are different."
It handled that quite well.
Top Hat
Here's my explanation for what happened when Data applied for the Academy:
The Academy in itself does not particularly care whether Data is sentient as a matter of law -- they care whether he's capable of becoming a Starfleet officer, which he is, and his performance on the admission tests demonstrates as much. There are at least three people on the admissions committee (maybe more, but Maddox needs to be a minority voice). Bruce Maddox objects on the basis that he is an android (this is actually kind of ironic, since his later plans for Data depend on him being in Starfleet). But he is overruled and Data is admitted.
Nonetheless, it's not like Data needs to go to court to get permission to enter the Academy -- he is admitted with no more fuss than Maddox's objection. So no strictly legal precedent on Data's sentience or non-sentience is established at this time -- you've got a being attending the Academy and then serving in Starfleet whose rights have not been defined under the law, which is odd, but for the most part not an active cause for concern. Starfleet tacitly supports Data's categorization as sentient but no more than that -- it's officially an unresolved matter, and that's no particular concern to Data since he's where he wants to be. It is kicked down the road for someone else to deal with, as is often the case.
However, this lack of resolution catches up with Starfleet 20-odd years later during the events of "Measure of a Man," and it's in large part because Maddox has convinced Admiral Nakamura to sign off on his mad scientisty plan to dissect Data.
I actually have a bigger problem with the episode's logic in that nobody questions the idea that Data would be property "of Starfleet." Starfleet didn't make him, and while they technically found him, they didn't press him into service either; he voluntarily joined years later. Shouldn't he belong to Soong's next of kin, if anyone?
Booming
@Top Hat
Ok, let me play devil's advocate here.
"I actually have a bigger problem with the episode's logic in that nobody questions the idea that Data would be property "of Starfleet." Starfleet didn't make him, and while they technically find him, they didn't press him into service either, he voluntarily joined years later. Shouldn't he belong to Soong's next of kin, if anyone?"
One could argue that Soong built Data on an Earth colony, probably a Federation colony of some sort, meaning that Data was built with machinery provided by the Federation. Soong himself left no will, and nobody ever claimed Data, so if they define him as a machine without personal rights, technically they could claim that Data is Starfleet or Federation property.
Tomalak
I think Top Hat's point is very strong. Even if he is Federation property, whatever that means, why would he be Starfleet's? There is something very sinister about the idea that if Starfleet comes across a bunch of dead colonists on a Federation planet (or any other) then their property becomes Starfleet's.
He joined up with Starfleet as Top Hat says, but surely he did so under the same terms as anyone else: notice period, ability to resign etc. Did they print out a special Hotel California contract that Data didn't bother to read, allowing him to join but never leave?
Peter G.
I think the disagreement about whose property Data would be if he wasn't sentient is mired in the issue of the definition itself. The question is whether Data is sentient, or maybe more loosely, whether he is a he or an it (not the gender issue, but having a personal pronoun at all). Starfleet admitted "him", meaning people spoke to him, asked him questions, and his input determined his own path in Starfleet. No one made him wear a yellow shirt; he chose that. I suppose it's possible he wanted command track and was denied, but without knowing that I assume he chose engineering, hence why he's the ship's operations officer. So all of these things presuppose he was being treated like a person. But if he's not a person, retroactively none of this means anything, right? Except for one thing: he's in Starfleet because he chose it. It's not just that Starfleet found him, but rather that he gave himself over to Starfleet. He could have gone off and worked in Sisko's restaurant if he had felt like it. So it's this choice of his that put him in arm's reach of Starfleet in the first place.
So now the argument, culminating in court, seems to be not only about whether Data has a choice, but also about whether Starfleet has rights to him because he's part of Starfleet. Obviously Commander Maddox couldn't have gone to Sisko's and demanded Joseph turn him over while peeling potatoes, on the grounds that Starfleet can just seize any property it wants. The argument is that Maddox can use the power of the chain of command to appropriate Data for his own use. But Data is in Starfleet because he made a choice. So it seems to me the issue of who would own Data is not just an unmentioned plot hole but is actually core to the episode. They don't name it, but then again it's a 45-minute show. If Data belongs to Starfleet because he's in the chain of command, then a contradiction is being argued: Data does not have a choice, but Data is only answerable to Starfleet because he made a choice. So if a ruling were to come down that Data is just a fancy toaster, it should rather be ruled that he had no business being accepted to Starfleet in the first place and that his admission was an error in hindsight, since toasters do not have the standing to make an application to the Academy.
All of this seems to me to point to what is really going on: Maddox is not really arguing about Data's sentience, but is just making a power grab. He wants Data, he will do or say anything to get Data, to make up for his own technical shortcomings. He wants to sacrifice another to further his career. In that sense I've always felt Maddox got off too easy in the end, even with an offer of friendship, when really what he's doing is at best corrupt and for corrupt reasons. If there was a case to be made that Data's design needs to be harnessed for the good of humanity, and cannot be lost to a random anomaly of the week or to bad films, he certainly did not make that case and neither did the episode. It seems to be about exactly what Guinan said: slavery, and trying to find any way to take a powerful thing and employ it for one's own use. So from that standpoint the issue of whose property Data would be seems to be moot: Maddox is acting like a burglar in the first place and would no doubt try to burgle him from whomever had him, right or wrong.
Sigh2000
Silly word "sentience." A philosophical device.
Does Data have thoughts? Yes.
Is he self-reflective? Yes.
Ask Data the question "Do you think of yourself as property of Starfleet?" Data answers "No."
Ask a toaster "Do you think of yourself as property of Starfleet?" It does not answer. Insert bread, make toast.
The crux of the 'Data is a mere mechanism' argument is the 'shutting off', i.e., cutting Pinocchio's strings (as per Riker).
All that thinks can be shut off in some way. So what. Maddox my boy, you're dead wrong. Please go away.
Peter G.
@ Sigh2000,
"Does Data have thoughts? Yes.
Is he self-reflective? Yes.
Ask Data the question "Do you think of yourself as property of Starfleet?" Data answers "No."
Ask a toaster "Do you think of yourself as property of Starfleet?" It does not answer. Insert bread, make toast."
What you cite here is only a question of how sophisticated a device Data is, not a question of whether he has rights. "Sentient" for Federation purposes means the being is treated as a lifeform that has rights beyond being just a wild animal (in the organic department) or a device (in the silicon department). It would be very easy to develop a device that could do the above things but not have the ability to think as we understand it, AKA hard AI. If all the machine is doing is regurgitating inputs, then it's just an advanced computer, like the Enterprise computer. To have rights Data has to be able to do more than that. Having thoughts and being self-reflective in the sense we'd care about is more than just being able to utter the phrase "I have thoughts and am self-reflective". If this begs the question of what consciousness is in the first place, well yes it does, so Maddox is in way over his head on that front.
Obviously if one's opinion is that none of us are actually sentient, and are all just automatons going on automatic in a deterministic pinball game, then all of these arguments go out the window, but then again so does secular humanism and the Trek message.
Sigh2000
@Peter G.
I disagree with the sentence: 'Data is not sentient'.
In several other episodes, Data says that he does not have feelings, or that he cannot feel pain, and makes similarly limiting admissions. I think those statements are meant to imply that he is not like us and therefore not truly "sentient."
What are we to conclude? Either he has not been programmed to feel, or he has not been outfitted with neurons capable of registering sensation (pain or pleasure). He thinks, but he does not feel... at least supposedly.
However, if he has the power to think, and knows that a sensation such as pain is unpleasant (according to Webster's 23rd-century Dictionary, 5th Edition), he therefore can conceive of "pain," and must conceive of what it is like to feel it. And we know that he is aware that Riker suffered when he was ordered to get Data defined as property. In response, he knew how to undo Riker's suffering.
I do not think that we are "all just automatons." However, I do think that human beings are, as Picard suggests in the episode, another form of machine based upon a kind of genetic programming. In the end, it may be that we are all just "regurgitating inputs" (taught to us by others). That is, until we sit back and ask the questions that Data is found frequently asking himself.
dlpb
A great episode. One of the best in sci-fi.
Leif
@Jason R. @CircleofLight @Trajan, in retrospect, would you all still say this episode is legally inaccurate or too lax procedurally, or would you say it is pretty realistic for a future more enlightened than our time? I really want it to be, but I'm no lawyer or legal expert. If the writer of the episode is a lawyer or has legal training, would you now say that by some legal standards what happens here is legally valid, except for maybe having Picard and Riker as the lawyers? Hope to hear from you.
Peter G.
Jason R. said:
"Well Peter I am just struggling with the idea that Data is not property of Starfleet, yet could not, de facto, be considered a person. I mean sure it's not spelled out by Levoix, but what other conclusion can you draw? Whose property is he if not Starfleet's and if he is no one's property, how can he be anything but a person?
Keep in mind the context that Starfleet wanted to detain him and disassemble him but the ruling said no. He was even allowed to resign from Starfleet. What does a "toaster" do if it's not owned by anyone? How can a toaster be permitted to retire from its job?
Any reasonable reading of the decision requires a conclusion of personhood as I see it. There is no other logical conclusion.
"This is contrary to the newer view held by many people that a "right" is really just a privilege granted by the state, revocable at any time"
Well setting aside philosophical and metaphysical questions, there is a practical reality that a "right" not enforced by the courts and by extension the government, is no right at all.
I mean the right to bear arms is literally enshrined in the US constitution and yet some would negate it completely if they could whereas the right to an abortion is nowhere in the constitution yet the same people believe that it is sacrosanct.
I am not intending to wade into those issues; I am only pointing out that the court is typically where the rubber meets the road on these issues and "rights" are not as clear cut as we think, even when it comes to personhood. I mean most anti abortion types would say that a fetus is a "person" - it does no good to say that personhood is a an "intrinsic right" if we can't agree on what a "person" is. In Data's case, personhood is the whole issue."
I'm not trying to be pedantic, but I think the actual story point we're meant to understand about "the verdict" on Data is that the jury's still out. We as the audience need to see more of him over the years to decide whether he's a person or not. It's a narrative challenge to us and to the show. This appears in a legal sense as the court declining to make the decision... yet. But in order to allow for more time (in a meta sense, for the show to give Data more episodes for us to see what we think of him) the judge has to forbid Maddox to take Data apart. The "let's see what we learn" idea wouldn't work if Data were in pieces in a lab. But if Data is Starfleet's property we won't get more time to see whether he's a person or not. So she ruled that Data is not Starfleet property, because that is the only way she could allow the matter to be decided later. I think this ruling was a statement about not allowing Maddox to take Data, and was not meant to imply anything about his personhood; i.e. the fact that Data is not Starfleet's property to be taken apart shouldn't be taken to mean she is declaring that he 'has a soul' (which she clearly states she is not recognizing). It just means Maddox has to keep his paws off Data. And by the way, even if Data's personhood wasn't on the table, it would make pragmatic sense to prevent underachievers like Maddox from ruining unique 'artifacts' in order to try to save their failed research.
So I'm not sure it's necessary for us to figure out, as you're asking, exactly what legal situation Data is in now. The story point is that he's being granted more episodes, essentially, since there is more for us to learn about him before (if ever) deciding. Just like in Encounter at Farpoint, the trial is ongoing. However, if I was to wade into the legal details (not being a lawyer), I would suggest that Data not being Starfleet's property does not automatically jump us all the way to recognizing that he has rights. Indeed, it's entirely possible to say that Data cannot be Starfleet's property since he's the rightful property of Soong, and therefore, any next of kin of Soong if he were dead. The fact that Data goes about autonomously doesn't imply that he's no one's personal property, especially since we literally know who built him from scratch. It is also possible to argue that Data is no one's property, but that he also isn't sentient, and to treat him as a sort of rare natural resource that goes about as he pleases (much as a river can be used by a community wherever it is, even though no one per se owns it). And we could even suggest a middle category for sentient AI, where they are neither anyone's property, nor recognized as having 'human rights', but where nevertheless they're treated with much greater respect and latitude than a toaster would be. Maybe a close example would be with dogs, where many countries grant them status forbidding certain practices be done to them, and require humane treatment of them, but where they still don't have rights in the normal sense (i.e. the right to refuse to obey, the right to go where they please, etc).
Recognizing that an android is a person (and therefore has rights) would imply many serious things. For instance they should have a vote; destroying them would be murder; shutting them off without their consent (as Riker did in the trial) would be criminal assault; and so forth. I'm not sure I see sign of this recognition even tacitly in the series, although Picard and the others do seem to treat him like a person for the most part. But even this gets put to the test in a real way in Redemption, for instance.
Jason R.
@Peter I do agree that his personhood isn't completely settled by Louvois's "not property" ruling. But her ruling isn't as equivocal as you imply. She ruled he was "not the property of Starfleet". That's pretty unambiguous. That goes beyond Maddox's little experiment.
As I said, another court could conceivably rule differently, depending on where her court stands in the Federation judicial pecking order. But it's a definitive ruling.
And in a real legal system I would expect a ruling like that would stand a good chance of forming the building blocks of a more definitive personhood ruling by a higher court. I'd say Data's status as a Starfleet officer was already the first nail in the coffin of any argument against his personhood. Louvois's decision was another blow to that argument.
Booming
Shouldn't the question of which new lifeform should get rights be with the legislature and not the judiciary? Or should there be some kind of expert committee that evaluates all the ramifications?
Jason R.
"Shouldn't the question of which new lifeform should get rights be with the legislature and not the judiciary?"
Whoa Booming I didn't take you for a radical right winger.
Booming
@Jason
"Whoa Booming I didn't take you for a radical right winger."
Natürlich!
https://www.youtube.com/watch?v=1ELjttkTKug
SlackerInc
@Peter G.: "I think the actual story point we're meant to understand about "the verdict" on Data is that the jury's still out."
I strongly disagree that this was the intended message of the episode. I have heard multiple podcast interviews with the episode's writer, and I can't quote chapter and verse but this was definitely not the impression I got (nor was it the impression I got from the episode before that).
Peter G.
@ Jason R,
I agree completely that the first, second, and third nails in the coffin in Maddox's argument are to be found in the fact that Starfleet allowed him into the Academy in the first place, granted him an officer's commission, and for the most part having everyone converse with him as if he's alive. This tacit acknowledgement seems to me to say a lot about how Starfleet viewed him, whether they knew it or not. This is quite different from, say, VOY's Doc, where they (and no doubt the original design engineers) very clearly treated him like an annoying hologram for a while, and only because they were stranded in the DQ did that ever change.
@ SlackerInc,
While I think authorial intent matters a lot, I have to say I'm not entirely convinced I *believe* an author about their intent when it's a TV episode that becomes a hit classic hailed for its moral content. Suddenly you want to look like a visionary. Not that what they say (if you're correct) should be disbelieved outright...but what I see onscreen really doesn't look like Data is declared a person. In fact Picard goes quite far in daring them to prove Data isn't a person, i.e. to prove they're not committing slavery, because he knows equally well he can't prove Data *is* a person. He's burden-shifting onto Maddox's side to prove they're not committing a crime in seizing Data. Maddox and Phillipa can't prove that, so that avenue gets blocked. But Picard himself never says that Data is definitely a person, and Phillipa never rules such, so I don't know how the author could believe their story shows Data's personhood being established. I think what they did accomplish (for example with his fondness for the memory of Yar) was to make us question a lot harder which things should and shouldn't count when we consider the matter. But Picard himself said that Riker's case was devastating, and putting aside whether the writing was good enough for us to agree with that, the author clearly does, so at best we're supposed to see the issue as complicated and lacking an easy answer.
SlackerInc
But by the same logic, none of the humans can be really sure a Vulcan or Ferengi is sentient. (In the case of a fellow human, they can reason they must be, based on having the same biology and brain structures.)
Peter G.
@ SlackerInc,
Yes, it's the old Descartes problem: how can you tell someone else is alive in the same way you are? It's not such an easy problem, as evidenced by how many people seem to treat each other like bots. But the easy answer is that we can accept another being close enough to us that we don't have to stretch our imagination much to believe they're like us. One strength of Trek is that it says we can overcome this barrier and harder ones (like in The Devil in the Dark). One weakness of the franchise is that (due to budgetary concerns) this is rarely tested since most Federation members we meet are similar-type humanoids. We don't find races like the Calamarain at Federation conferences.
TNG is putting out the idea that robots/androids are totally unexplored territory for humanity to consider, and that up until this point there was never any accepted notion that a robot could be alive or sentient. Maybe this is behind the times in sci-fi literature, but that's the issue being put on the table in this episode. So why is it harder to accept Data than a Vulcan? Simply because he's less like us, and in his case different in an important categorical way. It probably does imply a bias on our part, yes. Should that come as a surprise?
Kranolery
The majority of these comments drive me nuts. I've watched TNG numerous times since its first broadcast and I'm well aware it has its good episodes and its not-so-good episodes. This episode, for me, is well up there with some of the better ones (just to briefly state that I do not consider 'The Inner Light' one of those better ones).
I just wanted to find somewhere to say that I really like this episode. But then I make the mistake of reading the rest of the comments, sharing my admiration and appreciation of it. Only to find a bunkerload of cynics, nit-pickers, and petty critics, who feel the need to analyse every frame and syllable within an atomic inch of its life. I hate you people. You are the dregs of humanity. You should all be gathered up in a big sack and thrown in the nearest river. Or the nearest airlock. You'd probably get off on that.
Top Hat
It stands to reason that a thread with 220 comments on it is not going to be a monolithic love-fest, even for a generally well-liked episode.
2 posts above is a ****
Kranolery is right - murdering critical thinking people is our best bet to preserve free thought.
Steve
Did anybody else notice that when Riker disconnects Data's hand, its fingers start to move into an "obscene" gesture?
Winnie
@Robert
"Deactivating Data on the witness stand is so highly contradictory that it's almost laughable. If the court is testing his sentience, then taking off his hand or deactivating him without his consent amounts to assault."
I, personally, did not care for this either. I think it was more of a plot point to show that Riker had come up with this "brilliant" strategy for the prosecution.
I also thought the question about Data's medals really had nothing to do with the question of sentience either. I would have thought, however, that Data would have more of an understanding of the medals, rather than stating he didn't know why he packed them.
I thought the episode was a good one, but a question came to mind while watching it:
Does Data understand the cessation of life? In other words, does he know what death is?
Peter G.
@ Winnie,
"Does Data understand the cessation of life? In other words, does he know what death is?"
Do you?
Winnie
@Peter G
Why do you ask?
Peter G.
I am assuming your question was relevant to the episode's story, which is whether Data is sentient. If his understanding of death is relevant to that, it would imply that it is relevant to our sentience as well. So whether you believe you know what death is would be as pertinent to whether you are sentient as it is for Data, no?
Winnie
@Peter G
Thanks for the explanation. Picard makes an attempt (and I don't think it was a good one) to explain the requirements for a sentient being. By the way, the requirements are those outlined by Maddox, which come down to what I see as simply his opinion. They had nothing to do with whatever requirements Starfleet Academy had for defining sentient beings, if it defined them at all.
I didn't like Riker shutting Data down in a court setting, but if you noticed, Data did not seem to be aware that anything had happened at all. Riker certainly didn't tell him.
Granted, you could argue that no one knows if they're dead. The lights are off and no one's home. However, if they happen to be found and are resuscitated back to life, they are given details. It happened to a family member of mine. Three times.
It really doesn't have anything to do with whether or not I consider myself sentient, because again, in my opinion, a sentient being was never really clearly defined in the first place.
As for death, I think I got my answer. If the episode never truly explained sentience, I don't think it explained death either.
And I am totally in agreement that Picard and Riker should have never been assigned to defend/prosecute Data, nor do I have faith in a Judge who simply threatens them with the loss of Data if they don't comply with her wishes. If Data was to be defended, he should have been defended by competent JAG attorneys.
Yes, this is nit picking but I didn't come away with the fee
Winnie
ETA: Yes, this is nit picking but I didn't come away with the feeling that this was a great episode in that Data was vindicated. It seemed to me that, if Maddox came up with some valid reasoning down the line, Data would have to experience the same issues all over again.
Peter G.
@ Winnie,
I think I understand your general position, but my question was why it's relevant - or perhaps what you even mean by asking - whether Data understands death.
Winnie
@Peter G.
Maddox defines self-awareness to include existence. Okay, then death is a part of the criteria of self-awareness. Otherwise, existence is pointless.
Was Data "dead" after Riker flipped the switch? Or, was it just a temporary shut down?
In what I presume is his closing argument, Picard asks the court if Data has a soul. Phillipa includes in her opinion "that the case is best left to philosophers and saints".
A soul is immortal and saints are dead and in Heaven. Death was made to be part of the episode.
Unfortunately, the episode degraded from what was an interesting question of, "Is Data a sentient being?" to "Should Data be able to choose for himself?".
Peter G.
@ Winnie,
I'll cite this bit of text:
PICARD: What about self awareness. What does that mean? Why am I self aware?
MADDOX: Because you are conscious of your existence and actions. You are aware of yourself and your own ego.
PICARD: Commander Data, what are you doing now?
DATA: I am taking part in a legal hearing to determine my rights and status. Am I a person or property?
PICARD: And what's at stake?
DATA: My right to choose. Perhaps my very life.
PICARD: My rights. My status. My right to choose. My life. It seems reasonably self aware to me. Commander? I'm waiting.
MADDOX: This is exceedingly difficult.
I presume this is the context you're referring to when you say Maddox's definition of self-awareness includes death. Contextually I think it's pretty clear he is referring to the ability to be aware of oneself in the general sense, not the ability to be aware of the abstract implications of existence itself. The former is simple awareness of self; the latter would include something like philosophical or religious knowledge. No one naturally has knowledge about their own death, but being sentient beings does mean we naturally become self-aware at a very young age. I think Maddox means this. Evidence for this reading is that when Data does show awareness of what's happening to him at present and what it means, it clearly frustrates Maddox's attempt to prove that his definition of self-awareness doesn't apply to Data. Since Data doesn't mention death, and yet his statement satisfies the condition, I'm satisfied that Maddox does not mean awareness of one's cosmic existence vis-a-vis life and death. It's fine if you personally would include knowledge about death in the definition, but there is no evidence in the text that Maddox does.
"In his, what I presume is a closing argument, Picard asks the court if Data has a soul. Phillipa includes in her opinion "that the case is best left to philosophers and saints".
A soul is immortal and saints are dead and in Heaven. Death was made to be part of the episode."
I don't quite think it's fair to include specific religious notions of reality that go against the given premises of the show, i.e. those of secular humanism. So a Christian idea that souls are immortal should not, in terms of literary analysis, translate into an assumption that Picard means the word "soul" to mean what a Christian means by it. I think he means something defining about a person's individuality, if you wanted my guess. Likewise, the phrase "philosophers and saints" seems to me to indicate people of a wise and perhaps revered reputation, rather than literal saints in heaven. If she did mean saints in heaven, in context this would imply that Phillipa expects philosophers to confer with literal apparitions of saints like St. Peter in conference rooms to debate sentience. I really don't think this assumption is reasonable.
So in both cases I do not believe death has been invoked in the episode, although I do appreciate your question (which is somewhat different from Riker's point) about whether Data is dead while switched off. However, unlike VOY's doctor, I don't think TNG ever really had an agenda of asking whether Data is 'alive' in the literal sense, even though it is certainly an issue of whether he's a person. In Time's Arrow there is some indication that Data considering his own 'death' somehow makes it feel like he has something in common with the rest of the crew. But even then I think this is a bit figurative since 'dead' in this sense means ceasing to exist or to function more so than the 'life' leaving you. In Data's case he can't really die since in theory he can be repaired unless he's totally disintegrated, so the analogy even as he's thinking of it is not quite apt IMO. At best I suppose his 'death' would involve the destruction of his programming or memory circuits, which is not quite the same as (in your case) a Christian definition, since lack of mental function in a human is not equal to death from a Catholic perspective, for example. The 'life' in a person is different from the person's memory or executive functions, whereas in Data's case I think we would have to leave it as just his positronic brain that matters. Obviously secular materialists also believe it's only our brain that matters, so the distinction matters more if you have a different metaphysics. As far as the episode goes, I think Riker switches off Data just to highlight that he's a mere machine; I don't think the off switch trick is meant to indicate anything about what death is. Although it *could* have meant that, I don't think Riker would think along such philosophical lines.
Sorry to go on at length disputing your point. In principle if this was a real debate going on now I would enjoy the addition of what death means and how that might relate to whether a being is 'alive' or a 'person'. But I don't think the episode includes this in its agenda. In contrast, Where Silence Has Lease does include a discussion about death in its definition of life.
Winnie
@Peter
Also, I apologize. It was not Picard who asked the question; it was Phillipa.
The immortal soul is not a religious creation. Phillipa's decision to leave Data free to explore tells me he really doesn't understand death.
Let's leave this one at agree to disagree.
Peter G.
@ Winnie,
"By Phillipa making a decision to leave Data free to explore it tells me he really doesn't understand death."
Which brings me back to - do you?
winnie
I presume your question is only looking for a yes or no response, since you don't believe death is relevant to the episode in the first place. The answer is yes.
Peter G.
I sort of assumed you did, but was perhaps fishing for the reason you think you do. For instance if your understanding of death requires a religious belief system, let's say Christianity, then it would seem to me you're arguing that Data isn't sentient because he isn't Christian. But that would create the problematic corollary that humans aren't sentient unless they're Christians. And if your understanding of death isn't based on a religion, then what is it based on?
winnie
Peter, I don't see any need to elaborate. You already stated why you found my argument irrelevant.
Peter G.
I think your argument doesn't reflect what the episode is saying, but by no means do I think it's irrelevant, or I wouldn't ask :)
Winnie
Thanks for the clarification. I based my response on part of the wording you used in one of your posts:
"Sorry to go on at length disputing your point. In principle if this was a real debate going on now I would enjoy the addition of what death means and how that might relate to whether a being is 'alive' or a 'person'. """But I don't think the episode includes this in its agenda"""". In contrast, Where Silence Has Lease does include a discussion about death in its definition of life."
You didn't use the term 'irrelevant'; I did. However, irrelevance is defined as not applicable. I believe that is what you are saying in the above portion of your post.
William B
I genuinely don't want to stir up more trouble. I haven't read the whole exchange above. I will say for interest's sake, though, that I think death is on the table in the episode's themes. Here's what Data says about Maddox's plan:
DATA: I regret the decision, but I must. I am the culmination of one man's dream. This is not ego or vanity, but when Doctor Soong created me he added to the substance of the universe. If by your experiments I am destroyed, something unique, something wonderful will be lost. I cannot permit that, I must protect his dream.
I think what is being suggested in this scene is that if Data is destroyed by Maddox's experiments, he will be killed, and, tying in with Louvois's later mention of the soul, a person will have died, rather than a machine having been turned off. Literally he is not saying this, but this is to me the thrust of the scene: Data is arguing that his personhood will be lost if he is destroyed - that he will die. Data couches it, however, in ego-decentered language, and so there is wiggle room in interpretation, both of what the writer is saying and what Data himself (or itself, if we go by Maddox at this point in the episode) is saying. Data is also describing himself as being another person's scientific and artistic achievement, for example, so in that sense it's kind of like the Mona Lisa begging not to have some hack rip her apart and tape her back together in order to mass-produce Renaissance masterpieces.
I don't think either Louvois or Data is talking about an immortal soul (IMHO); rather, both are attempting to grapple with whether there is something ineffable in Data (and by extension future androids) that is akin to what is ineffable in humans, and which will be lost to the physical world if he is killed.
Peter G.
Well, I would draw a fine distinction between a suggestion being not ultimately accurate and it being irrelevant, even in the context of the episode analysis. But more importantly, I do not believe it is irrelevant at all, broadly speaking. I think suggesting your own take on how the proceeding should have gone, or what the implications are of what is shown, is great. The line I'm drawing is just to separate your own ideas about what it all might mean from what the episode itself is trying to say. Those don't have to be the same.
Peter G.
@ William B,
It's interesting you bring up that speech of Data's. I suppose I personally interpret it as him saying that he is valuable, that Maddox destroying him would be a waste and maybe in some sense robbing the world of something. That can, however, be true whether or not Data is sentient. I think your Mona Lisa example is pretty good, since if we imagine part of the Mona Lisa's design happens to be the ability to speak about itself, it might say "I am priceless", which is not in itself a demonstration of any kind of living quality. Maybe more relevant would be why it would say that: was it programmed to do so? And was Data programmed to find himself valuable, or is that a determination he came to by himself? That would be the more interesting issue for me.
William B
@Peter, good point. My gut feeling is that Snodgrass (or whoever is responsible for the final speech) believes that Data does have a certain irreplaceable something which is like a human soul (as in essence, sentience, personhood) but knows that Data would not make that argument, so has to come at it sideways. I agree that Data's own words do not necessarily mean he is sentient (nor does my gut feeling about the authorial purpose of the speech constitute "proof"). I think Data's argument itself is closer to the Talking Mona Lisa and you're right that it would be like if the Mona Lisa was designed to extol her own virtues, at least enough to avoid destruction. In a sense given what we ultimately know of Soong based on Lore, we could perhaps even say Data is wrong and that what he says is ego and vanity, just not his own, but Soong's. (But I do think Data is also clearly correct that he has value, regardless of Soong's inflated self-regard, and I think Data is assessing it accurately.)
William B
(By "based on Lore" I mean, what Lore tells us about Soong scrapping various androids and his willing to keep going forward regardless, I think Lore's ego is a reflection of Soong's, Soong's kind of insufficient attitude toward how dangerous Lore is, etc. In general even aside from Lore lots of things suggest that Soong has a big ego/vanity.)
Sigh2000
@William B; @Peter G.
Can Data conceive of his own death? I think so. If he knows that he can be damaged, then absolutely.
Consider this contrived Data dialog snippet:
'If my positronic brain is damaged beyond the ability of its reboot default to kick in, then I am, as you humans would say - toast.'
Winnie's discussion is on point.
Data seems to understand that inclusion in Maddox's experiment is a virtual death sentence. He says no to that. The question we should be asking is whether Maddox is alive.
Peter G.
@ Sigh2000,
I don't think anyone said Data cannot comprehend that he can be destroyed/killed. What Winnie was arguing is that Data does not *understand death*, and that therefore he doesn't meet Maddox's criteria for being self-aware, which is required for sentience. My position is that this wasn't actually part of Maddox's criteria for self-awareness. I was not arguing that Data doesn't know what death is, but rather that even if Winnie asserts that he *does not* know what death is, this still doesn't prevent Data from satisfying Maddox's criteria. In other words, Picard successfully defeated that point.
Winnie
Peter G. You were fun and interesting to debate with and I hope we can debate again. But, please, let this go. I certainly don't want to cause any trouble over this. You disputed my opinion, I explained how I came up with it. That's what I gather this forum is about. I don't expect you to agree with me. Please don't draw any lines here. That would be a bigger waste than anything that could be done to the fictional Data.
Winnie
I was reading through the transcript today and have come to the conclusion that Maddox really isn't the bad guy I perceived him to be. He makes this statement:
MADDOX: If I am permitted to make this experiment, the horizons for human achievement become boundless. Consider, every ship in Starfleet with a Data on board. Utilising its tremendous capabilities, acting as our hands and eyes in dangerous situations.
Here's my question:
Does having Data as a crewman aboard the Enterprise give Picard a very distinct advantage over other starships?
Peter G.
@ Winnie,
"Does having Data as a crewman aboard the Enterprise give Picard a very distinct advantage over other starships?"
TNG and TOS both seem to make a similar case for Data and Spock being a definite, serious advantage to their crews. On a literal level this could be seen as an obvious result of them being outright smarter than anyone else on board, but I think the narrative idea is that they are different enough that they add something that was lacking among the 'regular people': a different way of thinking and processing information. Some have argued that there's a neurodivergent metaphor going on here, and I find this very reasonable as an analogy. However, it might not follow that because it's an advantage to have them on board, every ship therefore needs a 'different' person on board. Literally speaking, yes, having a mass-produced android series that can in theory run the entire ship themselves is 'good', maybe in a similar way to how the M5 computer in TOS was supposed to be good. But if we think of it as being a metaphor for someone on the spectrum, especially in Data's case, then it would be a bit awkward to say "one on every ship" as if they're a commodity.
Re: Maddox, I've tried to watch this episode with the idea in mind you mention, that he wants something good, but I feel there's too much evidence that he's just trying to cannibalize Data as a last ditch effort to make up for the fact that his work is going nowhere. The episode seems to me to suggest that there's little chance dismantling Data at this point in time will accomplish anything, and if so it casts Maddox in a highly precarious moral position.
EventualZen
@Winnie
>Does having Data as a crewman aboard the Enterprise give Picard a very distinct advantage over other starships?
He has the strength of 10 men, superior cognitive faculties, and is invulnerable to biological weapons (including de-evolving in that awful episode "Genesis").
Winnie
@Peter G
"Re: Maddox, I've tried to watch this episode with the idea in mind you mention, that he wants something good, but I feel there's too much evidence that he's just trying to cannibalize Data as a last ditch effort to make up for the fact that his work is going nowhere. The episode seems to me to suggest that there's little chance dismantling Data at this point in time will accomplish anything, and if so it casts Maddox in a highly precarious moral position."
I think Maddox was written to be the detestable character in the episode. That much is evident to me. However, Admiral Nakamura, who is Maddox's superior, approved of the dismantling. Because Nakamura is likeable and disappears from the episode fairly quickly, he's somewhat forgotten and the blame falls totally on Maddox. That's not entirely unfair, though; Maddox is the one making the proposal.
Stopping here because I am wondering just how much of Starfleet looks at Data as a machine, despite his being admitted into the Academy. Nakamura approved the proposal, so it seems logical that he, too, views Data as purely machinery, property of Starfleet.
I don't find a lot of fault with Maddox. He's unlikeable, but I felt he was forthright with Data, Riker, and Picard. He certainly has the background to make the proposal to dismantle Data. I didn't see any hint of deceit to warrant Picard's mistrust of him. As an aside, Picard never explains why he distrusts Maddox.
Because Picard is a beloved character and straightforward in his own way, I can see viewers relying on his gut feeling and "looking at Maddox with mistrust" as well.
Thanks to you and EventualZen for your input. I appreciate you pointing out the advantages Data brings to the Enterprise. I agree, he's very valuable.
I again pasted Maddox's statement into this post:
"If I am permitted to make this experiment, the horizons for human achievement become boundless. Consider, every ship in Starfleet with a Data on board. Utilising its tremendous capabilities, acting as our hands and eyes in dangerous situations."
Something occurred to me after reviewing it, and it raised a question in my mind.
Why is it that Picard's Enterprise is the only starship to get Data as part of its crew?
I liked the response about Vulcans, Peter, because I, too, looked back at Spock, and to me, while Data isn't a Vulcan, he mirrors a lot of what we all got used to with Spock: logic without emotion.
I also considered that the Enterprise has Worf, a Klingon. Worf brings value to the table as well. In fact, there's no doubt that a lot of the starships have some type of alien life aboard, and while they may all be different species and make their own contributions, they are not unique in the way that Data is unique. They can be replaced.
If Data brings a substantial contribution to the Enterprise, wouldn't that mean that other starships are at a disadvantage because they do not have an android on board? It doesn't sound like much, to have a disadvantage, but let's look at lives. Not having a Data around to protect the crew, by doing whatever it is he might do to protect them, could result in loss of lives.
Data himself was also not opposed to the dismantling, but was intrigued. When Data asked if the electron resistance across the neural filaments had not been resolved, Maddox's response was a flimsy one. However, no one was letting him get away with it, which is as it should be. Data himself pinned Maddox down nicely. As a result, we know other steps were needed before the dismantling could occur. That is great.
A trial is held and Data is found to be a unique and wonderful creation, who should be able to make his own choices. Also great.
Data, while being proven wonderful and unique, still presents what could be viewed as a huge problem for Starfleet: What disadvantages do other starships face because they don't have their own Data?
BTW, I'm not proposing Data submit for possible destruction as soon as possible. I am questioning the decision to admit him into Starfleet in the first place.
Booming
"I am questioning the decision to admit him into Starfleet in the first place."
That is the weak point of the episode, which is hard to ignore considering that they make a point of mentioning his service record and all the medals Data got. How can Data get into Starfleet? How can Starfleet give him medals if he is not a person in a legal sense? Did Data just walk into the Academy, and an admiral who had a little too much Romulan Ale let him join on a whim?
They were making one episode every two weeks. Not everything is well thought out. But as seen in the exchange between Peter and Winnie (and others), even if it has some narrative weak points, if it sparks that kind of debate it obviously achieved what a good Star Trek episode should accomplish.
Sigh2000
@Booming
"How can Starfleet gives him medals if he is not a person in a legal sense? "
Exactly. Thank you for saying this. Medals speak to Data's personal achievement. They advertise his value to the group, but bespeak his individuality. No one at Starfleet had any problem treating Data as a person with rights and one worthy of respect before the writers created Maddox.
"They were making one episode every two weeks. Not everything is well thought out." Again you hit the nail directly. I love the episode, but it is based on a contrivance. The writers wanted to threaten Data's very existence. This is what injects the idea of death into the episode. It is the audience that feels this as an unspoken dread. How would we feel if this was done to us?
Data, so obviously a person, is reduced to the level of a mere thing on a whim of the very organization he had served with distinction. No reason is given to him other than Maddox's need to experiment. The trial takes place in a kind of netherworld following Data's death in the eyes of an unfeeling Starfleet. No compassion. Just consignment. The episode is, I think, mostly about the arbitrariness often visited upon individuals.
Rahul
@Sigh2000
I don't think the idea of medals is a compelling argument here. It's just a minor point about Data having possessions, to give Data some added texture given the nature of this episode. Don't they give dogs medals in some cases? Are dogs legal people or are they strictly owned by somebody or an organization? Dogs can be possessive about things important to them.
But I agree with you on the idea of one of this episode's premises being the questioning of Data's sentience (a contrivance) and throwing up some thought-provoking pro/con arguments etc. Obviously this episode was a massive success for TNG, and it has its flaws -- but the parts that work more than compensate for any flaws in legal process/arguments etc.
What I think this episode wants to achieve (among other things), which I don't think too many people talk about, is establishing the friendship bonds Data has with Riker/Laforge/Picard etc. To me, the scene that evokes emotion more than any other is Riker and Data at the end, the pat on the shoulder - their friendship after all they've been through.
Thinking about the definition of sentience -- what about the argument of metabolism? Just throwing this out there ...
Winnie
""To me, the scene that evokes emotion more than any other is Riker and Data at the end, the pat on the shoulder - their friendship after all they've been thru.""
For me, Data's responses to Riker really are the stuff of logic. He is able to sense Riker's discomfort by taking stock of his body's response to sadness. But I don't think Data understands sadness in the sense of "feeling sad". He seeks to ease Riker's discomfort by logically explaining the components of the problem that caused the trouble in the first place. He then provides the solution, and Riker's discomfort disappears.
As for Data, I don't think he can understand the affections and ties that go along with a friendship. No matter. He eased Riker's emotional pain and by doing so, also eased his sadness. He allowed Riker to give him an affectionate pat. Sometimes, that's all it takes.
Rahul, I liked your comment about metabolism. I'm probably not looking at this the way you intended, but it would be something Data would be able to measure in his fellow humans, and learn from.
I agree that Riker looks at Data as a friend. But then again, why wouldn't he? Data is an appealing individual.
Rahul
@Winnie
I should tweak what I wrote earlier based on what you pointed out re. Data not having emotions, which of course is pertinent -- the key here for me is the friendship Riker, Picard etc. have for Data. It's a 1-way thing whereas Data is, as you say, just acting logically. But that's one of the things they love about him.
Where I was going re. metabolism as some kind of necessary condition for sentience was just that maybe you could argue that a sentient lifeform needs to consume nutrition, grow, expel waste. Perhaps Maddox could have used this argument? Would Data thus fail the sentience test and then does that mean he can be owned by Starfleet? It's probably not that simple of course...
Sigh2000
@Rahul
"I don't think the idea of medals is a compelling argument here. It's just a minor point about Data having possessions, to give Data some added texture given the nature of this episode. Don't they give dogs medals in some cases? Are dogs legal people or are they strictly owned by somebody or an organization? Dogs can be possessive about things important to them."
Thanks. I liked the dog argument. Dogs do deserve medals in my view, even though they don't know what they are.
In TOS, medals and citations were brought up several times by the writers to show how the witness (e.g. Spock) or the defendant (e.g. Kirk in Court Martial) had significant credibility. In Kirk's case Cogley wants the whole string of Kirk's achievements read into the record.
I think that it is part of the contrivance that the writers make Data say that he doesn't know why he packed the medals... to emphasize his humility more than anything else. It certainly adds some pathos given that Starfleet now wants him lobotomized. Nevertheless, it is a "fact" of the episode that it is precisely Starfleet which presented Data with medals. So presumably it believed at one time that Data was an officer who behaved with valor and that he was an individual deserving of praise.
I suppose it is possible, under the new metabolism criterion of sentience, that Data would be demoted because he was held in low regard, having never been seen going toward the Enterprise restroom. :)
Winnie
@Rahul
"Where I was going re. metabolism as some kind of necessary condition for sentience was just that maybe you could argue that a sentient lifeform needs to consume nutrition, grow, expel waste. Perhaps Maddox could have used this argument? Would Data thus fail the sentience test and then does that mean he can be owned by Starfleet? It's probably not that simple of course..."
This is actually a very good argument, and Riker might be able to win on the strength of that alone. I wondered about Data's medals as well, but something else struck me as rather strange.
Is it a lengthy process for an officer to resign his commission? When Maddox enters his quarters, Data tells him that he has resigned, and he is packing, apparently to leave the ship.
Data is still wearing his Starfleet uniform. If he's already resigned and is not under Starfleet's command, wouldn't he change his clothes?
Booming
@Sigh2000
Why do we give people medals? Because they made a conscious decision in a situation that is seen as positive by society. I guess that is the reason we don't give medals to animals in Germany, and while we all love dogs or land-mine-sniffing rats (true story), it is more a publicity stunt in my opinion. It is certainly interesting that the USA actually has medals just for animals, for example the "Animals in War and Peace Medal of Bravery". But let's ignore that. Human medals are given, in the most general sense, because people made decisions that are seen as positive by society at the time. So by giving Data medals, Starfleet has acknowledged that Data is sentient and capable of conscious decision-making, and if he is capable of that, then he should get the rights that come with it.
"It is the audience that feels this as an unspoken dread."
Yes, because in its essence this episode is about a topic that is always there in some form: can society take your rights as a sentient being away because it sees you as less? Russia's treatment of LGBT people is a good example. First they banned talking about LGBT in school. They used the good old protect-the-children argument, because lots of people think that even seeing or knowing that LGBT people exist is harmful. Based on no science, but that never stopped intolerant people. Then they banned any signs of affection (kissing, hand holding) in public (same argument as with schools), and recently they banned any portrayal of the existence of LGBT people in media or anywhere in the public space. In other words, even mentioning that you are gay or trans is now illegal in Russia.
In the episode it is condensed: somebody just shows up and says, "Let's see if you have the right to exist, and if you don't, we will take you apart," which is highlighted by Starfleet apparently rejecting Data's resignation. So they are not just saying: "We will determine your right to exist." They are already denying him the right to leave. It is like a Kafka novel.
@Winnie
"maybe you could argue that a sentient lifeform needs to consume nutrition, grow, expel waste"
Data does two of the three things. He consumes energy and at some point mentions that his power core runs out eventually, meaning that part of the energy he produces is lost/radiated into the environment.
Defining what life is creates two problems: fixed rules and infinite regress.
Fixed rules of what a sentient life form is: Why do we need to define what a sentient life form is at all? So that you get certain basic rights. What happens if the Federation encounters a species that does not fulfill the fixed rules? There are, for example, several machine races in Star Trek. Can the Federation just harvest them? Let's say they change those fixed rules to include machine beings: would a sweating Maddox be forced to reassemble Data and whatever is left of Johnny 5? The problem here is that if you deny a possibly sentient being basic rights then that could do damage that cannot be undone. I would argue that in any case where there is even the slightest possibility that a being is sentient we should treat it as sentient... which, I guess, makes me a hypocrite... because I'm not vegan... most animals that we eat are sentient. So the question the episode actually asks is: when is a life form so sentient that we cannot kill it to produce useful things? Phew, that is kind of dark...
Anyways... the infinite regress problem: that is a basic scientific problem, meaning that any explanation of a scientific construct needs explanations of the parts of the construct, which then also need explanations, and so on and so on.
To give a quick example: What is a life form? Let's use the consume/expel waste/grow definition with a procreate addition. Now we have to define what consume, expel waste, grow, and procreation mean. Consume in this context means using a form of energy-storing material to maintain life functions. Now we have to define what energy-storing material and maintaining life functions mean. In essence this means that there can never be an absolute definition of what a life form is, or of sentience, which, when you really think about it, creates a gigantic number of problems, because humanity grants or denies rights based on definitions that are always imperfect.
I think it is no coincidence that later there actually was an episode where Data did procreate in a sense and they were debating if Data had that right.
Sigh2000
@Booming
"So they are not just saying: "We will determine your right to exist." They are already denying him the right to leave. It is like a Kafka novel."
I was thinking the same thing. Kafka on steroids. Complete violation of the right of self-determination. (As in the Russian example). The Federation behaving like the Borg would later. Riker takes off Data's arm. Very Borg-like. I'm feeling that dread coming on, and must acknowledge that without certain protections in place, society may choose to say that any one of us can have our strings cut... like Pinocchio's.
Winnie
@Booming
"The problem here is that if you deny a possibly sentient being basic rights then that could do damage that cannot be undone."
You are right. In an environment that consists of societies who are free to choose to begin with, withholding basic rights could do tremendous damage that cannot be undone. But this isn't that situation. This is exclusive to Starfleet and Starfleet should have the right to determine who makes it into the academy and goes on to serve within their organization. They also should have the right to boot anyone out who doesn't cut it if they make it onto a starship.
The real issue here that I see is a Starfleet regulation issue. Maddox didn't just come out of the woodwork. He had enough clout to be on the approval or denial team for entrance into the Academy, and opposed Data's entry in the first place. Wasn't Starfleet concerned enough to bother with looking into Maddox's opposition? Apparently not, because Admiral Nakamura agreed to something he never would have, had Starfleet taken the time to review its regulations and change them in the first place.
Peter G.
While I do think these issues about Starfleet's reasoning in letting Data serve as an officer are relevant to the story such as it is (about Data's rights), I think the broader story may well be about totalitarian valuations, perhaps more so than about how it treats certain people it classes as different, or robots in particular. The idea I see isn't just that Data may be Starfleet's to dispense with as Maddox pleases, but that we are being told some greater good may come about by sacrificing Data. Yes, he may not survive, but just imagine how great things would be as a result. This is a fundamental doctrine that individuals can be sacrificed on the altar of the abstract group. It's true that given the overall TNG narrative Picard and Riker, for example, are not actually in danger of being taken away for experiments, nor will they be, but I still think the looming danger being presented is a broad conflict of utility vs. individualism. The secular humanist story always favors the individual, arguing that the group is made stronger by bolstering the protections, respect, and integrity of each individual. Within this framework Maddox is therefore necessarily cast in the role of a villain no matter what 'good' he claims can come of sacrificing someone.
So while on a literal level Maddox may actually believe that Data isn't really a person, I think the particularities of Data's experience are really a secondary concern for him. My feeling about the episode isn't so much that Maddox innocently wants to help the Federation and doesn't realize he is threatening anyone, but rather that he really doesn't care what anyone says about Data; he just wants to do his experiments. Any arguments made about the chance that Data really is a person are merely an annoying obstruction to him. This makes it a nice reprieve at the end when Data openly offers him an ongoing relationship, which perhaps changes Maddox in an important way. By Data's Day they are apparently on collegial terms, despite Maddox never officially acknowledging he was wrong to ever try to dismantle Data.
Under this reading, the arguments made against Data's sentience are more of an excuse than anything else, a convenient explanation of why it's ok to take him away. Because Picard challenges this it turns into an actual debate about Data's nature, but originally it was just going to be a bureaucratic stroke of a pen to use someone for some undisclosed purpose in a lab somewhere.
Winnie
I don't recall an episode of this nature, but did Data ever pursue another woman, in the hopes of becoming romantically involved with her?
Sigh2000
@Winnie
Yes. Data dated in the Season 4 episode "In Theory" (first aired 6/3/1991).
Winnie
I just watched TNG episode, "Where Silence has Lease", and noticed an interesting statement from Dr. Pulaski to Data:
""Forgive me, Mister Data. I'm not accustomed to working with non-living devices that. Forgive me again. Your service record says that you are alive. I must accept that.''"
What silliness this episode, "The Measure of a Man," is. Pulaski, a newcomer on the bridge, voiced her views regarding Data, adding that she "must accept that he is a living entity because his service record states he is such."
Seems to me a service record would be positive proof of sentience, no matter what Maddox thinks.
The Queen
Military and police dogs are not only awarded medals; they have rank and, I believe, are given funerals if killed in the line of duty.
This episode certainly has its faults, but it's so well designed and written that the emotional punch overwhelms all of them (for most viewers). The acting is equally spectacular.
The poker scene at the beginning I understood as a foreshadowing of the core question. Data loses because he doesn't have "instinct," a human quality, and he's told this by the crew member who is least supportive of him. As someone pointed out, it's also foreshadowing that Riker is the one who beats him. I'm rewatching TNG now and am noticing that many of the teasers provide foreshadowing.
It's interesting to read this discussion after the Discovery episode in which that ship acquires consciousness, is deemed a person, and is inducted into Starfleet as a way to ensure its loyalty. In Short Treks, the same ship is clearly meant to be an individual with understandable feelings.
Peter G.
I just watched this again, and one thing struck me about Maddox and his position as argued by Riker. Maddox obviously was making a pointed effort to call Data "it" at every opportunity, to the extent that it even seemed bizarre regardless of his views on Data. Why should denying that Data has a gender be a quintessential feature of asserting that he is Starfleet's property? Of course his real position is that Data is a mere machine, so lacking a gender is a way of saying he isn't a person. But I'm sure there are Federation species that lack a gender and are persons (such as the Bynars). And likewise there are creatures that are no doubt Federation property, such as animals, that have male/female genders. So why should it matter to call Data "him," since even a 'he robot' would still be a robot? Data clearly has a male form, male voice, and male anatomy, so to me calling him "it" is a bit of a clue that goes beyond some philosophical argument about Data's nature. And as I've mentioned above, I don't think Maddox ever particularly cared about Data's actual nature, as my read on his intention was to have his way no matter what.
In the hearing Riker's 'devastating' presentation doesn't really prove that Data is a machine any more than just asking Data would have. Removing his arm was more unnerving and grisly than educational, and shutting him off with a switch doesn't seem much more spectacular than a Vulcan nerve pinching a human. But what Riker's demonstration does do very effectively is make Data *seem* to be just a mechanical object. It dehumanizes him. And that is also what I think Maddox was doing in regular conversation by referring to Data as "it". All along his main argument isn't really an argument, it's just an attempt to dehumanize Data to make what he wants to do seem ok. Interestingly this same process can and has been done to actual humans as well. You can do various things to people to make them appear less human, less dignified, even less sentient, to onlookers. You can make what you're about to do seem ok.
I've always viewed Maddox in this episode as a pretty naked villain, but I feel like in light of these realizations it's even worse than I thought. And yes, I am attributing to Maddox certain arguments made by Riker in court, but I think that's intentional in the writing. Picard's entire initial speech after the recess was to re-humanize Data by showing various things that had meaning to him, including relationships with Picard and Yar. The philosophical debate comes after this, but I find it quite apt that Picard's first move is to work to undo the dirty tricks. And what makes the ending of the episode so poignant is that Data doesn't do the same thing to Maddox, casting him merely as a villain, but treats him as a person with dignity anyhow. It's not just a friendly mercy, but an actual repudiation of the tactic of trying to reduce the appearance of humanity in someone opposing you.
Another nice detail I noticed is that the Phillipa/Picard back story directly mirrors what happens in the hearing: Picard was personally hurt that Phillipa went after him in his court martial with such gusto, as if wanting to destroy him. He had expected a more humane or friendly interrogation from someone who cared about him. And this is exactly what Riker's duty forces him to do to Data in court. I imagine Phillipa likewise felt that she was obliged by duty to do her absolute best against Picard, as she forces Riker to do here. It's not clear whether she felt as guilty about it as Riker does at the end, but Data's recognition that Riker saved him by doing his duty seems mirrored in Picard forgiving Phillipa and offering to take her out to dinner.
Jeffrey Jakucyk
"Why should denying that Data has a gender be a quintessential feature of asserting that he is Starfleet's property?"
I can see how it might be interpreted that way in today's context, but I never read it as denying gender but denying personhood. It = machine, automaton, non-sentient, robot.
Peter G.
@ Jeffrey,
Yes, but I think that even if everyone involved agreed that Data wasn't a person, it would still be conversationally natural to call him "he" just because of his appearance and clear male anatomical detailing. It seems to me artificial and contrived to continue calling Data "it", especially in the context of literally everyone else around referring to Data as him. I consider it a pointed gesture that goes beyond merely saying that Data is a machine. Just as an analogy, let's say a bunch of little kids are all calling their dolly "her" due to the female detailing; maybe it's a Barbie or Elsa doll. For someone to come along and insistently call the doll "it" isn't just some ontological statement, since obviously they all know it's a doll. I think that grammatical de-gendering would be an overt move to claim the doll is not valuable or not dear to anyone, like saying "it's not your friend, it's just a piece of plastic." That's not really an attempt to state what the doll really is, but to sabotage the 'personness' of the doll in the eyes of the other children. And I think that's what Maddox is doing. He had a definite agenda, and I think calling Data "it" is part of his maneuvering to dehumanize Data.
Sung
If the number of comments, and the length of said comments, are any indication of the quality of an episode, this one has got to be one of the top, no?
Two comments from me that will no doubt never be read, but I'll join in the festivities nonetheless:
1) The negative for me was Cmdr. Louvois. Not only was she not a very convincing actor, but the past history with Picard, especially the relationship aspect, was a terrible distraction.
2) For those who watched this episode live back in the day, what was your reaction when the previous episode, A Matter of Honor, was almost as magnificent as this one? As far as I can tell, this is the first time in the show's history to that point that there were two stellar episodes in a row.
@Winnie, you will probably never see this, but great catch about Pulaski's alive comment regarding Data! Picard just should have referenced that episode as part of his case. 😁
Winnie
Sung wrote:
""@Winnie, you will probably never see this, but great catch about Pulaski's alive comment regarding Data! Picard just should have referenced that episode as part of his case.""
Thank you, Sung. Your statement brings up the reason why this so-called trial should never have been held in the first place. I am surprised that Data himself didn't voice opposition, but then that would throw out all of the dynamics of the Picard-Riker-Data relationship triangle.
I continue to wonder just how Data made it through Starfleet. It would seem the only way would have been for him to have been pre-programmed before entrance. If he were, then he would have been given an unfair advantage over every cadet.
LatexZebra
The thing that bugs me most about this episode and others involving the poker game: why are they all still in their uniforms?
Booming
They are very comfy. The leading comfortability experts of the Federation designed them.
Seth
“DATA: As Doctor Pulaski would at this juncture, no doubt, remind us, life is rarely fair.
LAFORGE: Sorry, that just doesn't make it any better.”
Damn, you can tell the writers really had it out for Pulaski. Even when she’s off screen, she’s dispensing bad advice!
Bote
LegalEagle, a practicing attorney, provides his analysis of "Measure of a Man" from the standpoint of trial procedure in this YouTube video:
https://youtu.be/XVjeYW6S8Mo?feature=shared
Ramon
that look was so dirty
Ramon
*correction*
THAT Rob Schneider Look!
NoPoet
Ah, another so-called "greatest episode," which only confirms my belief that the Internet user base is on crack.
A trial critical to Data's continued existence is forced to proceed by a judge who clearly enjoys conflict, despite the lack of appropriate personnel to conduct said trial. Said judge is apparently a stickler for the law except when she's interpreting it to suit her need for battle.
Riker and Picard, who are not trained lawyers and are Data's friends and colleagues, are forced to take the role of lawyers in a case that might well end in execution.
Riker performs his role under duress despite this being a tremendous conflict of interest.
Maddox criticises Picard's attempts to save Data, calling them sentimental nonsense, then goes into a rant about his right not to have his work invalidated.
Data is retroactively denied quitting Starfleet even though he was allowed to join of his own free will. Unless they bullied him into that.
All of this is conducted at a new starbase deliberately placed provocatively close to the neutral zone.
And this is Starfleet's @#£%ing JAG! These are the upholders of Federation law!
Yeah, "greatest episode" my arse, the whole thing is a house of cards that cannot stand up to any kind of scrutiny.
Carry on being such deep thinkers, internet fandom.
NoPoet
^^ oh, additional.
The scene where Riker deactivates Data by pressing a button?
Why didn't Picard just bring a Vulcan in to perform the neck pinch on a human?
Why is Data's ability to bend a steel bar evidence he is a machine? Klingons, Vulcans, then later on the Borg, Hirogen and Species 8472 all possess tremendous strength. What's their point?
gary w
I’m sure with the volume of comments in this episode section, my comment probably has already come up. But here it is anyway.
During the Data’s court hearing why didn’t Picard go one question farther when speaking on whether Data is “self aware” by asking one additional question:
He asked him did he know why he was there at that hearing, but Picard stopped short of what I would’ve thought the most important question would have
Been. Did Data WISH to be there, and dig into the basis of Data’s desire NOT to be there or be transferred to be disassembled, and how he arrived at that position, i.e., was the notion his only or did some program that desire into him.
trajan
@Leif
Not sure if you're still active here - just caught up with your question of years ago!
I stand by my loathing of this episode. It beggars belief that a supposedly more civilised society than ours can have a judicial system that is utterly shorn of anything resembling justice and has no semblance of any kind of proper procedure.
Black Oat
As Peter G. alluded to, this episode isn't about answering the question of Data's sentience; it's actually about our propensity to rationalize the exploitation of others.
Starfleet didn't question Data's sentience while he attended the Academy or during his long years of service; only once they learned that there was a possibility he could be duplicated on demand did the question arise. Actually, it didn't arise until Picard demanded a hearing.
Picard can't prove that Data is sentient, but that's ok because, while the trial is ostensibly about his sentience, it's really about answering the question "Are we willing to subjugate a 'race' of beings who MAY be sentient?"
Picard has two moments of clarity during the episode. The first is during his conversation with Data when he realizes that he is treating him as something less than human. The second is during his conversation with Guinan when he finally understands the true magnitude of the immoral act Starfleet is attempting. The trial ends when the judge comes to the same realizations.
NoPoet asked: "Why is Data's ability to bend a steel bar evidence he is a machine?"
Riker was systematically dehumanizing Data. He gets him to say he was built by a man, he gets him to demonstrate his superhuman strength, which makes him look like nothing more than a tool, he shows that he can be painlessly disassembled like a household appliance, and then he unceremoniously turns him off like a light switch. He even makes a little joke about Pinocchio.
Black Oat
Great episode. Patrick Stewart gives a fantastic performance. It's a shame that they didn't (and still don't) provide him with more scripts worthy of his talents. Jonathan Frakes' acting has really improved by this point, too.
Really recommend people watch the extended version if they haven't. I was only going to skim through it to grab some screenshots, but I became so absorbed that I forgot to take any pics.