Star Trek: The Next Generation

"The Measure of a Man"

****

Air date: 2/13/1989
Written by Melinda M. Snodgrass
Directed by Robert Scheerer

Review by Jamahl Epsicokhan

In TNG's first bona fide classic, the nature of Data's existence becomes a fascinating philosophical debate and a basis for a crucial legal argument and Federation precedent. Commander Bruce Maddox (Brian Brophy), on behalf of Starfleet, orders Data to be reassigned and dismantled for scientific research in the hopes of finding a way to manufacture more androids with his physical and mental abilities. When Data says he would rather resign from Starfleet, Maddox insists that Data has no rights and takes it up with the region's newly created JAG office, headed by Captain Phillipa Louvois (Amanda McBroom), who serves as judge. Picard takes on the role of Data's defender.

This episode plays like a rebuke to "The Schizoid Man," taking the themes that were intriguing in that episode and expanding upon them to much better effect. What rights does Data have under the law, and is that the same as what's morally right to grant him as a sentient machine? Of course, one of Maddox's arguments is that Data doesn't have sentience, but merely the appearance of such. The episode cleverly pits Riker against Picard; because the new JAG office has no staff yet, the role of prosecution is forced upon the first officer. Riker finds himself arguing a case he doesn't even believe in — but nevertheless ends up arguing it very well, including a devastating theatrical courtroom maneuver in which he turns Data off on the stand.

Picard's rebuttal is classic TNG ideology as put in a courtroom setting. The concept of manufacturing a race of artificial but sentient people has disturbing possibilities — "an entire generation of disposable people," as Guinan puts it. Picard's demand of an answer from Maddox, "What is he?" strips the situation down to its bare basics, and Picard answers Starfleet's mantra of seeking out new life by suggesting Data as the perfect example: "THERE IT SITS." Great stuff.

Still, what I perhaps love most about this episode is the way Data initially reacts to being told he has no rights. He takes what would for any man be a reason for outrage and instead approaches the situation purely with logic. He has strong opinions on the matter, but he doesn't get upset, because that's outside the scope of his ability to react. His reaction is based solely on the logical argument for his self-protection and his uniqueness. And at the end, after he has won, he holds no ill will toward Maddox. Indeed, he can sort of see where Maddox is coming from.

Trivia footnote: This is also the first episode of TNG to feature the poker game.

Previous episode: A Matter of Honor
Next episode: The Dauphin

94 comments on this review

Tres
Fri, Apr 17, 2009, 10:21pm (UTC -5)
Just wanted to agree with your review of "Measure of a Man" and to add that the final scene, where Riker and Data speak in the conference room and Data says, "Your actions injured you to save me. I will not forget that," chokes me up every time. Wonderful writing.
Damien
Wed, Apr 22, 2009, 10:47am (UTC -5)
"Measure of a Man" - absolutely a classic and one of my favourites, possibly favourite.

I only have one small quibble (well, it's a biggie, but not so that it detracts from the overall ep).

It just didn't seem realistic that a case of such huge potential importance would be prosecuted and defended by two people who have no legal training, have never tried a case, and are friends and colleagues serving on the same ship! It beggars belief that such a trial could take place, especially given its importance.

Surely the logical thing to do would have been to delay the trial until such time that trained lawyers could be gathered and a legally binding decision could be made, rather than leaving the decision open to appeal/overrule in the future on the grounds of improper procedure.
Trajan
Tue, Mar 30, 2010, 3:49pm (UTC -5)
'The Measure of a Man'. Ugh! I'm sorry, I hate it. I have no problem with the theme but as a lawyer, I really, really hate it. It's a bit like the entire Doctor Who series 'Trial of a Timelord' where the greatest and oldest civilisation in the galaxy apparently has a judicial system that bears no relation to any reasonable concept of 'justice'. No JAG officers so you must prosecute? If you don't he's 'a toaster'? No, if you insist on making that ruling I'll ensure that I have your head on a plate by the end of the day and you'll never practice law in the Alpha Quadrant again. As for turning Data off; it was his rights as a sentient being that were for the court to decide. Allowing him to be turned off constitutes assault, battery, actual bodily harm and possibly attempted murder if he had no reset button. And this is allowed in A COURTROOM?

Sorry. No stars from me. Actually, can I award negative stars??
J.B. Nicholson-Owens
Sat, Jul 17, 2010, 1:20am (UTC -5)
More about "Measure of a Man": In addition to Trajan's objections, I'll also add that the episode strikes me as a huge dodge of the issue they set themselves up to decide.

One wonders how Data got to serve at all if his entrance committee only had one objection -- Cmdr. Maddox's objection -- and that committee based their decision on sentience like Data said.

But there's another problem: the slavery argument (should we or should we not let Maddox make a race of copies of Cmdr. Data?). The slavery argument only works if you already agree with what Capt. Picard had to argue. The slavery argument fails if you already agree with what Cmdr. Riker had to argue (Data is property; he can no more object to work than the Enterprise computer can object to a refit). It seems to me that the slavery argument presupposes the very thing the hearing is meant to decide and therefore has no place in this hearing.

And Cmdr. Louvois' finding essentially passes the buck (as she all but says at the end of the hearing): she has no good reason to find as she does but she apparently believes erring on the side of giving Data "choice" is a safer route.

I think this episode might merit as the most overrated TNG episode.
Dave Nielsen
Mon, Aug 8, 2011, 10:58pm (UTC -5)
"The Measure of a Man." I too loved this episode, but I can't help wondering how this question wasn't already decided years earlier. It seems to me Data's status would have had to be decided before he could join Starfleet, or at least before he could be given a commission. Then I partly agree with Maddox that Data could just be simulating sentience. With a sufficiently sophisticated computer there would be no way to tell. I guess the point is that there's no way to tell with anyone, but then there's a difference between the "programming" of biological life and that of an articial, constructed life form. It's also a bit cheeseball that they would have no staff just so that some of the principal actors won't just be getting paid to sit in the background. I wonder too if it was necessary for the Philippa to have been an old flame of Picard's. Still, with all that I still love and it still stands as one of TNG's best episodes.
Dave Nielsen
Mon, Aug 8, 2011, 11:16pm (UTC -5)
Trajan: "As for turning Data off; it was his rights as a sentient being that were for the court to decide. Allowing him to be turned off constitutes assault, battery, actual bodily harm and possibly attempted murder if he had no reset button."

Since Data's sentience was the question here, Riker couldn't be charged with anything for turning Data off, so that would be perfectly fine to do in a courtroom. Even after the ruling, it wouldn't have mattered - only if he did it again. If Maddox's arguments had been upheld and Data were property that could be dissected against his will, he couldn't have the rights of a sentient being.
Trajan
Wed, Jan 25, 2012, 3:08pm (UTC -5)
Dave Neilsen: Since Data's sentience was the question here, Riker couldn't be charged with anything for turning Data off so that would be perfectly fine to do in a courtroom.

I disagree. You could 'turn off' an alleged human being with a baseball bat but it would produce no more evidence of his sentience than Data's off switch does of his.
X
Sun, Mar 25, 2012, 7:03am (UTC -5)
Trajan: You could 'turn off' an alleged human being with a baseball bat but...

No. You could not 'turn off' a human being with a baseball bat in the same way that you can turn off a machine. You can either turn off a conscious part of a man's brain (the brain and the entire organism are still functioning) or kill him. You cannot turn off a human completely, as you can turn off a machine, and then turn him on again.
Trajan
Tue, Apr 10, 2012, 3:32pm (UTC -5)
X: You cannot turn off a human completely, as you can turn off a machine, and then turn him on again.

Sure you can. Just don't be so enthusiastic with the baseball bat and knock him unconscious with it. (Which, in my courtroom, will still get you locked up for grievous bodily harm...)
Patrick
Mon, Aug 27, 2012, 9:28pm (UTC -5)
The Original Star Trek had a second pilot and in some ways, TNG did too--"The Measure of a Man." This was the watershed turning point of the series: its thoughtful story was uniquely its own and not another riff on Classic Trek. The actors were truly becoming comfortable in their characters' skins; callbacks to the series' own mythology, from Tasha Yar's intimacy with Data to a mention of Lore, made the fictional universe of TNG more real. Secondary characters like O'Brien and Guinan were weaving their way through the mythos. And last but not least: THE FIRST POKER GAME--a brilliant addition to the series that provided some of the best character moments and the classic final scene of the series.

"The Measure of a Man" and "Q Who" were an effective one-two punch that made the show the one we know and love.
Peremensoe
Mon, Sep 3, 2012, 7:11pm (UTC -5)
"I partly agree with Maddox that Data could just be simulating sentience. With a sufficiently sophisticated computer there would be no way to tell. I guess the point is that there's no way to tell with anyone, but then there's a difference between the 'programming' of biological life and that of an articial, constructed life form."

Is there?

It's a somewhat deeper question than the episode really addresses, but... what *is* sentience?

Is it physically contained in the actual electrical and chemical processes of neurons?

Or is it the *product* of a certain complexity of such processes?

If the latter, then not only is there no fundamental difference between biological and synthetic processors giving rise to the sentient function--but there is also no such thing as 'simulated' sentience. If the complexity is there, it's there.
xaaos
Tue, Nov 13, 2012, 12:37pm (UTC -5)
Data is the best!!! Loved the final scene between him and Riker.
ReptilianSamurai
Fri, Nov 30, 2012, 10:53am (UTC -5)
Just saw the new extended, remastered version of the episode the other night, and it was absolutely fantastic. It really gives the story a bit of room to breathe, and better develops the guest characters (especially Phillipa's backstory with Picard) as well as really exploring Data's dilemma and the nature of being sentient. This version of the episode, in my opinion, is one of the best in all of Trek and I'm really glad they were able to give us this extended cut.

Hope Jammer reviews the extended version at some point, I'd be interested to hear his take on how it changes the episode.
Rikko
Sun, Feb 24, 2013, 9:52am (UTC -5)
What a wonderful episode!

@ Trajan: I don't want to beat a dead horse, but I think you're being too hard on this ep for something it isn't. TNG is not trying to be 'law and order in space'. It's always about the bigger questions.

I can suspend my disbelief with stories like this, especially when I compare 'The Measure' to total fantasy wrecks like the black pond of tar in 'Skin of Evil' or the many energy life-forms from countless episodes.

Still, I won't deny that the lack of crew for a trial of this gravity was hilarious. The production staff must have been in dire straits during this season.
Shawn Davis
Fri, Mar 8, 2013, 7:00am (UTC -5)
Greetings to all. I love this episode. One of TNG's classics, and it features one of my favorite characters, Data, in one of the most interesting positions ever.

I have one question though. Riker asks Data to bend a metal bar in an attempt to prove that he is not sentient, and Picard objects by stating that there are many living alien species strong enough to do that. Captain Phillipa disagreed with him and told Riker to continue with the demonstration. My question is: why is Picard wrong? What he said about some aliens being strong enough to bend the metal bar, along with robots and androids like Data, seemed logical to me.

Thanks.
PeteTongLaw
Fri, Mar 8, 2013, 4:09pm (UTC -5)
It seems to me that the space station and the Enterprise-D are not appropriately scaled.
William B
Wed, Mar 27, 2013, 3:35am (UTC -5)
I do like this episode, perhaps even love it, but I admit that I do find it hard to suspend my disbelief in portions of it related to the legal proceedings.

It does seem, as others have mentioned above, as if Starfleet should have settled this issue before; but on some level it does make sense that maybe they didn't, because Data's status is so unique.

That said, I do think the idea that Data would be Starfleet property because he went through the Academy and joined Starfleet is disturbing, because Data is only in Starfleet because he chose to join. The Enterprise computer never *chose* to be part of Starfleet.

I suppose one resolution to this would be that since Data was found by Starfleet personnel (when he was recovered from Omicron Theta), at that point he 'should have' entered into Starfleet custody as property. It would also make sense if the reason that Data's status as having rights/not having rights was not extensively discussed (e.g. whether Data constitutes a Federation citizen) was that he spent all his time from his discovery on Omicron Theta to his entrance into the Academy with Starfleet personnel in some capacity or another, so that there was never a time in which he would need official Federation citizenship.

On some level it does make sense to me that Data would hang around the Trieste (I think it was?) after they discovered him until eventually a Starfleet officer there sponsored his entry into the academy.

I suppose that if Data had no sentience all along, and had a mere facsimile of it -- if Data genuinely WAS an object and not a person -- perhaps he would go to Starfleet ownership merely for the fact that Data was salvaged by a Starfleet vessel after the destruction of Omicron Theta, and since there are no living "rightful owners" with the colony destroyed (and Soong and his wife for that matter thought dead) it makes sense that Starfleet could claim something like salvage rights.



Re: the point raised by J.B. Nicholson-Owens, it is true that IF Data is property, then so would be a race of mechanical beings created in Data's image. It does not actually affect the case directly.

However, I do not think this is a flaw. Picard makes the point that one Data is a curiosity, but a fleet of Datas would constitute a race. Perhaps that was a leading phrase -- but instead we should say that a fleet of Datas would constitute a much larger set. The main purpose of this argument is, I think, to demonstrate that the consequences extend far beyond Data himself.

Put it this way: if there is a 99% chance that Data is property and a 1% chance that he is a sentient being with his own set of rights, then taking Data by himself, there is a 1% chance that a single life will be unfairly oppressed. But if there are thousands and thousands and thousands of Datas in the future, that becomes a 1% chance that thousands and thousands of beings will be oppressed. That is simply a much bigger scale and a much bigger potential for tragedy. If Louvois ruled that Data was property and he was destroyed, but he was the only creature destroyed, it would be tragic, but still only a single being. If Louvois ruled that Data was property and thousands of androids were produced and Louvois was wrong, then _based on that ruling_ a whole race would be condemned to servitude. The possible cost of her decision is much greater, and the importance of erring on the side of giving Data rights becomes greater as a result as well.
N.I.L.E.S.
Sat, Apr 13, 2013, 4:50am (UTC -5)
Has anyone considered that the basic premise of this episode is unnecessary based on the show's own rules? The premise is that Data needs to be dismantled so that more androids like him can be created, but Data is dismantled every time he uses the transporter. Since the Enterprise computer is able to dismantle Data and reassemble him, it must have detailed information about his construction. Surely all Maddox needs to do is access the information stored in the transporter logs and he would have all the information he needs to replicate Data.
The above point aside, I really love the episode and the questions it raises about the point when a machine becomes conscious. I agree with those who have stated that this issue would have been settled before Data entered Starfleet, especially since the sole basis for Maddox objecting to Data's entrance into Starfleet was that he did not believe Data was sentient. The fact that the others on the committee allowed Data to enter Starfleet anyway suggests that they believed he was sentient.
I also agree that there were some aspects of the court scenes that were not as convincing as they could have been. For instance, since the issue to be decided is whether or not Data is sentient, I find it odd that no psychologists were asked to testify, since consciousness is part of what psychologists study. I also find it odd that there was no cross-examination when a witness testified. For example, when Data was on the stand, Picard asked him what logical purpose several items Data had packed served. In reference to his medals, Data replied that he did not know; he just wanted them. In reference to a picture of Tasha, Data replied that she was special to him because they were intimate. Clearly Picard was trying to imply that Data had an emotional connection to the things he had packed, much as humans do. Riker could have easily undermined that premise on cross-examination by asking, "When you say you wanted these medals, do you mean you felt a strong desire to take them with you?" Data would have had to answer no, because by his own admission he does not feel anything. This would have reminded the audience that Data is a machine.
Grumpy
Sat, Apr 13, 2013, 4:51pm (UTC -5)
"...Data is dismantled every time he uses the transporter."

If you're suggesting that Data could be replicated like any other hardware, a fair point. Presumably something in his positronic brain is akin to lifeforms, which can be transported at "quantum resolution" but not replicated at "molecular resolution." But the issue was never addressed in the series.

Also, apparently Data's positronic brain is an "approved magnetic containment device," which the tech manual says is the only way to transport antimatter without "extensive modifications to the pattern buffer."
istok
Fri, May 10, 2013, 7:24pm (UTC -5)
This is the best TNG episode I've seen so far. Admittedly, I've only seen season 1 and half of 2. Nonetheless it is very compelling, and it suspended my disbelief just fine. I don't care to dissect mainstream sci-fi television in great detail; something will inevitably fail to add up. But overall, uncharacteristically for said mainstream television, this episode actually raised some deep issues, and it was done well, in its own context. It actually got me thinking: what is life? No, really? Seems very easy, but I have no more concrete answers to that than I do to the questions "what is the universe?" or "what is the Earth's core really like?"
All in all, this was good television.
Frank Wallace
Mon, Jul 8, 2013, 9:54pm (UTC -5)
Wonderful episode.

I never saw any reason to question the legal elements of the episode. For one, Starfleet officers are multi-purpose types, given that the Federation doesn't have "police" or "armies" in the truest sense. Secondly, the reason for Picard being involved is explained early, and the other captain is a JAG officer.

Lastly, the person who wrote the episode actually trained in and practiced law as a career for several years. She will know enough about it to make it believable, and it DID seem believable. Plus, it's the idea behind the episode that matters. :)
Sam S.
Sat, Aug 3, 2013, 11:36pm (UTC -5)
I just wanted to add that this episode provides the term "toaster" for artificial life. This is apparently where the Battlestar Galactica reboot gets the concept for its artificial lifeforms.
SkepticalMI
Thu, Sep 19, 2013, 9:48pm (UTC -5)
This was basically an all or nothing episode. A concept like this could either succeed magnificently in raising philosophical points or fail miserably in cliches and preachiness. Thankfully, it hit the former far more often than the latter.

Yes, the courtroom scenes were hardly very legally precise (but heck, lawyer-based TV shows aren't very legally precise either). Unfortunately, I don't think either Riker or Picard did a very good job. Maybe that was due to the fact that it had to be short to fit in the episode. Of course, they could have cut out some of the Picard/JAG romance backstory for a better courtroom drama.

But it probably would feel incomplete no matter how long they took. In reality, it would probably be a very lengthy trial, so no showing in a 43-minute TV show could fully explore whether or not he's sentient.

And frankly, it isn't necessary. We already know the arguments. It really does boil down to a few simple facts: on the negative side, he was very clearly built and programmed by a person. On the positive side, he very clearly acts like he's sentient. And frankly, we don't know.

And that's probably what makes this episode work. They acknowledge and reinforce that. Picard's realization (actually Guinan's realization) to make the argument but avoid defining the scope in favor of the bigger picture was pitch perfect. This is a simple backwater JAG office. Should it really be deciding the fate of a potential race? Picard made that point beautifully in the best speech he's had so far. And it was that speech, that implication, that resonated.

The point was not to decide whether or not Data was sentient, but to consider the consequences. And to err on the side of caution.

Of course, in the real world, Maddox would undoubtedly appeal to a higher court, and this would make its way to the Federation equivalent of the supreme court. But you know what? I'm glad it ended here. Another good aspect of this story was that, despite going full tilt towards making Maddox the Villain with a capital V, he seemed to get Picard's point as well. I'd like to think that Maddox does have a conscience and was willing to stop his pursuit based on even the chance that Data is sentient.

This episode seemed to skirt the edge of being melodramatic, preachy, and cheesy, but always managed to avoid falling into it. Most importantly of all, it hit exactly the right tone on the fundamental question. There's a few nagging doubts in terms of the plotting and the in-universe rationale for all of this (which others have pointed out). I think that keeps it from being elevated too highly, but it's still the best episode of the series so far.
Latex Zebra
Sat, Sep 21, 2013, 6:45pm (UTC -5)
This might actually be the best episode of any Trek series.
Nick P.
Mon, Sep 30, 2013, 4:11pm (UTC -5)
OK, first: amazing episode! One of the best of the series... However, I am not sure that I agree with the central theme, that it is wrong for Starfleet to create a race of slaves. The Enterprise is as sophisticated as Data, and has already been able to create sentience ("Elementary, Dear Data"), and there is a fleet of them. Further, Data numerous times saves the ship, so why is it wrong to want to mass-produce him for Starfleet needs?
K'Elvis
Thu, Oct 24, 2013, 10:26am (UTC -5)
Sure, you had to suspend disbelief, but this was one of my favorite episodes of TNG. This wouldn't have been resolved on some Starbase, but by properly trained legal officials in a proper court.

This should have been resolved already; Starfleet had already accepted Data as a person by allowing him to enter the Academy and commissioning him. Data's ability to bend a bar is not evidence that he is a thing. As counter-evidence, Picard could have brought in a bar of his own and showed that some members of his crew were strong enough to bend it, while others were not.

To counter the off-switch argument that Riker made, one need only have someone perform the Vulcan nerve pinch, which effectively turns a humanoid off.

If Data had been declared to be property, that wouldn't mean that he was Starfleet's property. Starfleet didn't make him; if he was anyone's property, he would be Dr. Soong's property.

Still, this is an episode well worth suspending disbelief, because the ideas are so profound.
Cammie
Wed, Dec 4, 2013, 5:36pm (UTC -5)
I don't think Riker would have liked it if Data did a Vulcan Nerve Pinch to turn him off.
Cammie
Thu, Dec 5, 2013, 9:29pm (UTC -5)
I love any Star Trek episode with Q, Data, or Spock in it. I think they are the highlight of the show.
Jons
Mon, Feb 17, 2014, 2:53pm (UTC -5)
There is no "we don't know" about him being sentient - the very fact that he spontaneously says (and insists) he's sentient means he is.

And an argument which I think should have been pushed further: organic life isn't any less a machine than Data. The only difference is that it's a self-replicating machine. Animals (humans included) are organic machines whose building and functioning is determined by DNA sequences (GACT instead of 0 & 1).

As for the comparison with the ship's computer: as a matter of fact, not all organic life is sentient. We have somehow determined, for diverse reasons good or bad, that non-sentient life isn't as respectable as sentient life. In that sense, the ship's computer isn't Starfleet's property any more than a dog belonging to Starfleet would be. Still, just as a dog isn't a human being, the ship's computer isn't a sentient android. The fact that they're both non-organic has no bearing on this.

In any case, whether it's here or during the Doctor's trial in Voyager, I cannot even begin to understand the arguments of the "they're machines" side. Obviously as portrayed in Star Trek, they ARE sentient (whether we will one day be able to replicate a brain's complexity well enough that this would be possible is another matter entirely).
Yanks
Wed, Mar 5, 2014, 10:21am (UTC -5)
Where did my(our) discussion go?
Shannon
Mon, Aug 11, 2014, 5:00pm (UTC -5)
Totally agree! This is probably one of the best episodes of Star Trek across ALL of the series. And I love that it didn't involve phasers, torpedoes, or silly looking aliens. This was a moral story about the rights granted to a sentient being of our own design. This is classic Trek, with themes that stretch deep into our society... Patrick Stewart was amazing in this episode, with his oh-so-controlled passion when he was arguing Data's case. "Your honor, the courtroom is a crucible, and when we burn away irrelevancies, we are left with a pure product: the truth, for all time." Great stuff! I only wish I could give it 5 stars, because this was an amazing story!
Yanks
Thu, Aug 14, 2014, 5:10pm (UTC -5)
I'll also add that the episode strikes me as a huge dodge of the issue they set themselves up to decide.

One wonders how Data got to serve at all if his entrance committee only had one objection -- Cmdr. Maddox's objection -- and that committee based their decision on sentience like Data said.

But there's another problem: the slavery argument (should we or should we not let Maddox make a race of copies of Cmdr. Data?). The slavery argument only works if you already agree with what Capt. Picard had to argue. The slavery argument fails if you already agree with what Cmdr. Riker had to argue (Data is property; he can no more object to work than the Enterprise computer can object to a refit). It seems to me that the slavery argument presupposes the very thing the hearing is meant to decide and therefore has no place in this hearing.

And Cmdr. Louvois' finding essentially passes the buck (as she all but says at the end of the hearing): she has no good reason to find as she does but she apparently believes erring on the side of giving Data "choice" is a safer route.

But you have to realize the only reason we got that conversation in 10-Forward was because Whoopi is black.

Here is the transcript:

"GUINAN: Do you mean his argument was that good?
PICARD: Riker's presentation was devastating. He almost convinced me.
GUINAN: You've got the harder argument. By his own admission, Data is a machine.
PICARD: That's true.
GUINAN: You're worried about what's going to happen to him?
PICARD: I've had to send people on far more dangerous missions.
GUINAN: Then this should work out fine. Maddox could get lucky and create a whole army of Datas, all very valuable.
PICARD: Oh, yes. No doubt.
GUINAN: He's proved his value to you.
PICARD: In ways that I cannot even begin to calculate.
GUINAN: And now he's about to be ruled the property of Starfleet. That should increase his value.
PICARD: In what way?
GUINAN: Well, consider that in the history of many worlds there have always been disposable creatures. They do the dirty work. They do the work that no one else wants to do because it's too difficult, or too hazardous. And an army of Datas, all disposable. You don't have to think about their welfare, you don't think about how they feel. Whole generations of disposable people.
PICARD: You're talking about slavery.
GUINAN: I think that's a little harsh.
PICARD: I don't think that's a little harsh. I think that's the truth. But that's a truth we have obscured behind a comfortable, easy euphemism. Property. But that's not the issue at all, is it?"

Nothing in this conversation has ANYTHING to do with proving Data's sentience.

What one could do with a technology or a thing should in no way have any bearing on this trial.

They should have been trying to prove Data was sentient, because then he could be identified as something more than 'property', not that they could make a bunch of him so it isn't right. If Data was proven not to have sentience, then why wouldn't Starfleet want one on every bridge?

This is why this episode, in my view, receives more acclaim than it deserves.

It's nothing more than the liberal machine injecting slavery into a situation where it didn't exist because they wanted to make this episode "moral". It pales in comparison to Uhura's conversation with Lincoln:

"LINCOLN: What a charming negress. Oh, forgive me, my dear. I know in my time some used that term as a description of property.
UHURA: But why should I object to that term, sir? You see, in our century we've learned not to fear words.
KIRK: May I present our communications officer, Lieutenant Uhura.
LINCOLN: The foolishness of my century had me apologizing where no offense was given."

See, in this exchange Uhura responds how one would expect someone to respond in the 23rd century, where Gene's vision is true. It doesn't faze her in the slightest, because it shouldn't. They bring a pertinent point up, but not in an accusatory way. In TNG, they inject something that happened 400 years ago in an attempt to justify something it doesn't relate to.

Why was Maddox a self-interested 'evil' white guy? Why did the epiphany for Picard come from a black Guinan? How does that epiphany relate to this case at all? Liberal Hollywood. Poor writing.

Look at Picard's argument.

"PICARD: A single Data, and forgive me, Commander, is a curiosity. A wonder, even. But thousands of Datas. Isn't that becoming a race? And won't we be judged by how we treat that race? Now, tell me, Commander, what is Data?"

If Data isn't sentient, why is "it" different than a toaster? Because "it's" programmed to talk? Do we regret making "races" of starships? ... desktop computers ... etc.? The issue of "a race" is irrelevant; it and slavery are injected into his argument only because Guinan was black.

Riker's argument WAS impressive, because he put the FACTS on display.

Picard's was not, because he did nothing but put up a "feel bad" smokescreen that had nothing to do with proving whether or not Data, our beloved android, was indeed sentient.

So Picard put on a good dramatic show and Data won, which made us all and Riker feel good, but for all the wrong reasons. The judge ruled that Data was not a toaster, but why - because we might do something wrong with toasters in the future?

If they had proven Data was sentient (or some equivalent), then they could have addressed the whole cloning issue, and that should be why Maddox can't "copy" Data, not because we might mistreat a machine in the future because we will make a bunch of them. But they didn't.
Josh
Thu, Aug 14, 2014, 9:26pm (UTC -5)
"But you have to realize the only reason we got that conversation in 10-Forward was because Whoopie is black."

Well, that is your interpretation, but I think it's fairly clear that Picard considers Data to be self-evidently sentient, yet was unable to argue this adequately from a legal perspective at that point in the episode. The essential argument of the episode - on my reading - is simply that Data is a self-aware being who is entitled to the presumption of sentience like anyone else, even though he is a machine. The corollary is that although Data cannot be "proven" to be sentient, there does not exist any test that can prove it for anyone else either.

As for the slavery angle, Picard chooses that word to express his abhorrence at the idea of a race of sentient beings who might be "owned" and used like, to take your example, desktop computers. This doesn't have anything to do with any "liberal machine". It strikes me as a peculiarly Americentric reading to assume that this must have anything specifically to do with historical slavery in the Americas.
Elliott
Thu, Aug 14, 2014, 9:28pm (UTC -5)
@Yanks: Whoopi's race is relevant to the scene you described, but in a way that breaks the fourth wall, not "because she [Guinan] is black." If Guinan had been, say, Beverly in this scene, the lines would read exactly the same and the truth of the statement would be no less, but the emotional *punch* wouldn't be quite so severe. It is purposefully uncomfortable for that brief moment she looks at the camera and we remember that these are actors with history, and our history with regard to how we have treated other races, especially black people, has been mostly appalling. Again, the substance of the dialogue is what it is, but there's an extra layer to the scene because of Goldberg's race. It's in many ways what "Angel One" failed so miserably at.

Now, on to your other points:

" The slavery argument only works if you already agree with what Capt. Picard had to argue."

Well that's the point. If one hedges on the issue of Data's sentience, one can neatly hide behind the euphemism of property until the full implications of that process are pointed out by the slavery argument. You may not think Data is sentient--maybe he has a soul, maybe he hasn't (as Louvois said)--but if you're wrong, the issue is not the fate of one android, but the implications for how we treat an entire new form of life. Thus, the gravity of respecting this one individual android's sentience is enormous.

"Picard's was not because he did nothing but put up a "feel bad" smokescreen that had nothing to do with proving whether or not Data, our beloved android, was indeed sentient or not."

I'm kind of baffled by this: Picard asked Maddox what the qualifications for sentience were. He named them: intelligence, self-awareness and consciousness. Picard then went on to demonstrate how Data met those requirements, thus proving his sentience. The issues of race and slavery, as I said, have to do with the *implications* of the ruling, not winning the case. Picard's argument was that it wasn't merely a case of pitting the rights of Maddox against those of Data, but humanity's responsibility to the form of life of which Data is a vanguard.
Peremensoe
Fri, Aug 15, 2014, 6:46am (UTC -5)
Josh and Elliott are correct. Also, Guinan refers to "many worlds," so while we the audience recognize the significance of Whoopi's blackness *for us*, in-universe it is explicitly *not* about just black slavery on Earth, or "400 years ago."
Peremensoe
Fri, Aug 15, 2014, 6:52am (UTC -5)
Oh, and it's the TOS depiction of 'Lincoln' that was disingenuous and 'PC.' The real Lincoln assuredly did not consider black people to be humans of equal worth and dignity to himself.
Yanks
Fri, Aug 15, 2014, 8:38am (UTC -5)
Josh & Elliot,

Elliott, you said it yourself. The "implications" can't be used to make the decision here, so the argument is fluff. That angle should have been ruled inadmissible. (Read J.B. Nicholson-Owens' post above.) Take the entire slavery/race thing out and Picard's argument doesn't change at all. She doesn't even mention it in her ruling.

"PHILLIPA: It sits there looking at me, and I don't know what it is. This case has dealt with metaphysics, with questions best left to saints and philosophers. I'm neither competent nor qualified to answer those. I've got to make a ruling, to try to speak to the future. Is Data a machine? Yes. Is he the property of Starfleet? No. We have all been dancing around the basic issue. Does Data have a soul? I don't know that he has. I don't know that I have. But I have got to give him the freedom to explore that question himself. It is the ruling of this court that Lieutenant Commander Data has the freedom to choose."

I also don't know what "soul" has to do with anything. But that's neither here nor there I guess...

Her ruling concerns Data and Data only (as it should; that's all this trial was about).

Picard is beaten, goes down to 10-Forward, talks with Guinan, and comes away with the race/slavery argument. He didn't pop down and discuss this with Beverly. Like I said, this was just injected here as a "moral boost" for an episode that really didn't need it. How a society will treat more "Datas" is irrelevant here, but it most certainly would be a matter to be addressed later if Data's positronic brain can be replicated.

Picard's argument with Maddox about sentience was all that was needed.

Another thing that gets me is how Data can be a commissioned Starfleet officer and not have the rights any other officer has. I personally don't think this trial should have ever happened. No idea how he can have all the responsibility and no rights. The Enterprise's computer doesn't have any responsibility.

Let's end on a good note.

I personally think the best part of this episode is this:

"DATA: I formally refuse to undergo your procedure.
MADDOX: I will cancel that transfer order.
DATA: Thank you. And, Commander, continue your work. When you are ready, I will still be here. I find some of what you propose intriguing."

Data is one decision away from being dismantled, and he reveals his decision not to participate was not a completely selfish one; he just was not convinced Maddox was competent. He is not opposed to research on him.

If Maddox was competent, would we have had this trial at all?

Intriguing.
Yanks
Fri, Aug 15, 2014, 8:42am (UTC -5)
Peremensoe,

"we the audience recognize the significance of Whoopi's blackness" That's the whole point. We don't get this irrelevant discussion unless Whoopie is in the conversation. It's Hollywood's way.

The "Real" version of Lincloln was not the point. Uhura's response was.
Elliott
Fri, Aug 15, 2014, 1:00pm (UTC -5)
@Yanks :

"The "implications" can't be used to make the decision here, so the argument is fluff. That angle should have been ruled inadmissible. (Read J.B. Nicholson-Owens' post above.) Take the entire slavery/race thing out and Picard's argument doesn't change at all. She doesn't even mention it in her ruling. "

If you're looking at this episode purely as a court case to decide Data's status (I grant you your point about his appointment to Starfleet, by the way), then I guess you can call the slavery arguments fluff, but if, like me, you're looking at it like a piece of drama, the ideas are integral to the story. The discussions are a window across time--can't you imagine similar discussions happening during the slave-trade on earth? A man, for example, who took a slave for a wife, extending to her rights and freedoms he denied to his other slaves because she was special to him? Don't we see in "Author, Author" how the narrowness of Louvois' ruling left the door open for further injustices to AI?

It's good to think critically, and not be myopic about issues like this and it was rewarding to see this kind of thinking on the screen.
Yanks
Fri, Aug 15, 2014, 1:10pm (UTC -5)
Elliott,

I might agree with you if there were 100s of Datas running around, but there are not. This argument would be more applicable to the EMH in Voyager. Data is unique (positronic brain), so the implication is neither needed nor applicable here. For this reason it is injected Hollywood drama for the sake of implied "moral justice". Nothing more.
Robert
Fri, Aug 15, 2014, 1:29pm (UTC -5)
@Yanks - So if there were only a single black man in the New World (because for some reason we'd only brought one over so far), we couldn't discuss the broad-range implications of denying him rights and how that would affect what happens when we got a whole lot more of them? Wasn't the whole point of Maddox's research to figure out how to make 100s of Datas run around?
Yanks
Fri, Aug 15, 2014, 1:56pm (UTC -5)
Robert,

Not relevant to this trial. Jesus, even you guys try to inject race/slavery into this. Hollywood has you trained well. It's not about a discussion.

Like I said in my 1st post: you can't make a ruling on a "what if". This hearing was on Data's right to choose, not "if there were 100s or 1000s of Datas, what would we do".
Robert
Fri, Aug 15, 2014, 2:27pm (UTC -5)
Yanks,

Not relevant and not central are two different things. I will agree that it is not the central argument; I think "not relevant" is stretching here. Maddox's goal was to brand Data as a creature with no rights and then make lots more. Guinan (rightfully) deemed that slavery. Picard (rightfully) pointed out during the trial that Maddox intended to create a race of android slaves and that he couldn't even determine (with certainty) that Data wasn't sentient by his own criteria. Yes, taking away an individual's right to choose is troubling enough and was the core issue of this trial, but Maddox DID intend to create a race of superhuman sentient slaves. That was the point of the episode, and addressing it isn't a hyper-liberal Hollywood poppycock conspiracy :).
msw188
Fri, Aug 15, 2014, 5:44pm (UTC -5)
Yanks,
I believe your opinion here is too idealistic. A trial such as this one is more than a logical investigation - it is an attempted interpretation of both the meaning and the purpose of law. Not all trials need to be this way; in some cases (most cases?) a trial should be about logically or reasonably uncovering truth. But in cases such as these, truth is not easily definable, and so the meaning and purpose of law become as important as the logical statements themselves.

Picard's arguments (grossly simplified - I'll agree that this episode is slightly overrated) can be viewed as:
1. No satisfactory definition for sentience exists that will allow for a logic-based ruling.
2. Judging non-sentience in error will have consequences akin to slavery, which undermines the meaning and purpose of Federation law.

He 'proves' 1 first (flimsy), and proceeds to claim 2. Given the time constraints of the episode, I think this is solid writing. Louvois' comments cement this purpose of the writers, in my opinion. She cannot hope to logically decide whether or not Data has sentience (or a 'soul', whatever she means by that), and so the only recourse is to "err on the side of caution."
Peremensoe
Fri, Aug 15, 2014, 7:56pm (UTC -5)
Yanks, a scene can have two meanings or purposes at the same time. The better the episode is, the more likely it will feature such scenes (or perhaps it's the other way round).
Yanks
Mon, Aug 18, 2014, 9:15am (UTC -5)
Robert / msw188,

I think you're basing your argument on Maddox's potential for success, which by Data's own admission was slim at best; hence his refusal to participate (and protecting his memories). I don't even think we know about Lore yet at this point in the series (could be wrong there).

And, if Data doesn't get the right to choose (loosely based on sentience) "he" IS no different than a toaster, or a computer driven assembly plant etc.

Funny how this very same issue was brought up many times before I chimed in and no one had any issues, but I bring up the truth about Guinan (watch the scene in 10F if you don't believe me; it's plain as day) and folks all of a sudden have objections.

The potential for a "race" only exists if Data wins. It had no place in this hearing aside from courtroom fluff.

I enjoy the discussion with you folks. I guess we’ll have to agree to disagree on this issue.

Peremensoe,

Sure ... or not. :-)
Robert
Mon, Aug 18, 2014, 9:26am (UTC -5)
By your own admission though, your issue with it is that it's fluff in the courtroom. But I think the whole point of THIS court case was just to make us think.

On the whole I chimed in because, if anyone's argument was fluff and irrelevant, it was Riker's, and you didn't complain the same about that. I mean... one can detach and reattach your arm too (although perhaps not so easily), and with certain drugs I can turn you on and off as well.

His strength and processing capabilities are also irrelevant; I have strength (though not so great) and processing speed (again, not so great) as well :P I assume you and I are sentient though!

And I totally agree with you about Guinan/Whoopi and the 4th wall breaking being the reason for the conversation. To me personally though, that doesn't detract from the episode (and perhaps improves it), but you can find it jarring if you do... obviously we can disagree :)

As for Riker, I suppose in headcanon we can pretend he was making an argument that looked impressive but had no substance so that Picard could easily beat him.
msw188
Mon, Aug 18, 2014, 10:43am (UTC -5)
Yanks,
"And, if Data doesn't get the right to choose (loosely based on sentience) "he" IS no different than a toaster, or a computer driven assembly plant etc... The potential for a "race" only exists if Data wins."

I think this is the base cause for my disagreement with you. This attitude seems to suggest that when a conclusion is reached in a court, it is automatically correct. But if Data is declared to be a toaster by the court, while in fact being sentient, then the potential for a "race" does exist. It is this possibility that has the worst possible ramifications, regardless of how slim its chances are (and that slimness is in the eyes of Data only, not the Judge). In the absence of a workable definition for sentience and/or race, the avoidance of this possibility becomes the focus of Picard's argument, and I don't think that's out of place for a courtroom.

I don't have a strong opinion on the Guinan issue. It felt a little bit contrived to me to make sure that the black person brought this up on the Enterprise, but it does fit the in-universe characters. Guinan is wise and thinks about the bigger picture, not about singular logic. Picard trusts Guinan and always values what she has to say. They're also the two best actors in the series, and this is meant to be a big 'turning point' scene. If one is willing to allow potential consequences into a courtroom in cases where definitions are unclear (as I am), then Guinan's scene fits into the plot of the episode just fine regardless of the 'message' for today's audience. And that's the way such messages should be handled, I think.

PS: I only just found this website on the day I posted my first response (if there's some way to check, you can see it's my first ever post here). I can't say for sure if I would have brought any of this up before the Guinan comment, but I think I would have if you had ignored that part and just put forward the "trial should focus on Data and only Data" argument.
Yanks
Mon, Aug 18, 2014, 1:37pm (UTC -5)
Robert,

Just hate that being thrown in our faces. Too much like real life "news". But you’re right, probably just me.

I thought Riker aptly made his case; hell, even the judge said Data was a machine. While I'm not an attorney, when I watched this Riker seemed to lay it all out there, very clearly. I mean, without a clear definition of what sentience is, how would he disprove Data having it? Probably better not to bring it up and let Picard climb that mountain if he chooses.

msw188,

One can click on your name and see all your posts. (great points about 'Yesterday's Enterprise' and 'The Offspring' BTW)

I tend to be brutally honest :-) A failing I'm sure and you are correct, there are politically correct ways to express observations, I just chose to "post from the heart" :-)

When I first watched MoM early in 2002 (I think) it didn't faze me, but now that Whoopi is on The View etc... it changed my "viewing pleasure" I guess.
msw188
Mon, Aug 18, 2014, 2:09pm (UTC -5)
Yanks,
Cool, thanks for your kind words. I'm glad somebody might feel similarly about Yesterday's Enterprise.

Measure of a Man is one of those episodes that I'm pretty certain I saw back in the late 80s or 90s, but I didn't really remember. It was probably a bit over my head then. Looking at it now, and after all of this discussion, I think I'd give it a solid 3.5 stars. The arising of the conflict strains believability, and even though I think Guinan's remarks fit into the story and courtroom well, they still feel just a bit too much on the nose. The episode asks some very worthwhile questions and explores them well enough for me, but it still lacks the purely emotional element that the best episodes do carry.

For comparison, I'd currently give 4 stars to Offspring, Best of Both Worlds, and Darmok. I'm in mid-season 5 at the moment.
Yanks
Mon, Aug 18, 2014, 4:13pm (UTC -5)
msw188,

I've just finished watching DS9 and am trying to catch up with my reviews. When I get to TNG, Offspring will most definitely get 5 of 4 stars. I'm a sap and tear up every time I watch it. :-)
$G
Sat, Dec 13, 2014, 3:49pm (UTC -5)
A lot of lively discussion here. I wish I'd been around to partake!

As for this episode, it's TNG's "Duet" as far as I'm concerned. The "so this show IS worth watching" moment. Even barring that, a fantastic hour. 4 stars, easy. Top-tier Trek.
Nic
Sat, Dec 20, 2014, 8:39pm (UTC -5)
I finally got the chance to see the extended version of this episode on Blu-Ray. I must echo ReptilianSamurai's comments. I didn't think it was possible to improve on a classic, but I was spellbound. Of particular interest is a scene where Riker and Troi discuss whether their view of Data's sentience is true or imagined (whether they anthropomorphize him). Even as a telepath, Troi isn't sure. This scene makes Riker's arc much more involving, and I paid much greater attention to Frakes' performance.

All in all, I think this episode matches BSG's "Pegasus" in emotional impact and social relevance.
Trajan
Wed, Jun 17, 2015, 1:08pm (UTC -5)
I've just caught up with this discussion following my original comments from years ago. I still hate the episode but appreciate that I'm in the minority. Incidentally, I liked the suggestion of using the Vulcan neck pinch on Riker. Much kinder than my earlier suggestion of a baseball bat!

I didn't know the writer of the ep. was a lawyer. One day I'd like to have a discussion with her as to the fairest form of judicial system for a post-scarcity society. However, I'll refrain from commenting on Measure of a Man again.

Anyone interested in alternative depictions of future legal systems could do worse than to check out Frank Herbert's 'The Dosadi Experiment' which always appealed to me when I was practising.
Macca
Sat, Jul 18, 2015, 5:17pm (UTC -5)
Disclaimer: I've only scanned through the comments above so sorry if this has already been discussed.

I revisited this episode yesterday after starting to watch the excellent Ch4/AMC series 'Humans', a series about robotic servants who gain self-awareness.

I thought Picard's arguments were excellent but I was interested to note that the judge only ruled that Data has the right to choose, not that he is a sentient life form. I'm scratching my head about a later episode of TNG that deals with this issue in more depth. Maybe someone can help me out?

I was also interested in one of Maddox's arguments that was largely ignored by the episode: that if Data were a box on wheels with largely the same skills and ability to communicate, would we be having this debate?

Are the crew motivated by the fact that Data looks like them, and would they fight with the same zeal if Data looked like the robot from Lost in Space?
Taylor
Sun, Aug 23, 2015, 3:05pm (UTC -5)
OK, I'm also an attorney but I can't say I care that the legal "procedures" may seem unrealistic - it's Star Trek and there are so many things that we could nitpick in any episode, many of them considerable improbabilities. Admittedly this is an individual thing, depends on how some episodes strike us, and sometimes I'm the nitpicker as well ...

The subject matter is compelling. Re-watching this episode I kept thinking of Blade Runner... which very much taps into the "slavery" issue. That's an emotionally powerful angle, although the most intriguing aspect for me is simply that dividing line between humanity and AI, and how we envision an ultimate future when that line is blurred... thus why it's been the subject of so much sci-fi over the decades.
Diamond Dave
Mon, Aug 24, 2015, 3:36pm (UTC -5)
A brave episode, tackling as it does some deep philosophical points that reach their climax in the static form of a courtroom drama. It grounds itself in a thorough examination of both arguments - a particular highlight is Riker's unwilling but devastating critique. But more so is Picard's absolute certainty of Data's right to choose and his willingness to support him. And perhaps this is one theme that isn't explored: that as humans, we would feel no emotional attachment to a toaster or a ship's computer, but would when serving with Data. And is that not a measure of Data's sentience?

To my mind the episode does have flaws - the presence of a handy legal officer who, surprise, has a back story with Picard, as well as the flimsy excuse to pit Riker and Picard against each other. But in its intelligent examination of the issues, this is a cut above the average episode. 3.5 stars.
Gabriel
Sat, Sep 12, 2015, 10:32am (UTC -5)
I'm here in 2015 watching it and just wanted to say that I dropped some tears. Really. Enough said.
Chef
Sat, Sep 19, 2015, 5:55pm (UTC -5)
And then a decade later a race of disposable beings were put to work by Starfleet, scrubbing plasma conduits.
Kiamau
Thu, Sep 24, 2015, 9:53pm (UTC -5)
This is TNG's first great episode but it is hardly TNG's "Duet."
grumpy_otter
Mon, Oct 19, 2015, 6:39am (UTC -5)
I love that there are actual lawyers above me discussing the legalities of this trial! I think they raise good points, but I still love this episode.

To me, this episode is much more about friendship than law, or Data's sapience. I teach history, and I actually show this episode to my students to demonstrate how important it is to be able to argue the other side of your thesis. If you do not recognize the strength of the claims on the other side, you cannot effectively argue your own case. If Riker could not put aside his belief to make the argument, his friend might have been lost.

I have one nitpick about this episode, though I recognize it is a common mistake made by many people, and is so common that the OED might as well change the definition. Data has no sentience at all because sentience refers to the ability to feel. What he DOES have is sapience--the ability to think. Self-awareness is a component of sapience. Even many animals have sentience, but do they have sapience?

CircleofLight
Mon, Nov 2, 2015, 3:56pm (UTC -5)
I love the arguments presented in the episode, but you really need to take off your lawyer hat before watching. And Star Trek is notorious for misunderstanding the law, from this episode to "The Drumhead" to "Rules of Engagement".

I'll just make one writing critique before I move on to the good. Forcing Riker to litigate against his friend and fellow officer is terribly contrived and unnecessary drama. Riker has a huge conflict of interest - he stands to lose a valuable officer - which is going to make him perform his role badly no matter what threats some JAG officer throws out, so why does she even put him in that position to begin with? And if Maddox was so intent on winning, why didn't he present the case on his own, or hire a professional he could trust over Riker?

That aside, this episode is great because of the morality issues Jammer brought up in his review. Patrick Stewart's speech is well-delivered, and convincing despite his character's admitted misgivings towards law. Finally, this episode brings out a lot of interpersonal relationships Data has among the crew, and shows just how much of an impact the possibly-sentient android has.
K'Elvis
Tue, Mar 22, 2016, 9:49am (UTC -5)
As Data behaves as if he is sentient - he passes the Turing Test with flying colors - and has been accepted as a sentient being by the Federation and Starfleet to this point, the burden of proof lies with Riker to prove Data is not sentient. Riker establishes that Data was built by a human, that Data is physically stronger than a human, and that, unlike a human, Data can be turned off - but this merely demonstrates that Data is not human, which is neither in dispute nor relevant. It does not demonstrate that Data is not sentient. I suppose it would have been unsatisfying to simply rule in Data's favor because Riker failed to make the case that Data was not sentient. Data maintains a presumption of sentience, so the slavery analogy remains relevant.
Robert
Tue, Mar 22, 2016, 11:02am (UTC -5)
"Its responses dictated by an elaborate software programme written by a man"

This part at least is relevant. I personally would not consider anything to be sentient if this were true. But Riker is wrong: in "In Theory," Data adds his own subroutines for dating. And it's not the only time.

Computer programs that can improve/learn might be sentient. Computer programs whose "responses [are] dictated [SOLELY] by an elaborate software programme written by a man" are not, IMHO. I think what Riker was going for was that it was all a really convincing "act".

That said, he has no proof for that. And certainly not removing his arm or turning him off. Doctor Crusher could turn Riker off with a hypospray.
desmirelle
Wed, Jun 22, 2016, 1:09am (UTC -5)
This WOULD HAVE BEEN a great episode if it had been a flashback on Data getting into Starfleet Academy. A non-sentient being (flesh or machine) will not be allowed to go through the academy. It's an unspoken requirement. Were it not, the Enterprise would be making decisions, Picard would be a seat warmer, Riker would be decorative but not useful and Geordi relevant only because the ship doesn't have a pair of hands for the little things. As this episode stands, I expect the next episode of the JAG officer's life involved a court martial for trying to make law when she's a judge; but more importantly, for violating the civil rights of a being simply BECAUSE OF HIS RACE!!!!! (And I'm sure that would involve Federation charges, not military; there must be some guarantee of rights for the various species involved with the UFP.)

Chrome
Wed, Jun 22, 2016, 10:12am (UTC -5)
@desmirelle

The implication is that Data is a new type of being and they had no reason to bar him from joining Starfleet. The *legal matter* of his citizenship was never raised.

I could totally buy this happening, actually. There have been recent news stories where students in Texas have become valedictorians of their classes only to reveal that they're undocumented immigrants. The university these students got accepted to never exposed them, and in fact offered to support them if difficulties arose.
desmirelle
Wed, Jun 22, 2016, 9:58pm (UTC -5)
Chrome

Wrong analogy. The question was not "Was Data a citizen?"; the question was "Was Data sentient?" (which had as its unspoken codicil "and therefore master of his own destiny"). If we're treating the show seriously - and this episode wanted so badly to be thought-provoking (it only ended up provoking me) - then he had to be sentient to attend the Academy, his citizenship be cursed for eternity.

My point is not that it was a bad concept (proving Data sentient), but that this particular question HAD to have been settled before he was allowed to attend. The number of students is limited; no parent is going to let a machine have their child's place without said machine proving it's just as "real mentally" as said child. The Command line of study includes judgment calls; hence the episode where Troi keeps failing. Sentience is required. Citizenship was never addressed.
Chrome
Wed, Jun 22, 2016, 11:05pm (UTC -5)
@desmirelle

I think you took the analogy too literally. The point is that even lofty institutions like Starfleet will overlook fundamental details we all take for granted. Starfleet might have been happy to let Data in just to boast to other institutions that the galaxy's only functioning android goes to Starfleet for its renowned training.

Also, Judge Louvois wasn't ruling on sentience (stating sentience was a question better left to philosophers). All she cared about was whether Data, even as a possibly sentient machine, was property of Starfleet.

So, yes, you bring up good evidence that Picard also brought up: Data's service record. But that alone didn't settle the matter of Data's rights here.
desmirelle
Thu, Jun 23, 2016, 1:19am (UTC -5)
Chrome

I respectfully disagree. Melinda Snodgrass had a wonderful idea; they just misplaced it in time. I'm not arguing that the question shouldn't have been asked; I'm saying that in order for Data to enter Starfleet, it had to be asked BEFORE he entered - otherwise they could've let a trained monkey attend and bragged about how well it did.

The underlying issue (and what she ruled upon) was sentience. The judge saying she wasn't ruling on it is ironic, since she would have done what she obviously wanted: made sure she made her name as a judge. Since Starfleet has no slavery policy, the only way Data could be property was if he was not sentient; ergo, sentience is the primary decision being made. The Enterprise, DS9, and the starbase where the case was being tried have no sentience and are therefore property. And if I'm going to take the premise seriously, I will say again, this should properly have been a flashback on how Data got into Starfleet Academy. Starfleet is (as they kept hammering away at us with Wesley's attempts to take entrance exams) an elite organization with lines around the block waiting to get in. If we're taking this seriously as a story, it has to get me to suspend my disbelief. I can't if I'm to believe that this question wasn't answered well before this. It isn't logical.

Taking it seriously is the point of all this. (Which makes 'Genesis' so much worse than the review says....)
Chrome
Thu, Jun 23, 2016, 4:39am (UTC -5)
You can disagree all you want, but Data's sentience was never decided at that hearing.

The original draft of the script (Google it) actually has Picard explaining before the hearing that he needs to prove Data is a sentient *life form*. Data hears this and insists that he's only a machine.

And slavery was never an issue if Data was just considered a conscious machine. Conscious machines may be able to attend the academy, but Picard needed to press that Data was a life form for the sake of the hearing.

I suppose they could've kept the original script intact, but it was probably too lengthy. And I don't see how a flashback would fit, especially because TNG never uses them. This isn't Lost.
Desmirelle
Thu, Jun 23, 2016, 12:01pm (UTC -5)
*sigh*

You can keep writing it all you want, but sentience was the UNDERLYING question being decided. In order to be property, Data could not be sentient. If Data is not sentient, he has no business being in command of sentient beings. Now I'm back to why the Enterprise has no rank. What was written in an early draft is rewritten for a reason (like that illogical exchange between Picard, Geordi & Data).

As a flashback episode, it would have been excellent (and possibly given us more "what happened when" episodes, which would have stood us in good stead as an alternative to "Genesis").

Slavery was exactly the issue, as the Guinan/Picard exchange highlighted. The problem was that their argument was circular and depended upon the sentience of the beings in question. Maddox wanted to treat a fellow Starfleet officer like his own personal Lego set. You can't do that to a sentient being; Crusher can't do exploratory surgery on Worf just to see how Klingons work. It's insulting to expect me to believe that the issue hadn't been decided BEFORE this point. That's my point.

You keep referring to Data as a "conscious" machine. To me that merely states that he's "on" as opposed to "off". If you're using it to indicate that he's aware he's a machine and operating - and since he did not want to be turned off - then you're actually saying he's sentient.
Chrome
Thu, Jun 23, 2016, 12:05pm (UTC -5)
"sentience was the UNDERLYING question being decided. In order to be property, Data could not be sentient."

This was never stated in the episode. It was just the argument Picard used to push that Data was a life form.

"If Data is not sentient, he has no business being in command of sentient beings."

This is your opinion. Please back this up with statements from the series or comments from the writers. Otherwise, you're just stating your personal preferences, without giving us any objective reason why we should agree with you.
William B
Thu, Jun 23, 2016, 12:25pm (UTC -5)
I think the slavery analogy holds, because slaves *were* considered property, despite being sentient. The slavery argument is not actually circular -- it is important because it changes the scope of the conversation from one Data, who Picard says "is a curiosity," to an entire race. If androids are not sentient or sapient, "have no soul" to use Louvois's final point, then it does not matter how they are treated. If it is *possible* that they are sentient or sapient, then what would be a regrettable outcome if *one* android had his rights trampled on would become a horror if an entire race came into being with their rights denied. The argument Picard puts forth is that there will be far-reaching consequences beyond the fate of one single android, and thus that the bar for proving that it is permissible to treat androids as property should be much higher. One can disagree with this, for example by arguing that it is just as much an injustice if *one* Data has his rights trampled on, but that is what Picard is saying.

Along those lines, my admittedly weak understanding is that there are times in history when slaves *did* serve in the military (e.g. the American Revolutionary War).

One would expect Starfleet to have higher standards, of course. And certainly I think that the episode would be strengthened by some sort of explanation of how Data's status could be so unsettled at this point in time. But basically I think that there are some ambiguous areas of the law where people more or less follow something like habit until someone specifically challenges them. People who were not considered full persons could serve in the military at different times in history, and I see it as believable, on some level, that the decision of whether or not to admit Data to Starfleet and the decision of whether or not he was considered a person and even a Federation citizen were basically treated as separate questions. We know that people who are not Federation citizens can join Starfleet (Nog, e.g.). Given that Data was found by Starfleet officers, it seems possible that they strongly advocated for him, perhaps pushing for some of the red tape to be set aside.

I think Maddox' argument would be strengthened if he claimed that Data was Starfleet property not because he's a non-person who joined the service, but because he was salvaged by Starfleet officers.

That said, I do find it a bit hard to believe that Data's status is quite so unsettled. The main reason I believe it as much as I do is that I am willing to accept a fair amount of bureaucratic/legal incompetence and uncertainty in dealing with Data in the years before this. In fact, a recurring theme of the series is that no one is really ready for Data -- they are unprepared for what happens if Data goes rogue (as in Brothers, Clues, etc.), they are unprepared for Data to "procreate," etc. I think it is believable in that Data is so carefully designed to placate people's concerns about him that people go into denial about the thorny issues that he poses; of course, Lore articulates that Data was specifically created to be less threatening to those around him, and while Lore's spin on it is partly because he's an evil narcissist, I don't think he's entirely wrong. I do wish that a bit more background on Data could have been provided, in particular how he spent his time between being found on Omicron Theta and on the Enterprise; he says in Datalore that he spent so many years as an ensign, so many as a lieutenant, etc., but we don't really know where and he is so...new, undeveloped in Encounter at Farpoint that I have seen people suspect that Starfleet kept him in relative isolation for several years. He says that Geordi was his first-ever friend.
William B
Thu, Jun 23, 2016, 12:43pm (UTC -5)
Which is to say, there are some logical holes in the arguments in this episode which one has to get past in order to appreciate it -- I am willing to suspend my issues because I think it is fantastic, and the episode at least *does* acknowledge, to some degree, that Data, Picard, Riker, Maddox, and Louvois are somewhat out of their league in even articulating the issues, let alone fully arguing them. Anyway, one of the issues that is not often brought up is that if you accept that Data is not a person but property, then one has still not established that he is *Starfleet* property, rather than, say, Picard's personal property. I get why this is skirted over, because, as I said, he was found by Starfleet officers and no *non*-Starfleet people have any reasonable claim on him besides himself, with Soong and the rest of the Omicron Theta colonists (apparently) dead.
Chrome
Thu, Jun 23, 2016, 12:45pm (UTC -5)
@William B

Once again, you make some excellent points. This is definitely a case of "ambiguous areas of the law where people more or less follow something like habit until someone specifically challenges them". Viewers may be astonished by Starfleet not answering an old and obvious question, but even in our own laws there remain a lot of legal uncertainties. The right of a person to decide whom they can marry was only established in the U.S. last year, after thousands of years of marriage.

Also, the "bureaucratic/legal incompetence and uncertainty" seem to recurring themes not just with Data, but with other legal questions. Surely how the law treats Data in this episode is a travesty to intelligent life forms, but then we get episodes like "Rules of Engagement" where were shown that Starfleet is very ready to throw of the rights of its own sapient officers to an aggressive power for political reasons.

Some background on Data would've been nice, but I don't think it was necessary for this episode to work. The judge ends with the verdict of Data's nature being an open-ended question. If the episode had given us the answer in a flashback to an earlier time when Data was established as sentient, there'd be absolutely no reason to consider Riker's or Maddox's arguments.
Peter G.
Thu, Jun 23, 2016, 1:07pm (UTC -5)
The funny thing is, the episode really isn't about whether Data in particular is sentient, but about how to define sentience in the first place. And since the writers don't have an answer for that I can see why their resolution was open-ended. What IS the difference between Data and the Enterprise computer? A more sophisticated neural net? Simply the directives each is given? We already know that Data is 100% susceptible to any change in programming, completely undermining the Data we knew before. Then again a person's mind can be messed with as well. However no one programmed that human from scratch, whereas 100% of Data's personality stems from programming that learned and expanded itself.

What if the Enterprise computer was given directives to teach itself too? Would it have a right to decide where it wants to go? It's simply a matter of programming the AI. So to me, the real question is about AI, not about man vs machine. Since Star Trek has a virtually non-existent track record on the issue of AI, this was obviously not going to be addressed, even though it's the only issue to discuss here. Is it possible to create sentient life just by chaining together strings of code and clicking "save file"? If so, the Federation might need to have some strict laws about irresponsible creation of life by programmers. It's hard enough to argue that a string of code is life at all, let alone sentient, since its appeals to having wants are reflections of inserted code.

For instance I can write a 20-line program in BASIC that says "I am alive", and when asked if the program wants to die, it will reply with "Please, I do not want to die." Just seeing that phrase on the screen might pull heartstrings, but I think defining a line of code that says "I want to live" as sentient scraps any meaningful sense of the word. Is Data inherently different from this 20-line piece of code, really?
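
For illustration, here is a minimal Python sketch of the sort of trivial program described above. The names and canned lines are invented for this example, and nothing about it is meant to model how Data actually works:

    # A toy "I want to live" program in the spirit of the 20-line BASIC example.
    # Every response is a fixed string chosen by the author at write time;
    # however plaintive the output reads, there is no state, no learning,
    # and no stake in the answer - just a table lookup.

    CANNED_RESPONSES = {
        "are you alive?": "I am alive.",
        "do you want to die?": "Please, I do not want to die.",
    }

    def reply(question: str) -> str:
        # The "desire to live" is pure text retrieval.
        return CANNED_RESPONSES.get(question.strip().lower(), "I do not understand.")

    print(reply("Do you want to die?"))  # prints: Please, I do not want to die.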

The ending I would have liked would have been for them to say they could not make a determination on Data since his technology was beyond their understanding. The reason to keep him in Starfleet with his own set of rights should have stemmed from a mutual decision by all parties to *choose* to recognize his rights as an act of goodwill towards a potential life form; to err on the side of respect even in the face of the unknown. That is the Federation way, and that's what should have made the final determination.
FlyingSquirrel
Thu, Jun 23, 2016, 1:28pm (UTC -5)
I'm not sure the Enterprise computer would really be considered an AI. My recollection is that it does have certain "canned" responses when asked a question it doesn't understand, suggesting that it is programmed to understand a variety of speech patterns but doesn't actually think on its own. It's perhaps closer to what would be called a virtual intelligence in the Mass Effect universe:

masseffect.wikia.com/wiki/Virtual_Intelligence

As for Data and the question of AI sentience, I'm not sure that's a question that anyone, no matter how far in the future, can answer, simply because consciousness is a subjective experience. You can't prove that Data is actually self-aware and conscious, but you can't prove that about anyone other than yourself either. Yes, he's vulnerable to being reprogrammed, but humans have been known to exhibit personality changes due to brain injuries, and nobody would argue that they're no longer sentient or conscious at that point. My feeling is that any AIs with the same range of behavior as what Data (or the Doctor on Voyager) exhibits should have the same rights as humans out of a principle of erring on the side of caution - I'd rather grant human rights to non-sentient beings than deny them to sentient beings.
Peter g.
Thu, Jun 23, 2016, 1:43pm (UTC -5)
@ FlyingSquirrel.

I guess I shouldn't bring up "Emergence", in which the Enterprise computer (or maybe all its integrated systems along with the computer) develops signs of life. The reason I shouldn't is because the episode is dumb.

Anyhow, I get why it's tempting to say "we'll never know", but at the end of the day a determination has to be made about which kinds of code would or would not count as sentient life. You might want to be agnostic or just say they're all sentient, but then can you arrest and jail someone for writing a program and then deleting it? This is the kind of issue we're talking about. Can someone 'murder' Data in the legal sense, or merely destroy him in an act of vandalism? And what if Data's neural net was contained in a box instead of in a humanoid body? Same answer?
William B
Thu, Jun 23, 2016, 2:06pm (UTC -5)
@Peter G., absolutely it should be made clear (in-universe, and it would be good to have been made clearer for the audience) where the lines are drawn between Data and the computer and other technological life forms. That said, I think it is analogous to arguments about biological life forms. Humans have certain rights. Non-human animals, particularly mammals, have some very limited rights. Plants have virtually none, with a few particular exceptions (protected forests, and sometimes individual trees). The difference between Data and a twenty-line length of code might be equivalent to the difference between a human and a virus. That Data is the only settled issue in this episode strikes me as believable; the Federation should be a more enlightened body, but they are fumbling in the dark here, and the lack of rigour in the human process of defining the legal differences between humans and other animals, and the reason behind it, makes me find believable the halting way in which "AI rights" are dealt with on a case-by-case basis in Trek.

I agree that a little more focus on what the difference between Data and the Enterprise computer *is* would be appreciated. That said, I think that the tactic that the episode takes, which is to ask what distinguishes Data from a human, is also pretty valid. The main qualitative difference between Data and a 20-line bit of BASIC code is that Data has an adaptive program which is, as stated in the episode, able to adapt to new information. Data believes that there is an ineffable quality that goes with it, and Data's friends would tend to agree. There is no way for us to guarantee this. The positronic brain is designed to replicate aspects of the human brain, in part (as well as other qualities) in an attempt to not just emulate but also reproduce humanity. All right, so the question is what distinguishes Data's brain from a human brain. There are a few possibilities:

1. Humans are sentient in ways that require some sort of metaphysical element. There is some element of humans that make *any* "constructed" being impossible to program to have human level sentience, perhaps because there is something in humans (and perhaps other living beings) that is not dependent on the physical at all.
2. Humans are sentient in some way that obeys entirely the physical laws of the universe. It may be possible to create a "constructed" sentient being, but Data is not one.
3. Humans are sentient in some way that obeys entirely the physical laws of the universe, and Data also is a sentient being who similarly is able to exist (as an emergent phenomenon). Some "constructed" intelligences do not have this quality.
4. Humans are sentient, as are all "constructed" intelligences, forming some sort of spectrum.
5. Sentience in the way that we tend to describe it does not actually exist; it is an illusion common to humans that they are sentient but it is not true in any particular way. Data is not sentient either, of course, and so what happens to him hardly matters, but the same extends to humans.

5 is mostly eliminated because we *experience* sentience in ourselves, and conclude that other humans are likely to have a sufficiently similar experience to ourselves. However, it can also go the other way. I could certainly imagine Lore, if he were so inclined, arguing that it is impossible for biological beings created by chance with brains running on electron rather than positron signals to be truly sentient.

Anyway, my impression is that the positronic net of Data's is sufficiently similar to the human brain in physical functioning, despite in other ways being very different, that it is reasonable to believe that whatever process that endows humans with sentience endows Data as well. Maddox is, of course, right that some of the reason for concluding this about Data rather than a box with wheels is that Data is designed to look human and to be anthropomorphized. That is rather the subject of The Quality of Life (important though obviously flawed).

That Data is a constructed being does not seem to me to be necessarily all that important. Certainly, it may be that it is impossible for a human to construct something with sufficient physical sophistication to match the complexity of the human brain; essentially humans are competing with millions of years of natural selection. However, if Data has an internal life and sentience, then he has it, and it does not seem to me that it diminishes his internal life that his brain was constructed with conscious intent. In any case, if the argument is not about the experience of sentience and internal life but a matter of free will and ability to break free of programming, I do not think it is a settled matter that humans are able to break free of the physical states in the brain, or of broader biological programming; that people are unpredictable can be a matter of all variables being too complex to account for, or of simply random processes which are similarly not controlled (i.e. quantum indeterminacy does not actually imply that random outcomes are *chosen* by consciousness). I think this is what Louvois is saying when she argues that she does not know if she has a soul. While she presumably does believe that she has one, she cannot prove it.

I tend to think that the episode does more or less end with Louvois (and Picard, to a degree) *deciding* to err on the side of granting Data rights. The episode does to some extent frame the decision, at least on Picard's part (in Picard's argument), as being a matter of living up to the Starfleet philosophy of seeking out new life, and of wanting to be clear that they should consider what kind of people they are; whether "they" refers to humans or to the Federation at large is hard to say, because there is still some ambiguity (in both TOS and early TNG, and to some extent extending forward) about whether the subject of the show is *specifically* humanity or the Federation overall.

To me, I think that Data's story, including this episode, has a lot of resonance even if it is at some point, somehow, conclusively proven that no electronic or positronic created devices could ever have something like consciousness. Whatever else we humans may be, we are also physical beings, who obey physical laws, who, like Data, come with hardware and develop changing software from our learning over time, and whose ability to make our own choices is not entirely easy to understand. Even things like the emotion chip have resonance, given how much easier it is to change one's emotional state with certain drugs or other treatments. Is it possible to find meaning while viewing our identities as intrinsically (perhaps even *exclusively*) tied to the physical reality of our brains? Can we define a soul without recourse to metaphysics? This is not even arguing that there *is* no metaphysical explanation for a soul, but with Data a biological or theological appeal to our humanity is eliminated, for one character at least. I think that this is a lot of what this episode is about.
FlyingSquirrel
Thu, Jun 23, 2016, 2:12pm (UTC -5)
@ Peter G.

"Emergence" was goofy, but wasn't there a scene where they discussed what sort of action to take in light of the fact that they might be dealing with a sentient entity that was trying to communicate? Also, I think the idea was that while a sentient mind seemed to have somehow developed from the ship's computer, the computer in its normal state was not sentient or self-aware.

My own view on AIs, incidentally, is that we probably shouldn't create them if we aren't prepared to grant them individual rights, precisely because we'll end up with these potentially unanswerable questions. I don't know enough about computer science to answer your question about writing and deleting a program, just because I'd need to know more about what would go into the potential process of creating an AI and what kind of testing could be done before activation.

If Data were contained in a box instead of an android body, I actually don't have much trouble saying yes, he should have the same rights. Obviously he wouldn't be able to move around, but I'd impose the same prohibitions against turning him off without his consent or otherwise messing with his programming.
William B
Thu, Jun 23, 2016, 2:23pm (UTC -5)
Incidentally, I like the way Data's status remains somewhat undecided throughout the show. The Federation, I think, *should* actually make up its mind and make a harder determination of what his rights are, but the fuzziness strikes me as very believable. I like, too, that even *Picard* swings between being Data's staunchest advocate and using the threat of treating Data as a defective piece of machinery in something like "Clues." And even Data vacillates. In particular, note that Data's deactivation of Lore in "Descent" goes without much fanfare; certainly Lore is dangerous to the extreme, but I suspect that if Data had killed a human adversary in quite the way he takes Lore out, that there would have been more questions asked about whether he did everything he could. I have been wanting for a while to write about Data's choice in "Inheritance," and how I think his decision not to reveal to Julianna that she is an android reflects a great deal about how Data views himself and his status and his quest for humanity at that point in the series and the tragic connotations thereof. Even though everyone on the show more or less takes the leap of faith that Data is, or has the potential to be, a real boy, it's an act of faith that needs to be regularly renewed and it gets called into question, with characters suddenly reversing themselves because no one is really that sure, even though he's their friend.

I do think that there are some significant problems with the show for not going far enough with Data (and later the Doctor) in following through all the arguments about where exactly the boundaries are supposed to be between personhood/non-personhood, and in allowing other characters to maintain a kind of agnosticism when they should really have to make up their minds definitively. I think that it's very reasonable to object and I don't want to try to come across as saying that one *has* to like the show's overall attitude and the contradictions it runs into. That said, it generally works very well for me with Data (and from what I remember, pretty well with the Doctor). I think that the...emotional dynamics, for lack of a better term, generally work, but there's no question for me that some of the wishy-washiness on the character, and on what distinguishes him from other AI and from humans, etc., is the result of the writers backing off from some of the challenges posed by the character rather than *purely* leaving the character's status regularly open for re-review for emotional/mythological reasons. I like the result a lot because what *is* done with Data in the show means a lot to me personally and so I'm willing to overlook a fair amount, but I don't expect everyone to.
Peter G.
Thu, Jun 23, 2016, 2:41pm (UTC -5)
@ FlyingSquirrel,

If you afford sentient rights to Data-as-a-box, then the physical housing becomes irrelevant and what you're really doing is granting sentient status to code. That's fine in a sense, but it opens up, as mentioned, a massive quagmire about who can write this code, delete it, alter it, and maybe even about what kinds of attributes it can be given in the first place. Should it be illegal to write a line of code that makes the program "malevolent"? How about merely selfish and prone to kill for gain, as Humans now do? It goes beyond the scope of the episode, but my feeling on the subject is that the episode gives a lot of unspoken weight to the fact that Data has been largely anthropomorphized. Maybe Soong did that on purpose, protecting his creation by trading on others' sympathy for its human shape.

@ William B,

There are certainly gradations of biological life, and although we're hazy on whether there are levels of sentience (or any sentience), there are clear differences in, basically, cognitive capacity among animals, which let us categorize them by importance. For the most part we protect intelligent animals the most, and mammals get heavier weight than non-mammals. But it's easy to see why we can do this: we can either identify outright biases (we sympathize with fellow mammals) or else identify clear distinctions in intelligence and give greater weight to those closest to sentience. That makes sense for the time being.

For AI, however, we have no such easy set of distinctions because, frankly, we don't live in a world full of various AIs to study and compare. We basically have a lack of empirical experience with them, but the difference is that while we couldn't have known what a cow was until we saw one, we certainly can know what certain kinds of AI would look like on a theoretical level. Maybe not advanced code the likes of which hasn't been invented yet, but certainly anything binary and linear such as we have now (and which I suspect Data is as well; he is not a quantum computer).

And IF it's feasible to differentiate between different *types* of code - one is rigid and preset, one learns but its learning algorithm is preset, one can change its own programming, etc. - then this would be the determining factor in creating a hierarchy of rights for AI (see the sketch below). Again, I see this particular discussion as being the real one to be had about Data. Whether he's 'like a Human' or not is an extremely narcissistic way to approach the topic. The question isn't whether an AI resembles a Human, but how AI contrasts with other AI. Is Data just an extraordinarily complex BASIC program that does exactly what it's told to do, no more or less? Note again that phrases such as "but I want to live" can be written into software of any simplicity, and thus the expression of such a 'desire' shouldn't be confused with desire. I can write the same thing on a piece of paper but the paper isn't sentient. The court case in the episode seemed to take Data's 'feelings' for Tasha very seriously, even though it failed to address whether those were really 'feelings' or just words issued in common usage to suit a situation.
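
To make those three types concrete, here is a rough Python sketch of the taxonomy just described. The tier labels and class names are invented for illustration, not drawn from the episode:

    # Tier 1: rigid and preset - the output is fixed when the code is written.
    def rigid_bot(prompt: str) -> str:
        return "I want to live."

    # Tier 2: learns, but the learning rule itself is preset by the author.
    class LearningBot:
        def __init__(self) -> None:
            self.memory = {}

        def teach(self, prompt: str, answer: str) -> None:
            self.memory[prompt] = answer  # fixed, author-chosen update rule

        def reply(self, prompt: str) -> str:
            return self.memory.get(prompt, "I do not know yet.")

    # Tier 3: can change its own programming - here by installing a reply
    # routine it was never shipped with, loosely analogous to Data writing
    # his own subroutines.
    class SelfModifyingBot:
        def __init__(self) -> None:
            self.reply = lambda prompt: "Default answer."

        def install(self, new_reply) -> None:
            self.reply = new_reply  # replaces one of its own "subroutines"

Where an entity like Data would fall among these tiers is, of course, exactly the open question.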

To be honest, even after having seen the show I'm not quite sure whether Data should have been considered Starfleet property or not. It does seem like an extravagant waste to avoid having androids on every ship seeing as how Data personally saved not only the Enterprise but probably the Federation multiple times. And as for the argument of human lives being saved in favor of risking androids...well...duh? Isn't that a good thing?
Chrome
Thu, Jun 23, 2016, 3:34pm (UTC -5)
@Peter G. and William B

I took Louvois' ruling to mean that if a machine is so life-like that it has the perception of a soul, then it can't be considered property and has the right to choose.

That's the difference between the Enterprise computer and Data. No matter how sophisticated the Enterprise's programming is, it's still missing those very life-like qualities that Data showed in this episode (intimacy, vanity, friendship, a career, etc.).
Peter G.
Thu, Jun 23, 2016, 3:53pm (UTC -5)
@ Chrome,

"I took Louvois' ruling to mean that if a machine is so life-like that it has the perception of a soul, then it can't be considered property and has the right to chose."

This is kind of my problem. The things described in the episode don't show Data to be life-like, but rather Human-like, which is a significant distinction. It means that entities that emulate being a literal Human Being will receive favorable treatment by the Federation. I'm sure plenty of sentient life-forms in Star Trek don't have 'intimacy' or 'friendship' in the ways Humans know them, so I'm not sure how those should matter (but to a judge out of her depth, I can see how she could be unaware enough to think they should). And just a quibble, but Data doesn't have vanity; his having a career is also a circular argument, because the argument about whether he *should* have a career relies on him having the rights afforded to sentients.

The Enterprise computer wasn't designed to have personality or look like a person, but it could have been. Would the aesthetic alterations in the programming have made it suddenly sentient because it *seemed* more sentient? If that's all it comes down to then I would confidently state that Data is not sentient. But they did dally with giving the Enterprise computer a personality in TOS, and although it was played as comedy (choosing the personality matrix that calls Kirk "Dear" was probably some engineer trolling him) the takeaway from that silly experiment was to show that some Human captains like their machines to sound like machines and not to pretend (poorly) to be like Humans. To pretend in that way could be felt as an insult to Humans. But what about a machine that acknowledges it is a machine, and acts like one, but wants to be more Human? That's the recipe for ego stroking, and again I wouldn't be surprised if Data's entire existential crisis wasn't a pure magic trick played by Soong to get people to like Data (and thus to protect his work).
William B
Thu, Jun 23, 2016, 4:22pm (UTC -5)
@Peter G.,

I agree that intelligence and cognitive ability are the main ways in which we distinguish different types of animals, and, as you say, similarity to humans is what tends to grant mammals special rights. Within this episode, I think we are seeing something similar with Data.

Picard asks Maddox to define sentience, and he supplies intelligence, self-awareness and consciousness.

PICARD: Is Commander Data intelligent?
MADDOX: Yes. It has the ability to learn and understand, and to cope with new situations.

I submit that this is the way the episode argues what is different about Data from a toaster, or a piece of paper; it is also what distinguishes animals granted special rights from ones which are not. It is not stated explicitly, but I believe that this ability to "learn and understand, and to cope with new situations" is what is missing from other code. Eventually Voyager's computer has bio-neural gel packs, but for now there is no indication that other systems encountered have a neural net like Data's, which is designed to emulate the functioning of the human brain. I think that Picard is being a little jokey in his statement that Data's statement of where he is and what this court case means for him proves that he is self-aware, because "awareness" to some degree requires consciousness. Really, I think that consciousness is necessary but not sufficient for full self-awareness, and the part that is not covered by consciousness is covered by Data's statement of his situation. That is indeed the same quality displayed by a piece of paper which has "I am a piece of paper!" written on it; if that piece of paper has consciousness and somehow controls its "I am a piece of paper!" statement, then it would be self-aware. In any case, the episode did not argue that the Enterprise computer is not sufficiently intelligent in the sense of adaptability etc. to meet the criterion for sentience; that the ruling is on Data alone rather than all AI is a function of the narrowness of the case.

The combination of intelligence and "self-awareness," which is really the demonstration of the component of self-awareness that is not covered by consciousness, is what makes Data an edge case where consciousness is the essential final component, and "I don't know" becomes sufficient. Animals which are "conscious" but with no evidence of self-awareness or intelligence do not have rights, and thus AI which are not intelligent on the level of Data (who has human-level adaptability) will never have the question of whether they have feelings or consciousness raised at all.

How do you prove that something is or is not conscious? And that is why the human-centrism is important; basically that is the *only* tool that humans have to demonstrate consciousness or internal life, or lack thereof. I know that I have consciousness or internal life, and therefore beings that demonstrate qualities similar to mine, and have a similar construction to mine, are likely to be conscious. I am not claiming this is great; it is of course narcissistic. But all arguments about consciousness start from human-centricity because the only way we have to identify the existence of an inner life is by our own example, or, at least, I have a hard time imagining any other way. In any case, demonstrating that Data (states that he) values Tasha is a way of demonstrating that Data (states that he) has values, wishes, desires which were not programmed into him directly, which adds weight to Data's stated desire not to be destroyed. It also emphasizes that Data has made his own connections besides those which were specifically and trivially predictable based on his original coding -- again, the ability to learn and adapt etc. To some degree, the idea that animals can be sorted by cognitive ability but that "cognitive ability" and intelligence would not automatically be a sign that computers have some degree of internal life is because of similarities to humans -- animals come from similar building blocks to humans (DNA, etc.) and so it is assumed that their intelligence corresponds to something similar to our own, which we know to value because we experience our own value. Now, obviously by the time of the show, humans have met other species which are sentient...but I think that the sentience is still primarily demonstrated by being similar to humans. How can anyone possibly know if anyone else is sentient? The only possible way is to either take beings at their word, or to build through analogy to one's own experience. The only being I can be sure is sentient is me; everyone else's sentience is accepted based on people being sufficiently close to me. I think that humans should expand outward as much as possible and not rely entirely on chauvinism, but I have no idea how exactly I would determine if a fully alien being which claimed that it was sentient truly experienced sentience or was just able to simulate it.

(Actually, I don't "really" know that my experience of consciousness is real, but I am still experiencing something, so I go with that.)

As to whether his statements that he values his life, or values Tasha, etc., indicate that he actually values them, this is what it means for Picard to ask if Data is conscious. If Data is not conscious, then his statements are just the verbal printouts of an automaton; if he is conscious then they are, on some level, "felt." And here I agree that Picard fails to make much of a case; all he does is ask whether everyone is sure. If I were Picard, I would start arguing that the similarity of Data's brain to a human brain and the complexity of his programming indicate a sufficient similarity in all observables to the human brain for us to conclude that it will likely have other traits in common with the human brain, including consciousness. Even without the comparison to humans, though, it may happen that we can never fully assume that any rock or mountain or collection of atoms is *not* conscious, and we must accept a certain level of intelligence and self-awareness as sufficient benchmarks to declare something sentient. This is, of course, very unsatisfying, but it is unsatisfying, too, to begin with the presumption that only beings which are sufficiently similar to humans in physical nature (i.e. made up of cells and DNA) have the possibility of consciousness. To simply suspect that anything in the universe might have consciousness is a simplistic direction for the episode to go in, granted, which is why I prefer to think that the implication *is* that it is Data's similarity to humanoids in terms of systems of value, cognitive ability, adaptability and even in design (neural net which is designed to reproduce the functioning of a brain) which may not be possible without emergent consciousness. I think that most people would agree that it seems *more likely* that something that demonstrates intelligence would also have consciousness than something which demonstrates no intelligence, and so I do think this is one of the implicit elements in Picard's argument, which would make it much stronger, though it is also not entirely necessary.

One troubling question is what it would mean to program androids like Data, with a similar level of cognitive ability and similarity to humans, *without* any desire for self-preservation whatsoever.

I actually do agree, though, that Data is designed for ego-stroking. Actually that is some of the point -- Lore complicates the story because Lore immediately recognized his superiority to humans in physical and mental capacity, immediately came into conflict with people who hated him, and promptly had to be shut down. Data *is* meant to be entirely user-friendly. I think it's also true that Soong intended Data to be a person with internal life, but Data's desire to be human in a nonthreatening way, and the reality that it is basically impossible for him to achieve that, is pretty baked into him, which is tragic if you believe that Data *does* have some sort of internal life, as I tend to.
William B
Thu, Jun 23, 2016, 4:38pm (UTC -5)
@Chrome, Peter G. response:

The Turing Test is not namechecked in the episode, but it does sort of remain here: if there is no airtight argument that Data is less a person than Picard, why should Picard have more rights? This is the essence of Picard asking Maddox to prove that he is sentient; the main arguments that Maddox could supply that Picard is conscious and Data isn't are:

1. Picard is more similar to Maddox (in being a biological life form), and, implicitly, to Louvois;
2. Data was created deliberately, rather than by a random physical process;
3. (MAYBE) These are the things we don't know about the human brain, whereas this is what we know about the android net/software.

With respect to 3, obviously there are aspects of Data's programming and design which are unknown, hence the need to disassemble him. With respect to 2, Picard punctures this by suggesting that parents create their children, though it is an incomplete argument. With respect to 1, well, that is part of the reason I think Picard brings up Data's intimacy etc. One could argue that he is appealing to human biases, but he is perhaps working in the opposite direction -- by showing the similarities of Data's behaviour to humans, he is countering the natural bias that he is probably not conscious because he is different in "construction" to humans. I'm not really saying I've knocked down all of these (or other potential ones). But rather than starting with why-is-Data-different-from-a-toaster, if you start with why-is-Data-different-from-a-human then Louvois' ruling makes sense. In that case, it is a significant kind of chauvinism that Picard (and Data, for that matter) do not start heading forth and trying to figure out whether they should liberate the Enterprise computer, or weather patterns, or rocks which no one would even think to wonder about, but the "AI rights" arc is not done; and yes, I do think that there is more evidence for Data's cognitive powers than the computer's, though it also has some degree of adaptability.
William B
Thu, Jun 23, 2016, 7:41pm (UTC -5)
Though, of course, the Enterprise computer can run and possibly create holodeck characters of great sophistication, so there is a strong case for the Enterprise computer being intelligent, which certainly complicates things. That said, I don't think this means Louvois' ruling (etc.) is wrong; rather, the ruling on Data should ideally open up discussion of other sophisticated AIs which have "sentient life form"-level intelligence.
Peter G.
Thu, Jun 23, 2016, 8:00pm (UTC -5)
My main issue with issuing 'sentient status' to any advanced intelligence is that, at bottom, intelligence is just processing power. When constructing an AI, I find it troublesome to consider that the sole factor separating one AI from another might be a superior CPU, and that this makes it 'sentient' and thus affords it rights. Does that mean I'm better off with a slower computer than with a more advanced one, because the latter has the right to tell me what it wants to do?

Some people theorize that consciousness is emergent in a sufficiently advanced recursive processing system. Others say something in the biology matters, perhaps acting as a resonator with some unseen force. Either way, giving rights to technology is a big deal.
William B
Thu, Jun 23, 2016, 8:14pm (UTC -5)
I will say though that I do think the human-centrism of the episode is a problem insofar as one would expect that there *is* by this point in the Federation some sort of procedure for talking about sentience in non-human terms. Since the vast majority of species encountered in Trek, and especially TOS and TNG, who are accepted as sentient and as having rights are humanoid and very similar to human beings, this is not exactly a problem for the episode, so much as revealing one of the major limitations of Trek's imagination. Alternatively, this works to some degree because this *is* a myth which is fundamentally about humanity more so than it is actually about anything to do with aliens, and so it makes sense that arguments about machines end up being human-centred.

It is actually pretty disturbing, thinking about it. I do pretty strongly think that the tactic Picard eventually settled on is correct, which is to argue that it is not possible to conclusively demonstrate that Data is sufficiently fundamentally different from Picard to be classified as a different being. However, the point remains of what happens to entities which are sentient but do not *have* a survival urge. This is potentially the case for the Enterprise computer. Certainly, Soong decided to program Data to "want to live"; it seems from various statements made over the years, including by Soong himself, that he intended to create Data as having consciousness and being something like an actual human, with some adjustments made so as to avoid the mistakes of Lore and perhaps to improve on humans. Assuming for the moment that he succeeded in creating a being which has consciousness (and thus sentience), the possibility remains that he could have programmed Data with similar skills but no "desire" for self-preservation or self-actualization.

However, this is not simply a matter of AI. Eventually genetic engineering on a broader scale should be possible, and what happens then? Could beings of human intelligence with no desire for self-preservation beyond what is convenient for their masters be created through a combination of genetic and behavioural work?

It makes a lot of sense that Soong, who really did see his androids as his children and wanted them to be humanlike, would program them to survive and thrive, and, after the catastrophe of Lore, made Data to survive, thrive, and also be personable enough that he would not have to be shut down. Some of this is obviously Soong's own vanity, but some of it is the same sort of vanity that shows up in many parents' desire for their children to carry on their legacy. I like that Voyager complicates some of the Data material by having the Doctor be quite unpleasant, much of the time; whereas Data is designed for prime likability, the Doctor is abrasive and difficult.
William B
Thu, Jun 23, 2016, 8:52pm (UTC -5)
@Peter G., Fair enough.
Chrome
Fri, Jun 24, 2016, 10:06am (UTC -5)
@Peter G.

"This is kind of my problem. The things described in the episode don't show Data to be life-like, but rather Human-like, which is a significant distinction."

True, and this goes back to what William B mentioned about the writers being limited in describing Starfleet generally, because they only have the human experience to draw from. Incidentally, that vanity thing I mentioned is actually a line from this episode, when Data curiously decides to pack his medals as he leaves Starfleet.

But you're right, the episode doesn't really spell out which criteria qualify Data as sentient and the computer as non-sentient. I suppose Data seems more self-aware than a computer, but it's hard to tell if he's acting on incredibly complex programming or something greater.
William B
Fri, Jun 24, 2016, 11:08am (UTC -5)
One thing I will still add is that the comparison to animal life still holds in some ways, especially given that certain animals were selectively bred (over millennia) for both intelligence and ability to interact with humans. Putting human intervention aside, if you need a more intelligent animal, e.g. a service animal for the blind, you have to have a dog rather than a spider, and you have to treat it better. If you want a pet whose legs you can pull off with relative impunity, get a spider, not a dog. It may end up being that a scale for defining intelligence in computers will be introduced in terms of adaptability etc., and that it will be necessary to have less adaptable computers in order to be able to treat them ethically. Since intelligence (and, really, intelligence as defined by ability to do human-like tasks) is the main measurement for animal life value, I expect it is likely to be one for AI if a sufficiently rigorous theory of consciousness is not forthcoming.

I am troubled, in the end, by the human-centricity of the arguments about Data and the lack of extension to other computers. That said, there are still two directions: if Data is mostly indistinguishable from a humanoid except in the kind of machine he is, Picard's case stands and it is chauvinism to assume that only biology could produce consciousness; if Data is mostly indistinguishable from other machines except in his similarity to humans, then Peter G.'s point stands and it is chauvinism to only grant rights to the most cuddly and human of machines. Both can be true, in which case the failure of imagination on the part of the characters (and likely the writers) is in failing to use Data as a launching point for all AI. Even the exocomps, the emergent thing in Emergence, and various holodeck characters are still identified as independent beings whereas the computer itself is not, which reveals a significant bias toward things which resemble human or animal life.

For what it's worth, I continue to have no doubt Data was programmed to value things, have existential crises etc., in conjunction with inputs from his environment, but I continue to believe that this does not necessarily distinguish him from humans, who are created with a DNA blueprint which creates a brain which interacts with said blueprint and the environment. Soong programmed Data in a way to make him likely to continue existing, and humans' observable behaviours are generally consistent with what will lead to the survival of the individual and species. To tie into the first scene in the episode, Data may be an elaborate bluff, but so might we be. Of course that still leaves open the possibility that things very far from human, whether biological, technological, or something else entirely, can also possess this trait. And again it seems like cognitive ability and distance from humans are the things we use now; probably given the similarity of humanoids, cognitive ability and distance from humanoids will be the norm. I would like to believe there is something else that could make the application fairer and less egocentric. But it seems even identifying the root of human consciousness more precisely (in the physical world) would just move the problem one step back, identifying "this particular trait we have" as the thing of value, rather than these *other* traits we have.
Andy's Friend
Sat, Jun 25, 2016, 9:04pm (UTC -5)
@All

You have to go much further. You have to stop talking about artificial intelligence, which is irrelevant, and begin discussing artificial consciousness.

Allow me to copy-paste a couple of my older posts on "Heroes and Demons" (VOY). I recommend the whole discussion there, even Elliott's usual attempts to contradict me (and everyone else; he was a rather contrarian fellow). Do note that "body & brain," as I later explain on that thread, is a stylistic device: it is of course Data's positronic brain that matters.


Fri, Oct 31, 2014, 1:29pm (UTC -5)

"@Elliott, Peremensoe, Robert, Skeptikal, William, and Yanks

Interesting debate, as usual, between some of the most able debaters in here. It would seem that I mostly tend to agree with Robert on this one. I’m not sure, though; my reading may be myopic.

For what it’s worth, here’s my opinion on this most interesting question of "sentience". For the record: Data and the EMH are of course some of my favourite characters of Trek, although I consider Data to be a considerably more interesting and complex one; the EMH has many good episodes and is wonderfully entertaining ― Picardo does a great job ―, but doesn’t come close to Data otherwise.

I consider Data, but not the EMH, to be sentient.

This has to do with the physical aspect of what an individual is, and of sentience. Data has a body. More importantly, Data has a brain. It’s not about how Data and the EMH behave and what they say; it’s a matter of how, or whether, they think.

Peremensoe wrote: ”This is a physiological difference between them, but not a philosophical one, as far as I can see.”

I cannot agree. I’m sure that someday we’ll see machines that can simulate intelligence ― general *artificial intelligence*, or strong AI. But I believe that if we are ever to also achieve true *artificial consciousness* ― what I gather we mean here by ”sentience” ― we need also to create an artificial brain. As Haikonen wrote a decade ago:

”The brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers.”

This is the main difference between Data and the EMH, and why this physiological difference is so important. Data possesses an artificial brain ― artificial neural networks of sorts ―; the EMH does not.

Data’s positronic brain should thus allow him thought processes somehow similar to those of humans that are beyond the EMH’s capabilities. The EMH simply executes Haikonen’s ”programmed strings of commands”.

I don’t claim to be an expert on Soong’s positronic brain (is anyone?), and I have no idea about the intricate differences and similarities between it and the human brain (again: does anyone?). But I believe that his artificial brain must somehow allow for some of the same, or similar, thought processes that cause *self-awareness* in humans. Data’s positronic brain is no mere CPU. In spite of his very slow learning curve in some aspects, Data consists of more than his programming.

This again is at the core of the debate. ”Sentience”, as in self-awareness, or *artificial consciousness*, must necessarily imply some sort of non-linear, cognitive processes. Simple *artificial intelligence* ― such as decision-making, adapting and improving, and even the simulation of human behaviour ― need not.

The EMH is a sophisticated program, especially regarding prioritizing and decision-making functions, and even possessing autoprogramming functions allowing him to alter his programming. As far as I remember (correct me if I’m wrong), he doesn’t possess the same self-monitoring and self-maintenance functions that Data ― and any sentient being ― does. Even those, however, might be programmed and simulated. The true matter is the awareness of self. One thing is to simulate autonomous thought; something quite different is actually possessing it. Does the fact that the EMH wonders what to call himself prove that he is sentient?

Data is essentially a child in his understanding of humanity. But he is, in all aspects, a sentient individual. He has a physical body, and a physical brain that processes his thoughts, and he lives with the awareness of being a unique being. Data cannot exist outside his body, or without his positronic brain. If there’s one thing that we learned from the film ”Nemesis”, it’s that it’s his brain, much superior to B-4’s, that makes him what he is. Thanks to his body, and his brain, Data is, in every aspect, an independent individual.

The EMH is not. He has no body, and no brain, but depends ― mainly, but not necessarily ― on the Voyager computer to process his program. But more fundamentally, he depends entirely on that program ― on strings of commands. Unlike Data, he consists of nothing more than the sum of his programming.

The EMH can be rewritten at will, in a manner that Data cannot. He can be relocated at will to any computer system with enough capacity to store and process his program. Data cannot ― when Data transfers his memories to B-4, the latter doesn’t become Data. He can be shaped and modelled and thrown about like a piece of clay. Data cannot. The EMH has, in fact, no true personality or existence.

Because he relies *entirely* on a string of commands, he is, in truth, nothing but that simple execution of commands. Even if his program compels him to mimic human behaviour with extreme precision, that precision merely depends on computational power and lines of programming, not thought process.

Of course, one could argue that the Voyager’s computer *is* the EMH’s brain, and that it is irrelevant that his memories, and his program, can be transferred to any other computer ― even as far as the Alpha Quadrant, as in ”Message in a Bottle” and ”Life Line”.

But that merely further annihilates his individuality. The EMH can, in theory, if the given hardware and power requirements are met, be duplicated at will at any given time, creating several others which might then develop in different ways. However ― unlike say, Will and Thomas Riker, or a copy of Data, or the clone of any true individual ―, these several other EMHs might even be merged again at a later time.

It is even perfectly possible to imagine that several EMHs could be merged, with perhaps the necessary adjustments to the program (deleting certain subroutines any of them might have added independently in the meanwhile, for example), but allowing for multiple memories for certain time periods to be retained. Such is the magic of software.

The EMH is thus not even a true individual, much less sentient. He’s software. Nothing more.

Furthermore, something else and rather important must also be mentioned. Unless our scope is the infinite, that is, God, or the Power Cosmic, to be sentient also means that you can lose that sentience. Humans, for a variety of reasons, can, all by themselves and to various degrees, become demented, or insane, or even vegetative. A computer program cannot.

I’m betting that Data, with his positronic brain, could devolve into something like B-4 as that brain began to fail. Given enough time (as he clearly evolves much more slowly than humans, and his positronic brain would presumably last centuries or even millennia before suffering degradation), Data could actually risk losing his sanity, and perhaps his sentience, just like any human.

The EMH cannot. The various attempts in VOY to depict a somewhat deranged EMH, such as ”Darkling”, are all unconvincing, even if interesting or amusing: there should and would always be a set of primary directives and protocols that would override all other programming in cases of internal conflict. Call it the Three Laws, or what you will: such is the very nature of programming. ”Darkling”, like other such instances, is a fraud. It is not the reflection of sentience; it is, at best, the result of inept programming.

So is ”Latent Image”. But symptomatically, what do we see in that episode? Janeway conveniently rewrites the EMH, erasing part of his memory. This is consistent with what we see suggested several times, such as concerning his speech and musical subroutines in ”Virtuoso”. Again, symptomatically, what does Torres tell the EMH in ”Virtuoso”?

― TORRES: “Look, Doc, I don't know anything about this woman or why she doesn't appreciate you, and I may not be an expert on music, but I'm a pretty good engineer. I can expand your musical subroutines all you like. I can even reprogramme you to be a whistling teapot. But, if I do that, it won't be you anymore.”

This is at the core of the nature of the EMH. What is he? A computer program, the sum of lines of programming.

Compare again to Data. Our yellow-eyed android is also the product of incredibly advanced programming. He also is able to write subroutines to add to his nature and his experience; and he can delete those subroutines again. The important difference, however, is that only Soong and Lore can seriously manipulate his behaviour, and then only by triggering Soong’s purpose-made devices: the homing device in ”Brothers”, and the emotion chip in ”Descent”. There’s a reason, after all, why Maddox would like to study Data further in ”Measure of a Man”. And this is the difference: Soong is Soong, and Data is Data. But any apt computer programmer could rewrite the EMH as he or she pleased.

(Of course, one could claim that any apt surgeon might be able to lobotomise any human, but that would be equivalent to saying that anyone with a baseball bat might alter the personality of a human. I trust you can see the difference.)

I believe that the EMH, because of this lack of a brain, is incapable of brain activity and complex thought, and thus artificial consciousness. The EMH is by design able to operate from any computer system that meets the minimum requirements, but the program can never be more than the sum of his string of commands. Sentience may be simulated ― it may even be perfectly simulated. But simulated sentience is still a simulation.

I thus believe that the EMH is nothing but an incredibly sophisticated piece of software that mimics sentience, and pretends to wish to grow, and pretends to... and pretends to.... He is, in a way, The Great Pretender. He has no real body, and he has no real mind. As his programming evolves, and the subroutines become ever more complex, the illusion seems increasingly real. But does it ever become more than a simulacrum of sentience?

All this is of course theory; in practical terms, I have no problem admitting that a sufficiently advanced program would be virtually indistinguishable, for most practical purposes, from actual sentience. And therefore, *for most practical purposes*, I would treat the impressive Voyager EMH as an individual. But as much as I am fond of the Doctor, I have a very hard time seeing him as anything but a piece of software, no matter how sophisticated.

So, as you can gather by now, I am not a fan of such thoughts on artificial consciousness that imply that it is all simply a matter of which computations the AI is capable of. A string of commands, however complex, is still nothing but a string of commands. So to conclude: even in a sci-fi context, I side with the ones who believe that artificial consciousness requires some sort of non-linear thought process and brain activity. It requires a physical body and brain of sorts, be it a biological humanoid, a positronic android, the Great Link, the ocean of Solaris, or whatever (I am prepared to discuss non-corporeal entities, but elsewhere).

Finally, I would say that the bio gel idea, as mentioned by Robert, could have been interesting in making the EMH somehow more unique. That could have the further implication that he could not be transferred to a computer without bio gel circuitry, thus further emphasizing some sort of uniqueness, and perhaps providing a plausible explanation for the proverbial ”spark” of consciousness ― which of course would then, as in Data’s case, have been present from the beginning. This would transform the EMH from a piece of software into... perhaps something more, interwoven with the ship itself somehow. It could have been interesting ― but then again, it would also have limited the writing for the EMH very severely. Could it have provided enough alternate possibilities to make it worthwhile? I don’t know; but I can understand why the writers chose otherwise."
Andy's Friend
Sat, Jun 25, 2016, 9:11pm (UTC -5)
And:

Sat, Nov 1, 2014, 1:43pm (UTC -5)

"@William B, thanks for your reply, and especially for making me see things in my argumentation I hadn’t thought of myself! :D

@Robert, thanks for the emulator theory. I’m not quite sure that I agree with you: I believe you fail to see an important difference. But we’ll get there :)

This is of course one huge question to try and begin to consider. It is also a very obvious one; there’s a reason ”The Measure of a Man” was written as early as Season 2.

First of all, a note on the Turing test that several of you have mentioned: I agree with William, and would be more categorical than him: it is utterly irrelevant for our purposes, most importantly because simulation really is just that. We must leave Turing alone with the answers to the questions he asked, and search deeper for answers to our own questions.

Second, a clarification: I’m discussing this mostly as sci-fi, and not as hard science. But it is impossible for me to ignore at least some hard science. The problem with this is that while any Trek writer can simply write that the Doctor is sentient, and explain it with a minimum of ludicrous technobabble, it is quite simply inconsistent with what the majority of experts on artificial consciousness today believes. But...

...on the other hand, the positronic brain I use to argue Data’s artificial consciousness is, in itself, in a way also a piece of that same technobabble. None of us knows what it does; nobody does. However, it is not as implausible a piece of technobabble as say, warp speed, or transporter technology. It may very well be possible one day to create an artificial brain of sorts. And in fact, it is a fundamental piece in what most believe to be necessary to answer our question. I therefore would like to state these fundamental First and Second Sentences:

1. ― DATA HAS AN ARTIFICIAL BRAIN. We know that Data has a ”positronic brain”. It is consistently called a ”brain” throughout the series. But is it an *artificial brain*? I believe it is.

2. ― THE EMH IS A COMPUTER PROGRAM. I don’t believe I need to elaborate on that.

This is of the highest order of importance, because ― unlike what I now see Robert seems to believe ― I think the question of ”sentience”, or artificial consciousness, has little to do with hardware vs software as he puts it, as we shall see.

Now, I’d like to clarify nomenclature and definitions. Feel free to disagree or elaborate:

― By *brain* I mean any actual (human) or fictional (say, the Great Link) living species’ brain, or thought process mechanism(s) that perform functions analogous to those of the human brain, and allow for *non-linear*, cognitive processes. I’m perfectly prepared to accept intelligent, sentient, extra-terrestrial life that is non-humanoid; in fact, I would be very surprised if most were humanoid, and in that respect I am inclined to agree with Stanisław Lem in “Solaris”. I am perfectly ready to accept radially symmetric lifeforms, or asymmetric ones, with all the implications for their nervous systems, or even more bizarre and exotic lifeforms, such as the Great Link or Solaris’ ocean. I believe, though, that all self-conscious lifeforms must have some sort of brain, nervous system ― not necessarily a central nervous system ―, or analogues (some highly sophisticated nerve net, for instance) that in some manner or other allows for non-linear cognitive processes. Because non-linearity is what thought, and consciousness ― sentience as we talk about it ― is about.

― By *artificial brain* I don’t mean a brain that faithfully reproduces human neuroanatomy, or human thought processes. I merely mean any artificially created brain of sorts or brain analogue which somehow (insert your favourite Treknobabble here ― although serious, actual research is being conducted in this field) can produce *non-linear* cognitive processes.

― By *non-linear* cognitive process I mean not the strict sense of non-linear computational mechanics, but rather, that ineffable quality of abstract human thought process which is the opposite of *linear* computational process ― which in turn is the simple execution of strings of command, which necessarily must follow as specified by any specific program or subroutine. Non-linear processes are both the amazing strength and the weakness of the human mind. Unlike linear, slavish processes of computers and programs, the incredible wonder of the brain as defined is its capacity to perform that proverbial “quantum leap”, the inexplicable abstractions, non-linear processes that result in our thoughts, both conscious and subconscious ― and in fact, in us having a mind at all, unlike computers and computer programs. Sadly, it is also that non-linear, erratic and unpredictable nature of brain processes that can cause serious psychological disturbances, madness, or even loss of consciousness of self.

These differences are at the core of the issue, and here I would perhaps seem to agree with William, when he writes: ”I don't think that it's at all obvious that sentience or inner life is tied to biology, but it's not at all obvious that it's wholly separate from it, either. MAYBE at some point neurologists and physicists and biologists and so forth will be able to identify some kind of physical process that clearly demarcates consciousness from the lack of consciousness, not just by modeling and reproducing the functioning of the human brain but in some more fundamental way.”

I agree and again, I would go a bit further: I am actually willing to go so far as to admit the possibility of us one day being able to create an *artificial brain* which can reproduce, to a certain degree, some or many of those processes ― and perhaps even others our own human brains are incapable of. Likewise, I am prepared to admit the possibility of sentient life in other forms than carbon-based humanoid. It is as reflections of those possibilities that I see the Founders, and any number of other such outlandish species in Star Trek. And it is as such that I view Data’s positronic brain ― something that somehow allows him many of the same possibilities of conscious thought that we have, and perhaps even others, as yet undiscovered by him. Again, I would even go so far as not only to admit, but to suppose the very real possibility of two identical artificial brains ― say, two copies of Data’s positronic brain ― *not* behaving exactly alike in spite of being exact copies of each other, in a manner similar to (but of course not identical to) how identical twins’ brains will function differently. This analogy is far from perfect, but it is perhaps the easiest one to understand: thoughts and consciousness are more than the sum of the physical, biological brain and DNA. Artificial consciousness must also be more than the sum of an artificial brain and the programming. As such, I, like the researchers whose views I am merely reflecting, not only expect, but require an artificial brain that in this aspect truly equals the fundamental behaviour of sentient biological brains.

It is here, I believe, that Robert’s last thoughts and mine seem to diverge. Robert seems to believe that Data’s positronic brain is merely a highly advanced computer. If this is the case, I wholly agree with his final assessment.

If not, however, if Data’s brain is a true *artificial brain* as defined, what Robert proposes is wholly unacceptable.

IT IS STAR TREK’S FAULT THAT THE QUALITY OF DATA’S BRAIN IS NEVER FULLY ESTABLISHED.

Data’s brain is never established as a true artificial brain. But it is never established as merely a highly advanced computer, either. It is once stated, for instance, that his brain is “rated at...” But this means nothing. This is a mere attempt at assessing certain of his faculties while wholly ignoring others that may as yet be underdeveloped or unexplored. It is in a way similar to saying of a chess player that he is rated at 2450 Elo: it tells you precious little about the man’s capacities outside the realm of chess.

We must therefore clearly understand that brains, including artificial brains, and computers are not the same and don’t work the same way. It is not a matter of orders of magnitude. It is not a matter of speed, or capacity. It is not even a matter of apples and oranges.

I therefore would like to state my Third, Fourth, Fifth and Sixth Sentences:

3. ― A BRAIN IS NOT A COMPUTER, and vice-versa.

4. ― AN ARTIFICIAL BRAIN IS NOT A COMPUTER, and vice versa.

5. ― A COMPUTER IS INCAPABLE OF THOUGHT PROCESSES. It merely executes programs.

6. ― A PROGRAM IS INCAPABLE OF THOUGHT PROCESSES. It merely consists of linear strings of commands.

Here, finally, is the matter explained: a computer is merely a toaster, a vacuum-cleaner, a dish-washer ― it always performs the same routine function. That function is to run various computer programs. And the computer programs ― any program ― will always be incapable of exceeding themselves. And the combination computer+program is incapable of non-linear, abstract thought process.

To simplify: a computer program must *always* obey its programming, EVEN IN CASES WHERE THE PROGRAMMING FORCES RANDOMIZATION. In such cases, random events ― actions and decisions, for instance ― are still merely a part of that program, within the chosen parameters. They are therefore only apparently random, and only within the specifications of the program or subroutine. An extremely simplified example:

Imagine that in a given situation involving Subroutine 47 and an A/B Action choice, the programming requires that the EMH must:

― 35% of the cases: wait 3-6 seconds as if considering Actions A and B, then choose the action with the HIGHEST probability of success according to Subroutine 47.
― 20% of the cases: wait 10-15 seconds as if considering Actions A and B, then choose the action with the HIGHEST probability of success according to Subroutine 47.
― 20% of the cases: wait 20-60 seconds as if considering Actions A and B, then choose the action with the HIGHEST probability of success according to Subroutine 47.
― 10% of the cases: wait 20-60 seconds as if considering Actions A and B, then choose RANDOMLY.
― 5% of the cases: wait 60-90 seconds as if considering Actions A and B, then choose RANDOMLY.
― 6% of the cases: wait 20-60 seconds as if considering Actions A and B, then choose the action with the LOWEST probability of success according to Subroutine 47.
― 2% of the cases: wait 10-15 seconds, then choose the action with the LOWEST probability of success according to Subroutine 47.
― 2% of the cases: wait 3-6 seconds, then choose the action with the LOWEST probability of success according to Subroutine 47.

In a situation such as this simple one, any casual long-term observer would conclude that the faster the subject/EMH reached a decision, the more likely it was to be the right one ― something observed in most good professionals. Every now and then, however, even a quick decision might prove to be wrong. Conversely, sometimes the subject might exhibit extreme indecision, considering his options for up to a minute and a half, and then having even chances of success.

A professional observer with the proper means at his disposal, however, and enough time to run a few hundred tests, would notice that this subject never, ever spent 7-9 or 16-19 seconds before reaching a decision. A careful analysis of the response times given here would show results that could not possibly be random coincidence. If this were “Blade Runner”, Deckard would have no trouble whatsoever identifying this subject as a Replicant.
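To make the mechanics concrete, here is a minimal sketch in Python of the weighted table above. The percentages, delay windows, and ”Subroutine 47” come from the example; the function name, data layout, and test values are my own invention for illustration. A few hundred simulated trials expose exactly the telltale gaps just described: no decision ever takes 7-9 or 16-19 seconds.

```python
import random

# (weight in %, delay window in seconds, decision policy) per row,
# mirroring the eight cases listed above.
TABLE = [
    (35, (3, 6),   "highest"),
    (20, (10, 15), "highest"),
    (20, (20, 60), "highest"),
    (10, (20, 60), "random"),
    (5,  (60, 90), "random"),
    (6,  (20, 60), "lowest"),
    (2,  (10, 15), "lowest"),
    (2,  (3, 6),   "lowest"),
]

def subroutine_47(p_a, p_b):
    """Pick a row by weight, 'hesitate' for a sampled delay, then choose
    Action A or B by the row's policy. p_a and p_b are the estimated
    probabilities of success of Actions A and B."""
    _, (lo, hi), policy = random.choices(TABLE, weights=[r[0] for r in TABLE])[0]
    delay = random.uniform(lo, hi)       # apparent 'deliberation' time
    if policy == "random":
        action = random.choice(["A", "B"])
    elif policy == "highest":
        action = "A" if p_a >= p_b else "B"
    else:                                # "lowest"
        action = "A" if p_a < p_b else "B"
    return delay, action

# A few hundred trials: delays of 7-9 s or 16-19 s never occur,
# because no row of the table can produce them.
delays = [subroutine_47(0.8, 0.4)[0] for _ in range(500)]
print(any(7 < d < 9 for d in delays), any(16 < d < 19 for d in delays))  # False False
```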

We may of course modify the random permutations of sequences, and adjust probabilities and response times as we wish, in order to give the most accurate impression of realism appropriate to the specific subroutine: for a doctor, one would expect medical subroutines to be much faster and much more successful than poker and chess subroutines, for example. Someone with no experience in cooking might injure himself in the kitchen; but even professional chefs cut themselves rather often. And of course, no one is an expert at everything. A sufficiently sophisticated program would reflect all such variables, and perfectly mimic the chosen human behaviour. But again, the Turing test is irrelevant:

All this is varying degrees of randomization. None of this is conscious thought: it is merely strings of command to give the impression of doubt, hesitation, failure and success ― in short, to give the impression of humanity.

But it’s all fake. It’s all programmed responses to stimuli.

Now make this model a zillion times more sophisticated, and you have the EMH’s “sentience”: a simple simulation, a computer program unable to exceed its subroutines, run slavishly by a computer incapable of any thought process.

The only way to partially bypass this problem is to introduce FORCED CHAOS: TO RANDOMIZE RANDOMIZATION altogether.

It is highly unlikely, however, that any computer program could long survive operating a true forced chaos generator at the macro-level, as opposed to limiting forced chaos to certain very specific subroutines. One could have forced chaos make the subject hesitate for forty minutes, or two hours, or forever and forfeit the game in a simple position in a game of chess, for example; but a forced chaos decision prompting the doctor to kill his patient with a scalpel would have more serious consequences. And many, many simpler forced chaos outcomes might also have very serious consequences. And what if the forced chaos generator had power over the autoprogramming function? How long would it take before catastrophic, cascading systems failure occurred?

And finally, but also importantly: even if the program could somehow survive operating a true forced chaos generator, thus operating extremely erratically ― which is to say, extremely dangerously, to itself and to any systems and people that might depend on it ―, it would still merely be obeying its forced chaos generator ― that is, yet another string of commands.
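And indeed the point is easy to demonstrate: a ”forced chaos generator” that randomizes the randomization is itself just a few more lines of program. A hypothetical sketch (all names invented), reusing the table format from the sketch above:

```python
import random

def forced_chaos(table):
    """'Randomize the randomization': perturb the decision table's own
    weights and delay windows at random. Purely illustrative."""
    perturbed = []
    for weight, (lo, hi), policy in table:
        w = weight * random.uniform(0.5, 2.0)   # jitter each probability
        shift = random.uniform(-2.0, 2.0)       # jitter each delay window
        perturbed.append((w, (max(0.0, lo + shift), hi + shift), policy))
    total = sum(w for w, _, _ in perturbed)
    # renormalize so the weights again sum to 100
    return [(100 * w / total, window, policy) for w, window, policy in perturbed]

# e.g. TABLE = forced_chaos(TABLE)  # the distribution drifts, but only by rule
```

The generator replaces one fixed distribution with another, but it is itself only another string of commands ― which is exactly the problem.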

So we’re back where we started.

So, to repeat one of my first phrases from a previous comment: “It’s not about how Data and the EMH behave and what they say; it’s a matter of how, or whether, they think.” And the matter is that the EMH simply *does not think*. The program simulates realistic responses, based on programmed responses to stimuli. That’s all. This is not thought process. This is not having a mind.

So it follows that I don’t agree when Peremensoe writes what Yanks also previously has commented on: "So Doc's mind runs on the ship computer, while Data's runs on his personal computer in his head. This is a physiological difference between them, but not a philosophical one, as far as I can see. The *location* of a being's mind says nothing about its capacity for thought and experience."

The point is that “Doc” doesn’t have a “mind”. There is therefore a deep philosophical divide here. The kind of “mind” the EMH has is one you can simply print on paper ― line by line of programming. That’s all it is. You could, quite literally, print every single line of the EMH programming, and thus literally read everything that it is, and learn and be able to calculate its exact probabilities of response in any given, imaginable situation. You can, quite literally, read the EMH like a book.

Not so with any human. And not so, I argue, with Data. And this is where I see that Robert, in my opinion, misunderstands the question. Robert writes: “Eventually hardware and an OS will come along that's powerful enough to run an emulator that Data could be uploaded into and become a software program”. This only makes sense if you disregard his artificial brain, and the relationship between his original programming and the way it has interacted with, and continues to interact with that brain, ever expanding what Data is ― albeit rather slowly, perhaps as a result of his positronic brain requiring much longer timeframes, but also being able to last much longer than biological brains.

So I’ll say it again: I believe that Data is more than his programming, and his brain. His brain is not just some very advanced computer. Somehow, his data ― sensations and memories ― must be stored and processed in ways we don’t fully understand in that positronic brain of his ― much like the Great Link’s thoughts and memories are stored and processed in ways unknown to us, in that gelatinous state of theirs.

I therefore doubt that Data’s program and brain as such can be extracted and emulated with any satisfactory results, any more than any human’s can. Robert would like to convert Data’s positronic brain into software. But who knows if that is any more possible than converting a human brain into software? Who knows whether Data’s brain, much like our own, can generate inscrutable, inexplicable thought processes that surpass its construction?

So while the EMH *program* runs on some *computer*, Data’s *thoughts* somehow flow in his *artificial brain*. This is thus not a matter of location: it’s a matter of essence. We are discussing wholly different things: a program in a computer, and thoughts in a brain. It just doesn’t get much more different. In my opinion, we are qualitatively worlds apart."
